It’s hard to go a day without reading some kind of commentary about artificial intelligence (AI) and what it means for the future of work, or indeed, the future of humanity. So, it will come as no surprise that AI is a hot topic debated internally at OSINT Combine, as we indulge our enthusiasm to understand and adopt new technologies while engaging with a healthy degree of skepticism so as not to carelessly discard the old for the new. Personally, I love this intellectual tension; I am, after all, an intelligence analyst at heart, and intelligence analysis is my favorite contact sport! With that, I introduce you to this first installment in a new blog category for OSINT Combine – “Perspectives” – a category intended to bring readers informed opinions candidly tackling the hot topics at the forefront of technology, OSINT and intelligence.
Jane van Tienen, Chief Intelligence Officer – OSINT Combine
Friends, Frenemies, or Just Complicated?
In intelligence analysis, AI has emerged as a knight in shining armor for analysts striving to make sense of an ever-growing sea of data. As appealing as AI’s potential may be, it’s worth stepping back and asking the question on the tip of every analyst’s tongue: are AI and the intelligence analyst destined to be friends, frenemies, or something in between?
Consistent with the reflections I shared back in September at the Australian OSINT Symposium 2024’s panel session Trust, Tradecraft and Global Security: OSINT and Strategic Decision-Making in the Modern World, I find myself torn between enthusiasm for AI’s potential and caution about its limitations. I confess, I’m on the fence. There’s a nuance to this relationship – AI and the Analyst – that can be equally captivating and challenging in our desire to deliver useful intelligence to the decision maker.
AI as a Friend: The Data Dynamo
Squarely in the ‘pros’ column, one of AI’s undeniable strengths is its ability to process and analyze vast quantities of data at speeds that would be humanly impossible. In this respect, AI is the ultimate bestie, helping the analyst navigate the overwhelming volumes of information that define modern intelligence work. I’m optimistic about AI’s ability to expand human capacity to manage this data deluge at pace.
However, this friendship isn’t without conditions. While AI offers analysts a leg up in data processing, it still relies on the analyst’s oversight to ensure the insights generated are accurate and meaningful. Curiosity plays a critical role here: it shapes how the analyst interacts with AI, from the questions they choose to ask to the outputs they choose to probe. Those choices draw on a combination of lived experience, training, and creativity, enabling the analyst to fully understand the customer’s intelligence needs and to ask AI the right questions through prompt engineering – structuring queries so the model returns the most useful output. Simply put, you don’t ask, you don’t get.
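To make “prompt engineering” a little less abstract, here is a minimal, hypothetical sketch in Python of how an analyst might structure a query before handing it to a language model. The helper function, field labels and example tasking below are illustrative assumptions on my part, not a prescribed method or any particular tool’s API.

```python
# Illustrative sketch only: one common prompt-engineering pattern is to be
# explicit about role, context, task, constraints, and output format rather
# than asking a single vague question. Nothing here is tied to a real model.

def build_prompt(role, context, task, constraints, output_format):
    """Assemble a structured prompt from its component parts."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are {role}.\n\n"
        f"Context:\n{context}\n\n"
        f"Task:\n{task}\n\n"
        f"Constraints:\n{constraint_lines}\n\n"
        f"Respond in this format:\n{output_format}"
    )

# Hypothetical tasking, purely for illustration.
prompt = build_prompt(
    role="an open-source intelligence assistant supporting a human analyst",
    context="Publicly reported shipping disruptions in the region over the past month.",
    task="Summarize the main reported causes and flag any claims that lack corroboration.",
    constraints=[
        "Cite the source for every claim.",
        "Clearly label inference versus reported fact.",
        "If information is missing, say so rather than guessing.",
    ],
    output_format="A short summary followed by a bulleted list of uncorroborated claims.",
)

print(prompt)  # The analyst reviews the prompt before sending it to any model.
```

The point is less the code than the discipline it encourages: spelling out role, context, constraints and expected output tends to produce more usable answers than a one-line question – and the analyst still reviews both the prompt and whatever comes back.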
While curiosity drives how we interact with AI, trust is equally essential. Like any high-stakes relationship, trust demands validation and verification. AI might speed up the work, but analysts must always apply their expertise and judgment to ensure the quality of outputs. As one of my fellow panelists at the Symposium said, ‘crap inputs = crap outputs.’ Never a truer word.
The Unhelpful Friend: Navigating Tradecraft and Trust
Despite AI’s strengths, there are aspects of intelligence work where its role is far more complicated. This is where the “unhelpful friend”, or “frenemy”, dynamic comes into play. AI can be both helpful and frustrating, particularly when it comes to tasks requiring critical thinking, creativity, nuanced understanding, and contextual analysis. Critical thinking, nuanced understanding and context are the human analyst’s love language, and here AI often falls short.
During the panel, I emphasized that tradecraft – critical thinking, evaluating sources, and applying analytical rigor – remains a fundamental skill set that AI cannot yet replicate. Challenges such as a lack of transparency, mis- and disinformation, so-called “hallucinations” and bias (including a neuro-normative bias) continue to dog AI, underscoring the need for human oversight to maintain the credibility and accuracy of intelligence products. It’s this tension that leaves me “on the fence” about AI’s role in the broader intelligence process today. While AI can be helpful, its limitations mean that it can sometimes feel like a partner who creates more questions than answers. Sorry AI, maybe the analyst is just not that into you. At least, not yet.
It's Complicated: The Future of Human-AI Teaming
In the future, the relationship between AI and the analyst is bound to become even more complicated, even if it does give you butterflies. AI is not a one-size-fits-all solution, nor is it ready to replace the analyst’s role. Instead, it’s about finding the right balance – a partnership that leverages AI’s strengths in finding and processing data and in pattern analysis, while allowing human analysts to guide and interpret the work, make sense of the information, and pitch their finished intelligence product appropriately to the human audience. To quote a recent article from the New Yorker, “effort during the writing process doesn’t guarantee the end product is worth reading, but worthwhile work cannot be made without it”.
This rapidly evolving partnership was a key theme during the Australian OSINT Symposium. Presenters spoke about the best composition of human-machine teams, including how analysts can integrate AI into their workflows without compromising the core elements of analytical tradecraft. In the end, AI has the potential to be a powerful tool, but its success hinges on the analyst’s ability to wield it effectively, question its outputs, and ensure that the final intelligence product is both accurate and actionable. With or without AI, at its core, intelligence must always be useful.
Conclusion: Are We Friends, Frenemies, or Just Complicated?
So, where does that leave us? Is AI the friendly sidekick, the resident mean girl, or just complicated? Of course, our consideration of AI deserves far more than a fun label. The point is that for human-AI teaming in intelligence, the answer isn’t clear-cut, and that’s the reality of working with emerging technology.
For now, I think AI is more of a complicated partner—one that offers tremendous value but also requires careful handling and a critical eye. As we continue to integrate AI into the intelligence process, it's crucial to maintain the principles of curiosity, tradecraft, creativity, trust, and human oversight. This balance will ensure that AI remains a valuable ally in our intelligence efforts, rather than becoming an unhelpful friend.
Whether you’re ready to go steady with AI or maintain something more casual for the time being, one thing is certain: this relationship is shaping the future of intelligence in ways we’re only beginning to understand.
If you’d like to explore this topic more, I recommend The Impact of Artificial Intelligence on Traditional Human Analysis (September 2024) – a recent deliverable from the US Department of Homeland Security’s Public-Private Analytic Exchange Program (AEP).
If you’d like to hear more from Jane, why not check out OSINT Combine’s podcast, Open Secret: The Power of Open-Source Intelligence, available wherever you find your podcasts, or from our website.