
It's not just about deepfakes and synthetic content. At the 2026 AI in Defence Summit, a harder problem came into focus — what AI believes, not just what it generates.
Most discussions about AI and disinformation focus on the output end of the problem: synthetic content, deepfakes, AI-generated news. The concern is about what artificial intelligence produces. At the 2026 AI in Defence Summit, a different and more troubling dimension of the same problem came into focus — not what AI generates, but what AI believes.
The shift in framing came from two directions during the day: a keynote from Michael Galkovsky, NATO and Defence CTO at Oracle Cloud Infrastructure, and a panel on hybrid warfare and counter-disinformation that brought together practitioners, researchers, and forensic experts. Taken together, their arguments describe an information environment that is qualitatively different from the one European institutions have been preparing for.
The most striking data point of the morning came from Galkovsky's keynote. His research estimated that 33 percent of chatbots are now parroting coordinated Russian disinformation back to users who ask them questions.
The mechanism is not complex. Large language models learn from the text they are trained on. If the corpus of available text on a given topic — news articles, forum posts, social media threads, online commentary — has been systematically contaminated with false narratives, the model learns those narratives alongside everything else. It does not distinguish between accurate reporting and coordinated fabrication. It weights by volume and consistency.
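The point about volume is easy to make concrete. The sketch below is a deliberately crude stand-in for a language model: real training is far subtler, and the corpus labels and the 60/25/15 split are invented for illustration, but the failure mode is the same. Whatever framing dominates the available text dominates the answer.

```python
from collections import Counter

# Toy corpus: each "document" is reduced to the framing it carries on one
# contested question. Labels and proportions are assumptions, not data.
corpus = (
    ["coordinated_fabrication"] * 60
    + ["accurate_reporting"] * 25
    + ["misc_commentary"] * 15
)

def volume_weighted_answer(documents):
    """Return the framing a purely volume-weighted learner would repeat,
    plus the weight it assigns to each framing."""
    counts = Counter(documents)
    top_framing, _ = counts.most_common(1)[0]
    weights = {framing: n / len(documents) for framing, n in counts.items()}
    return top_framing, weights

framing, weights = volume_weighted_answer(corpus)
print("Learner repeats:", framing)   # coordinated_fabrication wins on volume alone
print("Learned weights:", weights)
```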
The implications are significant. Policy analysts who use AI tools to survey coverage of a given issue, journalists who use language models to assist with research, military operators who consult AI-powered briefing systems — all of them are exposed to outputs that may have been shaped by systematic contamination upstream.
Detection, as the panel made clear, is tractable. The harder problem is attribution — identifying not just that a disinformation campaign exists but who is running it, through what infrastructure, with what intent.
Nadia Vasileva, a visiting professor at MIT with direct experience of both the Ukrainian and European defence ecosystems, described research she had led jointly with Stanford into Chinese information operations around the Ukraine war. The question the research set out to answer was whether the observed shift in Chinese social media coverage — from broadly neutral to markedly pro-Russian — represented a coordinated government directive or something more organic.
The answer, which emerged after nine months of tracing sources, was more complex than either hypothesis. A significant portion of the Chinese social media users who had shifted their framing were not following government instruction. They were following Russian-language media, because Russian was a language many of them could read and Ukrainian and European sources were not available in it. The narrative shift had propagated through a linguistic bottleneck: not central coordination, but the structural imbalance of the information available to those readers.
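To be clear about what follows: this is not the study's methodology, only a minimal sketch of the structural point, with invented source counts and language repertoires. Readers adopt whichever framing dominates the sources they can actually read, and if the only foreign-language sources available to them are Russian-language, the outcome looks coordinated without any coordination.

```python
# Illustrative model of a linguistic bottleneck. Every figure here is an
# assumption for the sketch, not data from the research described above.
sources = [
    {"language": "ru", "framing": "pro_russian"},
    {"language": "ru", "framing": "pro_russian"},
    {"language": "ru", "framing": "pro_russian"},
    {"language": "uk", "framing": "pro_ukrainian"},
    {"language": "en", "framing": "pro_ukrainian"},
]

users = [
    {"id": "user_1", "reads": {"zh", "ru"}},   # bilingual Chinese/Russian
    {"id": "user_2", "reads": {"zh", "ru"}},
    {"id": "user_3", "reads": {"zh", "en"}},   # bilingual Chinese/English
]

def adopted_framing(user, sources):
    """Framing most common among the sources this user can read, or None."""
    reachable = [s["framing"] for s in sources if s["language"] in user["reads"]]
    return max(set(reachable), key=reachable.count) if reachable else None

for user in users:
    print(user["id"], "->", adopted_framing(user, sources))
# user_1 and user_2 end up pro_russian without any directive from anyone:
# Russian is simply the only foreign language their sources come in.
```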
This has direct implications for how European institutions think about counter-disinformation strategy. Attribution errors — responding to organic amplification as if it were directed activity, or vice versa — produce misallocated responses that can be worse than no response at all.
Marine Marcus, an AI practitioner at Capgemini Netherlands with a decade of experience in information operations, offered a framing that the panel found useful and somewhat uncomfortable: the conceptual tools for understanding what is happening today are not new.
Her argument was not that the problem is familiar and therefore manageable. It was that Europe spent thirty years after the Cold War deprioritising the capabilities — narrative, cultural, communicative — that had previously allowed it to win information contests. The rebuilding of those capabilities, accelerated and scaled with AI tools, is the task now in front of European institutions.
The economic logic of the problem is also worth examining directly. Producing disinformation at scale has become cheap. Detecting it, attributing it accurately, and countering it effectively remains expensive. As long as that asymmetry holds, the rational incentive for adversaries is to keep producing. The response architecture Europe is building needs to address the asymmetry itself, not just the individual outputs adversaries generate.
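The asymmetry is easy to put into rough numbers. Every unit cost and volume in the sketch below is an assumption chosen only to show the shape of the problem; none of it is a measured figure from the summit.

```python
# Back-of-envelope cost model. All unit costs and volumes are assumptions.
ITEMS_PER_DAY = 10_000        # synthetic posts/articles an adversary generates
COST_TO_PRODUCE = 0.05        # assumed cost per generated item
COST_TO_DETECT = 0.50         # assumed cost to reliably flag one item
COST_TO_ATTRIBUTE = 40.00     # assumed analyst cost to attribute one item

attacker = ITEMS_PER_DAY * COST_TO_PRODUCE
detect_only = ITEMS_PER_DAY * COST_TO_DETECT
detect_and_attribute = ITEMS_PER_DAY * (COST_TO_DETECT + COST_TO_ATTRIBUTE)

print(f"Attacker spend per day:            ${attacker:>12,.2f}")
print(f"Defender spend, detection only:    ${detect_only:>12,.2f}")
print(f"Defender spend, with attribution:  ${detect_and_attribute:>12,.2f}")
# Under these assumptions the defender pays 10x the attacker just to detect,
# and roughly 800x to attribute as well. Any response architecture that does
# not cut the per-item cost of response leaves the incentive in place.
```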
The panel's practical conclusions were direct: anyone building AI systems for use in defence, national security, policy analysis, or journalism now has to treat training-data provenance, attribution capability, and the cost asymmetry of response as design constraints rather than downstream policy questions.
The disinformation session at the 2027 AI in Defence Summit will build directly on these foundations — with a specific focus on model integrity, attribution tools, and the governance frameworks that European institutions are developing in response.