AI in Defence Summit
A Revolution in How Armies Think: The Historical Case for Why AI Is Different
AI Policy

Belgian Defence Minister Théo Franken didn't argue from capability at the 2026 summit. He argued from history — and what history says about military revolutions should make European institutions uncomfortable.

AI in Defence Summit Editorial
17 June 2026
8 min read

Most arguments for treating AI as a serious defence priority focus on capability. AI enables faster targeting, better surveillance, more effective logistics, superior information processing. These are important arguments, and the 2026 AI in Defence Summit covered all of them in detail.

The opening keynote from Belgian Minister of Defence Théo Franken took a different approach. Rather than arguing from capability, he argued from history — specifically, from a reading of how military revolutions actually work and what that reading implies about the one now unfolding.

What Makes a Military Revolution

Minister Franken's argument begins with a distinction. When people think about military revolutions, they tend to think about weapons: the iron sword, the cannon, the tank, the atomic bomb. Each of these did change warfare. But Franken argued that this focus is misleading, because some of the most consequential military revolutions were not material but conceptual.

His examples were chosen carefully. The emergence of cartography did not just give armies a new tool. It abstracted space in a way that made territory thinkable at scale, and that changed how empires were built and defended. The railway did not just accelerate troop movement. It transformed the strategic variable of distance, made war planning inseparable from industrial organisation, and shifted strategy from the battlefield to ministries and rail hubs. Radar and cryptography transformed not what armies could do physically but what they could know — and in warfare, what you know matters as much as what you can do.

His argument places AI in this second category — not primarily a new weapon but a new way of organising perception, knowledge, and decision-making. The transformation is cognitive. AI changes how space and time are experienced by military institutions. It changes the management of uncertainty, which Clausewitz identified as the fundamental characteristic of war. It changes how attention is organised, how patterns are recognised, and how the future moves of an adversary can be anticipated.

The Significance of Time

One element of Franken's argument deserves particular attention: the acceleration of time. He described AI analysis as approaching real time — closing the gap between event and response to a degree that changes the operational character of decision-making.

This is not abstract. The two-to-four-week innovation cycle described by Ukrainian battlefield participants later in the summit is one expression of it. The Landsat drone described by Michael Galkovsky — capable of completing a kill chain in two to four minutes, ignoring jamming, operating without GPS — is another. The volume and speed of drone deployment that Irene Benito Rodríguez described as creating cognitive overload in human operators is a third.

In each case, the problem is not that AI is doing something new. It is that AI is doing existing things so much faster that the human institutions built around older timescales cannot process what is happening quickly enough to respond. That gap — between the speed of AI-enabled action and the speed of human institutional response — is one of the defining challenges of the current moment.

The Question Franken Left Open

The minister closed his remarks with a question that resonated through the rest of the day's programme. It is worth being precise about what he was and was not saying. He was not arguing for fatalism. He was not suggesting that the expansion of destructive power is inevitable or desirable. He was making a more specific point: that the historical pattern of military revolutions is not reassuring, and that European institutions need to engage with what is coming with clear eyes rather than comfortable assumptions about managed risk.

The implication he drew — and that the EU Commissioner for Defence and Space, Andrius Kubilius, expanded on in his own remarks — is that Europe needs to treat its AI industrial and defence base not as a nice-to-have but as a strategic priority at the level of the railway or the nuclear deterrent. Not because AI is exciting, but because the alternative to European capability in this space is strategic dependence during a period of fundamental change.

Why the Historical Frame Matters

The value of Franken's historical argument is not primarily rhetorical. It is analytical. By placing AI in the category of conceptual military revolutions — alongside the map, the railway, radar, cryptography — rather than in the category of weapons, he provides a framework for understanding why the defence AI challenge is not primarily a procurement problem or a technology investment problem.

It is an institutional problem. The question is not whether European institutions can acquire AI systems. It is whether they can transform the way they think — the way they process information, allocate attention, manage uncertainty, prepare decisions — at the speed the technology requires and before adversaries who are already inside that transformation establish an irreversible advantage.

That is the level of ambition the 2027 AI in Defence Summit is designed to engage with.