The world's most advanced real-world laboratory for AI in combat is on Europe's eastern border. The lessons are available. Most of Europe isn't applying them fast enough.
There is a recurring pattern in the history of military technology adoption: the countries that develop a capability in wartime learn things that cannot be replicated in exercises, simulations, or white papers. The countries that observe from a distance take years — sometimes decades — to internalise the same lessons.
The AI dimension of the war in Ukraine is generating that kind of asymmetric learning right now.
Since 2022, Ukraine's defence forces have become the world's most experienced practitioners of AI-enabled warfare. Not in the theoretical sense — in the operational one. AI systems are being used for target identification, drone navigation, electronic warfare countermeasures, logistics routing, and real-time battlefield intelligence fusion. Some of these applications are Ukrainian-built. Some are NATO-supplied. Some are improvised from commercial technology under conditions that no defence acquisition programme anticipated.
The learning is happening at a pace that institutional defence establishments struggle to match. By the time a lessons-learned report completes the standard review and approval cycle in a European defence ministry, the operational reality it describes has often already changed.
"We are not running experiments. We are fighting with AI systems today — systems that were built in weeks, tested in days, and deployed in conditions no procurement manual anticipated. What we have learned about what works and what fails under real electronic warfare conditions is directly relevant to every European defence force. The question is whether they are willing to listen before they need to find out themselves."
Brigadier General O. Romanov — Head of Research and Development Department, Defence Forces of Ukraine
The most effective AI systems in active use in Ukraine are not purpose-built military systems. They are adaptations of commercial technology — computer vision models, logistics optimisation tools, communications platforms — that have been rapidly modified for military application. The implication is counterintuitive: in many domains, the most capable AI available to a defence force is not in the defence procurement catalogue. It is on the commercial market, available today, and requires integration rather than development.
AI systems that perform well in controlled testing fail in degraded electromagnetic environments. GPS spoofing, communications jamming, and sensor interference all degrade AI performance in ways that are difficult to model without operational experience. Ukrainian forces have developed countermeasures and redundancy architectures that most European forces have not had to consider. This knowledge is directly applicable to European defence planning — and it is not adequately represented in current NATO doctrine.
The question of how much autonomy to grant AI systems in kinetic contexts is not primarily a technical question. It is an operational and institutional one. Ukrainian experience demonstrates that the most effective outcomes occur not when AI is maximally autonomous but when human decision-making is optimally supported. The design of that human-AI interface — who sees what, who decides what, in what timeframe — is the variable that determines operational effectiveness.
The most strategically significant AI application in Ukraine is not weapons guidance or autonomous systems. It is intelligence fusion — the combination of satellite imagery, electronic signals, human intelligence, and open-source data into real-time operational pictures at speeds that were previously impossible. The organisations that have this capability make better decisions, faster, at every level of command.
None of this information is secret. Ukrainian defence officials have been remarkably open about operational lessons, particularly in the context of NATO and EU partner discussions. The challenge is not information availability. It is institutional absorption.
European defence establishments are not designed for rapid learning cycles. Doctrine development, procurement processes, training programmes, and command structures all operate on multi-year timelines. The rate of operational learning in Ukraine is faster than the rate of institutional adaptation in most European defence organisations. Closing that gap requires deliberate effort — and it requires the people who have the operational knowledge to sit in the same room as the people who have the institutional authority to act on it.
By 2027, Ukraine will have been operating AI-enabled systems at scale for five years. The institutional knowledge is maturing. The most experienced practitioners are increasingly available for structured knowledge transfer — not just bilateral conversations, but the kind of multi-party working sessions where doctrine and procurement decisions get shaped.
At the 2026 AI in Defence Summit, Brigadier General Romanov delivered one of the most substantive briefings on operational AI use available outside classified channels. The 2027 programme will go further.
The 2027 AI in Defence Summit brings Ukrainian operational experience directly into European defence planning conversations. 1 March, Brussels. aidefencesummit.eu