
Ukrainian battlefield innovation is measured in weeks, not years. At the 2026 summit, the operators and engineers inside that cycle explained how — and what European partners need to understand.
One of the most disorienting things about attending the 2026 AI in Defence Summit was the proximity of the evidence. The summit took place in Brussels on 2 February 2026. The battlefield innovation under discussion, from drone swarms and autonomous guidance systems to AI-assisted targeting and electronic warfare countermeasures, was not being described in the past tense. The panel participants were working on it the week before and would return to it the week after.
The session titled "Lessons from the Battlefield" included remarks from Brigadier General Romanov, Head of the Research and Development Department of Ukraine's Defence Forces, and Lieutenant Colonel Kurt Berkall, Deputy Commanding General of the Third Army Corps. Both spoke with the compression of people who understand that operational necessity is the actual driver of the development cycle they are inside.
When Lieutenant Colonel Berkall was asked directly what drives the pace of AI development in Ukraine — why new capabilities are being developed and deployed in weeks rather than years — his answer was brief.
That necessity produces a specific operational tempo. New approaches to drone guidance, electronic warfare countermeasures, autonomous navigation, and targeting assistance are developed, tested in the field, and iterated in cycles measured in weeks. The feedback loop between the operators who identify problems and the engineers who build solutions is compressed to the point where a useful product and a useless one can be distinguished in days.
Boris, co-founder of Softset Labs, a company developing specialised AI models for defence and national security, gave this context from the engineering side. Most of what is now called AI in defence, he argued, is not new technology. The underlying methods, from Kalman filters and signal processing to computer vision and small machine-learning models, have been in use for decades. What is genuinely new is the large language model layer, and even that has found limited application in kinetic defence contexts so far. What drives the innovation is not the novelty of the tools but the relentlessness of the operational pressure to improve them.
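For readers without a filtering background, the flavour of method he meant is easy to show. The sketch below is a textbook constant-velocity Kalman filter, the kind of decades-old estimator used to track a moving object through noisy measurements. Every number in it, the time step, the noise covariances, the simulated target, is an illustrative assumption, not a description of any system discussed at the summit.

```python
# Minimal constant-velocity Kalman filter: estimate a target's position
# and speed from noisy position measurements. All parameters hypothetical.
import numpy as np

dt = 0.1                                  # time step (s)
F = np.array([[1, dt], [0, 1]])           # motion model: position, velocity
H = np.array([[1, 0]])                    # we measure position only
Q = np.eye(2) * 0.01                      # process noise covariance
R = np.array([[4.0]])                     # measurement noise covariance

x = np.array([[0.0], [0.0]])              # initial state estimate
P = np.eye(2)                             # initial estimate covariance

def kalman_step(x, P, z):
    # Predict: propagate state and uncertainty through the motion model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: blend the prediction with the noisy measurement z.
    y = z - H @ x_pred                    # innovation
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    return x_pred + K @ y, (np.eye(2) - K @ H) @ P_pred

# Feed in simulated noisy sightings of a target moving at 5 m/s.
rng = np.random.default_rng(0)
for t in range(50):
    z = np.array([[5.0 * t * dt + rng.normal(0, 2.0)]])
    x, P = kalman_step(x, P, z)

print(f"estimated position {x[0, 0]:.1f} m, velocity {x[1, 0]:.1f} m/s")
```

The structure, predict from a motion model and then correct against a measurement, is the same whether the sensor is a lab mock-up or a battlefield radar, which is the panel's point: the method is old, the pressure to refine it is new.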
The panel identified electronic warfare as one of the principal accelerants of AI development on the Ukrainian battlefield. The logic is straightforward. Russian electronic warfare systems jam the communication link between a drone and its operator. When that link is disrupted, a drone that depends on human remote control becomes useless.
The response is to build drones that do not depend on a communication link for their core functions — that can navigate autonomously in denied communication environments, identify targets using onboard vision systems, and maintain flight stability without GPS. This is not autonomy chosen because it is technically elegant. It is autonomy required because the alternative is operational failure.
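As a rough illustration of that requirement, the sketch below expresses link-loss fallback as plain control logic: the asset degrades from operator control to onboard autonomy as the uplink and then GPS are denied. The mode names, thresholds, and structure are hypothetical, chosen only to make the principle concrete, not drawn from any fielded system.

```python
# Hypothetical fallback logic for operating in denied environments.
from dataclasses import dataclass
from enum import Enum, auto

class Mode(Enum):
    REMOTE = auto()       # operator flies the asset over the radio link
    AUTONOMOUS = auto()   # onboard navigation; no link required
    VISUAL_NAV = auto()   # vision-based positioning; no GPS required

@dataclass
class LinkStatus:
    uplink_ok: bool
    gps_ok: bool

def select_mode(status: LinkStatus) -> Mode:
    # The principle from the panel: no core function may depend on a
    # channel that electronic warfare can take away.
    if status.uplink_ok and status.gps_ok:
        return Mode.REMOTE
    if status.gps_ok:
        return Mode.AUTONOMOUS    # link jammed, GPS still usable
    return Mode.VISUAL_NAV        # both denied: navigate by onboard vision

# Example: the asset enters a jammed area, then loses GPS as well.
for status in (LinkStatus(True, True), LinkStatus(False, True),
               LinkStatus(False, False)):
    print(status, "->", select_mode(status).name)
```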
Irene Benito Rodríguez, Co-Chair of NATO's Autonomy Task Force, confirmed that this operational imperative is shaping NATO's policy work. She described the shift from individual unmanned assets operated by a dedicated crew to swarm configurations where one operator commands a group of assets through autonomous coordination. The practical constraint is not the technology — it is the availability of trained operators to manage swarms of drones at the scale being deployed.
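The shift she described can be shown in miniature. In the hypothetical sketch below, a single operator command, one goal point, is expanded by coordination logic into per-drone waypoints. The formation geometry and function names are illustrative assumptions, not any NATO or Ukrainian implementation; the point is only that the operator issues group-level intent rather than piloting each asset.

```python
# One operator, many assets: expand one group-level command into tasks.
import numpy as np

def assign_formation(goal: np.ndarray, n_drones: int,
                     spacing: float = 25.0) -> np.ndarray:
    """Expand a single operator command (one goal point) into one
    waypoint per drone, arranged on a ring around the goal."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_drones, endpoint=False)
    offsets = spacing * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return goal + offsets                 # shape (n_drones, 2)

# One command from the operator...
goal = np.array([1200.0, 450.0])
# ...fans out into individual tasks with no per-drone piloting.
for i, wp in enumerate(assign_formation(goal, n_drones=6)):
    print(f"drone {i}: fly to ({wp[0]:.0f}, {wp[1]:.0f})")
```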
Lieutenant Colonel Berkall's remarks included a point that is easy to misread as a constraint and is better understood as a doctrine. He was explicit that AI does not and should not replace the human decision-maker in the chain of command.
He described the specific functions where AI is deployed: automating the detection phase so that a human operator receives a cued recommendation rather than having to monitor raw sensor feeds continuously; assisting in the targeting process so that the human decision is better informed rather than replaced; enabling autonomous navigation so that assets can operate in denied environments. In each case, the human remains the decision-maker. The AI narrows the problem.
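A minimal sketch of that division of labour, with a hypothetical detector and threshold standing in for real systems, might look like the following: the model filters and ranks raw detections into cues, and the engagement decision sits outside the code entirely, with the operator.

```python
# Hypothetical cueing pipeline: the model narrows the problem, the
# human decides. Detector outputs and threshold are invented examples.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float
    location: tuple          # (x, y) in the sensor frame

CUE_THRESHOLD = 0.6          # hypothetical: below this, do not interrupt the operator

def cue_operator(detections: list[Detection]) -> list[Detection]:
    """Automate the detection phase: filter and rank raw detections so
    the operator reviews a short, prioritised list, not raw feeds."""
    cues = [d for d in detections if d.confidence >= CUE_THRESHOLD]
    return sorted(cues, key=lambda d: d.confidence, reverse=True)

raw = [
    Detection("vehicle", 0.91, (340, 122)),
    Detection("unknown", 0.38, (10, 980)),   # below threshold: never shown
    Detection("vehicle", 0.72, (602, 415)),
]

for cue in cue_operator(raw):
    # Each cue is presented to a human. The engagement decision is made
    # by the operator and is deliberately not a model output.
    print(f"CUE: {cue.label} at {cue.location}, confidence {cue.confidence:.2f}")
```

Nothing in the sketch engages anything; that is the design choice the doctrine mandates.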
This is not a theoretical position adopted to satisfy EU compliance requirements. It is a practical one, grounded in the requirements of international humanitarian law and in the operational reality that current AI capability cannot reliably distinguish military from civilian objects across the full range of conditions encountered in the field.
The panel's message to the European innovation ecosystem was consistent across speakers. The most important thing Western partners can do is close the feedback loop between developers and operators. Products built in a laboratory without access to operational conditions — real electronic warfare environments, real communication denial, real field testing — will not meet the standard required.
The implication for the European defence AI ecosystem is that the procurement and partnership models that separate development from deployment create a structural disadvantage. The organisations moving fastest are the ones that have eliminated the distance between the people identifying capability needs and the people building solutions. The 2027 summit's autonomy session is designed to work through precisely what that means for European institutions that have not yet made the transition.