Yaroslav Azhnyuk's previous professional life was defined by the domestic and the benign. As the co-founder of Petcube, he built a business around smart cameras that allowed owners to monitor their pets from afar. But the Russian invasion of Ukraine in 2022 forced a radical pivot. Azhnyuk, a self-described liberal engineer without military training, eventually stepped down from his CEO role to address a more existential problem: how to defend a nation against a numerically superior aggressor through the rapid iteration of autonomous systems.

The trajectory is not unique to Azhnyuk. Since the early months of the war, Ukraine's technology sector has undergone a broad reorientation, with software developers, hardware engineers, and startup founders redirecting their skills toward defense applications. What distinguishes the current phase is the nature of the technology being developed: not merely remote-controlled drones operated by human pilots, but autonomous systems capable of independent navigation, target identification, and coordinated action without continuous human input.

From remote control to algorithmic conflict

The vision Azhnyuk now describes is one of layered, algorithmic conflict. He envisions a near future where swarms of autonomous drones carry secondary drones to protect against yet more drones — a recursive loop of interception and counter-interception managed by AI agents. In this scenario, the human general is no longer a tactical operator but a high-level overseer of a self-executing fleet. The shift represents an inflection point in modern warfare, moving away from remote-controlled devices toward truly independent machines.

This evolution is born of necessity. From the outset of the full-scale invasion, Ukrainian troops have relied on off-the-shelf components and consumer-grade hardware to bridge the gap in conventional arms. First-person-view drones, assembled from commercially available parts and piloted over video links, became a defining feature of the battlefield. But as the theater of war becomes increasingly saturated with electronic warfare and signal jamming, reliance on human pilots is becoming a liability. Jamming severs the communication link between operator and drone, rendering the device useless. Autonomous navigation — where the drone follows a pre-programmed or AI-generated flight path without requiring a live signal — offers a direct countermeasure to that vulnerability.

The transition, then, is not just a technological leap but a strategic requirement for survival on a battlefield where the window for human reaction is rapidly closing. When opposing forces deploy their own electronic countermeasures within seconds, a system that depends on a human operator thousands of meters away faces an inherent latency disadvantage.

The governance gap

The technical feasibility of autonomous drone swarms raises questions that extend well beyond the Ukrainian theater. International humanitarian law, including the Geneva Conventions and their additional protocols, was drafted in an era when a human decision preceded every act of lethal force. The concept of "meaningful human control" — a term that has become central to arms-control discussions at the United Nations — sits uneasily alongside a system designed to operate faster than any human can intervene.

Historical precedent offers limited guidance. Landmines and autonomous naval mines prompted international treaty efforts decades after their widespread deployment, not before. The pattern suggests that governance frameworks tend to lag behind the operational reality of new weapons categories. The speed at which drone autonomy is advancing in an active conflict zone compresses that timeline further.

For Ukraine, the calculus is immediate and existential: autonomous systems offer a way to offset asymmetries in manpower and conventional materiel. For the broader defense and technology sectors, the war is functioning as an accelerated proving ground, generating operational data on autonomous coordination, electronic warfare resilience, and AI-driven targeting at a pace that no peacetime testing program could replicate.

What remains unresolved is the threshold. At what point does the speed and complexity of autonomous engagement make human oversight not merely impractical but structurally impossible? And if that threshold has already been crossed in certain tactical contexts, the debate over whether to permit autonomous weapons may be less about policy design than about acknowledging a fait accompli. The tension between operational necessity and normative constraint is unlikely to resolve neatly — and the front lines of eastern Ukraine are where that tension is being tested in real time.

With reporting from IEEE Spectrum Robotics.