
The Rise of AI Warfare: Is Human Control Disappearing?

Luca Berton · 5 min read
#ai#defense#ethics#autonomous-systems#governance

The Disappearing Human

Most of us interact with AI through productivity tools — writing assistants, code generators, search engines. But the same underlying technology is being deployed in contexts where the stakes are incomparably higher: modern military operations.

For 18 years, I have advised organizations on technology adoption. The pattern I see in defense mirrors what I see in enterprise: AI does not replace humans overnight. It gradually absorbs tasks — first the routine ones, then the complex ones — until the human role shifts from decision-maker to decision-approver.

In an enterprise context, that shift means someone rubber-stamps a recommendation engine’s output. In a military context, it means something very different.

How AI Is Reshaping Military Operations

The modern military AI stack is built around a concept called the OODA loop — Observe, Orient, Decide, Act. AI is compressing every stage:

Observe

AI-powered sensors, drones, and satellites collect and analyze data streams at volumes no human team could process. Machine learning filters noise, identifies anomalies, and flags items of interest.

Orient

Algorithms analyze historical data, model adversary behaviors, and create digital representations of the operational environment. What used to take teams of analysts days now happens in minutes.

Decide

AI simulates multiple scenarios, provides probabilistic risk assessments, and ranks courses of action. The human commander receives a curated set of options — pre-analyzed, pre-scored, pre-prioritized.

Act

In some systems, the action phase is moving toward reduced human intervention — from “human in the loop” (human decides) to “human on the loop” (human supervises) to, in some cases, “human out of the loop” (fully autonomous).

This progression is where the fundamental questions arise.

The Tension Between Speed and Oversight

The strategic advantage of AI in military operations is clear: faster, more comprehensive analysis leads to better-informed decisions. The concern is equally clear: as the speed of operations increases, the window for meaningful human judgment shrinks.

Consider the practical reality:

  1. An AI system processes thousands of data points and generates a targeting recommendation
  2. The recommendation arrives at a human operator’s screen
  3. The operator has seconds — not minutes, not hours — to evaluate and approve
  4. The operator likely lacks the technical ability to audit the AI’s reasoning in real time
  5. Operational pressure and time constraints favor approval over rejection

This is not a hypothetical. Multiple reports describe operational tempos where AI-assisted analysis has dramatically reduced the number of personnel needed for complex targeting operations — from thousands to dozens.

The Corporate Ethics Battlefield

An underreported dimension of AI in warfare is the tension between defense procurement and corporate AI safety policies.

Technology companies building frontier AI models face a choice: how much should their safety frameworks restrict military applications? Some companies maintain strict ethical guardrails — preventing their models from being used in fully autonomous lethal systems. Others take a more permissive approach, allowing any lawful use.

This creates a market dynamic where the companies with fewer restrictions may be preferred for defense contracts. The implications are significant:

  • Safety-focused companies risk being excluded from government procurement
  • Permissive companies set the de facto standard for acceptable AI use in warfare
  • The global AI safety ecosystem is shaped by defense procurement decisions

Lessons from Automated Targeting

We do not need to speculate about the risks of AI-assisted targeting. Published reports and investigations have documented cases where automated or semi-automated systems were used in real conflicts:

  • Scoring systems that ranked individuals by likelihood of being combatants, with reported error rates that would be unacceptable in any other domain
  • Decision compression where the time between AI recommendation and human approval was measured in seconds rather than the minutes or hours that meaningful review requires
  • Accountability gaps where it remained unclear whether responsibility for errors lay with the system designers, the operators, the commanders, or the political leadership

These are not edge cases. They represent the central challenge of integrating AI into life-and-death decisions.

What Meaningful Control Requires

If “human control” is to be more than a checkbox, it requires structural changes:

Explainability

Every AI-generated recommendation must come with an accessible explanation of why it was generated. Not a confidence score — an actual reasoning chain that a human can evaluate.
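One way to make this requirement structural rather than aspirational is to refuse, at the data-model level, any recommendation that arrives without a reasoning chain attached. A hypothetical sketch (the field names are my own, not from any deployed system):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ExplainedRecommendation:
    action: str
    confidence: float     # a score alone is not an explanation
    reasoning: list[str]  # the evidence chain a human reviewer can evaluate

    def __post_init__(self) -> None:
        # Refuse to construct an "explained" recommendation with no explanation.
        if not self.reasoning:
            raise ValueError("recommendation rejected: empty reasoning chain")
```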

Time for Deliberation

Operational protocols must build in sufficient time for human review. If the system operates faster than humans can meaningfully evaluate, the “human in the loop” is a fiction.
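Such a protocol can be enforced in code rather than left to policy: treat any verdict returned faster than the review floor as a non-decision. A sketch under assumed names (`min_review_seconds` is an illustrative parameter, not an established standard):

```python
import time
from typing import Callable


def gated_approval(recommendation: str,
                   min_review_seconds: float,
                   decide: Callable[[str], bool]) -> bool:
    """Reject any verdict delivered faster than the review floor.

    `decide` is whatever presents the recommendation to a human and
    returns True (approve) or False (reject).
    """
    start = time.monotonic()
    verdict = decide(recommendation)
    elapsed = time.monotonic() - start
    if elapsed < min_review_seconds:
        # A sub-floor response is treated as no decision at all,
        # not as approval by default.
        raise ValueError(
            f"review took {elapsed:.3f}s, below the "
            f"{min_review_seconds}s deliberation floor")
    return verdict
```

The point is the failure mode: when meaningful review time is impossible, the honest behavior is to refuse, not to approve.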

Independent Audit

AI-assisted decisions — especially those with lethal consequences — must be auditable after the fact by independent parties. This is analogous to how aviation accidents are investigated: every data point, every decision, every input must be reconstructable.
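The aviation analogy suggests a concrete mechanism: an append-only, hash-chained log in which each record commits to everything recorded before it, so later tampering or deletion is detectable by an independent auditor. A minimal sketch (the record fields are illustrative):

```python
import hashlib
import json


def append_record(chain: list[dict], record: dict) -> list[dict]:
    """Append a record whose hash covers the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev_hash": prev_hash, **record}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain


def verify_chain(chain: list[dict]) -> bool:
    """Recompute every hash; any edited or dropped entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body.get("prev_hash") != prev_hash:
            return False
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True
```

This does not prevent bad decisions; it makes them reconstructable, which is the precondition for every accountability mechanism discussed below.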

International Standards

National ethical principles are necessary but insufficient. Without international frameworks — something analogous to the laws of armed conflict, but specifically addressing autonomous systems — there is no consistent baseline for responsible use.

Accountability Chains

Clear legal and ethical responsibility must attach to AI-assisted decisions. The concept of “moral crumple zones” — where a low-level operator absorbs blame for a system-driven outcome — must be explicitly addressed in both military doctrine and international law.

The Bigger Picture

The integration of AI into military operations reflects a broader pattern in how AI is changing decision-making across every domain. The same questions about explainability, accountability, and human oversight that arise in healthcare, criminal justice, and financial services are amplified in the defense context.

The difference is that in warfare, the consequences of getting it wrong are measured in human lives.

As technologists, citizens, and policymakers, we have a responsibility to engage with these questions — not after the systems are deployed, but while the frameworks are still being designed.

Watch the full analysis: The Rise of AI Warfare


Luca Berton

AI & Cloud Advisor with 18+ years of experience. Author of 8 technical books, creator of Ansible Pilot, and instructor at CopyPasteLearn Academy. Speaker at KubeCon EU & Red Hat Summit 2026.
