
AI in Combat: When Machines Decide Faster Than Humans

Luca Berton · 5 min read
#ai #defense #ethics #autonomous-systems #governance

The Speed Gap

There is a growing gap between how fast AI systems can process information and how fast humans can meaningfully evaluate that information. In commercial applications — recommendation engines, fraud detection, content moderation — this gap creates inconveniences and occasional errors. In military applications, it creates a fundamental challenge to the concept of human control.

I have spent 18 years advising organizations on technology adoption. The lesson that applies most directly to military AI is this: when you automate the analysis, you inevitably shape the decision. The person who sees only the AI’s output — filtered, ranked, scored — is not making the same decision they would make with raw, unprocessed information.

In warfare, that distinction matters enormously.

How Fast Is Too Fast?

Modern AI-enabled military systems can process data at speeds that make traditional human-led analysis look glacial:

| Phase | Traditional Timeline | AI-Assisted Timeline |
| --- | --- | --- |
| Intelligence collection | Hours to days | Continuous, real-time |
| Data fusion and analysis | Days | Minutes |
| Target identification | Hours | Seconds |
| Course of action development | Hours to days | Minutes |
| Decision and authorization | Minutes to hours | Seconds to minutes |

When the entire cycle from observation to action compresses from days into minutes, the human role fundamentally changes. You are no longer analyzing — you are reacting. And reacting is not the same as deciding.

The Architecture of AI-Assisted Warfare

Modern military AI is not a single system. It is a stack of interconnected capabilities:

Layer 1: Sensor Networks

Satellites, drones, ground sensors, signals intelligence receivers, and open-source intelligence monitors feed continuous data streams into centralized platforms.

Layer 2: Data Fusion

AI platforms like those built by Palantir integrate these data streams into a unified operational picture. Machine learning algorithms identify patterns, correlations, and anomalies that would be invisible to human analysts working with the same data.

Layer 3: Analysis and Recommendation

Large language models and specialized analytical tools process the fused data to generate assessments, identify potential targets, simulate scenarios, and rank courses of action by probability of success and risk.

Layer 4: Human Decision Point

A human operator — theoretically — reviews the AI’s output and makes the final decision. But the quality of this review depends entirely on:

  • Time available — seconds vs. minutes vs. hours
  • Technical literacy — can the operator understand and question the AI’s reasoning?
  • Operational pressure — is there institutional bias toward approving rather than rejecting?
  • Information asymmetry — does the operator have access to information the AI did not consider?

Layer 5: Execution

Depending on the system, execution may involve human-piloted assets, remotely operated systems, or — increasingly — autonomous platforms that carry out the approved action.
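
To make the stack concrete, here is a minimal sketch of the five layers as a data pipeline with an enforced human gate. Everything here is a hypothetical illustration: the type names, the `human_decision_point` function, and the review threshold are assumptions made for this post, not a description of any real system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

# Hypothetical types for illustration only. Real systems are classified
# and vastly more complex; nothing here maps to an actual platform.

@dataclass
class SensorReading:              # Layer 1: raw data from one sensor
    source: str
    payload: dict
    collected_at: datetime

@dataclass
class FusedPicture:               # Layer 2: unified operational picture
    readings: List[SensorReading]
    anomalies: List[str]

@dataclass
class Recommendation:             # Layer 3: one ranked course of action
    description: str
    confidence: float             # model-estimated probability of success
    rationale: str                # why the model ranked it this way

@dataclass
class Decision:                   # Layer 4: the human's recorded choice
    recommendation: Recommendation
    approved: bool
    operator: str
    review_seconds: float         # how long the human actually deliberated
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

MIN_REVIEW_SECONDS = 60.0         # arbitrary threshold for this sketch

def human_decision_point(rec: Recommendation, operator: str,
                         review_seconds: float, approved: bool) -> Decision:
    """Layer 4 gate: refuse to record a decision made faster than the
    minimum review window, forcing escalation instead of a rubber stamp."""
    if review_seconds < MIN_REVIEW_SECONDS:
        raise RuntimeError(
            f"Review took {review_seconds:.0f}s, below the "
            f"{MIN_REVIEW_SECONDS:.0f}s minimum. Escalate, do not approve.")
    return Decision(rec, approved, operator, review_seconds)

def execute(decision: Decision) -> None:   # Layer 5: execution stub
    if decision.approved:
        print(f"Executing: {decision.recommendation.description}")
    else:
        print("Action rejected by human operator.")
```

The design point is the gate: an approval that arrives faster than the review window is treated as a failure condition, not a decision. That is one concrete way to resist the drift toward the nominal human control described in Scenario 2 below.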

Three Scenarios for Human Control

The future of AI in military operations will likely unfold along one of three paths:

Scenario 1: Meaningful Human Control

AI provides analysis and recommendations. Humans have sufficient time, training, and authority to genuinely evaluate, question, and override AI outputs. International frameworks establish minimum standards for human involvement in lethal decisions.

Likelihood: Possible if the international community acts proactively to establish norms before fully autonomous systems are widely deployed.

Scenario 2: Nominal Human Control

AI drives the operational tempo. Humans remain “in the loop” but primarily serve an approval function. The speed of operations makes genuine deliberation impractical. Accountability becomes diffuse.

Likelihood: This is arguably where many advanced military operations already are. The human is present but the system sets the pace.

Scenario 3: Autonomous Operations

AI systems operate independently for defined categories of decisions, with human oversight limited to setting parameters, monitoring outcomes, and intervening in exceptional cases.

Likelihood: Already reality for some defensive systems (e.g., missile defense interception). The question is how far this model extends into offensive operations.

The Governance Challenge

Existing international law — including the laws of armed conflict, international humanitarian law, and the UN Charter — was written for a world where humans made military decisions at human speed. These frameworks are not obsolete, but they need interpretation and extension to address AI-assisted and autonomous operations.

Key governance gaps include:

Attribution: When an AI-assisted strike causes unintended harm, current legal frameworks struggle to assign responsibility clearly. Was the error in the data, the algorithm, the operator’s review, or the commander’s authorization?

Proportionality assessment: International humanitarian law requires that military actions be proportional to the military advantage gained. Can an AI system make this inherently contextual, value-laden judgment?

Distinction: The obligation to distinguish between combatants and civilians requires understanding context, intent, and circumstances that may not be fully captured in the data an AI system processes.

Precaution: The duty to take precautions to minimize civilian harm requires active, informed human judgment — not passive approval of an automated recommendation.

What Technologists Can Do

As people who build and deploy AI systems, we have a particular responsibility:

  1. Advocate for explainability — AI systems used in high-stakes decisions must be interpretable, not just accurate
  2. Design for auditability — every AI-assisted decision should leave a complete, reviewable trail (see the sketch after this list)
  3. Resist “moral crumple zones” — system designers must not create architectures where operators absorb blame for system-level failures
  4. Support international governance — technology companies and engineers should actively participate in developing international norms for AI in warfare
  5. Maintain ethical boundaries — the commercial pressure to remove safety guardrails for defense contracts must be balanced against the long-term consequences of normalizing unrestricted AI in lethal systems
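
To illustrate point 2, here is a minimal sketch of an append-only, hash-chained audit log. The class name, field names, and chaining scheme are assumptions made for this example; no real defense system exposes this API.

```python
import hashlib
import json
from datetime import datetime, timezone
from typing import List

# A sketch of an append-only, tamper-evident decision log. Every field
# name and the chaining scheme are illustrative assumptions.

class AuditTrail:
    """Each record embeds the hash of the previous one, so any
    after-the-fact edit breaks the chain and is detectable on review."""

    def __init__(self) -> None:
        self._records: List[dict] = []
        self._last_hash = "0" * 64    # genesis value

    def record(self, *, model_version: str, inputs_digest: str,
               recommendation: str, operator: str, outcome: str) -> dict:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,    # which model produced the output
            "inputs_digest": inputs_digest,    # hash of the data the model saw
            "recommendation": recommendation,  # what the AI proposed
            "operator": operator,              # who reviewed it
            "outcome": outcome,                # approved / rejected / escalated
            "prev_hash": self._last_hash,
        }
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = self._last_hash
        self._records.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the whole chain; False means a record was altered."""
        prev = "0" * 64
        for entry in self._records:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if prev != entry["hash"]:
                return False
        return True
```

The record deliberately captures the model version and a digest of the inputs alongside the human outcome: the attribution questions raised in the governance section above are unanswerable unless all three are preserved together.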

The Question We Must Answer

The integration of AI into military operations raises a question that extends far beyond defense policy:

As a society, are we comfortable with decisions about human life being made at a speed that exceeds our capacity for reflection?

If the answer is no, then the time to establish frameworks, standards, and safeguards is now — before the technology outpaces our ability to govern it.

If the answer is yes, then we need to be honest about what we are accepting: a world where the most consequential decisions are driven by systems we built but may not fully understand.

Neither answer is comfortable. But avoiding the question is not an option.

Watch the full discussion: AI in Combat — Full Analysis


Luca Berton

AI & Cloud Advisor with 18+ years experience. Author of 8 technical books, creator of Ansible Pilot, and instructor at CopyPasteLearn Academy. Speaker at KubeCon EU & Red Hat Summit 2026.
