War at the Speed of Software: Is AI Reshaping the Kill Chain?

Luca Berton · 6 min read
#ai #defense #ethics #autonomous-systems #governance

When Software Sets the Pace of War

For 18 years, I have advised organizations on adopting technology responsibly. But the shift happening in modern defense is unlike anything I have seen in the enterprise world. The same AI models that help engineers write code and analysts summarize reports are now being explored for military applications — from intelligence assessment to operational planning.

The U.S. Department of Defense uses a phrase that captures this transformation: accelerating the kill chain. In plain terms, it means compressing the time between identifying a potential target and acting on it — from hours or days down to minutes or seconds.

This is not science fiction. It is happening now.

Decision Dominance: The New Battlefield

Modern military strategy increasingly revolves around decision dominance — the ability to make better decisions, faster than an adversary. The OODA loop (Observe, Orient, Decide, Act) has been a military framework for decades, but AI is compressing each stage dramatically.
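
To make the loop concrete, here is a minimal Python sketch of an OODA-style cycle. The stage names come from the framework itself; every function body, name, and number below is a hypothetical placeholder rather than a description of any real system.

```python
import time
from dataclasses import dataclass

@dataclass
class Observation:
    """Raw inputs gathered during the Observe stage (illustrative only)."""
    sensor_reports: list

def observe() -> Observation:
    # Placeholder: a real system would pull from live sensor feeds.
    return Observation(sensor_reports=["report-a", "report-b"])

def orient(obs: Observation) -> dict:
    # Fuse raw reports into a situational picture (here, a trivial summary).
    return {"picture": f"{len(obs.sensor_reports)} reports fused"}

def decide(picture: dict) -> str:
    # Produce a candidate action; this is the step a human is meant to review.
    return "candidate-action"

def act(action: str) -> None:
    print(f"executing: {action}")

# One pass through the loop. The promise of AI-enabled "decision dominance"
# is that observe() and orient() shrink from hours to seconds, which also
# shrinks the window left for human judgment inside decide().
start = time.monotonic()
act(decide(orient(observe())))
print(f"loop time: {time.monotonic() - start:.6f}s")
```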

AI-enabled platforms can fuse satellite imagery, drone footage, and signals intelligence, then surface patterns and possible targets in near real-time. Key initiatives include:

  • Project Maven — computer vision for analyzing surveillance footage
  • JADC2 (Joint All-Domain Command and Control) — integrating AI across all military domains
  • Predictive analytics — anticipating adversary actions before they happen

Reports from recent operations describe strike tempos that would have been logistically impossible just a decade ago. Tasks that previously required thousands of intelligence analysts can now be supported by small teams augmented with AI-driven analysis tools.

The Technology Stack

The AI systems involved in modern military operations are not single tools — they are platforms that combine multiple capabilities:

  • Sensor fusion — combining data from satellites, drones, signals intelligence, and open sources into a unified operational picture
  • Pattern recognition — identifying anomalies, movements, and potential targets across massive datasets
  • Scenario simulation — modeling multiple courses of action and their likely outcomes
  • Natural language processing — summarizing intelligence reports, drafting assessments, and supporting decision-makers with contextual information
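
To make the first of these capabilities concrete, here is a toy Python sketch of sensor fusion as naive spatial clustering. The Report fields, the radius threshold, and the clustering rule are all invented for illustration; real fusion engines use far richer spatiotemporal and identity matching.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Report:
    """One normalized report from any source (all fields illustrative)."""
    source: str        # e.g. "satellite", "drone", "sigint", "osint"
    location: tuple    # (lat, lon) in decimal degrees
    observed_at: datetime
    confidence: float  # 0.0-1.0, as scored by the upstream pipeline

def fuse(reports: list[Report], radius_deg: float = 0.01) -> list[list[Report]]:
    """Group reports that fall within a small lat/lon box of each other."""
    clusters: list[list[Report]] = []
    for report in reports:
        for cluster in clusters:
            anchor = cluster[0]
            if (abs(anchor.location[0] - report.location[0]) <= radius_deg
                    and abs(anchor.location[1] - report.location[1]) <= radius_deg):
                cluster.append(report)
                break
        else:
            clusters.append([report])
    return clusters

now = datetime.now(timezone.utc)
reports = [
    Report("satellite", (33.5100, 44.4200), now, 0.7),
    Report("drone",     (33.5101, 44.4202), now, 0.9),
    Report("sigint",    (35.0000, 40.0000), now, 0.6),
]
for i, cluster in enumerate(fuse(reports)):
    print(f"cluster {i}: {[r.source for r in cluster]}")
# cluster 0: ['satellite', 'drone']  <- two sources corroborate one location
# cluster 1: ['sigint']              <- a single uncorroborated report
```

The corroboration step is where pattern recognition and confidence scoring enter: a cluster backed by multiple independent sources is treated very differently from a single report.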

Companies like Palantir have built platforms specifically designed for this kind of multi-source intelligence integration. And notably, large language models — the same technology behind commercial AI assistants — have been explored for tasks like intelligence summarization and simulation.

Important distinction: AI supporting analysis is not the same as AI making autonomous lethal decisions. But the line between “supporting” and “driving” decisions can become blurry when the pace of operations accelerates.

The Core Problem: Decision Compression

Here is where the real concern lies — not in dramatic AI-goes-rogue scenarios, but in something far more subtle.

When a system generates a targeting recommendation in seconds, the human who is supposed to provide meaningful human control gets a very small window to approve or reject. At that point, the question becomes: is the human making a deliberative judgment, or simply approving a queue?

This phenomenon is called decision compression, and it raises fundamental questions:

  • Can a human meaningfully evaluate a recommendation they received 30 seconds ago?
  • Does “human in the loop” become a formality when the loop moves faster than human cognition?
  • Who bears responsibility when an AI-assisted decision leads to unintended consequences?
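
The arithmetic behind decision compression is simple enough to sketch. The throughput and staffing numbers below are invented; only the shape of the relationship matters.

```python
# Back-of-the-envelope arithmetic for decision compression.
# All numbers are hypothetical placeholders.

recommendations_per_hour = 120   # assumed AI output rate
operators_on_shift = 2           # assumed humans reviewing the queue

seconds_per_review = (operators_on_shift * 3600) / recommendations_per_hour
print(f"average review window: {seconds_per_review:.0f}s per recommendation")
# -> 60s. Double the output rate and the window halves.
# The system's throughput scales; the operators' cognition does not.
```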

The Ethics and Governance Gap

The U.S. Department of Defense has published AI ethical principles:

  • Responsible — personnel will exercise appropriate judgment and care
  • Equitable — steps will be taken to minimize unintended bias
  • Traceable — relevant personnel will possess an appropriate understanding of the technology
  • Reliable — AI capabilities will have explicit, well-defined uses
  • Governable — AI systems will be designed to fulfill intended functions while possessing the ability to detect and avoid unintended consequences

These are important principles. But principles without enforceable, transparent standards — especially standards that work across international borders — remain aspirational rather than operational.

What the international community needs:

  1. Explainability — if a system flags something, we need to understand why and be able to audit the reasoning
  2. Auditability — decision paths and data chains must be reviewable after the fact
  3. Accountability — clear responsibility chains that prevent what scholars call “moral crumple zones,” where the system drives a decision but a low-level operator absorbs the legal and ethical consequences
  4. International frameworks — something analogous to the Geneva Conventions, but for autonomous and AI-assisted weapons systems
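
As a rough illustration of what auditability could mean in practice, here is a hedged Python sketch of an append-only, hash-chained decision record. The field names and the chaining scheme are assumptions made for illustration, not a reference to any fielded system.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_decision_record(log: list, *, model_version: str,
                           inputs_digest: str, recommendation: str,
                           human_decision: str, operator_id: str) -> dict:
    """Append a tamper-evident record of one AI-assisted decision.

    Each record embeds the hash of the previous record, so any later
    edit anywhere in the chain is detectable on review.
    """
    prev_hash = log[-1]["record_hash"] if log else "0" * 64
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,    # which system produced the output
        "inputs_digest": inputs_digest,    # hash of the evidence shown
        "recommendation": recommendation,  # what the system proposed
        "human_decision": human_decision,  # what the operator actually chose
        "operator_id": operator_id,        # who is accountable for the call
        "prev_hash": prev_hash,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["record_hash"] = hashlib.sha256(payload).hexdigest()
    log.append(record)
    return record

audit_log: list = []
append_decision_record(audit_log, model_version="m-1.3",
                       inputs_digest="abc123", recommendation="flag",
                       human_decision="rejected", operator_id="op-42")
print(audit_log[0]["record_hash"][:16], "...")
```

Recording the recommendation and the human decision side by side is the point: it makes visible whether operators are deliberating or merely approving the queue, and it keeps responsibility attached to a named role rather than dissolving into the system.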

The Vendor Tension

There is an emerging tension between military procurement priorities and technology company ethics policies.

Different AI vendors set different safety boundaries. Some build in stronger restrictions to reduce the chance their tools enable fully autonomous lethal use. Others emphasize broader flexibility, as long as use is lawful and authorized.

This creates real friction: militaries want maximum capability and speed; companies worry about misuse, accountability, and reputational harm. When those incentives clash, procurement choices shift — and the companies with fewer restrictions may gain an advantage.

This is not just a business problem. It is a governance problem that affects how AI safety standards evolve globally.

What History Teaches Us

We have seen what happens when automated systems are deployed without adequate oversight. Reports of semi-automated targeting tools — systems that score or rank individuals by “likelihood” or “risk” — have raised serious concerns about error rates and civilian harm.

In a commercial context, even a small error rate means a bad recommendation or a lost sale. In a military context, the same error rate means something fundamentally different.
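
A quick, purely illustrative calculation makes the scale problem plain; every number below is invented.

```python
# Why a "small" error rate means something different at scale.
# All numbers are hypothetical placeholders.

flags_per_month = 10_000      # assumed volume of AI-generated flags
false_positive_rate = 0.01    # assumed 1% error rate

wrong_flags = flags_per_month * false_positive_rate
print(f"{wrong_flags:.0f} incorrect flags per month")
# -> 100. In e-commerce, that is 100 bad product suggestions.
# In targeting support, each one is a different kind of error entirely.
```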

This is why the conversation about AI in defense cannot be separated from the broader conversation about AI governance, explainability, and human oversight that the technology industry is already having.

Looking Forward

The integration of AI into military operations is not going to slow down. The strategic advantages are too significant for any major power to unilaterally step back. But how we integrate these systems — with what safeguards, what transparency, and what accountability — will define whether AI makes conflict more precise or simply more frequent.

Key questions for policymakers, technologists, and citizens:

  • Should there be an international treaty governing autonomous weapons systems?
  • What does “meaningful human control” actually require in practice?
  • How do we ensure that the speed of AI-assisted operations does not outpace our ability to assess their consequences?
  • Can we build AI systems that are fast enough for military advantage but transparent enough for democratic accountability?

War is one of the most consequential things humans do. If we accelerate it beyond our capacity for reflection, we do not just make it more efficient. We make it easier to do — more often — and with less deliberation.

That is not a technology problem. It is a human one.

Watch the full video breakdown: AI in Warfare — Full Analysis


Luca Berton

AI & Cloud Advisor with 18+ years of experience. Author of 8 technical books, creator of Ansible Pilot, and instructor at CopyPasteLearn Academy. Speaker at KubeCon EU & Red Hat Summit 2026.
