The rapid integration and deployment of AI within military systems have triggered ethical debates and controversies. The integration of AI into military systems complicates legal accountability. The tragic loss of civilians during the military campaign in Iran shows how difficult it is to exercise complete oversight and control over AI systems.
The AI-assisted manoeuvres displayed in the two operations, Operation Absolute Resolve in Venezuela and Operation Epic Fury in Iran, showcased elements of the ‘AI Acceleration Strategy’ unveiled by the US Department of War on 12 January 2026[1]. The doctrine integrated Frontier AI – the most advanced, large-scale foundation models, capable of performing a wide variety of tasks and often matching or exceeding human capabilities in specific cognitive areas – into military systems, with the aim of making the US military an ‘undisputed AI-enabled fighting force’.
The doctrine aimed to shift from human-dependent decision cycles to autonomous algorithmic kill chains. This AI-first approach was pursued through seven targeted Pace Setting Projects (PSPs) designed to change the execution standards of modern warfare. These included ‘Swarm Forge’, which introduces AI-enabled capabilities into swarm systems; ‘Agent Network’, which supports battle management and decision making from planning to kill-chain execution; ‘Ender’s Foundry’, which allows for tactical simulation and integration; and ‘Open Arsenal’, which cuts the time needed to convert technical intelligence into kinetic capability from years to hours.
Other initiatives operated at the enterprise level: ‘GenAI.mil’, which provided department-wide access to generative AI models such as xAI’s Grok and Google’s Gemini; ‘Logi-Link’, which automates predictive logistics and sustainment in high-threat zones; and ‘Aegis Shield’, which provides autonomous defensive measures against adversarial electronic and cyber interference[2].
Most importantly, the doctrine excludes social and political variables from algorithmic decision cycles, prioritising decision superiority and lethality alone. The deployment of Anthropic’s Claude LLM through the DoD’s Palantir platforms – especially the Maven Smart System and the AI Platform (AIP) – allowed commanders to seek operational assistance, interrogate datasets and derive instant tactical guidance for efficient decision making[3].
The ‘AI Acceleration Strategy’ found its operational proof of concept in Israel’s recent military campaigns, which analysts identify as examples of automated warfare.[4] AI-based Decision Support Systems (DSS) identified structural targets at a rate previously impossible for human analysts. While a human intelligence officer might identify 50 targets a year, ‘The Gospel’ can generate 100 ‘target recommendations’ per day, effectively functioning as a ‘target factory’.[5]
The US conducted Operation Absolute Resolve on 3 January 2026, targeting Venezuela’s capital, Caracas. It involved a multi-domain military strike operation to capture Venezuelan President Nicolas Maduro and his spouse, Cilia Flores, on charges of narcoterrorism[6]. During this operation, the US government relied heavily on AI for strategic advisory, intelligence fusion and battlespace prediction. Military strategists used AI to process vast datasets, fuse multi-modal intelligence, and map kinetic footprints, security structures and protocols. Low-observable platforms such as the RQ-170 Sentinel unmanned aircraft were used to provide real-time telemetry across the region[7].
Palantir’s tracking algorithm enabled continuous telemetry, combining signals intelligence (SIGINT) from the National Security Agency with high-resolution satellite imagery from the National Geospatial-Intelligence Agency. The most crucial component of this phase was the integration of Anthropic’s generative AI model, Claude, into military systems. It helped analyse thousands of hours of audio intercepts in Persian and Spanish to identify fractures, communication latencies and loyalties within the Venezuelan military command[8]. Strategists engaged with advanced AI models to war-game options and ground scenarios backed by complex game-theoretic models. The AI system generated possible rupture points and infiltration vectors, and modelled the effects of potential cyber-blinding operations on the power grid.
The raid served as a benchmark example of AI integration and synchronisation across multiple domains. The operation included approximately 150 aerial platforms launched from 20 different points spread across the entire Western Hemisphere[9]. This required smooth temporal coordination, which AI routing systems provided. Stealth fighters like the F-35 and F-22, along with EA-18G Growlers, worked to flood the electromagnetic spectrum, blinding radar systems. At the same time, US Cyber Command and Space Command attacked the local power grid, resulting in a complete blackout and hindering defence communications[10].
The grid failure damaged at least three electrical substations and demonstrated the efficiency of the US hybrid strategy, which seamlessly blended offensive cyber operations with physical strikes: a campaign that involved months of effort to source cyber targets, days to select kinetic targets, and just one night to integrate them fully into a successful operational plan[11]. Under this algorithmically secured roof, a tactical extraction force comprising the 160th Special Operations Aviation Regiment (Airborne), also known as the Night Stalkers, Delta Force operators and the FBI Hostage Rescue Team penetrated the Fuerte Tiuna military complex to attack the targets.
Operation Epic Fury was unleashed at 1:15 am on 28 February 2026, with American and Israeli strikes targeting Islamic Revolutionary Guard Corps (IRGC) headquarters, ballistic launch sites, integrated air defence systems and naval infrastructure[12]. In the initial 24 hours, approximately 1,000 Iranian military infrastructure targets were struck, a figure that eventually expanded to 5,500[13].
The most striking feature of this campaign was the ‘decision compression’ made possible by AI tools, which allowed top leaders to sift through vast datasets and make efficient decisions swiftly. US Central Command, in particular, used Claude 4 Opus, integrated through the Palantir Foundry platform, to process large volumes of battlefield data into quick, actionable options[14]. Working at machine speed, this process compressed days of error-prone human effort into seconds of work with minimal error.
For a successful decapitation strike against the Supreme Leader, US forces processed 2.3 petabytes of multi-layered data through AI, including 120 million satellite images, day-to-day life patterns and signals intelligence. Processing this data manually would have required at least 328 human analysts working for 100 days[15].
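As a rough illustration of the scale involved, the reported figures imply the following analyst-equivalent throughput. This is a back-of-the-envelope sketch using only the numbers cited above; decimal units (1 PB = 1,000 TB) are assumed.

```python
# Back-of-the-envelope check on the analyst-equivalent workload
# cited in the reporting: 2.3 PB of data, 328 analysts, 100 days.
DATA_PETABYTES = 2.3
ANALYSTS = 328
DAYS = 100

analyst_days = ANALYSTS * DAYS                 # total human effort implied
tb_total = DATA_PETABYTES * 1000               # petabytes -> terabytes (decimal)
tb_per_analyst_day = tb_total / analyst_days   # implied per-analyst throughput

print(f"Total human effort: {analyst_days:,} analyst-days")
print(f"Implied throughput: {tb_per_analyst_day * 1000:.0f} GB per analyst per day")
```

The implied figure, roughly 70 GB of raw multi-source data per analyst per day, underlines why the reporting treats manual processing at this scale as infeasible.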
In addition, the deployment of AI significantly curbed human involvement in the application of lethal force. While AI was confined to advisory stages of a human-executed extraction during the Venezuelan campaign, the events in Iran relied heavily on algorithmic autonomy, with human intervention limited to authorising the final lethal step. This has triggered debate around the human-in-the-loop approach and how this structure could permanently change the nature of warfare.
In Operation Epic Fury, the US military also deployed the Low-Cost Uncrewed Combat Attack Systems (LUCAS) for the first time in war. LUCAS has been reverse-engineered from the Iranian Shahed-136 kamikaze drone, which was also used by Russian forces against Ukraine under the designation Geran 2. The LUCAS systems cost around $35,000 per unit, a fraction of the cost of the conventional precision-guided munitions to which US forces were accustomed. For comparison, a single Patriot PAC-3 interceptor exceeds $3 million, while a Tomahawk cruise missile costs around $2 million. LUCAS thus brings down the cost associated with military systems while increasing efficiency[16].
The firm SpektreWorks designed LUCAS, which leverages Starlink terminals, AI-enabled autonomy for navigation, satellite data links and terminal guidance. For operations in Iran, these drones were ground-launched by Task Force Scorpion Strike (TFSS), constituted specifically to counter Iran’s asymmetric capabilities in drone warfare[17].
LUCAS swarms were positioned systematically to saturate Iran’s radar coverage and create “digital smokescreens” by stressing defensive systems, allowing advanced, expensive strike platforms to operate freely[18]. They used mesh networking for autonomous, cooperative tactics and dynamic target acquisition, forcing air defence systems to track numerous potential targets simultaneously. This doctrine directly exploits the vulnerability of expensive military hardware by fundamentally altering the logistics sustainability and attrition calculus of prolonged engagements[19].
The rapid integration and deployment of AI within military systems have triggered ethical debates and controversies, as seen in the US military’s integration of and reliance on Anthropic’s Claude AI model[20].
Merely hours before Operation Epic Fury, Anthropic CEO Dario Amodei turned down US Secretary of War Pete Hegseth’s request to remove the company’s “red line” constraints on the deployment of the Claude model. Amodei demanded a non-negotiable agreement that the model would never be used for mass surveillance or integrated into weapon systems without human oversight[21]. He argued that complete reliance on such a system shifts the burden of responsibility onto the software and risks catastrophic disaster and loss of human life. This caution aligns with the broader finding that AI models chose nuclear escalation in most simulated war games, underscoring the inherent volatility of unrestricted machine logic[22].
Anthropic’s explicit rejection of the Pentagon’s demand prompted President Trump to issue an executive order blacklisting the firm and labelling its leadership as “radical left”. The order further banned all federal agencies from using any of its products. Secretary Hegseth went on to formally designate Anthropic a national security “supply chain” threat, a classification usually reserved for hostile foreign entities[23].
Despite the public fallout and political rupture, the operational reality made it impossible to remove the Claude model from military systems without causing severe damage. The model was structurally embedded in the Palantir Maven system, the JADC2 network and the larger intelligence architecture[24]. As a result, US Central Command continued using the blacklisted AI to coordinate bombardments in Iran. The rupture also created a window of opportunity for Anthropic’s rival, OpenAI, which swiftly stepped into Claude’s role and signed a defence contract lacking any specific ethical clause[25].
The integration of AI into military systems complicates legal accountability. The current International Humanitarian Law (IHL) framework is ambiguous about liability when a faulty AI-suggested target is merely rubber-stamped by a human: criminal liability in such a case rests with nobody in particular. There are no hard-law enforcement mechanisms governing the introduction of AI into lethal operations, leaving a major gap in international jurisprudence.
Automation bias arises when human operators are overwhelmed by the volume and velocity of data generated by machines, leading them to trust the system’s output without verification.
The speed at which Operation Epic Fury was carried out suggests that commanders had little time to review and decide on the AI agents’ recommended strikes. In such conditions, human oversight and control devolve into performative ‘rubber stamp’ approval, shifting the cognitive burden of lethal actions from humans to machines.
However, the consequences of such offloading are severe. Generative AI models suffer from “hallucinations”, producing plausible but fabricated outputs, and exhibit lower accuracy when deployed in novel, fog-of-war scenarios outside their training purview. These risks became evident on the first day of the operation, when a missile strike hit near a school in southern Iran, killing 165 civilians, including several children[26]. The incident highlighted the danger of taking human verification for granted during crucial military operations[27].
The introduction of low-cost drone systems like LUCAS has curbed the cost of offensive strikes by creating a favourable cost-exchange ratio. Even so, the first 100 hours of Operation Epic Fury alone consumed $3.7 billion, of which $3.5 billion went to replacing unbudgeted munitions. Moreover, the rapid depletion of sophisticated systems such as PAC-3 missiles, SM-6 interceptors and Tomahawk cruise missiles exposed vulnerabilities in the US supply chain. Given the asymmetric nature of the conflict, the US had to use a $3 million Patriot missile to intercept a $35,000 drone[28].
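The asymmetry can be made concrete with simple arithmetic on the unit costs cited above. This is an illustrative sketch using the approximate published figures, not a procurement or budget model.

```python
# Illustrative cost-exchange arithmetic for the figures cited above:
# a ~$3 million Patriot PAC-3 interceptor expended against a ~$35,000
# LUCAS-class drone. Approximate unit costs only.
INTERCEPTOR_COST = 3_000_000   # Patriot PAC-3 interceptor, approx. USD
DRONE_COST = 35_000            # LUCAS-class drone, approx. USD

exchange_ratio = INTERCEPTOR_COST / DRONE_COST
drones_per_interceptor = INTERCEPTOR_COST // DRONE_COST

print(f"Cost-exchange ratio: ~{exchange_ratio:.0f}:1 in the attacker's favour")
print(f"Drones fieldable for the price of one interceptor: {drones_per_interceptor}")
```

At roughly 86:1, every intercepted drone costs the defender nearly two orders of magnitude more than it costs the attacker, which is precisely the attrition calculus the LUCAS doctrine exploits.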
The introduction and integration of AI into military systems have fundamentally altered the dynamics of military operations, compressing planning and decision cycles from months to minutes, replacing kinetic mass with autonomous precision and transferring the cognitive burden of lethal decisions from humans to machines. Likewise, low-cost systems like LUCAS, embodying the ‘Affordable Mass’ concept, signal a necessary evolution that ensures logistical sustainability at a time when expensive traditional platforms are extremely vulnerable.
However, these tactical victories obscure severe strategic, ethical and legal vulnerabilities. The tragic loss of civilians during the campaign in Iran reveals how difficult it is to exercise complete oversight and control over AI systems. The friction surrounding Anthropic’s fallout with the US government further signals that unified, enforceable guidelines with greater human involvement are crucial.
Lastly, the economic burn rate and growing discontent amongst allies suggest that while AI can achieve great results on the battlefield, it simultaneously ruptures the global order. The nation that monopolises operational data and possesses the most advanced neural networks will rule the global security environment, operating in an unrestricted domain where algorithms, rather than established laws of warfare, determine the outcome and the future of the international order.
Views expressed are of the author and do not necessarily reflect the views of the Manohar Parrikar IDSA or of the Government of India.
[1] Mostafa Ahmed, “Between Maduro and Khamenei: Has Artificial Intelligence Replaced Human Intelligence?”, AL Habtoor Research Centre, 3 March 2026.
[2] “Artificial Intelligence Strategy for the Department of War”, Secretary of War, 9 January 2026.
[3] “AI Integration in Operation Epic Fury and Cascading Effects”, The Soufan Center, 3 March 2026.
[4] Yuval Abraham, “‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza”, +972 Magazine, 3 April 2024; Harry Davies and Bethan McKernan, “Top Israeli spy chief exposes his true identity in online security lapse”, The Guardian, 5 April 2024.
[5] Bethan McKernan and Harry Davies, “‘The machine did it coldly’: Israel used AI to identify 37,000 Hamas targets”, The Guardian, 3 April 2024.
[6] CK Smith, “Maduro captured during overnight U.S. strikes on Venezuela’s capital”, Salon, 3 January 2026.
[7] Joseph Trevithick, “Lockheed Confirms RQ-170 Sentinel Spy Drones Took Part In Maduro Capture Mission”, TheWarZone, 29 January 2026.
[8] Ryan et al., “Imagery from Venezuela Shows a Surgical Strike, Not Shock and Awe”, CSIS, 9 January 2026.
[9] Bradley Peniston, “How ‘Absolute Resolve’ harnessed 150 aircraft and more to launch a regime change in Venezuela”, Defense One, 3 January 2026.
[10] Dr Louise Marie Hurel, “Layered Ambiguity: US Cyber Capabilities in the Raid to Extract Maduro from Venezuela”, RUSI, 14 January 2026.
[11] Cynthia Brumfield, “The Caracas operation suggests cyber was part of the plan – just not the whole operation”, CyberScoop, 19 February 2026.
[12] Frederic Lemieux, “Algorithmic Warfare in the Iran Conflict: Operation Epic Fury and Dawn of the AI Battlefield”, Homeland Security, 6 March 2026.
[13] Jon Harper, “Centcom commander touts use of AI in fight against Iran during Operation Epic Fury”, DefenseScoop, 11 March 2026.
[14] Parmy Olson and Bloomberg Opinion, “How Anthropic’s Claude AI Helped US Bomb Iran”, NDTV, 4 March 2026.
[15] Larisa Brown, “How AI helps 20 US troops do the work of 2,000 in Iran war”, The Times, 3 March 2026.
[16] “U.S. Conducts First Combat Use of LUCAS Kamikaze Drone During Operation Epic Fury Against Iran”, TenderNews International, 2026.
[17] Howard Altman, “U.S. Military Has Used Long-Range Kamikaze Drones In Combat For The First Time”, TWZ, 28 February 2026.
[18] Brandi Vincent, “After first combat appearance, LUCAS drones ‘remain ready’ for future Epic Fury strikes against Iran”, DefenseScoop, 2 March 2026.
[19] “Iran’s Shahed Shock: Pentagon Forced Into Drone Rethink as Low-Cost UAV Warfare Exposes U.S. Missile Stockpile Vulnerability”, Defence Security Asia, 14 March 2026.
[20] Ryan Tantalo, “When AI Ethics Collide with National Security: Anthropic Challenges Pentagon Blacklisting”, Law Review, 19 March 2026.
[21] “Anthropic CEO Dario Amodei to Pentagon: This is the ‘Red Line’, will not accept these two demands under any circumstance – “, The Times of India, 28 February 2026.
[22] “King’s study finds AI chose nuclear signalling in 95% of simulated crises”, King’s College London, 27 February 2026.
[23] Justin Hendrix, “A Timeline of the Anthropic-Pentagon Dispute”, Tech Policy Press, 19 March 2026.
[24] Parmy Olson, “Claude AI helped bomb Iran. But how exactly?”, The Economic Times, 4 March 2026.
[25] Harry Booth, “The Most Disruptive Company in the World”, Time, 11 March 2026.
[26] Tess McClure, “Death toll from school bombing in southern Iran reportedly rises to 165”, The Guardian, 1 March 2026.
[27] “Pentagon assessment finds U.S. at fault for strike on school in Iran”, Here and Now, 12 March 2026.
[28] Jakob, “LUCAS: US $35,000 Kamikaze Drone Based on Reverse-Engineered Iranian Shahed-136”, TrendingTopics, 20 March 2026.