Claude's Role in the Recent Military Strike: A Controversial AI Tool

This article explores the implications of AI technology, particularly Claude, in military operations following the recent strikes on Iran.

Claude’s Role in Military Operations

A sudden airstrike in Tehran thrust a Silicon Valley AI company into the spotlight. On February 28, 2026, the U.S. and Israel launched military strikes against Iran, which retaliated by targeting multiple U.S. military bases in the Middle East. Within 24 hours, reports emerged that Iran’s Supreme Leader Khamenei had been killed in the strikes. By the night of March 1, Iranian military officials had confirmed multiple deaths, including that of former President Ahmadinejad.

Amid these events, one detail from a Wall Street Journal report stood out: just hours before the airstrikes, President Trump had ordered federal agencies to stop using products from the AI company Anthropic, yet U.S. Central Command still used Claude, an Anthropic model, for intelligence assessment, target identification, and operational scenario simulation.

The sensitive timing fueled speculation, culminating in an article titled “Deep Dive: How Claude and Palantir Killed Khamenei?” which, lacking any authoritative facts, spun the notion of “AI killing humans” into a technical rumor. Still, the spectacle of “banning while still using” offered a glimpse of AI’s real role in modern warfare, and made the Pentagon’s ban on Anthropic all the more sensitive.

The Ongoing Use of Claude Amidst Controversy

Before the conflict in the Middle East broke out, tensions between the Trump administration and Anthropic had been simmering for months. The dispute began on January 9, when Defense Secretary Hegseth issued a memo calling for the broad integration of AI into the military and demanding unrestricted technical support from partner companies, which required renegotiating existing contracts.

Anthropic held to two core red lines: its AI could not be used for mass surveillance of U.S. citizens, nor be integrated into fully autonomous lethal weapons systems. The company warned that large-scale surveillance, previously infeasible, was becoming possible as AI advanced, a scenario often shorthanded as “Skynet.”

The crux of the dispute was commercial data: Anthropic was willing to let its technology be used on classified materials collected by the NSA under the Foreign Intelligence Surveillance Act, but it sought legally binding commitments from the Defense Department that non-classified commercial data about U.S. citizens (such as location data and browsing history) would not be used. The U.S. government brushed these requests aside, asserting that “U.S. combat personnel will never be held hostage by the ideological whims of large tech companies.”

Anthropic’s hesitance, coupled with maneuvering by its competitor OpenAI, further angered the Trump administration. Hegseth issued an ultimatum to Anthropic: failure to compromise would mean cancellation of a $200 million contract, designation as a “supply chain risk,” and potentially forced compliance under the Defense Production Act. That designation had previously been applied only to foreign companies.

Trump expressed his anger via social media, announcing an immediate halt to all federal agencies’ use of Anthropic’s technology. However, a six-month transition period was established for agencies like the Department of Defense.

Yet just hours after Trump’s announcement, the U.S. military launched its airstrikes against Iran. Insiders confirmed to the Wall Street Journal that Central Command continued to use Claude, though the military declined to comment on which systems were employed in the Middle East operations.

Anthropic CEO Dario Amodei confirmed that the company had previously developed a customized version of Claude for the military, one to two generations ahead of the civilian version, which significantly enhanced the military’s operational capabilities.

The Role of AI in Military Actions

What role does Claude play in military operations? Reports indicate that although the Defense Department has contracted multiple tech companies to develop AI technologies or integrate them into military systems, Claude remains the only AI model approved for use in classified military systems.

Claude has been deployed on classified networks, serving military users through Palantir’s Gotham system, a combination described as “the brain and nervous system of the war machine.” A report from Dongfang Securities noted that Palantir’s Gotham platform received investment from the CIA’s venture capital arm as early as 2005; its core capability is fusing disparate real-world information to improve the efficiency and quality of decision-making.

Claude’s integration takes this capability to a new level. Insiders revealed that Claude handled three core tasks during the recent military action: intelligence assessment, identification of potential targets, and operational scenario simulation. Earlier reports indicated that Claude had also been used in U.S. military actions against Venezuela.

Experts at the Council on Foreign Relations suggested that AI’s role likely centers on open-source intelligence analysis: “My guess is it was used to analyze maps or monitor Venezuelan media sources, such as real-time social media streams, giving the U.S. military more information.”

The pressing question remains: Did Claude actually “kill” Khamenei during this military strike? As of now, no reliable details have been disclosed. However, this question itself points to the subtle distinction between the roles AI is allowed to play in warfare and those it is actually playing.

According to the PLA Daily, the U.S. Department of Defense released an “AI Acceleration Strategy” earlier this year, stating plainly the core objective of “accelerating U.S. military dominance in AI” and proposing a comprehensive plan to build an “AI-first” fighting force. The strategy emphasizes concepts such as “speed wins” and a “wartime posture,” sending strong signals of combat readiness and raising significant international concern.

In combat, the strategy focuses on capability upgrades, including support for intelligent command and decision-making through the “proxy network” project; in the intelligence domain, it aims to compress the cycle from raw intelligence to operational capability from “years” to “hours.”

The Wall Street Journal reported that the U.S. military uses AI systems to analyze vast quantities of intelligence fragments, narrow target-location error margins, and simulate strike plans, feeding the results directly into the Joint All-Domain Command and Control system and synchronizing tactical parameters across all operational units.

In other words, while AI does not literally pull the trigger, it plans the location, timing, and method of pulling the trigger.

These are precisely the “unknowns” that worry Amodei. “I worry about many unknowns,” he said in a recent media interview. “That’s why we try to predict every possible outcome. We are considering the potential for misuse.”

The Divide in Silicon Valley

In the aftermath of the Tehran explosion, Silicon Valley’s AI companies find themselves at a crossroads. On one side is Anthropic. Amodei unexpectedly drew a wave of support on social media, with users urging one another to “cancel ChatGPT subscriptions and switch to Claude”; the day after the airstrike, Claude shot to the top of the App Store’s free chart.

These users may not agree with all of Anthropic’s positions, but they clearly do not want their everyday chatbot to become part of a war machine.

After the blanket ban, Amodei appeared haggard in an interview, explaining: “We are patriotic Americans. Everything we do is for this country.”

In reality, as noted earlier, Anthropic was among the first AI companies cleared for classified military systems, owing to its superior reasoning capabilities and longstanding ties with the Pentagon. The controversy lay in the Pentagon’s demand for unrestricted access to fully autonomous weapons, which crossed red lines Anthropic had set from its inception and explains the company’s hesitance in the negotiations.

However, Amodei also clarified that Anthropic is not fundamentally opposed to such weapons but believes that “current reliability is not sufficient” and wants to discuss regulation and oversight.

The rapid breakdown between the two sides opened a door for OpenAI, which abruptly entered the fray. In January, OpenAI removed its explicit ban on “military and warfare” uses from its usage policy. Two weeks earlier, it had partnered with California-based weapons company Anduril to jointly develop AI weapons systems. On February 28, it officially signed a contract with the Pentagon.

Asked why the Pentagon chose OpenAI, procurement chief Michael was succinct: “As long as it is legal, we want to treat it like any other technology.”

Yet even as Sam Altman announced the Department of Defense contract, his own employees were signing a petition and submitting resignations, while a wave of backlash against ChatGPT surged online.

Whether Claude actually “killed” Khamenei may prove a fleeting question. Researchers bluntly note that tech companies’ hesitance often stems not only from moral concerns but also from the belief that the technology is not yet ready for real combat. The day it is “ready” is likely approaching faster than we think.
