Pentagon Investigates AI Role in Missile Strike That Killed 175 at Iranian School


The U.S. military is investigating whether AI played a role in a Tomahawk missile strike on an Iranian elementary school that killed at least 175 people, mostly children. Preliminary findings point to outdated targeting data, but questions persist about Anthropic's Claude AI, which the Pentagon uses for target selection despite designating the company a supply chain risk over its refusal to remove guardrails against autonomous weapons.

Pentagon Investigation Centers on Outdated Targeting Data and AI Role

The U.S. military is conducting an investigation into a Tomahawk missile strike on February 28 that hit Shajarah Tayyebeh elementary school in Minab, Iran, killing at least 175 people, most of them children [1]. Preliminary findings suggest that outdated targeting data provided by the Defense Intelligence Agency led U.S. Central Command to create target coordinates for what was believed to be part of an Iranian military base [2]. However, the school building had been separated from the Islamic Revolutionary Guards Navy base roughly 15 years ago, with a fence erected between 2013 and 2016 [1].

Source: Futurism

Crucially, the Pentagon is investigating whether AI models, including Anthropic's Claude, contributed to the targeting failure [2]. When asked directly, the Pentagon refused to confirm or deny whether AI had any role in the Iranian school bombing [2]. Anonymous officials told the New York Times that AI was "unlikely" to be the cause, suggesting instead that it was probably human error [1].

Anthropic's Claude and the Maven Smart System Under Scrutiny

The investigation has brought attention to AI in military applications, specifically how Claude works with the National Geospatial-Intelligence Agency's Maven Smart System to "identify points of interest for military intelligence officers" [2]. Investigators are examining how the National Geospatial-Intelligence Agency, which analyzes satellite imagery for the military and intelligence community, may have been involved in transmitting faulty targeting information [1].

Open-source investigations raise further questions about the failure. Colorful murals at the school were visible on Google Earth at least eight years ago, calling into question how anyone with access to satellite imagery could have mistaken the school for a legitimate military target [1]. Whatever the investigation concludes, officials maintain that bombing the school was ultimately human error, regardless of how the target was selected [2].

Pentagon Autonomous Weapons Dispute Escalates with Supply Chain Risk Designation

The tragedy unfolds against a backdrop of escalating tensions between the Pentagon and Anthropic. Defense Secretary Pete Hegseth designated Anthropic as a supply chain risk because the AI company refused to drop guardrails that prohibit Claude from being used for mass domestic surveillance and for fully autonomous weapons [1]. This marks the first time the Pentagon has designated a U.S. company as a supply chain risk [1].

Despite the designation, the U.S. military has continued using Claude during operations, with a planned phase-out over six months [1][2]. Anthropic has filed suit against the designation, which has prompted other government contractors to reconsider their relationships with the company, with some characterizing the Trump administration's moves as attempted corporate pressure [1].

Implications for Military AI Deployment and Civilian Safety

The incident raises urgent questions about accountability when AI systems assist in targeting decisions that result in civilian deaths. While officials attribute the strike to outdated data and human error, the investigation into whether AI models contributed to the tragedy highlights the opacity surrounding AI in military applications. More than 1,800 people have died since the start of the conflict, including 7 U.S. service members [1].

President Donald Trump initially attempted to blame Iran for hitting the school, claiming, "We think it was done by Iran. They're very inaccurate, as you know, with their munitions" [1]. However, even Hegseth declined to back up these claims, stating only that the matter was under investigation and insisting that the U.S. does not target civilians [1].

As Anthropic plans to open a permanent office in Washington, D.C., observers see little prospect for resolution unless the company agrees to remove its guardrails against autonomous weapons and mass surveillance [1]. The investigation's findings will likely shape future debates about the role of AI in targeting decisions and the balance between technological capabilities and ethical constraints in military operations.

TheOutpost.ai

© 2026 Triveous Technologies Private Limited