Pentagon reveals how military AI chatbots accelerate targeting decisions in Iran operations

Reviewed by Nidhi Govil


Defense officials confirm the US military is using AI chatbots like Claude to analyze intelligence and prioritize targets in Iran, striking 1,000 targets in the first 24 hours. The technology adds a conversational layer to Project Maven, but lawmakers are calling for stricter oversight as concerns grow about human judgment in war and the reliability of AI-powered decision support systems.

Pentagon Deploys AI Chatbots for Targeting Decisions in Iran

The US military has integrated AI chatbots into its combat operations, using them to accelerate targeting decisions during recent strikes in Iran. A defense official told MIT Technology Review that generative AI systems could receive a list of possible targets, analyze the information, and rank priorities while accounting for factors such as aircraft locations [1]. Humans remain responsible for checking and evaluating the results, though the official would not confirm whether this represents current operational use. The military struck 1,000 targets in the first 24 hours of the Iran campaign and reached 5,000 targets within 10 days, according to US Central Command [4]. OpenAI and xAI recently reached agreements for their models to be used by the Pentagon in classified settings, while Anthropic's Claude has been integrated into existing military AI systems [1].

Source: Bloomberg

Project Maven Adds Generative AI Layer to Combat Operations

Since 2017, the US military has developed Project Maven, a big data initiative using computer vision to analyze drone footage and identify targets [1]. Palantir serves as the primary contractor behind the Maven Smart System, which is managed by the National Geospatial-Intelligence Agency and accessible to the Army, Air Force, Space Force, Navy, Marine Corps, and US Central Command [2]. The system applies computer vision algorithms to satellite imagery and automatically detects objects likely to be enemy systems.

Source: The Conversation

Now, generative AI is being added as a conversational chatbot layer, allowing personnel to find and analyze data more quickly when deciding which targets to prioritize [1]. Military officials can use Claude to sift through large volumes of intelligence, according to sources familiar with the matter [2]. Maven's features include an AI Asset Tasking Recommender that can propose which bombers and munitions should be assigned to specific targets [2].

Source: MIT Tech Review

AI in Combat Operations Raises Questions About Human Judgment in War

The use of AI-powered decision support systems in Iran and Venezuela has intensified scrutiny of the role of human judgment in war. Over 1,300 civilians have been killed in airstrikes on Iran, including more than 175 at a girls' school, according to Iranian officials [4]. The New York Times reported that preliminary investigations found outdated targeting data partly responsible for the school strike [1]. Lawmakers are demanding greater oversight of military AI. Rep. Jill Tokuda stated that "human judgment must remain at the center of life-or-death decisions," while Rep. Sara Jacobs warned that "AI tools aren't 100% reliable -- they can fail in subtle ways and yet operators continue to over-trust them" [5]. The concern centers on whether generative AI outputs, which are easier to access but harder to verify than traditional Maven interfaces, could lead to mistakes in war planning [1].

Pentagon Clash With Anthropic Highlights Autonomous Weapon Systems Debate

In late February, the Pentagon labeled Anthropic a supply chain risk after the company refused to grant unconditional access to its Claude models, insisting they not be used for mass surveillance of Americans or for fully autonomous weapons [2]. Defense Secretary Pete Hegseth described Anthropic's stance as an attempt to "seize veto power over the operational decisions of the United States military" [4]. Anthropic CEO Dario Amodei said his company's AI systems are not yet reliable enough to safely power fully autonomous weapons [4]. Despite the blacklisting, Claude continues to be used for intelligence analysis and administrative processes in strikes, according to people familiar with the situation [4]. The military is working toward identifying 1,000 targets not in a day but in a single hour, and is developing capabilities to put AI directly into one-way attack drones for navigation and target location even when communications are severed [4].

Long-Term Implications for AI War Planning and National Security

The effective use of automated systems depends on extensive infrastructure and skilled personnel built up over decades, according to scholars studying strategic technology [3]. While large language models enable faster intelligence processing, success or failure in war typically depends on the people using the technology rather than on the machines themselves. Adm. Brad Cooper, who leads US Central Command, said AI systems help "sift through vast amounts of data in seconds" so leaders can "make smarter decisions faster than the enemy can react," but emphasized that "humans will always make final decisions on what to shoot" [5]. The Pentagon is pursuing other companies for access to AI useful in targeting; OpenAI announced a deal to provide services the same day the Anthropic blacklisting was announced [4]. Demand for oversight of military AI continues to grow as lawmakers call for ethical guardrails and transparency about how much control is ceded to the technology in life-or-death decisions [5].

TheOutpost.ai

© 2026 Triveous Technologies Private Limited