3 Sources
[1]
US military leans into AI for attack on Iran, but the tech doesn't lessen the need for human judgment in war
The U.S. military was able "to strike a blistering 1,000 targets in the first 24 hours of its attack on Iran" thanks in part to its use of artificial intelligence, according to The Washington Post. The military has used Claude, the AI tool from Anthropic, combined with Palantir's Maven system, for real-time targeting and target prioritization in support of combat operations in Iran and Venezuela.

While Claude is only a few years old, the U.S. military's ability to use it, or any other AI, did not emerge overnight. The effective use of automated systems depends on extensive infrastructure and skilled personnel. It is only thanks to many decades of investment and experience that the U.S. can use AI in war today.

In my experience as an international relations scholar studying strategic technology at Georgia Tech, and previously as an intelligence officer in the U.S. Navy, I find that digital systems are only as good as the organizations that use them. Some organizations squander the potential of advanced technologies, while others can compensate for technological weaknesses.

Myth and reality in military AI

Science fiction tales of military AI are often misleading. Popular ideas of killer robots and drone swarms tend to overstate the autonomy of AI systems and understate the role of human beings. Success, or failure, in war usually depends not on machines but on the people who use them.

In the real world, military AI refers to a huge collection of different systems and tasks. The two main categories are automated weapons and decision support systems.

Automated weapon systems have some ability to select or engage targets by themselves. These weapons are more often the subject of science fiction and the focus of considerable debate.

Decision support systems, in contrast, are now at the heart of most modern militaries. These are software applications that provide intelligence and planning information to human personnel. Many military applications of AI, including in current and recent wars in the Middle East, are for decision support systems rather than weapons. Modern combat organizations rely on countless digital applications for intelligence analysis, campaign planning, battle management, communications, logistics, administration and cybersecurity.

Claude is an example of a decision support system, not a weapon. Claude is embedded in the Maven Smart System, used widely by military, intelligence and law enforcement organizations. Maven uses AI algorithms to identify potential targets from satellite and other intelligence data, and Claude helps military planners sort the information and decide on targets and priorities. The Israeli Lavender and Gospel systems used in the Gaza war and elsewhere are also decision support systems. These AI applications provide analytical and planning support, but human beings ultimately make the decisions.

The long history of military AI

Weapons with some degree of autonomy have been used in war for well over a century. Nineteenth-century naval mines exploded on contact. German buzz bombs in World War II were gyroscopically guided. Homing torpedoes and heat-seeking missiles alter their trajectory to intercept maneuvering targets. Many air defense systems, such as Israel's Iron Dome and the U.S. Patriot system, have long offered fully automatic modes.

Robotic drones became prevalent in the wars of the 21st century. Uncrewed systems now perform a variety of "dull, dirty and dangerous" tasks on land, at sea, in the air and in orbit.
Remotely piloted vehicles like the U.S. MQ-9 Reaper or Israeli Hermes 900, which can loiter autonomously for many hours, provide a platform for reconnaissance and strikes. Combatants in the Russia-Ukraine war have pioneered the use of first-person view drones as kamikaze munitions. Some drones rely on AI to acquire targets because electronic jamming precludes remote control by human operators.

But systems that automate reconnaissance and strikes are merely the most visible parts of the automation revolution. The ability to see farther and hit faster dramatically increases the information processing burden on military organizations. This is where decision support systems come in. If automated weapons improve the eyes and arms of a military, decision support systems augment the brain.

Cold War era command and control systems anticipated modern decision support systems such as Israel's AI-enabled Tzayad for battle management. Automation research projects like the United States' Semi-Automatic Ground Environment, or SAGE, in the 1950s produced important innovations in computer memory and interfaces. In the U.S. war in Vietnam, Igloo White gathered intelligence data into a centralized computer for coordinating U.S. airstrikes on North Vietnamese supply lines. The U.S. Defense Advanced Research Projects Agency's strategic computing program in the 1980s spurred advances in semiconductors and expert systems. Indeed, defense funding originally enabled the rise of AI.

Organizations enable automated warfare

Automated weapons and decision support systems rely on complementary organizational innovation. From the Electronic Battlefield of Vietnam to the AirLand Battle doctrine of the late Cold War and later concepts of network-centric warfare, the U.S. military has developed new ideas and organizational concepts.

Particularly noteworthy is the emergence of a new style of special operations during the U.S. global war on terrorism. AI-enabled decision support systems became invaluable for finding terrorist operatives, planning raids to kill or capture them, and analyzing intelligence collected in the process. Systems like Maven became essential for this style of counterterrorism.

The impressive American way of war on display in Venezuela and Iran is the fruition of decades of trial and error. The U.S. military has honed complex processes for gathering intelligence from many sources, analyzing target systems, evaluating options for attacking them, coordinating joint operations and assessing bomb damage. The only reason AI can be used throughout the targeting cycle is that countless human personnel everywhere work to keep it running.

AI gives rise to important concerns about automation bias, or the tendency for people to give excessive weight to automated decisions, in military targeting. But these are not new concerns. Igloo White was often misled by Vietnamese decoys. A state-of-the-art U.S. Aegis cruiser accidentally shot down an Iranian airliner in 1988. Intelligence mistakes led U.S. stealth bombers to accidentally strike the Chinese embassy in Belgrade, Serbia, in 1999. Many Iraqi and Afghan civilians died due to analytical mistakes and cultural biases within the U.S. military.

Most recently, evidence suggests that a Tomahawk cruise missile struck a girls school adjacent to an Iranian naval base, killing about 175 people, mostly students. This targeting could have resulted from a U.S. intelligence failure.
Automated prediction needs human judgment

The successes and failures of decision support systems in war are due more to organizational factors than technology. AI can help organizations improve their efficiency, but AI can also amplify organizational biases. While it may be tempting to blame Lavender for excessive civilian deaths in the Gaza Strip, lax Israeli rules of engagement likely matter more than automation bias.

As the name implies, decision support systems support human decision-making; AI does not replace people. Human personnel still play important roles in designing, managing, interpreting, validating, evaluating, repairing and protecting their systems and data flows. Commanders still command.

In economic terms, AI improves prediction, which means generating new data based on existing data. But prediction is only one part of decision-making. People ultimately make the judgments that matter about what to predict and how to use predictions. People have preferences, values and commitments regarding real-world outcomes, but AI systems intrinsically do not. In my view, this means that increasing military use of AI is actually making humans more important in war, not less.
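The prediction-versus-judgment distinction can be made concrete with a minimal sketch in Python. It is purely illustrative, not a description of any system named above: the model stand-in supplies only a probability, while the payoffs that turn that probability into a decision come from a person.

```python
# Illustrative only: separating machine prediction from human judgment.
# All names and numbers are hypothetical.

def model_predict(evidence: dict) -> float:
    """Stand-in for an AI model: turns existing data into a
    probability estimate. This is the 'prediction' step."""
    return 0.8 if evidence.get("corroborated") else 0.3

def human_decide(p: float, value_if_right: float, cost_if_wrong: float) -> bool:
    """The 'judgment' step: people, not the model, decide what a
    correct call is worth and what an error would cost."""
    expected_value = p * value_if_right - (1 - p) * cost_if_wrong
    return expected_value > 0

p = model_predict({"corroborated": True})

# The same prediction yields different decisions under different values:
print(human_decide(p, value_if_right=10, cost_if_wrong=100))  # False
print(human_decide(p, value_if_right=10, cost_if_wrong=5))    # True
```

The point of the sketch is that changing the payoff arguments flips the decision without touching the model; the values people bring, not the prediction itself, do the deciding.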
[2]
U.S. military is using AI to help plan Iran air attacks, sources say, as lawmakers call for oversight
As the U.S. military expands its use of AI tools to pinpoint targets for airstrikes in Iran, members of Congress are calling for guardrails and greater oversight of the technology's use in war.

Two people with knowledge of the matter, who requested anonymity to discuss sensitive matters, confirmed the military is using AI systems from data analytics company Palantir to identify potential targets in the ongoing attacks. The use of Palantir's software, which relies in part on Anthropic's Claude AI systems, comes as Defense Secretary Pete Hegseth aims to put artificial intelligence at the heart of America's combat operations -- and as he has clashed with Anthropic leadership over limitations on the use of AI.

Yet, as AI assumes a wider role on the battlefield, lawmakers are demanding greater focus on the protections that should govern its use and increased transparency about how much control is ceded to the technology.

"We need a full, impartial review to determine if AI has already harmed or jeopardized lives in the war with Iran," Rep. Jill Tokuda, D-Hawaii, a member of the House Armed Services Committee, told NBC News in response to questions about the use and reliability of AI in military contexts. "Human judgment must remain at the center of life-or-death decisions."

The Defense Department and leading AI companies such as OpenAI and Anthropic have publicly stated that current AI systems should not be able to kill without human signoff. But the concern remains that relying on AI for parts of its operations or decision-making can lead to mistakes in military operations. The Pentagon's chief spokesperson, Sean Parnell, said in a post on X on Feb. 26 that the military did not "want to use AI to develop autonomous weapons that operate without human involvement." The Defense Department did not respond to questions about how the military balances its use of AI to reduce human workloads while verifying analysis and targeting suggestions are accurate.

Lawmakers and independent experts who spoke to NBC News raised alarm over the military's use of such tools, calling for clear safeguards to ensure humans remain involved in life-or-death decisions on the battlefield.

"AI tools aren't 100% reliable -- they can fail in subtle ways and yet operators continue to over-trust them," said Rep. Sara Jacobs, D-Calif., a member of the House Armed Services Committee. "We have a responsibility to enforce strict guardrails on the military's use of AI and guarantee a human is in the loop in every decision to use lethal force, because the cost of getting it wrong could be devastating for civilians and the service members carrying out these missions," she said.

Anthropic's Claude has become a crucial component of Palantir's Maven intelligence analysis program, which was also used in the U.S. operation to capture Venezuelan President Nicolás Maduro. News of Claude's role in recent military actions was first reported by The Wall Street Journal and The Washington Post. But that role has been complicated by Anthropic's clash with Hegseth after the company sought to prevent the military from using its AI for domestic surveillance and autonomous deadly weapons. Last week, the Defense Department labeled Anthropic a threat to national security, a move that threatens to remove it from military use in the coming months. Anthropic filed a lawsuit to fight that designation.

Anthropic declined to comment. Palantir did not respond to a request for comment.

In a video posted to X on Wednesday, Adm. Brad Cooper, leader of U.S. Central Command, acknowledged that AI had become a key tool in helping the U.S. choose targets in Iran. "Our warfighters are leveraging a variety of advanced AI tools. These systems help us sift through vast amounts of data in seconds so our leaders can cut through the noise and make smarter decisions faster than the enemy can react," he said. "Humans will always make final decisions on what to shoot and what not to shoot and when to shoot, but advanced AI tools can turn processes that used to take hours and sometimes even days into seconds."

The Trump administration has publicly embraced using the technology both for the military and throughout the government. Rep. Pat Harrigan, R-N.C., said that AI has already become crucial for rapidly processing military intelligence, including in Iran.

"AI is a tool that helps our warfighters process enormous amounts of data faster than any human could alone, and what we saw in Operation Epic Fury, over 2,000 targets struck with remarkable precision, is a testament to how these capabilities can be used responsibly and effectively," Harrigan, who also serves on the House Armed Services Committee, told NBC News in a statement. "But no AI system replaces the judgment, the training, and the experience of the American warfighter. The human in the loop is not a formality, it is a requirement, and nothing in how our military operates suggests otherwise," he said.

While no lawmakers contacted by NBC News said that AI should be completely removed from military use, some said that more oversight is needed. Sen. Elissa Slotkin, D-Mich., a member of the Senate Armed Services Committee, said that the Defense Department had not done enough to clarify how well humans are vetting AI-assisted or AI-generated military intelligence. "It's really up to the humans, and in this case the Secretary of Defense, to ensure that there's human redundancy for the foreseeable future, and that is what we just don't have confidence in," she said.

Sen. Mark Warner, D-Va., the top Democrat on the Senate Intelligence Committee, said that he is concerned about the military's use of AI to assist with identifying targets and that there are unanswered questions about how the new technology is being used. "This has to be addressed," he told NBC News.

OpenAI and Anthropic, both of which have worked with the U.S. military, have said that even their most advanced systems are error prone, and the world's top AI researchers admit they don't fully understand how leading AI systems work. In an interview with NBC last month, Anthropic CEO Dario Amodei said: "I can't tell you there's a 100% chance that even the systems we build are perfectly reliable." A major OpenAI study published in September found that all major AI chatbots, which rely on systems called large language models, "hallucinate," or periodically fabricate answers.

Sen. Kirsten Gillibrand, D-N.Y., called for clearer rules on how the military can use AI. "The Trump administration has already proven that it is willing to subvert American law to prosecute an unpopular war," she told NBC News. "There is little reason to trust that the DOD will be any more responsible with its use of AI without explicit safeguards."

Mark Beall, head of government affairs at the AI Policy Network, a Washington, D.C., think tank, and the director of AI strategy and policy at the Pentagon from 2018 to 2020, said that while AI could streamline the process of deciding where to strike, it was clear humans still need to thoroughly vet targets.
"There's a lot of steps before the trigger gets pulled. AI systems are being deployed very effectively to accelerate existing workflows and allow commanders and analysts and planners to have better and faster decision making capabilities," he added. "But when it comes to actually deploying weapon systems, this technology is not ready yet." "These systems will get really, really good, and as other adversaries start using them, there will be more pressure to shorten the review of AI outputs in order to operate at useful and effective speeds," Beall said. "We have to figure out how to solve this reliability problem before we get there. No matter what you think about lethal autonomous weapons, making them safe and effective is in the interest of the entire world." Heidy Khlaaf, the chief scientist at the AI Now Institute, a nonprofit that advocates for ethical use of the technology, said she was concerned that reliance on AI to rapidly process information for life-or-death decisions could be a way for militaries to avoid accountability for mistakes. "It's very dangerous that 'speed' is somehow being sold to us as strategic here, when it's really a cover for indiscriminate targeting when you consider how inaccurate these models are," Khlaaf said.
[3]
The artificial intelligence software managing the U.S. war on Iran
During planning, Claude suggested hundreds of targets with precise coordinates and estimated strike results, reducing Iran's ability to respond, even as Trump has ordered a ban on Anthropic use.

To strike approximately 1,000 targets within the first 24 hours of an attack on Iran, the U.S. military relied on the most advanced artificial intelligence ever deployed on the battlefield. This is an intelligent system that will be difficult for the Pentagon to give up, even after severing ties with the company that developed it.

The U.S. military's Maven system, built by data-mining company Palantir, integrates Claude, the artificial intelligence model from Anthropic. According to a report first published by the Wall Street Journal, the system processes massive amounts of classified data from satellites and intelligence sources, providing real-time target scoring and prioritization. During attack planning, Claude suggested hundreds of targets, provided precise coordinates, and even estimated the likely outcomes of the strikes, significantly reducing Iran's ability to respond.

So far, the model has assisted in thwarting terrorist plots and in the raid to capture Venezuelan President Nicolás Maduro, but this is the first time it is managing a large-scale military operation.

The irony is that this unprecedented use is taking place amid a severe conflict. Just hours before the start of the airstrikes on Iran, U.S. President Donald Trump announced a future ban on the use of Anthropic tools by government agencies, giving the Pentagon six months to completely remove them from service. The dramatic move followed a dispute with Anthropic CEO Dario Amodei over the use of these tools for mass domestic surveillance and autonomous weapons.

However, military commanders are so dependent on the system that U.S. officials indicated that if Amodei halts its operation, the government will use its authority to seize the technology. "His decisions cannot cost the life of a single American," noted a source familiar with the matter.

The system, integrated into the Pentagon at the end of 2024, now serves over 20,000 military personnel. In parallel with the American strikes, the Israel Defense Forces reported close cooperation for thousands of hours with the U.S. military in building an extensive target database.

Experts, such as Paul Scharre from the Center for a New American Security, warn that while the system enables planning "at machine speed instead of human speed," humans must supervise it because it "sometimes makes mistakes."

Now, as Claude is on its way out, giants such as xAI and OpenAI have already signed agreements to take its place at the heart of the American war machine.
The U.S. military struck approximately 1,000 targets in Iran within the first 24 hours using AI-powered decision support systems. Anthropic's Claude, integrated into Palantir's Maven system, provided real-time target scoring and prioritization. But the deployment raises urgent questions about human judgment in war as lawmakers call for strict guardrails and transparency over AI's expanding battlefield role.
The U.S. military was able to strike approximately 1,000 targets in Iran within the first 24 hours of its attack, relying heavily on artificial intelligence to plan the air attacks at unprecedented speed and scale [1]. Anthropic's Claude, integrated into Palantir's Maven system, processed massive amounts of classified data from satellites and intelligence sources to provide real-time target scoring and prioritization [3]. During attack planning, Claude suggested hundreds of targets with precise coordinates and estimated strike outcomes, significantly reducing Iran's ability to respond. This marks the first time such AI-powered decision support systems have managed a large-scale military operation, though the technology has previously assisted in thwarting terrorist plots and the operation to capture Venezuelan President Nicolás Maduro [2].
Adm. Brad Cooper, leader of U.S. Central Command, acknowledged that AI had become a key tool in targeting and decision-making. "These systems help us sift through vast amounts of data in seconds so our leaders can cut through the noise and make smarter decisions faster than the enemy can react," he stated, emphasizing that "humans will always make final decisions on what to shoot and what not to shoot and when to shoot" [2]. The system now serves over 20,000 military personnel after its integration into the Pentagon at the end of 2024 [3].
As AI assumes a wider role on the battlefield, members of Congress are calling for guardrails and greater transparency about how much control is ceded to the technology. Rep. Jill Tokuda, D-Hawaii, a member of the House Armed Services Committee, told NBC News: "We need a full, impartial review to determine if AI has already harmed or jeopardized lives in the war with Iran. Human judgment must remain at the center of life-or-death decisions" [2].

Rep. Sara Jacobs, D-Calif., raised concerns about automation bias and reliability. "AI tools aren't 100% reliable -- they can fail in subtle ways and yet operators continue to over-trust them," she said. "We have a responsibility to enforce strict guardrails on the military's use of AI and guarantee a human is in the loop in every decision to use lethal force, because the cost of getting it wrong could be devastating for civilians and the service members carrying out these missions" [2].

The Defense Department and leading AI companies such as OpenAI and Anthropic have publicly stated that current AI systems should not be able to kill without human signoff, yet concerns remain that relying on AI for parts of operations can lead to mistakes [2].
Claude represents an AI-powered decision support system rather than an autonomous weapon, and the distinction is critical for understanding the military use of AI. While killer robots and autonomous weapons that select or engage targets independently dominate public imagination, most modern military applications focus on intelligence analysis, campaign planning, battle management, and data processing [1]. Claude is embedded in the Maven Smart System, used widely by military, intelligence, and law enforcement organizations, where AI algorithms identify potential targets from satellite and other intelligence data while human planners make final decisions [1].
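A minimal sketch of this decision-support pattern follows; it is purely illustrative (Maven's actual design is not public). The defining property is structural: the software may score and rank candidates, but nothing becomes actionable without an explicit human approval step.

```python
# Illustrative human-in-the-loop pattern; not any real system's API.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    model_score: float               # machine-generated priority score
    approved_by: str | None = None   # set only by a human reviewer

def rank(candidates: list[Candidate]) -> list[Candidate]:
    """Decision support: the software may sort and annotate..."""
    return sorted(candidates, key=lambda c: c.model_score, reverse=True)

def approve(candidate: Candidate, reviewer: str) -> Candidate:
    """...but approval exists only as a recorded human act."""
    candidate.approved_by = reviewer
    return candidate

queue = rank([Candidate("A", 0.91), Candidate("B", 0.42)])
actionable = [c for c in queue if c.approved_by is not None]
assert actionable == []  # nothing is actionable until a person signs off
```

In this pattern, an autonomous weapon would be the degenerate case where the software calls approve() on its own output, which is precisely the line the officials quoted here say current deployments do not cross.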
Similar systems have been deployed by other nations. The Israeli Lavender and Gospel systems provide analytical and planning support for airstrikes, though human beings ultimately make the decisions [1]. Paul Scharre of the Center for a New American Security noted that while the system enables planning "at machine speed instead of human speed," humans must supervise it because it "sometimes makes mistakes" [3].
The unprecedented deployment occurs amid a severe conflict between the Defense Department and Anthropic. Just hours before airstrikes on Iran began, President Donald Trump announced a future ban on the use of Anthropic tools by government agencies, giving the Pentagon six months to completely remove them from service [3]. The dramatic move followed a dispute with Anthropic CEO Dario Amodei over the use of these tools for mass domestic surveillance and autonomous weapons. Last week, the Defense Department labeled Anthropic a threat to national security, a move that threatens to remove it from military use in the coming months. Anthropic filed a lawsuit to fight that designation [2].
Military commanders have become so dependent on Palantir's Maven system that U.S. officials indicated that if Amodei halts its operation, the government will use its authority to seize the technology. "His decisions cannot cost the life of a single American," noted a source familiar with the matter [3]. As Claude faces removal, giants such as xAI and OpenAI have already signed agreements to take its place at the heart of the American war machine [3]. The Israel Defense Forces reported close cooperation for thousands of hours with the U.S. military in building an extensive target database during the operations [3].
Summarized by Navi