4 Sources
[1]
AI-driven warfare is here, and the Iran strikes show how fast it's advancing
Cutting corners: When the war in Ukraine began in 2022, it was hailed as the first conflict to utilize the full spectrum of modern technology. The war in Iran, on the other hand, is the first in which AI is playing an integral part, including planning bombing strikes quicker than "the speed of thought."

Reports this week claim Anthropic's Claude AI model was used in early US-Israel operations against Iran, including intelligence analysis and scenario planning tied to targeting. The coverage has reignited concerns that large language models are increasingly being folded into the "kill chain," potentially accelerating decision-making and creating pressure for humans to accept machine-generated options faster than traditional oversight processes allow.

Reports say that Claude was used to assist in the initial strikes on Iran on Saturday that hit a range of targets and killed the country's supreme leader, Ayatollah Ali Khamenei. The US military said it is looking into state media reports of a missile hitting a school in southern Iran that killed 165 people, many of them children.

The use of Claude in Iran came just days after the Trump administration moved to label Anthropic a "supply chain risk." Trump told federal agencies and the military to stop using Anthropic's tools following a breakdown in negotiations over restrictions the company says it wanted: no mass domestic surveillance of Americans and no fully autonomous weapons. Anthropic's tool continues to be used by the military while it is being phased out in favor of models from OpenAI, which struck a deal with the Pentagon over the weekend.

In 2024, Claude became part of a system developed by war-tech firm Palantir that was deployed across the US Department of War and other national security agencies. The system is designed to "dramatically improve intelligence analysis and enable officials in their decision-making processes."

"The AI machine is making recommendations for what to target, which is actually much quicker in some ways than the speed of thought," Craig Jones, a senior lecturer in political geography at Newcastle University and an expert in kill chains, told The Guardian. "So you've got scale and you've got speed, you're [carrying out the] assassination-style strikes at the same time as you're decapitating the regime's ability to respond with all the aerial ballistic missiles. That might have taken days or weeks in historic wars. [Now] you're doing everything at once."

In 2025, Iran claimed it was using domestically developed AI in its missile-targeting systems. However, the country's primary uses of the technology appear to be cyber operations - phishing, DDoS attacks, and other disruptive intrusion attempts against US targets - as well as propaganda campaigns.

Ultimately, AI is no longer a bit player in modern warfare. It's becoming a core element of both offense and defense, shortening the time between surveillance, analysis, and action. Beyond the immediate concerns about AI's tendency to get some things very wrong, there are worries about how this usage will escalate in the future - and what it could mean for humanity.
[2]
Iran war exposes the expanding role of AI in military strike planning
The joint U.S. and Israeli offensive on Iran has done more than escalate a volatile regional conflict. It has revealed how algorithm-based targeting and data-driven intelligence are reshaping the mechanics of warfare. In the first 12 hours alone, U.S. and Israeli forces reportedly carried out nearly 900 strikes on Iranian targets, an operational tempo that would have taken days or even weeks in earlier conflicts.

Beyond the scale and lethality of the strikes, which included hundreds of missions using stealth bombers, cruise missiles, and suicide drones, what stands out most to military analysts and ethicists is the increasing role of artificial intelligence (AI) in planning, analyzing, and potentially executing those operations. Critics warn that this trend could compress decision timelines to levels where human judgment is marginalized, ushering in an era of warfare conducted at what has been described as "faster than the speed of thought."

In military terms, "shortening the kill chain" refers to collapsing the sequence from target identification and intelligence validation to legal clearance and weapons release into a much tighter operational loop. The shrinking of this interval raises fears that human experts may end up merely approving recommendations generated by algorithms. In an environment dictated by speed and automation, the space for hesitation, dissent, or moral restraint may be shrinking just as quickly.
[3]
Iran war heralds era of AI-powered bombing quicker than 'speed of thought'
Speed and scale of US military's AI war planning raises fears human decision-making may be sidelined

The use of AI tools to enable attacks on Iran heralds a new era of bombing quicker than "the speed of thought", experts have said, amid fears human decision-makers could be sidelined.

Anthropic's AI model, Claude, was reportedly used by the US military in the barrage of strikes as the technology "shortens the kill chain" - meaning the process of target identification through to legal approval and strike launch.

The US and Israel, which previously used AI to identify targets in Gaza, launched almost 900 strikes on Iranian targets in the first 12 hours alone, during which Israeli missiles killed Iran's supreme leader, Ayatollah Ali Khamenei.

Academics studying the field say AI is collapsing the planning time required for complex strikes - a phenomenon known as "decision compression", which some fear could result in human military and legal experts merely rubber-stamping automated strike plans.

In 2024 the San Francisco-based Anthropic deployed its model across the US Department of War and other national security agencies to speed up war planning. Claude became part of a system developed by the war-tech company Palantir with the Pentagon to "dramatically improve intelligence analysis and enable officials in their decision-making processes".

"The AI machine is making recommendations for what to target, which is actually much quicker in some ways than the speed of thought," said Craig Jones, a senior lecturer in political geography at Newcastle University and an expert in kill chains. "So you've got scale and you've got speed, you're [carrying out the] assassination-style strikes at the same time as you're decapitating the regime's ability to respond with all the aerial ballistic missiles. That might have taken days or weeks in historic wars. [Now] you're doing everything at once."

The latest AI systems can rapidly analyse mountains of information on potential targets, from drone footage to telecommunications interceptions, as well as human intelligence. Palantir's system uses machine learning to identify and prioritise targets and recommend weaponry, accounting for stockpiles and previous performance against similar targets. It also uses automated reasoning to evaluate legal grounds for a strike.

"This is the next era of military strategy and military technology," said David Leslie, professor of ethics, technology and society at Queen Mary University of London, who has observed demonstrations of AI military systems. He also warned that reliance on AI can result in "cognitive off-loading": humans tasked with making a strike decision can feel detached from its consequences because the effort to think it through has been made by a machine.

On Saturday 165 people, many of them children, were killed in a missile strike that hit a school in southern Iran, according to state media. It appeared to be close to a military barracks and the UN called it "a grave violation of humanitarian law". The US military has said it is looking into the reports.

It is not known what AI systems, if any, Iran has embedded into its war-fighting machine, although it claimed in 2025 to use AI in its missile-targeting systems. Its own AI programme, hampered by international sanctions, appears negligible by contrast with the AI superpowers of the US and China.
In the days before the Iran strikes, the US administration had said it would bar Anthropic from its systems after the company refused to allow its AI to be used for fully autonomous weapons or surveillance of US citizens. But it remains in use until it is phased out. Anthropic's rival, OpenAI, quickly signed its own deal with the Pentagon for military use of its models.

"The advantage is in the speed of decision-making, the collapsing of planning from what might have taken days or weeks before to minutes or seconds," said Leslie. "These systems produce a set of options for human decision makers but [they've] got a much narrower time band ... to evaluate the recommendation."

"The deployment of AI is expanding," said Prerana Joshi, research fellow at the Royal United Services Institute, a defence thinktank. "It is being done across countries' defence estates ... across logistics, training, decision management, maintenance." She added: "AI is a technology that will allow decision makers, and anyone in that chain, to improve the productivity and efficiency of what they do. It's a way of synthesising data at a much faster pace that is helpful to decision makers."
[4]
'AI-first' warfare: America's algorithmic edge in Operation Epic Fury - opinion
AI may have contributed to tactical successes in Tehran and elsewhere

In the rapidly unfolding conflict with Iran (known in the US as Epic Fury and in Israel as Roaring Lion), artificial intelligence has ceased to be a back-office analytical tool and has become operationally embedded in battlefield decision-making and war planning. Reports indicate that the US military deployed AI systems provided by the start-up Anthropic - specifically its large-language model "Claude" - to support intelligence analysis, target identification, and operational simulations during recent strikes on Iranian targets, even hours after US President Donald Trump ordered a federal ban on the technology. This extraordinary sequence of events - in which AI's role in kinetic operations outpaced public policy - reflects both the deep integration of advanced models into combat systems and the Pentagon's urgent push to field AI across its mission sets.

From intelligence support to operational acceleration

According to reports from The Wall Street Journal and other outlets, US Central Command utilized Claude in conjunction with conventional assets - including Tomahawk missiles, stealth aircraft, and AI-driven drones - to process vast quantities of battlefield and sensor data in real time. The AI model assisted commanders by synthesizing intelligence, prioritizing high-value targets, and running "what-if" scenarios that had traditionally taken hours of human analysis.

Even as the Trump administration publicly denounced Anthropic's technology and gave federal agencies six months to phase it out, the reality of its use in an actual war zone underscores the operational value military planners see in these models. War planners reportedly resisted an immediate cutoff because Claude was already deeply embedded in mission-critical workflows, including through partnerships with firms such as Palantir that integrate commercial AI into secure military systems.

The tensions between technological utility and political leadership are stark. While commanders in the theater of war rely on the AI's ability to collapse sensor-to-commander timelines, civilian leadership is still grappling with the authority and ethics of accelerating such integration without clear oversight.

The Pentagon's 'AI-first' directive

The US Department of War (DoW) - the modern name for the Pentagon's operational arm - has formally embraced an 'AI-first' strategy, a blueprint to make AI foundational to how the US armed forces fight, gather intelligence, and organize operations across domains. The strategy memo directs the DoW to become an "AI-first warfighting force" that accelerates experimentation with frontier models, removes bureaucratic barriers to AI deployment, prioritizes asymmetric advantage in compute and data, and incorporates AI into core decision loops. Seven "pace-setting projects" highlighted in the strategy roadmap span disciplines from tactical swarm coordination to AI-augmented battle management agents - signaling that AI isn't only for intelligence support but is being woven into how campaigns are planned and executed.

In practical terms, the strategy is not an abstract wish list. The Department has already rolled out GenAI.mil, a secure AI platform designed to bring generative models and analytics into both classified and unclassified networks, expanding AI access to millions of service members and civilian personnel.

Silicon Valley meets the war machine

Defense's rapid adoption of AI has provoked significant industry debate.
Anthropic, initially an approved provider of AI models for classified missions, has resisted Pentagon demands to remove safeguards - particularly regarding autonomous weapons and mass surveillance - arguing that such uses exceed current safe boundaries for the technology. Defense officials, meanwhile, have threatened contract cancellation and even labeling the company a "supply chain risk" to compel broader access, injecting political pressure into what was once a technical negotiation. These clashes have triggered internal tech industry pushback, including employee petitions opposing military AI use in certain domains, reflecting broader tensions over ethics, governance, and national security.

The new 'rules' of war

The US experience in the Iran conflict highlights a transformative moment in modern warfare: AI models are no longer confined to predictive maintenance or administrative support but are actively deployed as force multipliers in combat scenarios. This shift carries profound implications for how wars are planned, fought, and governed - from tactical autonomy to strategic escalation. At the same time, scholars and policymakers caution that the rush to embed AI into lethal operations must be paired with robust ethical and legal frameworks, lest the technology outpace the norms that govern its use. The evolution of international law, rules of engagement, and accountability mechanisms will be tested as AI systems influence decisions once exclusively in human hands.

The AI arms race is on

The US military's deployment of AI in the Iran conflict, in the face of a political ban and amid an AI-first institutional strategy, reveals both the strategic imperatives and the dilemmas that advanced technology introduces into contemporary warfare. As AI becomes deeply woven into command cycles, intelligence synthesis, and operational planning, the United States is effectively pioneering a future where the boundary between human judgment and algorithmic decision support is continually renegotiated. The outcome of this negotiation among military planners, policymakers, industry partners, and international audiences will shape the rules of war in the AI era.

The writer is the head of the Institute for Applied Research in Responsible AI at HIT and of the Deep-Tech & National Security Project at the Institute for National Security Studies (INSS). She is also a former senior director at the National Security Council (NSC).
Recent Iran strikes revealed AI's expanding role in modern warfare, with nearly 900 strikes executed in just 12 hours using Anthropic's Claude AI for intelligence analysis and target identification. The operations demonstrate how military AI is collapsing decision timelines from days to minutes, raising concerns about human oversight in combat decisions.
The recent Iran strikes have exposed a fundamental shift in modern combat: AI in warfare is no longer a supporting technology but a core operational component driving military decision-making at speeds that outpace traditional human processes. US and Israeli forces executed nearly 900 strikes on Iranian targets in the first 12 hours alone during what the US calls Operation Epic Fury, an operational tempo that would have required days or weeks in earlier conflicts [2]. The strikes, which killed Iran's supreme leader Ayatollah Ali Khamenei, relied heavily on Anthropic's Claude AI model for intelligence analysis, target identification, and scenario planning [1].
This marks the first major conflict where military AI operates as an integral element of the kill chain, fundamentally altering how wars are planned and executed. Craig Jones, a senior lecturer in political geography at Newcastle University, explained that "the AI machine is making recommendations for what to target, which is actually much quicker in some ways than the speed of thought" [3]. The technology enabled forces to conduct assassination-style strikes while simultaneously neutralizing Iran's ability to respond with ballistic missiles, compressing what historically took days into simultaneous operations.

The concept of shortening the kill chain refers to collapsing the sequence from target identification and intelligence validation to legal clearance and weapons release into a much tighter operational loop [2]. In 2024, Anthropic's Claude became part of a system developed by war-tech firm Palantir that was deployed across the US Department of War and other national security agencies, designed to "dramatically improve intelligence analysis and enable officials in their decision-making processes" [1].
The latest AI systems can rapidly analyze massive volumes of information on potential targets, from drone footage to telecommunications interceptions and human intelligence. Palantir's system uses machine learning to identify and prioritize targets, recommend weaponry while accounting for stockpiles and previous performance against similar targets, and employs automated reasoning to evaluate legal grounds for strikes [3]. According to reports, US Central Command utilized Claude in conjunction with conventional assets including Tomahawk missiles, stealth aircraft, and AI-driven drones to process vast quantities of battlefield and sensor data in real time [4].

The Pentagon has formally embraced an AI-first strategy, making artificial intelligence foundational to how US armed forces fight, gather intelligence, and organize operations across domains. The Department of War's strategy memo directs the military to become an "AI-first warfighting force" that accelerates experimentation with frontier models, removes bureaucratic barriers to AI deployment, and incorporates AI into core decision loops [4]. Seven pace-setting projects highlighted in the roadmap span disciplines from tactical swarm coordination to AI-augmented battle management agents, signaling that AI isn't only for intelligence support but is being woven into how campaigns are planned and executed.

The Pentagon has already rolled out GenAI.mil, a secure AI platform designed to bring generative models and analytics into both classified and unclassified networks, expanding AI access to millions of service members and civilian personnel [4]. This operational integration demonstrates how AI in military strike planning has moved from theoretical capability to battlefield reality, with large-language model technology processing intelligence and generating strike recommendations faster than traditional human analysis.

The accelerating role of AI-powered bombing has triggered concerns about decision compression, a phenomenon where AI collapses the planning time required for complex strikes, potentially reducing human military and legal experts to merely rubber-stamping automated strike plans [3]. David Leslie, professor of ethics, technology and society at Queen Mary University of London, warned that reliance on AI can result in cognitive off-loading, where humans tasked with making strike decisions feel detached from consequences because the effort to think through options has been made by a machine [3].

Critics warn that this trend compresses decision timelines to levels where human judgment is marginalized, creating an environment where the space for hesitation, dissent, or moral restraint may be shrinking just as quickly as operational tempo accelerates [2]. On Saturday, 165 people, many of them children, were killed in a missile strike that hit a school in southern Iran, according to state media. The UN called it "a grave violation of humanitarian law," and the US military said it is looking into the reports [1].
The use of Anthropic's Claude AI in the Iran strikes came just days after the Trump administration moved to label Anthropic a "supply chain risk" and told federal agencies and the military to stop using Anthropic's tools following a breakdown in negotiations [1]. Anthropic refused to allow its AI to be used for fully autonomous weapons or surveillance of US citizens, restrictions the company maintained were necessary safeguards. Despite the ban, Anthropic's tool continues to be used by the military while being phased out, as commanders in the theater of war rely on the AI's ability to collapse sensor-to-commander timelines [4].

OpenAI quickly signed its own deal with the Pentagon over the weekend, positioning itself as Anthropic's replacement for military AI applications [1]. These clashes have triggered internal tech industry pushback, including employee petitions opposing military AI use in certain domains, reflecting broader tensions over ethics, governance, and national security [4]. The extraordinary sequence of events, in which AI's role in kinetic operations outpaced public policy, reflects both the deep integration of advanced models into combat systems and the Pentagon's urgent push to field AI across its mission sets.

Prerana Joshi, research fellow at the Royal United Services Institute, noted that "the deployment of AI is expanding" and "is being done across countries' defence estates ... across logistics, training, decision management, maintenance" [3]. Iran claimed in 2025 to use domestically developed AI in its missile-targeting systems, though the country's primary uses appear to be cyber operations including phishing, DDoS attacks, and propaganda campaigns [1]. Iran's AI programme, hampered by international sanctions, appears negligible compared to the algorithmic edge possessed by AI superpowers like the US and China [3].

AI is no longer a bit player in modern warfare but a core element of both offense and defense, shortening the time between surveillance, analysis, and action. Beyond immediate concerns about AI's tendency to get some things very wrong, there are worries about how this usage will escalate in the future and what it could mean for humanity [1]. Scholars and policymakers caution that the rush to embed AI into lethal operations must be paired with robust ethical and legal frameworks, lest the technology outpace the norms that govern its use [4]. The evolution of international law, rules of engagement, and accountability mechanisms will be tested as AI continues to reshape the battlefield at speeds faster than traditional oversight can match.