74 Sources
[1]
How AI is being used in war -- and what's next
The escalating conflict between the United States, Israel and Iran has thrown a spotlight on the use of artificial intelligence in warfare. Just one day before the US-Israeli offensive began on 28 February, the US government sidelined one of its main AI suppliers as part of a disagreement that underlines ethical concerns about AI's use. And this week, academics and legal experts are meeting in Geneva, Switzerland, to discuss lethal autonomous weapons systems and the procurement of AI in the military, as part of long-running efforts to arrive at an international agreement on the ethical or legal uses of AI in warfare. Rapid technological development is outpacing slow international discussions, says political scientist Michael Horowitz at the University of Pennsylvania in Philadelphia.

"The current failure to regulate AI warfare, or to pause its usage until there is some agreement on lawful usage, seems to suggest potential proliferation of AI warfare is imminent," says Craig Jones, a political geographer at Newcastle University, UK, who researches military targeting.

The US military uses AI based on large language models (LLMs) for logistical and office support, intelligence gathering and analysis, and decision support on the battlefield, says Horowitz. The Maven Smart System, which uses AI for applications including image processing and tactical support, speeds up attack capabilities by suggesting and prioritizing targets, for example. The system has been used in previous conflicts and in the attacks on Iran, according to reports from the Washington Post and other news outlets. "The details are not publicly known," Horowitz says.

It is possible that AI's precision targeting could help to reduce civilian casualties during war. However, the ongoing conflicts in Ukraine and Gaza -- in which AI is being used to assist target identification and drone navigation, among other things -- have seen high civilian death tolls.
"There is no evidence that AI lowers civilian deaths or wrongful targeting decisions and it may be that the opposite is true," says Jones.

The possibility of using AI to guide lethal autonomous weaponry without human oversight is a hugely controversial area. Armed forces might appreciate the ability to, say, use AI-powered drones to autonomously identify, find and kill enemy combatants. But existing humanitarian laws require that such weapons be able to distinguish between military and civilian targets. LLM-powered, fully autonomous weapons without human oversight are not currently reliable and do not comply with international laws, Horowitz says.

Future potential uses for AI are at the heart of the disagreement between the US Department of War (formerly called the Department of Defense) and Anthropic, an AI company based in San Francisco, California. Since 2024, the Maven system has been supported by Anthropic's Claude LLM as part of a US$200-million contract with the Department of War. In January, the department issued a memo declaring, among other things, that contracts procuring AI for the government must state that the AI can be put to "any lawful use", without constraints. But Anthropic refused to remove safeguards, saying that Claude can't be used for mass domestic surveillance or to guide fully autonomous weapons. On 27 February, US President Donald Trump directed the government to stop using technology from Anthropic. "This whole thing blew up as a theoretical dispute about future, possible use cases," says Horowitz.

Since dropping Anthropic, the government has signed a deal with OpenAI, another AI company based in San Francisco. The company says the contract outlines how its tech will not be used for surveillance or to guide fully autonomous weapons -- tasks that, it adds, current technology cannot perform reliably. As of 5 March, Anthropic chief executive Dario Amodei is reportedly back in talks with the department.
Researchers in the field have expressed deep concerns about possible negative and unethical uses of AI. Employees at Google and OpenAI are currently circulating a petition calling on the leaders of both companies to not permit the use of their AI models for domestic mass surveillance or to autonomously kill people without human oversight.

Political scientist Toni Erskine at the Australian National University in Canberra led a project anticipating the future uses of AI in war that produced policy recommendations last year. Alongside known issues of bias and opacity around LLM reasoning, the project's report highlights the risk that displacing human judgement with AI might accidentally escalate conflicts. "Fully autonomous weapons systems without a human in the loop are ethically untenable and should be banned internationally," it says. Non-autonomous systems, it adds, also carry risks and require regulation.

Several high-level efforts are under way to wrangle international agreement on legal or ethical uses of AI in warfare. But these attempts are being hamstrung by a lack of engagement from the main players, says Jones. "States with active AI warfare programmes, including the US, Israel and China, are generally against further regulation," he says.

A specific rule against the use of AI in autonomous weaponry might emerge through the Convention on Certain Conventional Weapons (CCW). At a meeting of the CCW's expert group on lethal autonomous weapons systems in Geneva this week, experts discussed a report from the Stockholm International Peace Research Institute on responsible procurement of military AI. This report, published last month, recommends that the contracting stage -- where the US Department of War currently is with AI tech companies -- is a crucial time to be clear about "principles of responsible behaviour".
All these efforts are hindered by the difficulties of defining AI-enabled fully autonomous lethal systems or delineating the use of AI, which is now integral in many computer systems, says Herbert Lin, a cybersecurity specialist with Stanford University's Center for International Security and Cooperation in California. "It's very, very, very, very complicated," says Lin, who doesn't think the world is "anywhere near" any formal agreements on lawful use of AI in warfare.
[2]
President Trump orders federal agencies to stop using Anthropic after Pentagon dispute | TechCrunch
In a post on Truth Social, President Trump directed federal agencies to cease use of all Anthropic products after the company's public dispute with the Department of Defense. The president allowed for a six-month phase-out period for departments using the products, but emphasized that Anthropic was no longer welcome as a federal contractor. "We don't need it, we don't want it, and will not do business with them again," the president wrote in the post. Notably, the post does not mention any plans to invoke the Defense Production Act or designate Anthropic as a supply chain risk. Instead, the President urged the company to be helpful during the six-month phase out period, threatening "major civil and criminal consequences" if it did not. The dispute centered on Anthropic's refusal to allow its AI models to be used to power either mass domestic surveillance or fully autonomous weapons, which Secretary of Defense Pete Hegseth found unduly restrictive. CEO Dario Amodei reiterated his stance in a public post on Thursday, refusing to compromise on the two points. "Our strong preference is to continue to serve the Department and our warfighters -- with our two requested safeguards in place," Amodei wrote at the time. "Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions."
[3]
Anthropic Hits Back After US Military Labels It a 'Supply Chain Risk'
United States Secretary of Defense Pete Hegseth directed the Pentagon to designate Anthropic as a "supply-chain risk" on Friday, sending shockwaves through Silicon Valley and leaving many companies scrambling to understand whether they can keep using one of the industry's most popular AI models. "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic," Hegseth wrote in a social media post. The designation comes after weeks of tense negotiations between the Pentagon and Anthropic over how the US military could use the startup's AI models. In a blog post this week, Anthropic argued its contracts with the Pentagon should not allow for its technology to be used for mass domestic surveillance of Americans or fully autonomous weapons. The Pentagon asked that Anthropic agree to let the US military apply its AI to "all lawful uses" with no specific exceptions. A supply chain risk designation allows the Pentagon to restrict or exclude certain vendors from defense contracts if they are deemed to pose security vulnerabilities, such as risks related to foreign ownership, control, or influence. It is intended to protect sensitive military systems and data from potential compromise. Anthropic responded in another blog post on Friday evening, saying it would "challenge any supply chain risk designation in court," and that such a designation would "set a dangerous precedent for any American company that negotiates with the government." Anthropic added that it hadn't received any direct communication from the Department of Defense or the White House regarding negotiations over the use of its AI models. "Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement," the company wrote. The Pentagon declined to comment. 
"This is the most shocking, damaging, and over-reaching thing I have ever seen the United States government do," says Dean Ball, a senior fellow at the Foundation for American Innovation and the former senior policy advisor for AI at the White House. "We have essentially just sanctioned an American company. If you are an American, you should be thinking about whether or not you should live here 10 years from now."

People across Silicon Valley chimed in on social media expressing similar shock and dismay. "The people running this administration are impulsive and vindictive. I believe this is sufficient to explain their behavior," Paul Graham, founder of the startup accelerator Y Combinator, said. Boaz Barak, an OpenAI researcher, said in a post that "kneecapping one of our leading AI companies is right about the worst own goal we can do. I hope very much that cooler heads prevail and this announcement is reversed."

Meanwhile, OpenAI CEO Sam Altman announced on Friday night that the company reached an agreement with the Department of Defense to deploy its AI models in classified environments, seemingly with carveouts. "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," said Altman. "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."

In its Friday blog post, Anthropic said a supply chain risk designation, under the authority of 10 USC 3252, only applies to Department of Defense contracts directly with suppliers, and doesn't cover how contractors use its Claude AI software to serve other customers. Three experts in federal contracts say it's impossible at this point to determine which Anthropic customers, if any, must now cut ties with the company.
Hegseth's announcement "is not mired in any law we can divine right now," says Alex Major, a partner at the law firm McCarter & English, which works with tech companies.
[4]
Pentagon moves to designate Anthropic as a supply-chain risk | TechCrunch
In a post on Truth Social, President Trump directed federal agencies to cease use of all Anthropic products after the company's public dispute with the Department of Defense. The president allowed for a six-month phase-out period for departments using the products, but emphasized that Anthropic was no longer welcome as a federal contractor. "We don't need it, we don't want it, and will not do business with them again," the president wrote in the post. Notably, the president's post does not mention any plans to designate Anthropic as a supply chain risk, as had been previously mentioned as a consequence. However, a subsequent tweet from Secretary of Defense Pete Hegseth made good on the threat. "In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security," Secretary Hegseth wrote. "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." The Pentagon dispute centered on Anthropic's refusal to allow its AI models to be used to power either mass domestic surveillance or fully autonomous weapons, which Secretary Hegseth found unduly restrictive. CEO Dario Amodei reiterated his stance in a public post on Thursday, refusing to compromise on the two points. "Our strong preference is to continue to serve the Department and our warfighters -- with our two requested safeguards in place," Amodei wrote at the time. "Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions."
[5]
Trump Tells Feds to Stop Using Claude AI After Anthropic Stood by Surveillance Restrictions
President Donald Trump on Friday called for US federal agencies to stop using Anthropic's Claude AI after the company refused to grant the Department of Defense permission to use it for mass domestic surveillance or for fully autonomous weapons systems. The president posted on the Truth Social platform, which he owns, that he is ordering the federal government to "IMMEDIATELY CEASE" use of Anthropic's tools, while also saying there would be a six-month phaseout for agencies like the Department of Defense.

The post signaled the latest step in a showdown that escalated significantly this week between Anthropic and the federal government. Claude is used widely across the Pentagon, including in classified systems, but the Trump administration has sought to be able to use the technology for "any lawful purpose." Anthropic has insisted in its existing contract that the technology not be used for mass surveillance of Americans or in autonomous offensive weapons systems without human input.

Earlier this week, Defense Secretary Pete Hegseth told Anthropic CEO Dario Amodei that he would invoke seldom-used powers to either force Anthropic to let the Pentagon use Claude for any lawful purpose or label the company a supply chain risk -- jeopardizing its use by the government or defense contractors. Hegseth gave Anthropic a Friday deadline to comply. Amodei said in a statement that the company, which was founded with a stated focus on AI safety, "cannot in good conscience accede to [the Pentagon's] request" that it remove contract provisions stating Claude cannot be used in fully autonomous weapons systems or for domestic surveillance. Amodei raised concerns that the law has not caught up with the potential for mass surveillance of Americans.
The government can already buy information like Americans' browsing history and records of individual movements without a warrant, but artificial intelligence raises the stakes. "Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person's life -- automatically and at massive scale," he wrote. Michael Pastor, dean for technology law programs at New York Law School, said in an email that it's typical in contract law for those involved to seek clarity on terms. "Anthropic is right to press hard on what 'for lawful purposes' means," he said. "If the Pentagon is unwilling to clarify whether it would use Anthropic's technology for mass domestic surveillance, that raises flags Anthropic seems justified in waving." Anthropic's Claude is reportedly the most widely used AI system by the US military. Alternatives could include tools from OpenAI, Google or Elon Musk's xAI. In an internal memo reported by The Wall Street Journal Friday, OpenAI CEO Sam Altman reportedly told employees that the company has the same redlines as Anthropic -- no mass domestic surveillance or autonomous offensive weapons. Altman said he believed those guardrails could be managed through technical requirements, like requiring models to be based in the cloud. (Disclosure: Ziff Davis, CNET's parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) Employees of Google and OpenAI circulated a petition calling for their companies to stand with Anthropic in refusing to allow use of AI models for domestic mass surveillance or fully autonomous lethal weapons systems. The petition said the Pentagon is "trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand." 
Just as in consumer technology, artificial intelligence systems have seen widespread adoption in government and military settings. These tools have seen their capabilities grow significantly just in the past few years, and that pace of change has not slowed. Regulation and oversight of AI haven't kept up. AI has magnified the potential harms of corporate or government surveillance by making it easier and cheaper. Pastor said this dispute could have significant ramifications for what leverage governments and tech companies have against each other when their views on the appropriate use of technology clash. "Anthropic may feel that yielding here opens a Pandora's box of uses for which Claude could be deployed," he said.
[6]
Trump Moves to Ban Anthropic From the US Government
US President Donald Trump announced Friday that he was instructing every federal agency to halt the use of Anthropic's AI tools effective immediately. The move comes after Anthropic and top Pentagon officials clashed for weeks over military applications of artificial intelligence. "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War," Trump said in a post on Truth Social. The Pentagon and Anthropic did not immediately respond to requests for comment.

The Department of Defense has sought to change the terms of a deal struck with Anthropic and other AI companies last July to eliminate restrictions on how AI can be deployed and instead permit "all lawful use" of the technology. Anthropic objected to the change, claiming that it could allow AI to be used to fully control lethal autonomous weapons or to conduct mass surveillance on US citizens. The Pentagon does not currently use AI in these ways, and has said it has no plans to do so. However, top Trump administration officials have voiced opposition to the idea of a civilian tech company dictating military use of such an important technology.

Anthropic was the first major AI lab to work with the US military, through a $200 million deal signed with the Pentagon last year. It created several custom models known as Claude Gov that have fewer restrictions than its regular ones. Google, OpenAI, and xAI signed similar deals around the same time, but Anthropic is the only AI company currently working with classified systems. Anthropic's model is available through platforms provided by Palantir and through Amazon's cloud platform for classified military work.
Claude Gov is currently largely used for run-of-the-mill tasks, like writing reports and summarizing documents, but it is also used for intelligence analysis and military planning, according to one source familiar with the situation, who spoke to WIRED on condition of anonymity because they are not authorized to discuss the matter.

In recent years, Silicon Valley has gone from largely avoiding defense work to increasingly embracing it, with some companies becoming full-blown military contractors. The fight between Anthropic and the Pentagon is testing the limits of that shift. This week, several hundred workers from OpenAI and Google signed an open letter supporting Anthropic and criticizing their own companies' decisions to remove restrictions on military use of AI.
[7]
Defense secretary Pete Hegseth designates Anthropic a supply chain risk
Nearly two hours after President Donald Trump announced on Truth Social that he was banning Anthropic products from the federal government, Secretary of Defense Pete Hegseth took it one step further and announced that he was now designating the AI company as a "supply-chain risk". After a week of tense negotiations over the company's acceptable use policies, the Pentagon gave Anthropic an ultimatum: agree by Friday, 5:30 PM EST, to let the Pentagon use Claude for "all legal purposes," including for autonomous lethal weapons without human oversight and mass surveillance, or be designated a supply-chain risk. The designation, which is typically used for companies with ties to foreign governments that pose national security risks to the United States, will bar any company that uses Anthropic products from working with the Department of Defense. In a tweet posted just after 5PM ET, Hegseth broadened the designation to encompass companies doing "any commercial activity with Anthropic," reiterating Trump's mandate that companies had six months to divest themselves from Anthropic products. "Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic's models for every LAWFUL purpose in defense of the Republic," he wrote. "Instead, @AnthropicAI and its CEO @DarioAmodei, have chosen duplicity. Cloaked in the sanctimonious rhetoric of 'effective altruism,' they have attempted to strong-arm the United States military into submission - a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives." Hegseth, as Secretary of Defense, has the ability to label a company a "supply-chain risk" at his own discretion. But the decision comes after the Pentagon made several other attempts to compel Anthropic to let them use Claude as they wished, including the threat to invoke the Defense Production Act.
[8]
Trump bans Anthropic AI from federal agencies after firm refuses to unlock capabilities -- Anthropic cites risks of autonomous military applications, mass domestic surveillance
Every federal agency has been "ordered" to cease using Claude immediately. President Donald Trump ordered every U.S. federal agency to stop using technology from AI company Anthropic on Friday, February 27, posting the directive to Truth Social at 3:47 PM ET -- more than an hour before the Pentagon's own 5:01 PM ET deadline for Anthropic to comply with its demands. "THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS," Mr. Trump fumed on Truth Social, adding that he is directing every U.S. federal agency to "IMMEDIATELY CEASE all use of Anthropic's technology." The dispute stems from a contract worth up to $200 million that Anthropic signed with the Pentagon last summer. Anthropic had sought written guarantees that its Claude models would not be used for mass domestic surveillance of U.S. citizens or to control weapons systems capable of firing without human involvement. The Pentagon countered that it needed the right to deploy Claude for "all lawful purposes," arguing it was unworkable to negotiate individual use-case exemptions with a private company. After months of private talks collapsed into a public standoff this week, Amodei said Thursday his company "cannot in good conscience accede" to the DoD's terms. The Pentagon responded by threatening to invoke the Korean War-era Defense Production Act to compel Anthropic's compliance and warned it would designate the company a "supply chain risk" -- a label typically reserved for companies from adversarial nations such as Huawei. Trump, in his Truth Social post, accused Anthropic of "trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution," adding that the company's position was "putting AMERICAN LIVES at risk." 
He gave agencies a six-month phase-out window and warned that if Anthropic failed to cooperate during that period, he would use "the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow." Claude was the only AI model approved for use in classified military systems, and defense software firm Palantir, which uses Claude to power its most sensitive government contracts, will need to find a replacement quickly. OpenAI CEO Sam Altman said Friday he shares Anthropic's position on autonomous weapons' ethical "red lines," complicating OpenAI's candidacy as a direct replacement. Unsurprisingly, Elon Musk has already agreed in principle to the Pentagon's "all lawful purposes" request, potentially lining up Grok as a replacement.
[9]
Trump Slams Anthropic as 'Leftwing Nut Jobs' for Refusing Pentagon's AI Demands
President Trump on Friday blasted Anthropic as "woke" and "leftwing" for refusing to remove safeguards on its AI for use in surveillance and autonomous weapons. "WE will decide the fate of our Country -- NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about," Trump wrote on Truth Social. The president argued that San Francisco-based Anthropic is run by "Leftwing nut jobs" who allegedly tried to strong-arm the Pentagon into obeying its terms of service, rather than the US Constitution. "Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY," Trump wrote.

In a tweet, Defense Secretary Pete Hegseth piled on, claiming, "Anthropic delivered a master class in arrogance and betrayal" after being awarded a contract worth up to $200 million to prototype AI capabilities for national security. Aside from the name-calling, the White House is preparing to essentially blacklist Anthropic. "I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security," Hegseth added, which he previously pledged to do if the company refused to comply with the Pentagon's demands by Friday at 5:01 p.m. EST. "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic," he added.

The Defense Department plans on phasing out Anthropic's AI technology over the next six months. In his own post, Trump also directed every other federal agency to immediately cease using the company's technology. "We don't need it, we don't want it, and will not do business with them again!" he wrote. In addition, the president is threatening to crack down even harder on Anthropic, though he didn't get specific.
"Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow," Trump said. It could potentially have severe financial consequences. As noted by Dean W. Ball, a senior fellow at the Foundation for American Innovation, "Nvidia, Amazon, Google will have to divest from Anthropic if Hegseth gets his way. This is simply attempted corporate murder. I could not possibly recommend investing in American AI to any investor; I could not possibly recommend starting an AI company in the United States." However, the White House's attempt to cast Anthropic as a digital pariah appears to be bolstering the company's reputation in some corners. "It's extremely good that Anthropic has not backed down, and it's significant that OpenAI has taken a similar stance," OpenAI's former chief scientist, Ilya Sutskever, tweeted. On the same day, OpenAI reportedly confirmed it had the same restrictions as Anthropic and does not permit its AI to be used for mass surveillance or autonomous weapons. Sutskever added: "In the future, there will be much more challenging situations of this nature, and it will be critical for the relevant leaders to rise up to the occasion, for fierce competitors to put their differences aside. Good to see that happen today." US Sen. Elizabeth Warren (D-Mass.) also asked: "Is the Trump administration punishing Anthropic because it's refusing to help mass surveil American communities or build killer robots? The American people deserve to know what Trump officials are planning at the Pentagon." Meanwhile, almost 500 Google employees and another 80 OpenAI staffers have signed an open letter in support of Anthropic. The letter notes: "The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused. They're trying to divide each company with fear that the other will give in. 
That strategy only works if none of us know where the others stand." (Googlers have some experience pushing back on the company's involvement with Pentagon projects.) Even before today's deadline, Anthropic said it would refuse the Defense Department's demands. In a Thursday blog post, it noted: "They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a 'supply chain risk' -- a label reserved for US adversaries, never before applied to an American company -- and to invoke the Defense Production Act to force the safeguards' removal." As Jan Leike, an Alignment team lead at Anthropic, tweeted: "US government just announced they are looking for a new supplier for their *checks notes* mass domestic surveillance."
[10]
US Military Relying on AI as Key Tool to Speed Iran Operations
The use of AI in the Iran campaign is part of a wider debate over the use of AI as a tool of war, including whether it can be used in a lawful manner, with some organizations arguing that AI-enabled decision-support systems risk introducing automation bias.

US military forces are turning to a range of artificial intelligence tools to quickly manage enormous amounts of data for operations against Iran, according to US Central Command, highlighting the emerging technology's growing role in warfare. Since the start of military strikes last week, the US has hit more than 2,000 targets, including 1,000 within the first 24 hours. The effort has been described by Admiral Brad Cooper, the head of Central Command, as nearly "double the scale" of America's "shock and awe" assault on Iraq in 2003.

In the Iran campaign, AI technology has played a critical role by supporting the initial screening of incoming data, allowing human analysts to focus on higher-level analysis and verification, according to Captain Timothy Hawkins, a Central Command spokesperson. "Centcom uses a variety of AI tools, and that is exactly what they are, tools, to assist human experts in a rigorous process aligned with US policy, military doctrine and the law," Hawkins said in an interview with Bloomberg News. He declined to name the tools or the companies that provide them to the military.

The Iran war is adding new urgency to a widening global debate over who controls the future of AI as a tool of war, including whether the rapidly evolving technology can be used in a lawful manner. It lies at the heart of a high-stakes dispute pitting US defense officials against Anthropic PBC, one of the most promising AI companies, whose models are used on the Pentagon's classified cloud.
After failing to agree with Anthropic on terms governing the use of its AI technology, US Defense Secretary Pete Hegseth declared the company a supply-chain risk last week and gave military contractors six months to stop working with the firm. President Donald Trump also instructed federal agencies to cease work with Anthropic, describing it as an "out-of-control, Radical Left AI company." Anthropic Chief Executive Officer Dario Amodei and defense officials have since resumed discussions after last week's breakdown in talks, raising the possibility that the Pentagon could reach an agreement with the company and avert the penalties threatened by Hegseth. Although Amodei has expressed alarm at using AI in fully autonomous weapons before the technology is reliable and does not want his firm's tools used to spy on US citizens en masse, he does support working on lethal US military operations that abide by those red lines. Among the AI tech used in the Iran campaign is Maven Smart System, a digital mission control platform, according to people familiar with the US operations, who spoke on condition of anonymity in order to discuss sensitive information. The system, produced by Palantir Technologies Inc., is fed by more than 150 different data sources, according to previous public statements from US military officials. That system is also using Anthropic's Claude AI tool among the large language models installed on the system, according to the people, who said Claude is working well and has become central to US operations against Iran and to accelerating Maven's AI efforts. Spokespeople for Anthropic declined to comment while representatives for Palantir didn't immediately respond to requests for comment. The Washington Post previously reported the US military's use of Maven Smart System and Claude in US military operations. 
Hawkins said that artificial intelligence helps analysts whittle down what they need to focus on, generating so-called points of interest and helping personnel make "smart" decisions in the Iran operations. AI is also helping to pull data within systems and organize information to provide clarity, he said. "Bottom line, these tools help leaders -- humans -- make smarter decisions faster. The tools do not replace them or make targeting decisions," said Hawkins, adding that target selection relies on a very specific, rigorous, legal process that involves commanders and leaders. Some organizations, such as Stop Killer Robots, a coalition of 270 human-rights groups, argue that AI-enabled decision-support systems reduce the separation between recommending and executing a strike to a "dangerously thin" line and risk introducing automation bias, when humans overly trust outputs produced by machines. Centcom is investigating possible incidents of civilian harm following reports that a strike against a girls' primary school killed more than 160 people. It's still unclear who was responsible, and there's no indication whether AI played a role. "We take these reports seriously and are looking into them," Hawkins said. "The protection of civilians is of utmost importance, and we will continue to take all precautions available to minimize the risk of unintended harm. Unlike the Iranian regime, we have never - and will never - target civilians."
[11]
Trump orders feds to drop 'woke' Anthropic after AI spat
President Trump has escalated Anthropic's dispute with the Defense Department with a social media post ordering the entire federal government to purge the company's software from its systems. In a post on Truth Social, the President called Anthropic a "radical left, woke" company and said he was directing all federal agencies to cease use of its technology "immediately." The move comes after a dispute between Anthropic and the Pentagon over the AI firm's refusal to relax safeguards barring its models from use in mass domestic surveillance or fully autonomous weapons systems. "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution," Trump said, apparently referring to the part of the Constitution about spying on Americans with AI and letting autonomous weapons make targeting and firing decisions. To be clear, neither party actually says the Pentagon intends to use Anthropic's AI for those purposes, but Anthropic's stance was enough to escalate tensions with the Defense Department and raise questions for agencies that had signed up for $1-for-a-year software deals that included Anthropic's AI. Trump said in his post that he was giving agencies six months to phase out Anthropic's AI for something less woke - perhaps like Grok. "Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow," Trump said. Following the directive, the General Services Administration announced it is removing Anthropic from USAi.gov and its Multiple Award Schedule procurement vehicle. It's not clear what level of official action the President's social media post is intended to convey. 
It doesn't include any mention of the formal cancellation of Anthropic's up to $200 million Pentagon contract, though we can assume his directive to purge the company from federal systems would mean that the contract is probably void. It appears that, at the very least, the President has no intention of pressing Anthropic to conform to the DoD's demands via the Defense Production Act. We reached out to Anthropic for comment and to the White House to determine if and when the President intended to issue a formal order canceling Anthropic. We'll update this story if we hear back. ®
[12]
Trump says he is directing federal agencies to cease use of Anthropic technology
WASHINGTON, Feb 27 (Reuters) - U.S. President Donald Trump said on Friday he was directing every federal agency to immediately cease all use of Anthropic's technology, adding there would be a six-month phaseout for the Defense Department and other agencies that use the company's products. "I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology. We don't need it, we don't want it, and will not do business with them again!" Trump said in a post on Truth Social. Trump's directive comes amid a feud between the Pentagon and top artificial intelligence lab Anthropic over concerns about how the military could use AI at war. Spokespeople for Anthropic, which has a $200 million contract with the Pentagon, did not immediately respond to a request for comment. Reporting by Ryan Jones, Andrea Shalal, Ismail Shakil and Jeffrey Dastin; Writing by Daphne Psaledakis; Editing by Caitlin Webber
[13]
President Trump says he has banned federal agencies from using Anthropic
* Trump has ordered all federal agencies to stop using Anthropic's AI immediately.
* The Pentagon insists on "any lawful use"; Anthropic bars domestic surveillance and autonomous weapons.
* Anthropic holds a $200M Pentagon contract; it may face "supply chain risk" labeling.

US President Donald Trump has stated that he has banned all federal agencies from using AI products from Anthropic. The President's declaration follows months of back-and-forth between the Defense Department and Anthropic over the US military's use of the company's AI. In a post on Truth Social, Trump said the following regarding Anthropic: "I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology. We don't need it, we don't want it, and will not do business with them again!" Trump's Truth Social statement was reposted on X by Pete Hegseth, the US Secretary of Defense, and directed at both Anthropic's official X account and Dario Amodei, the CEO of the AI company. Anthropic has yet to release a statement about the President's most recent comments. However, Amodei has made it clear in the past that the company won't allow the Defense Department to use its AI platform for domestic surveillance or for lethal autonomous weapons. The Pentagon says it must be allowed to utilize AI systems for "any lawful use," which directly conflicts with Anthropic's stance. The US government could also designate Anthropic a "supply chain risk," though it's still unclear how this will play out. In a statement posted on Thursday, Amodei said, "I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries." 
Anthropic's CEO expanded on this by saying, "Our strong preference is to continue to serve the Department and our warfighters -- with our two requested safeguards in place." Amodei's full statement is available on Anthropic's website. In response to Amodei's lengthy post, Emil Michael, the Undersecretary of Defense, stated on X that Amodei "is a liar and has a God-complex," before going on to claim that Anthropic's CEO wants to "control the US military." Sean Parnell, the Pentagon's chief spokesperson, echoed a similar sentiment on X, stating that the US government's "lawful purposes" expectation "is a simple common-sense request." Anthropic currently has a $200 million contract with the Pentagon to "advance responsible AI in defense operations." Axios recently reported that Hegseth gave Anthropic a 5:01pm deadline to agree to the Pentagon's terms, alongside requesting an assessment of Claude, a step towards labeling Anthropic as a "supply chain risk," a designation typically reserved for adversaries like China that has never been applied to an American company. XDA has reached out to Anthropic and will update this story with more information as it becomes available.
[14]
Pentagon Designates Anthropic Supply Chain Risk Over AI Military Dispute
Anthropic on Friday hit back after U.S. Secretary of Defense Pete Hegseth directed the Pentagon to designate the artificial intelligence (AI) upstart as a "supply chain risk." "This action follows months of negotiations that reached an impasse over two exceptions we requested to the lawful use of our AI model, Claude: the mass domestic surveillance of Americans and fully autonomous weapons," the company said. "No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons." In a social media post on Truth Social, U.S. President Donald Trump said he was ordering all federal agencies to phase out the use of Anthropic technology within the next six months. A subsequent X post from Hegseth mandated that all contractors, suppliers, and partners doing business with the U.S. military cease any "commercial activity with Anthropic" effective immediately. "In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply Chain Risk to National Security," Hegseth wrote. The designation comes after weeks of negotiations between the Pentagon and Anthropic over the use of its AI models by the U.S. military. In a post published this week, the company argued that its contracts should not facilitate mass domestic surveillance or the development of autonomous weapons. "We support the use of AI for lawful foreign intelligence and counterintelligence missions," Anthropic noted. "But using these systems for mass domestic surveillance is incompatible with democratic values. AI-driven mass surveillance presents serious, novel risks to our fundamental liberties." The company also called out the U.S. 
Department of War's (DoW) position that it will only work with AI companies that allow "any lawful use" of the technology, while removing any safeguards that may exist, as part of efforts to build an "AI-first" warfighting force and bolster national security. "Diversity, Equity, and Inclusion and social ideology have no place in the DoW, so we must not employ AI models which incorporate ideological 'tuning' that interferes with their ability to provide objectively truthful responses to user prompts," a memorandum issued by the Pentagon last month reads. "The Department must also utilize models free from usage policy constraints that may limit lawful military applications." Responding to the designation, Anthropic described it as "legally unsound" and said it would set a dangerous precedent for any American company that negotiates with the government. It also noted that a supply chain risk designation under 10 USC 3252 can only extend to the use of Claude as part of DoW contracts, and that it cannot affect the use of Claude to serve other customers. Hundreds of employees at Google and OpenAI have signed an open letter urging their companies to stand with Anthropic in its clash with the Pentagon over military applications for AI tools like Claude. The standoff between Anthropic and the U.S. government comes as OpenAI CEO Sam Altman said OpenAI reached an agreement with the U.S. Department of Defense (DoD) to deploy its models in their classified network. OpenAI also asked the DoD to extend those terms to all AI companies. "AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," Altman said in a post on X. "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."
[15]
Trump orders federal agencies to drop Anthropic's AI
On Friday afternoon, Donald Trump posted on Truth Social, accusing Anthropic, the AI company behind Claude, of attempting to "STRONG-ARM" the Pentagon and directing federal agencies to "IMMEDIATELY CEASE" use of its products. At issue is Anthropic CEO Dario Amodei's refusal to sign an updated agreement permitting "any lawful use" of Anthropic's technology by the US military, as Defense Secretary Pete Hegseth mandated in a January memo, a demand that has frustrated many tech workers across the industry. As we explained earlier this week, that agreement would allow the US military to use the company's services for mass domestic surveillance and lethal autonomous weapons, or AI that has full power to track and kill targets with no humans involved in the decision-making process. OpenAI and xAI have reportedly already agreed to the new terms, though OpenAI is reportedly looking to negotiate with the Pentagon to adopt the same red lines as Anthropic. For weeks, Anthropic and the Pentagon have been locked in a stalemate regarding the acceptable use of Anthropic's AI technology. But negotiations with Anthropic had stalled after a dramatic exchange of public statements and social media posts. In a statement Thursday, Amodei wrote that the Pentagon's "threats do not change our position: we cannot in good conscience accede to their request." He added that Anthropic has "never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner" but that in a "narrow set of cases, we believe AI can undermine, rather than defend, democratic values." Amodei went on to say that "should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required." 
Now we have Trump's response: THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military. The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY. Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology. We don't need it, we don't want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic's products, at various levels. Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow. WE will decide the fate of our Country -- NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about. Thank you for your attention to this matter. MAKE AMERICA GREAT AGAIN!
[16]
Trump orders federal agencies to stop using Anthropic AI tech 'immediately'
President Donald Trump said Friday that he was ordering every U.S. government agency to "immediately cease" using technology from the artificial intelligence company Anthropic. "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution," Trump said in a post on Truth Social. "Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY."
[17]
Trump orders federal agencies to drop Anthropic services amid Pentagon feud
President Donald Trump has ordered all US government agencies to stop using Claude and other Anthropic services, escalating an already volatile feud between the Department of Defense and the company over AI safeguards. Taking to Truth Social on Friday afternoon, the president said there would be a six-month phase out period for federal agencies, including the Defense Department, to migrate off of Anthropic's products. "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution," the president wrote. "Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow." Before today, US Defense Secretary Pete Hegseth had threatened to label Anthropic a "supply chain risk" if it did not agree to withdraw safeguards that insist Claude not be used for mass surveillance against Americans or in fully autonomous weapons. Anthropic did not immediately respond to Engadget's comment request. Earlier in the day, a spokesperson for the company said the contract Anthropic received after CEO Dario Amodei outlined Anthropic's position made "virtually no progress" on preventing the outlined misuses. "New language framed as a compromise was paired with legalese that would allow those safeguards to be disregarded at will. Despite DOW's recent public statements, these narrow safeguards have been the crux of our negotiations for months," the spokesperson said. "We remain ready to continue talks and committed to operational continuity for the Department and America's warfighters." Advocacy groups like the Center for Democracy and Technology (CDT) quickly came out against the president's threats. "This action sets a dangerous precedent. 
It chills private companies' ability to engage frankly with the government about appropriate uses of their technology, which is especially important in national security settings that so often have reduced public visibility," said CDT President and CEO Alexandra Givens, in a statement shared with Engadget. "These threats undermine the integrity of the innovation ecosystem, distort market incentives and normalize an expansive view of executive power that should worry Americans all across the political spectrum." For now, it appears the AI industry is united behind Anthropic. On Friday, hundreds of Google and OpenAI employees signed an open letter urging their companies to stand in "solidarity" with the lab. According to an internal memo seen by Axios, OpenAI CEO Sam Altman said the ChatGPT maker would draw the same red line as Anthropic.
[18]
Donald Trump Declares War on Anthropic
President Trump is terminating the government's relationship with Anthropic, an AI company whose products, until recently, were used by Pentagon officials for classified operations. Following a weekslong standoff with the company, Trump posted on Truth Social this afternoon that all federal agencies must "IMMEDIATELY CEASE all use of Anthropic's technology," adding: "We don't need it, we don't want it, and will not do business with them again!" The General Services Administration announced that it would take action against Anthropic's products, and indeed, according to an email I obtained that was sent to the leadership of all agencies using USAi -- a GSA platform that provides chatbots from tech companies to government workers -- access to Anthropic was suspended "immediately." The government is also removing Anthropic from its primary procurement system, which is the key way for any federal agency to purchase a commercial product. Anthropic was awarded a $200 million contract with the Pentagon over the summer geared toward providing versions of its technology for military use. OpenAI, Google, and xAI were awarded similar contracts, though Anthropic's Claude models are the only advanced generative-AI programs to receive Pentagon security clearance permitting the handling of secret and classified data. Claude had been integrated across the Department of Defense and was reportedly used to assist the raid on Venezuela that led to the capture of President Nicolás Maduro. Anthropic has said it will not allow Claude to be used for mass domestic surveillance, such as analyzing data that have been indiscriminately gathered on Americans by the intelligence community, or to enable fully autonomous weaponry, such as Claude selecting and killing targets with drones. Anthropic has also said that the Pentagon never included such uses in its contracts with the firm. 
But now DOD is demanding unrestricted use of Claude and accusing Anthropic of trying to control the military and "putting our nation's safety at risk" by refusing to comply. Following a heated meeting on Tuesday, the department gave Anthropic until today at 5:01 p.m. eastern time to acquiesce to its demands. If not, the Pentagon would compel the company under an emergency wartime law called the Defense Production Act or, even more severe, designate Anthropic a "supply-chain risk," which could forbid any organization that works with the U.S. military from doing business with the AI company. Shortly after Trump's announcement, Defense Secretary Pete Hegseth declared he was doing just that. Dean Ball, an analyst who helped write some of the Trump administration's own AI policy, has called the threats "the most aggressive AI regulatory move I have ever seen, by any government anywhere in the world." Last night, Anthropic CEO Dario Amodei wrote in a public letter, "We cannot in good conscience accede to" the Pentagon's request. Following Trump's and Hegseth's orders today, Anthropic said in a statement, "No amount of intimidation or punishment from the Department of War will change our position." DOD, which the Trump administration refers to as the Department of War, did not immediately respond to requests for comment. The situation signals a potentially seismic shift in relations between Silicon Valley and the federal government. Defense officials and technology companies alike are concerned that the U.S. military is losing its technological edge over its adversaries, in particular China -- in part because the private sector, rather than the Pentagon, is where much American innovation comes from these days. And instead of federal grants, the massive investments needed for generative AI have come from tech companies themselves. Historically, companies the Pentagon works with have not set terms for how the government uses their products. 
But as Thomas Wright recently wrote in The Atlantic, this dynamic is complicated when it comes to AI tools made fully by a private sector that understands the technology far better than the government does. Anthropic has shown itself to be eager to work with the government and the military, hence it being the first of the frontier AI firms to receive such a high security clearance from the military. Amodei is by far the most hawkish of any prominent AI executive, warning frequently about the need for democracies to use AI to vanquish authoritarianism and, especially, stay ahead of China. In the letter he published last night, Amodei wrote, "I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries." And although he took a principled stance against domestic surveillance, Amodei wrote that he is open to Claude eventually being used to power fully autonomous weapons -- just not yet, because today's best AI models "are simply not reliable enough" to do so. Developing such AI-powered weapons in the present, he wrote, would put American soldiers and civilians at risk. Much remains uncertain about the unraveling relationship between the Trump administration and Anthropic, but the White House has been souring on Anthropic for months. Amodei has been publicly critical of Trump, and wrote a lengthy Facebook post in support of Kamala Harris during the 2024 election. White House officials have called the company "woke" and accused it of "fear mongering." We have ended up in a paradoxical situation where the U.S. government is at once saying that Claude is so essential to national security that it could invoke an emergency law to exert extensive control over Anthropic, and that the company is so woke and radical that using Claude would itself be a national-security risk. "I don't understand it," a former senior defense official who requested anonymity to speak freely told me. 
"It's an existential risk if you use it or if you don't." Many in Silicon Valley have rallied in support of Anthropic, even as the major companies have maintained their business with the government. (The precise terms of the Pentagon's contracts with other AI companies have not been made public.) Jeff Dean, a top Google executive, wrote on X that generative AI should not be used for domestic mass surveillance. OpenAI CEO Sam Altman wrote in an internal memo circulated last night, a copy of which I obtained, that "we have long believed that AI should not be used for mass surveillance or autonomous lethal weapons," and he has expressed similar sentiments publicly. More than 500 current employees of both OpenAI and Google -- many of them anonymous -- signed an open letter in support of Anthropic. On the sidewalk outside of Anthropic's headquarters in San Francisco today, passersby scribbled messages of support with chalk. The fallout from the supply-chain-risk designation is still unclear. In theory, Google, Microsoft, Amazon, and several other behemoths that contract with the federal government will have to stop doing business with Anthropic, which would be a mess for everyone involved and potentially devastating for Anthropic; Amazon, for instance, is building data centers that will train future versions of Claude. But just how sweeping of an impact such a designation would have on Anthropic's customers is up for debate, and the company said in its statement today that many applications of Claude, even for customers that partner with DOD, will not be affected. Meanwhile, private AI firms will continue to be important to the federal government as it works to compete with China, Russia, and all manner of adversaries. Trump gave the Pentagon six months to phase out Claude, suggesting that the technology has indeed become essential -- and is essential to replace. And at some point, the U.S. military may no longer find itself in a position to dictate its terms. 
Altman, in his internal memo, wrote that OpenAI is exploring a contract with the Pentagon to use its AI models for classified workloads that would still exclude uses that "are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons." The Pentagon reportedly agreed to these conditions shortly after announcing that it would sever ties with Anthropic, although no contract has been signed. But other figures in tech, including the Anduril co-founder Palmer Luckey and the investor Katherine Boyle, have come out in support of demands for unrestricted use. This showdown was between the Pentagon and Anthropic. The next may be a war within Silicon Valley itself.
[19]
Claude AI Helped Bomb Iran. But How Exactly?
The same artificial-intelligence model that can help you draft a marketing email or a quick dinner recipe has also been used to attack Iran. US Central Command used Anthropic's Claude AI for "intelligence assessments, target identification and simulating battle scenarios" during the strikes on the country, according to a report in the Wall Street Journal. Hours earlier, US President Donald Trump had ordered federal agencies to stop using Claude after a dispute with its maker, but the tool was so deeply baked into the Pentagon's systems that it would take months to untangle in favor of a more compliant rival. It was used, too, in the January operation that led to the capture of Nicolás Maduro. But what do "intelligence assessments" and "target identification" mean in practice? Was Claude flagging locations to strike or making casualty estimates? Nobody has made that disclosure and, alarmingly, no one has an obligation to. Artificial intelligence has long been used in warfare for things like analyzing satellite imagery, detecting cyber threats and guiding missile-defense systems. But chatbots -- built on the same underlying technology that billions use for mundane tasks like writing emails -- are now being used on the battlefield. Last November, Anthropic partnered with Palantir Technologies Inc., a data-analytics company that does a lot of work for the Pentagon, turning its large language model Claude into the reasoning engine inside a decision-support system for the military. Then, in January, Anthropic submitted a $100 million proposal to the Pentagon to develop voice-controlled autonomous drone swarming technology, Bloomberg News reported. The company's pitch: Use Claude to translate a commander's intent into digital instructions to coordinate a fleet of drones. 
Its bid was rejected, but the contest called for much more than just summarizing intelligence reports, as you might expect a chatbot to do. This contract was to develop "target-related awareness and sharing," and "launch to termination" for potentially lethal drone swarms. Remarkably, all of this has been happening in a regulatory vacuum and with technology that is known to make errors. Hallucinations by large language models are a result of their training, when they are rewarded for grasping for an answer instead of admitting uncertainty. Some scientists say the persistent challenge of AI confabulation may never be fixed. This would not be the first time unreliable AI systems have been used in warfare. Lavender was an AI-driven database used to help identify military targets associated with Hamas in Gaza. It was not a large language model but analyzed vast amounts of surveillance data, such as social connections and location history, to assign each individual a score from 1 to 100. When someone's score passed a certain threshold, Lavender flagged them as a military target. The problem was that Lavender was wrong 10% of the time, according to an investigative report published by the Israeli-Palestinian outlet +972. "Around 3,600 people were targeted by mistake," Mariarosaria Taddeo, a professor of digital ethics and defense technology at the Oxford Internet Institute, tells me. 
"There are such incredible vulnerabilities in these systems and such extreme unreliability... for something so dynamic, sensitive and human as warfare," says Elke Schwarz, a professor in political theory at Queen Mary University of London and author of Death Machines: The Ethics of Violent Technologies. Schwarz points out that AI is often used in war to speed things up, a recipe for unwanted outcomes. Faster decisions are made at a greater scale and with less human scrutiny. The last decade and a half has seen military use of AI become even more opaque, she says. And secrecy is baked into how AI labs operate even before the warfare applications. These companies refuse to disclose what data their models are trained on or how their systems reach conclusions. Of course, military operations often have to be kept under wraps to protect combatants and keep enemies off the scent. But defense is heavily regulated by international humanitarian law and weapons testing standards, which in theory should also address the use of artificial intelligence. Yet such standards are missing or woefully inadequate. Taddeo notes that Article 36 of Additional Protocol I to the Geneva Conventions requires new weapons systems to be tested before deployment, but an AI system that learns from its environment becomes a new system every time it updates. That makes it almost impossible to apply the rule. In an ideal world, governments like the US would disclose how these systems are used on the battlefield, and there is a precedent. The Americans started using armed drones after 9/11 and expanded their use under the Barack Obama administration, refusing to acknowledge that such a program existed. It took nearly 15 years of leaked documents, sustained pressure from the press and lawsuits from the American Civil Liberties Union before the Obama White House finally published in 2016 the casualty numbers from drone strikes. 
They were widely seen as under-counting, but they allowed the public, Congress and media to hold the government accountable for the first time. Policing AI will be harder still, requiring even more public and legislative pressure to force a recalcitrant Trump administration to create a similar kind of reporting framework. The goal wouldn't be to disclose exactly how Claude was used in something like Operation Epic Fury, but to release the broad contours, according to Schwarz. And, especially, to disclose when something goes wrong. The current public debate about the Anthropic-Pentagon feud -- about what is legal and ethical for AI when it comes to the mass surveillance of Americans or creating fully autonomous weapons -- is missing the bigger question about the lack of visibility of how the technology is already being used in war. With such new and untested systems prone to making mistakes, this is sorely needed. "We haven't decided as a society if we're fine with a machine deciding if a human being should be killed or not," says Taddeo. Pushing for that transparency is critical before AI in warfare becomes so routine that nobody thinks to ask anymore. Otherwise we may find ourselves waiting for a catastrophic mistake, and imposing transparency only after the damage is done.
[20]
Trump Threatens 'Criminal Consequences' for Anthropic Over AI Safety Guard Fight
President Donald Trump unleashed an all-caps screed against Anthropic on Friday, threatening criminal consequences for the AI company if it didn't comply with his demands. The president is upset that Anthropic has refused to drop safeguards in Claude that prohibit the Department of Defense from using its AI model for domestic surveillance or completely autonomous weapons systems. "THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military," Trump wrote on Truth Social. Defense Secretary Pete Hegseth had given Anthropic an ultimatum to drop all safeguards or be labeled a "supply chain risk," something that's never happened to an American company before. Hegseth also threatened to invoke the Defense Production Act, which would theoretically allow the U.S. government to demand that those safeguards be stripped. The president went on to describe the people at Anthropic as "leftwing nut jobs" and made a bizarre claim that the AI company was putting American lives in danger. "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War [sic], and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY," wrote Trump. There's nothing in U.S. law that requires a private company to adjust its terms of service to make the Defense Department happy, as long as they're not violating sanctions. "Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology. We don't need it, we don't want it, and will not do business with them again!" wrote Trump. And then came the big caveat. 
Trump wrote that they'd be given six months for a "phase out period," which sounds an awful lot like the president just pushed the deadline back rather than actually banishing Anthropic from military work. "There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic's products, at various levels. Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow," Trump wrote. "WE will decide the fate of our Country -- NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about. Thank you for your attention to this matter. MAKE AMERICA GREAT AGAIN!"
[21]
Anthropic's AI tool Claude central to U.S. campaign in Iran, amid a bitter feud
Smoke rises in Tehran on Tuesday after an explosion amid airstrikes by U.S. and Israeli forces. (Majid Asgaripour/WANA/via REUTERS) In order to strike a blistering 1,000 targets in the first 24 hours of its attack on Iran, the U.S. military leveraged the most advanced artificial intelligence it's ever used in warfare, a tool that could be difficult for the Pentagon to give up even as it severs ties with the company that created it. The military's Maven Smart System, which is built by data mining company Palantir, is generating insights from an astonishing amount of classified data from satellites, surveillance and other intelligence, helping provide real-time targeting and target prioritization to military operations in Iran, according to three people familiar with the system. Embedded into the system is Anthropic's AI tool Claude, a technology that was banned by the Pentagon last week after heated negotiations over the terms of its use in war. Over the last year military planners have seen Claude, paired with Maven, mature into a tool that is in daily use across most parts of the military, according to two of the people. As planning for a potential strike in Iran was underway, Maven, powered by Claude, suggested hundreds of targets, issued precise location coordinates, and prioritized those targets according to importance, said two of the people. The pairing of Maven and Claude has created a tool that is speeding the pace of the campaign, reducing Iran's ability to counterstrike and turning weeks-long battle planning into real-time operations, said one of the people. The AI tools also evaluate a strike after it is initiated, the person said. Claude has also been used in countering terror plots and in the raid that captured Venezuelan president Nicolás Maduro. But this is the first time it has been used in major war operations, according to two of the people. Even though the tool is being used to support the U.S. 
military campaign in Iran, relations between Anthropic's CEO, Dario Amodei, and the Trump administration have soured. Hours before the bombing in Iran began, President Donald Trump announced he was banning government agencies from further using Anthropic's tools, giving the department six months to phase them out. The move came after a bitter fight between the company and the military over control of using the tools in mass domestic surveillance and fully autonomous weapons. The military will continue using its technology as it waits for a replacement to be phased in, said two of the people. Military commanders have become so dependent on the AI system that if Amodei directed the military to cease, the Trump administration would use government powers to retain the technology until it can be replaced, said one of the people. "Whether his morals are right or wrong or whatever, we're not going to let [Amodei's] decision making cost a single American life," the person said. The Pentagon did not immediately respond to a request for comment. Anthropic spokesman Eduardo Maia Silva declined to comment. Palantir spokeswoman Lisa Gordon declined to comment. Claude's use in the Iran strikes was first reported by The Wall Street Journal. The use of advanced generative AI in the Iran campaign arrives at a moment of fierce debate over the ethics and speed of using such tools in warfare. "It is notable that we're already at the point where AI has gone from hypothetical to supporting real-world operations being conducted today," said Paul Scharre, executive vice president at the Center for a New American Security, who has written about AI in warfare. "The key paradigm shift is that AI enables the U.S. military to develop targeting packages at machine speed rather than human speed." The downsides, he said, are "AI gets it wrong. ... We need humans to check the output of generative AI when the stakes are life and death." 
The Pentagon began to integrate Anthropic's Claude chatbot into Maven in late 2024, according to public announcements. The system has been used to generate proposed targets, to track logistics and provide summaries of intelligence coming in from the field. The Trump administration has vastly expanded the use of Maven into many other parts of the military, with over 20,000 military personnel using it as of last May. The commanders now overseeing the Iran campaign are steeped in the use of Maven, having used earlier versions of the system in the U.S. withdrawal from Afghanistan in 2021 and to support Israel after the Oct. 7, 2023, attacks, according to a talk by Navy Rear Adm. Liam Hulin in 2024. Hulin, now the deputy director of operations at Central Command, said then that the system pulled in information from 179 sources of data. "Centcom is heavily using MSS," Hulin said, referring to Maven by an acronym. It was not immediately clear if Maven's target lists were shared with the Israelis prior to the attack, but the two sides collaborated on what to strike for months. The Israel Defense Forces "in close cooperation with the U.S. Army, worked for thousands of hours to build as valuable and extensive a target bank as possible," they said in a statement shortly after operations began. Anthropic was the first major AI company to work with classified data, as the Pentagon works to leverage the technology to help it modernize how it conducts war. Amodei said last week that Claude was "extensively deployed" across the Defense Department and at other security agencies, in use for analyzing intelligence and planning operations. "I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries," Amodei wrote in a blog post as negotiations with the Pentagon reached an impasse. It's been quickly adopted. 
NATO, which signed a contract with Palantir last year, portrayed its version of Maven as giving commanders video-game-like abilities to oversee battles in a recent video. In the American military, the system allowed one artillery unit to do the work of 2,000 staff with a team of just 20 people, according to a study of the system's use by the Army's 18th Airborne Corps by Georgetown University. But other AI companies are now poised to supplant Anthropic at the Pentagon. Elon Musk's xAI signed an agreement last week to work on classified government systems, as did Anthropic's leading rival, OpenAI. (The Washington Post has a content partnership with OpenAI.) Ben Van Roo, the CEO and cofounder of Legion Intelligence, a defense software startup, said that in his work over the last two and a half years integrating generative AI into software systems at the Department of War, "the baseline use case is chat and advanced search functions -- essentially summarizing information." It's not highly integrated into weapons or mission critical systems, he said. He said that he wasn't aware of its use in Iran, but wondered how it built on existing software that is already able to prioritize targets. Scharre said he was impressed by the speed of deployment in Iran. "It's quite remarkable -- to see this in the middle of an operation," he added.
[22]
US forces used Claude in Iran strikes even after Trump's ban: Report
Only hours after President Donald Trump announced that federal agencies should sever all ties with Anthropic and discontinue use of its AI systems, the US military reportedly continued deploying the company's Claude model in active operations against Iran. Claude was used during the large-scale joint US-Israel bombardment that began on Saturday, underscoring how deeply embedded advanced AI tools have become within US defense workflows. The development showcases the structural difficulty of rapidly removing such systems from mission-critical environments once they are integrated into operational planning cycles.
[23]
President Trump bans Anthropic from use in government systems
The Pentagon is seen from an airplane, Monday, Feb. 2, 2026, in Washington. Julia Demaree Nikhinson/Associated Press In a post on Truth Social, President Trump said the U.S. government will stop using the AI company Anthropic's products. The decision came following a dispute between Anthropic and the Pentagon over whether the company could prohibit its tools from being used in mass surveillance of American citizens or to create autonomous weapon systems. "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution," said Trump in his post. "Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology. We don't need it, we don't want it, and will not do business with them again!" He said there would be a six month phaseout of Anthropic's products. The announcement came about an hour before a deadline set by the Pentagon, which had called on Anthropic to back down. And it happened as at least one other AI firm said it had similar concerns about the military uses of AI. Earlier in the day, OpenAI CEO Sam Altman said he shares the "red lines" set by rival Anthropic restricting how the military uses AI models, amid Anthropic's escalating feud with the Pentagon. The Department of Defense had given Anthropic a deadline of 5:01 p.m. ET today to drop restrictions on its AI model, Claude, from being used for domestic mass surveillance or entirely autonomous weapons. The Pentagon has said it doesn't intend to use AI in those ways, but requires AI companies to allow their models to be used "for all lawful purposes." 
The government had also threatened to invoke the Korean War-era Defense Production Act (DPA) to compel Anthropic to allow use of its tools and has, at the same time, warned it would label Anthropic a "supply chain risk," potentially blacklisting it from lucrative government contracts. President Trump made no mention of either threat in his post to Truth Social. By wading into the standoff between Anthropic and the Pentagon, Altman could complicate the Pentagon's efforts to replace Anthropic if it follows through on its threat to cancel the contract. OpenAI also has a Defense Department contract, along with Google, xAI, and Anthropic, but Anthropic was the first to be cleared for use on classified systems. "I don't personally think the Pentagon should be threatening DPA against these companies," Altman told CNBC in an interview on Friday morning. He said he thinks it's important for companies to work with the military "as long as it is going to comply with legal protections" and "the few red lines" that "we share with Anthropic and that other companies also independently agree with." "For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety, and I've been happy that they've been supporting our warfighters," Altman added. "I'm not sure where this is going to go." In an internal note sent to staff on Thursday evening, Altman said OpenAI was seeking to negotiate a deal with the Pentagon to deploy its models in classified systems with exclusions preventing use for surveillance in the U.S. or to power autonomous weapons without human approval, according to a person familiar with the message who was not authorized to speak publicly. The Wall Street Journal first reported Altman's note to staff. The Defense Department didn't respond to a request for comment on Altman's statements. 
Whether AI companies can set restrictions on how the government uses their technology has emerged as a major sticking point in recent months between Anthropic and the Trump administration. On Thursday, Anthropic CEO Dario Amodei said the Pentagon's threats over its contract would not make the company budge. "We cannot in good conscience accede to their request," he wrote in a lengthy statement. "Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner," he said, using the Pentagon's rebranded "Department of War" moniker. But, he added, domestic mass surveillance and fully autonomous weapons are uses that are "simply outside the bounds of what today's technology can safely and reliably do." Emil Michael, the Pentagon's undersecretary for research and engineering, shot back in a post on X, accusing Amodei of lying and having a "God-complex." "He wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk," Michael wrote. "The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company," he wrote. In an interview with CBS News, Michael said federal law and Pentagon policies already bar the use of AI for domestic mass surveillance and autonomous weapons. "At some level, you have to trust your military to do the right thing," he said. Independent experts say the standoff is highly unusual in the world of Pentagon contracting. "This is different for sure," said Jerry McGinn, director of the Center for the Industrial Base at the Center for Strategic and International Studies, a Washington DC think tank. 
Pentagon contractors don't usually get to tell the Defense Department how their products and services can be used, he notes, "because otherwise you'd be negotiating use cases for every contract, and that's not reasonable to expect." At the same time, McGinn notes, artificial intelligence is a new and largely untested technology. "This is a very unusual, very public fight," he said. "I think it's reflective of the nature of AI."
[24]
Pentagon Casts Cloud of Doubt Over Anthropic's AI Business
Anthropic PBC started this year on a winning streak, with surging sales, multiple viral products and a large funding round all giving the startup a big advantage in the costly global AI race. On Friday, the Trump administration handed down back-to-back directives that threaten to curb growth for one of the country's most successful artificial intelligence firms. First, President Donald Trump ordered federal agencies to stop using Anthropic's software, which has become popular particularly as a programming assistant. Shortly after, the Pentagon declared the AI developer a supply-chain risk -- a designation typically reserved for companies from countries the US views as adversaries. The moves, which followed a tense showdown between the San Francisco-based startup and the Pentagon over AI safeguards, aim not only to cut off Anthropic's sales to the US government, but also to numerous other firms. "No contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic," Defense Secretary Pete Hegseth wrote in a social media post late Friday. The full impact on the company -- and the AI ecosystem -- remains to be seen. At minimum, rivals such as OpenAI, Alphabet Inc.'s Google and Elon Musk's xAI now have an opportunity to take on government work that had previously gone to Anthropic. But some legal and policy experts warned the fallout could be far more dire if the Pentagon follows through on its declaration. Barring Anthropic from working with corporate customers that do business with the Defense Department would be "a death blow" to Anthropic's business, said Charlie Bullock, a lawyer and senior research fellow at the Institute for Law & AI, a Boston-based think tank. As laid out in Hegseth's post, the Pentagon's policy would effectively prevent Anthropic from working with some of its biggest partners, such as Amazon.com Inc., Bullock said. 
Dean Ball, a former White House adviser who helped create the Trump administration's AI Action Plan, echoed the sentiment. "Nvidia, Amazon, Google will have to divest from Anthropic if Hegseth gets his way," he wrote in a post on X. "This is simply attempted corporate murder." The uncertainty hits Anthropic at a pivotal moment. The company, which was founded in 2021 by several former employees of OpenAI, is widely expected to be preparing for an initial public offering as soon as this year. With its popular Claude chatbot, Anthropic has been racing to persuade more businesses to pay for its software to help offset the immense cost of developing AI and justify its lofty $380 billion valuation. In a statement Friday, Anthropic called the move "legally unsound" and "a dangerous precedent." The company also set the stage for a legal battle over its software. "No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons," it wrote. "We will challenge any supply chain risk designation in court." Some backers are concerned Anthropic's refusal to concede to the Trump administration's demands could harm the company's brand, making the startup seem hostile and anti-American, according to an Anthropic investor who spoke to Bloomberg on the condition of anonymity. Still, some investors are wary of complaining publicly or even pushing back in private. Anthropic is the crown jewel in many of their portfolios, and Chief Executive Officer Dario Amodei's resolute control over the company's direction has led many VCs to bite their tongues even if they disagree with his choice, multiple Anthropic investors said. Other investors supported Anthropic whether it decided to work with the Pentagon or not, with some noting that the government provides minimal revenue for the model maker. 
The move has also generated considerable support for Anthropic within the tech community, with some CEOs lauding its stance. Hegseth had given the company until 5:01 p.m. on Friday to allow the Pentagon to use Claude for any purpose within legal limits -- but without any usage restrictions from Anthropic. The startup has insisted that the chatbot not be used for mass surveillance against Americans or in fully autonomous weapons operations. Trump's decision to order agencies to ditch Anthropic posed some initial risk to the firm, though one that's limited in scope for a company with a revenue run rate of $14 billion. Anthropic inked an agreement in July with the Defense Department worth up to $200 million, but Bloomberg Government contracting records show the Pentagon paid only $2 million to Anthropic last year. This month, Anthropic signed its first deal for the State Department to use Claude, valued at just $19,000. The company also struck a broad deal with the General Services Administration for federal government agencies to use Claude for a nominal $1 fee last year. Hegseth has set a six-month maximum for Anthropic's services to be handed over to another AI provider. The goals of the Defense Department's actions are ultimately much broader, treating Anthropic similarly to Chinese firms that the US perceives to be a security threat. 
However, Bullock said the legal authority Hegseth is relying on is actually quite narrow, allowing the agency to prohibit its contractors from tapping Anthropic's products for procurements related to defense contracting -- but not necessarily for things like using Claude in their businesses. Hegseth is likely to lean on the Federal Acquisition Security Council, established in Trump's first term, to enact the policy, said Peter Harrell, a former Biden administration official and visiting scholar at Georgetown Law School. "The limit here, and where I think Hegseth is misleading at least as a legal matter, is that this should only apply to contractors in their DoD contracts," Harrell said. If the Pentagon tried to bar companies that contract with it from having other unrelated business with Anthropic, "the courts will throw that out quite quickly," but "I can't say they won't try it," Harrell said. "They seem to be willing to try things that are overturned in the courts." The Pentagon did not immediately respond to a request for comment. If Anthropic launches a court challenge, that could ultimately buy it time, Bullock said. For example, a court could grant the company a temporary restraining order or preliminary injunction, he said. "But we'll have to wait and see." The Pentagon's decision has quickly sent shockwaves throughout the AI community, both for its implications in the wider battle over how to deploy a powerful technology safely and because of the broad popularity of Claude Code for software development. Virtually all companies that build software do business with Anthropic, according to one AI startup executive who spoke on condition of anonymity. Not being able to use Claude Code would be disastrous for both the industry and US competitiveness, the executive said.
[25]
The Guardian view on AI in war: the Iran conflict shows that the paradigm shift has already begun
The intensified use of artificial intelligence, and rows over its control, demonstrate the need for democratic oversight and multilateral controls "Never in the future will we move as slow as we are moving now," the UN secretary-general, António Guterres, warned this week, addressing the urgent need to shape the use of artificial intelligence. The speed of technological development - as well as geopolitical turbulence - is collapsing the distinction between theoretical arguments and real world events. A political row over the US military's AI capabilities coincides with its unprecedented use in the Iran crisis. The AI company Anthropic insisted that it could not remove safeguards preventing the Department of Defense from using its technology for domestic mass surveillance or autonomous lethal weapons. The Pentagon said it had no interest in such uses - but that such decisions should not be made by companies. Outrageously, the administration has not just fired Anthropic but blacklisted it as a supply-chain risk. OpenAI stepped in, while insisting that it had maintained the red lines declared by Anthropic. Yet in an internal response to the user and employee backlash, its CEO Sam Altman acknowledged that it does not control the Pentagon's use of its products and that the deal's handling made OpenAI look "opportunistic and sloppy". But as Nicole van Rooijen, the executive director of Stop Killer Robots - which campaigns for human control in the use of force - has warned: "The issue is not just whether these weapons will be used, but how their precursor systems are already transforming the way wars are fought ... Human control risks becoming an afterthought or a mere formality." The paradigm shift has already begun. Despite the row, Anthropic's Claude has reportedly facilitated the massive and intensifying offensive which has already killed an estimated thousand-plus civilians in Iran. 
This is an era of bombing "quicker than the speed of thought", experts told the Guardian this week, with AI identifying and prioritising targets, recommending weaponry and evaluating legal grounds for a strike. AI is not a prerequisite for civilian deaths, military errors or unaccountability. The US defense secretary, Pete Hegseth, brags of loosening the rules of engagement. It is humans at the Pentagon who are dodging questions about the deaths of 165 schoolgirls in what appears to have been a US strike on a school in Iran on 28 February. But - even without considering questions of AI inaccuracy and biases - the impacts are obvious to its users. One Israeli intelligence source observed of its use in the war on Gaza: "The targets never end. You have another 36,000 waiting." Another said he spent 20 seconds assessing each target, stating: "I had zero added-value as a human, apart from being a stamp of approval." Mass killing is eased in every sense, with further moral and emotional distancing, and reduced accountability. Democratic oversight and multilateral constraints, instead of leaving decisions to entrepreneurs and defence departments, are essential. As the bombs rained on Iran, states met in Geneva to address lethal autonomous weapons systems; the draft text they considered would be a strong basis for a treaty that is sorely needed. Most governments want clear guidance on the military use of AI. It is the biggest players who resist - though they are at least in the room. The pace of AI-driven warfare means that caution can look like handing control to adversaries. Yet as tech workers and military officials themselves are realising, the dangers of uncontrolled expansion are far greater.
[26]
Anthropic to take Trump's Pentagon to court over AI dispute
Why it matters: The frontier AI company is doing what few other companies have done since Trump's second term began -- directly and publicly challenging the administration.
* President Trump and Defense Secretary Pete Hegseth threatened earlier Friday to cut off Anthropic from a plethora of customers through a "supply chain risk" designation -- treatment that historically has been reserved for foreign adversaries.
What they're saying: "No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons," Anthropic said in a statement Friday night, underscoring its key objections to the Pentagon's demands.
* "We will challenge any supply chain risk designation in court."
Zoom in: Anthropic cited 10 USC 3252, which states that a supply chain risk designation extends only to contracts with the Pentagon. In other words, Anthropic argues that it can't be extended to military contractors that use Claude to serve other customers.
* The company claims individual customers or companies with commercial contracts with Anthropic are "completely unaffected." Per the statement:
* "If you are an individual customer or hold a commercial contract with Anthropic, your access to Claude -- through our API, claude.ai, or any of our products -- is completely unaffected.
* If you are a Department of War contractor, this designation -- if formally adopted -- would only affect your use of Claude on Department of War contract work. Your use for any other purpose is unaffected."
Context: Defense officials want to use AI models for "all lawful purposes" in classified settings and not have to adhere to companies' ideas of what is and isn't safe, particularly in matters of national security.
What's next: Anthropic said it's still committed to ensuring a smooth transition for the Pentagon's operations. Trump gave the Pentagon six months to replace Claude.
[27]
Trump declares Anthropic 'woke' and orders Pentagon to phase it out
Anthropic CEO Dario Amodei Credit: Ruhani Kaur/Bloomberg via Getty Images Negotiations between the Pentagon and the AI company Anthropic were severely complicated on Friday when President Donald Trump announced on Truth Social that the government would stop utilizing the company's tech. The president ordered the Pentagon to begin a six-month phase-out, accusing Anthropic of being run by "Leftwing nut jobs." The Truth Social post said that Anthropic wanted the government to abide by its terms of service. "THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS!," Trump wrote. "That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military. The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution." The federal government and Anthropic have been at odds for weeks as they tried to hammer out an agreement on how the military can use Claude, Anthropic's AI model. Anthropic CEO Dario Amodei has been firm that he will not allow the Pentagon to use Claude for mass surveillance of Americans or to create autonomous weapons, like pilotless drones. The government reportedly agreed to those terms, according to the New York Times, but the contract's legal language provided too much wiggle room for Anthropic's comfort. Anthropic is known for taking a more cautious approach to AI development, and its founders famously left OpenAI over AI safety concerns. On Thursday, Amodei explained his stance in a blog post: "Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner. However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. 
Some uses are also simply outside the bounds of what today's technology can safely and reliably do." A deadline of Friday evening was set for an agreement between the Pentagon and Anthropic. It's not clear if Trump's announcement of a phase-out will equate to more time for negotiation or if the government is truly moving forward with firing Anthropic by declaring it a supply chain risk. The government may also seek to compel Anthropic to agree to its terms through the Defense Production Act, according to the Times. The government may also choose another AI partner, like Elon Musk's Grok, but CIA officials believe that product is inferior to Anthropic's, the Times reports. Following the president's Friday afternoon announcement, OpenAI CEO Sam Altman appeared on CNBC and voiced support for Anthropic. "For all the differences I have with Anthropic, I mostly trust them as a company and I think they really do care about safety, and I've been happy that they've been supporting our war fighters," Altman said, according to a clip of the appearance posted to X. Meanwhile, dozens of employees at Google and OpenAI, both competitors of Anthropic, signed letters backing Amodei's stances. And outside Anthropic's San Francisco headquarters, words of support appeared in chalk on the sidewalk, according to a post on X. This week, Anthropic softened its safety policy -- often viewed as one of the strongest in Silicon Valley -- citing competitors' reluctance to do the same and the federal government's disinterest in prioritizing security. "The policy environment has shifted toward prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level," the company wrote.
"We remain convinced that effective government engagement on AI safety is both necessary and achievable, and we aim to continue advancing a conversation grounded in evidence, national security interests, economic competitiveness, and public trust. But this is proving to be a long-term project -- not something that is happening organically as AI becomes more capable or crosses certain thresholds."
[28]
Trump bans 'woke' Anthropic AI and terminates all defense contracts
"We don't need it, we don't want it, and will not do business with them again!" Trump wrote on Truth Social. The administration and Anthropic disagree on who controls AI usage in domestic security and battlefields. The Pentagon previously set a deadline for Anthropic to grant "unfettered access" to its systems. Anthropic CEO Dario Amodei refused to comply with these specific demands earlier this week. Amodei cited concerns that the government would use his tools for mass surveillance. He also feared the development of fully autonomous weapons. Trump responded by labeling the firm as "woke" and "Radical Left" in his social media posts. The President claimed the company's leaders have no idea what the real world is about. He also threatened "major civil and criminal consequences" if the firm disrupts the six-month phase-out period. "Anthropic better get their act together, and be helpful during this phase out period," Trump posted. He added he would use the "Full Power of the Presidency" to ensure compliance.
[29]
After Banning Anthropic From Military Use, Pentagon Still Relying Heavily on It in Iran War
Last week, Anthropic's CEO Dario Amodei publicly drew a line in the sand with the US military, insisting that its AI models may not be used for mass surveillance of Americans or deadly autonomous weapons. The move infuriated officials at the Pentagon. Defense secretary Pete Hegseth came out in full force, accusing Anthropic of trying to "seize veto power over the operational decisions of the United States military" and banning the company from ever doing any business with any US government entity, "effective immediately." President Donald Trump ordered agencies to "immediately cease" using Anthropic's technology on Friday, while simultaneously claiming that the tool will be phased out of all government work over the next six months. But given the government's extensive use of the company's chatbot Claude during its deadly offensive in Iran, it's clearly having trouble making do without it. As The Washington Post reports, the US military is extensively using Palantir's Maven Smart System in the conflict, which has had Anthropic's Claude chatbot integrated since 2024. Last week, the Wall Street Journal first reported on the Pentagon's use of Claude to select attack targets in Iran, hours after the White House announced its ban. According to WaPo's sources, the system spits out precise location coordinates for missile strikes and prioritizes them by importance. Maven was also used during the US military's invasion of Venezuela and the kidnapping of its president, Nicolás Maduro. Central Command is "heavily using" the Maven system, Navy admiral Liam Hulin told WaPo. Military commanders told the newspaper that the military will continue using Anthropic's tech, regardless of the president ordering them not to, until a viable replacement emerges.
"Whether his morals are right or wrong or whatever, we're not going to let [Anthropic CEO Dario Amodei's] decision-making cost a single American life," a source told WaPo. It remains to be seen whether OpenAI will swoop in to fill Anthropic's place. After Amodei's falling out with the Pentagon, CEO Sam Altman saw an opportunity to strike last week and signed a contract with the Department of Defense -- a move that triggered an enormous and ongoing PR crisis and sent uninstalls of ChatGPT soaring. Whatever the chatbot of choice for military commanders may end up being, the rampant use of AI in war has taken researchers aback. For one, even the most sophisticated chatbots still struggle with the very basics and continue to be haunted by rampant hallucinations. That could have immense implications when it comes to matters of life and death. So far, the offensive in Iran has resulted in the killing of many hundreds of Iranian civilians, as well as six American soldiers. "The key paradigm shift is that AI enables the US military to develop targeting packages at machine speed rather than human speed," Center for a New American Security executive vice president Paul Scharre told WP. But "AI gets it wrong," he added. "We need humans to check the output of generative AI when the stakes are life and death."
[30]
Trump orders U.S. government to stop using Anthropic but gives Pentagon 6 months to phase it out amid standoff over AI use | Fortune
President Donald Trump said Friday he will shut out Anthropic from the federal government after the AI company refused to compromise on how its technology could be used by the U.S. military. But he is also giving the Pentagon a six-month period to phase out Anthropic's technology, as it is one of the few AI companies allowed to operate in classified settings. In a Truth Social post, Trump called Anthropic "woke" and "leftwing," claiming it is endangering troops and jeopardizing national security by not acceding to the Defense Department's demands. "Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology," he wrote. "We don't need it, we don't want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic's products, at various levels." Trump added that if Anthropic doesn't obey, he will use "the Full Power of the Presidency to make them comply." The San Francisco startup had refused to let users deploy its Claude models for mass domestic surveillance or autonomous weapons, while the Defense Department demanded the right to use the technology in all lawful cases. Defense Secretary Pete Hegseth threatened to have Anthropic's $200 million contract with the U.S. military canceled or label the firm "a supply-chain risk." The latter would prevent companies that do business with the Pentagon from using Anthropic's technology, elevating the AI firm to a threat normally associated with foreign adversaries such as China and Russia. Hegseth also raised the possibility of invoking the Defense Production Act to force Anthropic to hand over an unrestricted version of Claude on national security grounds. "These threats do not change our position: we cannot in good conscience accede to their request," CEO Dario Amodei said in a letter Thursday.
Emil Michael, the Pentagon's undersecretary for research and engineering, called Amodei "a liar" with a "God-complex" in response, accusing the CEO of wanting "to personally control the U.S. Military" in posts on X. The Defense Department has publicly stated it has no intention of conducting mass surveillance or removing humans from weapons targeting decisions, but the dispute could rest on how either side is defining "autonomous" or "surveillance" in practice. Anthropic was the only AI company cleared for use in classified settings -- until Elon Musk's xAI agreed to let the Pentagon use its AI in lawful situations. Google and OpenAI are used in unclassified settings but are in talks with the Defense Department about classified work. But the Pentagon is facing a revolt from Silicon Valley, even as defense officials try to lessen their dependence on Anthropic. OpenAI CEO Sam Altman told his employees in a memo on Thursday that the company would push for the same limitations on autonomous weapons and mass surveillance that Anthropic has, according to Axios. Also on Thursday, more than 100 workers at Google sent a letter to Jeff Dean, the company's chief scientist, asking for similar limits on how the company's Gemini AI models are used by the U.S. military, according to the New York Times.
[31]
Trump orders government to ditch Anthropic in AI standoff with Pentagon
President Donald Trump said Friday that he was ordering all federal agencies to stop using AI services from Anthropic, the latest escalation in an increasingly bitter feud between his administration and the AI startup that makes Claude. "The United States of America will never allow a radical left, woke company to dictate how our great military fights and wins wars," Trump said in all-caps on Truth Social. The president's move came a day after Anthropic rebuffed the Defense Department's latest offer to resolve a standoff over deploying Anthropic's Claude AI system for military purposes without restrictions. The Pentagon had imposed a 5:01 p.m. Friday deadline for Anthropic to yield to its demands, or face retaliation from the Trump administration. Trump's social media missive came shortly before that deadline. At stake is a $200 million defense contract between Anthropic and the Pentagon around the use of AI in classified military systems. Anthropic has pressed for assurances its AI won't be engaged in mass surveillance of Americans or used in autonomous weapons systems without human oversight. The company has dug in against repeated demands from the Defense Department that its technology must be applied as the Pentagon sees fit militarily while complying with the law. "These threats do not change our position: we cannot in good conscience accede to their request," Anthropic CEO Dario Amodei said in a Thursday statement posted on the company's website. Earlier on Friday, there still appeared to be an opening for a last-minute breakthrough with hours to go before the deadline. But Trump said late Friday afternoon that he was directing "EVERY" federal agency to "IMMEDIATELY CEASE all use of Anthropic's technology." "We don't need it, we don't want it, and will not do business with them again!" Trump said. The president said there would be a six-month "phase out period" for agencies including the Pentagon.
The Defense Department has threatened to label Anthropic as a "supply chain" risk, a move usually reserved for foreign rivals that could sever the company from U.S. government contracts. It has also warned it may invoke the Defense Production Act (DPA), an extraordinary step that would allow the U.S. government to commandeer the company's AI technology. Analysts have pointed to a contradiction in the Trump administration's hardline approach to the company. Labeling Anthropic as a supply chain risk would bar the government from using its products. Yet invoking the Defense Production Act would allow it to claim Anthropic's AI model is essential to national security. Amodei echoed that point in his statement on Thursday, calling the threats "inherently contradictory." "One labels us a security risk," he said. "The other labels Claude as essential to national security." OpenAI CEO Sam Altman backed up Anthropic in its standoff with the Pentagon, a sign that the Trump administration might have to deal with the same concerns from other AI companies about the use of cutting-edge AI technology. "I don't personally think that the Pentagon should be threatening DPA against these companies," Altman said in a CNBC interview on Friday. "For all the differences I have with Anthropic, I mostly trust them as a company and I think they really do care about safety. I'm not sure where this is going to go." -- Joseph Zeballos-Roig contributed to this article.
[32]
Trump Orders Federal Agencies to Dump 'Woke' Anthropic AI After Pentagon Dispute - Decrypt
Trump has given agencies six months to phase out Anthropic systems. President Donald Trump has directed all U.S. federal agencies to stop using artificial intelligence technology developed by Anthropic, escalating a dispute between the AI company and the Pentagon over how the military uses the technology. In a Truth Social post on Friday, Trump said agencies must "immediately cease" using Anthropic products, with a six-month phase-out period for departments that already use the company's technology. "The United States of America will never allow a radical left, woke company to dictate how our great military fights and wins wars!" Trump wrote. "That decision belongs to your commander-in-chief and the tremendous leaders I appoint to run our military." The directive follows Anthropic's refusal on Thursday to remove safeguards preventing Claude from being used for "mass domestic surveillance" or "fully autonomous weapons," after Pentagon officials demanded contractors allow their systems to be used for "any lawful use." "The left-wing nut jobs at Anthropic have made a disastrous mistake trying to strong-arm the Department of War and force them to obey their terms of service instead of our Constitution," Trump wrote. President Trump called the situation a threat to U.S. troops and national security. "Their selfishness is putting American lives at risk, our troops in danger, and our national security in jeopardy," Trump said. Anthropic has resisted Pentagon demands to grant unrestricted military use of its models, while also recently walking back safety language in its Responsible Scaling Policy. On Friday, CNBC reported that OpenAI CEO Sam Altman said he is working to "help de-escalate" the situation. De-escalating the tension could be a heavy lift, however. In his post, Trump said decisions affecting U.S. military operations must remain under presidential authority rather than being left to "some out-of-control, radical left AI company run by people who have no idea what the real world is all about." "Anthropic better get their act together and be helpful during this phase-out period, or I will use the full power of the presidency to make them comply, with major civil and criminal consequences to follow," Trump said.
[33]
Inside the US's AI plans for defence and future warfare
The US military used Anthropic's Claude AI, but after Anthropic refused to remove guardrails against mass surveillance and autonomous weapons, the Pentagon cancelled the contract and turned to OpenAI. Media reports that the US military used Anthropic's artificial intelligence (AI) chatbot Claude during operations targeting leaders in Venezuela and Iran are raising new questions about how quickly AI is being integrated into warfare. American media reported that Claude was used to help facilitate a January operation that led to the capture of Venezuela's leader Nicolás Maduro. Similar reports later emerged that the chatbot was also used during preparations for an operation targeting Iran's deceased supreme leader, Ayatollah Ali Khamenei. Experts say the incidents offer a rare glimpse into how advanced AI systems may already be supporting US military planning and intelligence work. "It was very surprising to see the sudden deployment of these tools, especially when I think the larger community does not think that they're ready for said deployment," said Heidy Khlaaf, chief AI scientist at the US policy think tank AI Now Institute. "We're sort of questioning whether these AI models can be successful in any military settings at all because of how flawed they are," she told Euronews Next. Khlaaf said researchers have warned that large language models can produce unreliable or incorrect outputs, raising concerns about how they might perform in high-stakes environments such as military operations. The reported use of Claude also comes as the Trump administration pushes an ambitious strategy to make the US military "AI-first", arguing that rapid adoption of the technology is necessary to compete with rivals such as China.
'We see this sense of urgency'

The United States has used various forms of automation technology in the military since the 2010s, and it has been a focus area for several presidents, including Trump's predecessors, Barack Obama and Joe Biden, experts told Euronews Next. Early AI models were used for logistics, maintenance, or translations, according to Elke Schwarz, a professor of political theory at Queen Mary University of London in the United Kingdom. Trump's second mandate accelerates the adoption of generative AI models like OpenAI's ChatGPT or Anthropic's Claude in an "AI arms race" against the country's adversaries, both Schwarz and Khlaaf said. America's policies give a "sense of urgency" to develop AI because it is a "very valuable technology" that will keep the country ahead of its rivals, said Giorgos Verdi, policy fellow at the European Council on Foreign Relations think tank. The Department of War's AI Acceleration strategy aims to secure American military dominance by eliminating barriers to AI integration and investing in strategic projects that will keep the military ahead of rivals. "The idea really is to bring AI into all kinds of domains, including the harmless ones, but also the more harmful ones," Schwarz said. She noted that previous administrations were more cautious about establishing safety guardrails governing how and when such technologies could be used. As part of that effort, the acceleration strategy has a database called genai.mil, which allows bureaucrats to access AI chatbots, including Google's Gemini and xAI's Grok. The administration's 2025 budget, called the "Big Beautiful Bill," also includes hundreds of millions of dollars in funding for AI-related military projects. The document sets aside $650 million (€550 million) for military innovation, including $145 million (€123 million) to develop AI-powered counter-drone systems.
Another $250 million would go toward the "advancement of the AI ecosystem," while a further $250 million is allocated to expand artificial intelligence capabilities at a Cyber Command centre. An additional $115 million is earmarked for accelerating nuclear national security missions using AI.

US military still in 'trial phase' with AI chatbots

Due to the "inherent lack of transparency," it is difficult to determine how advanced the US government is in its plans, Schwarz said. "Unlike a certain ammunition or a specific physical weapon system, you don't really see what is being used," she said. "Everything happens in an interface and very much in the zone of invisibility." Schwarz believes that the US military is in a "trial phase," where it is experimenting with different AI companies to understand what they can do and where their limitations lie. Anthropic's $200 million partnership with the US military is for a two-year prototype that will advance national security, the company says. It will work with the department to "anticipate and mitigate potential adversarial uses of AI," and identify any risks with adopting the technology throughout the "defence enterprise." Schwarz said this suggests that the systems are being tested in live environments, which she said raises ethical concerns. "This is a terrible practice for something that involves human lives," she said. However, Verdi believes that systems like Claude were used in the Venezuelan and Iranian contexts for "more mundane tasks," such as collecting or analysing satellite images. "A human may not be able to analyse every single piece of intelligence coming in. That's what the AI system will be able to do more quickly," he said. "Then, the humans interpret the outputs of the AI system and then act."
Experts warn of growing interest in AI-powered autonomous weapons

The researchers worry that the growing role of AI in US military planning and decision-making could eventually lead to the development of autonomous weapons. "I think there is definitely an interest to at least have the option to develop fully autonomous AI-enabled weapons and potentially make use of those," Verdi said. Autonomous weapons could be any weapon that could identify, select, and engage with a target without having a human involved in the final decision, Khlaaf said. "So instead of taking a recommendation from a large language model and a human acting on it or choosing not to, you would then have that be completely automated away," she said. One of the main arguments for developing such systems is the fear that the US could fall behind if a rival builds them first, Verdi said. However, there is no public information suggesting that China has integrated AI in any way into its military, Verdi and Khlaaf said. The Chinese are "very concerned about keeping that technology under control," Verdi added. The AI capabilities of other American opponents, such as Russia, Iran or North Korea, are "even less sophisticated," Verdi added, so it is even less likely that those countries would have AI autonomous weapons. Creating fully autonomous weapons with AI can also lead to escalation in a conflict, Verdi said. A recent pre-print study from King's College London found that AI chatbots almost always chose to threaten nuclear weapons use in a war game scenario.

Pentagon faces 'challenging transition' away from Claude

Verdi said that we should expect the US to continue using Claude or another AI chatbot in their operations because the operations in both Venezuela and Iran were "seemingly very effective" in fulfilling the mission's objectives.
The perceived success of these missions creates a risk that the US will want to drop even more guardrails, such as human oversight, to make the technology even more effective, he added. The challenge for the Department of War will be to find a model that works as well as Claude, Verdi said. The government will be phasing Claude out over the next six months because the company refused to give the military unfettered access to its technology, which Anthropic claims could be used for mass surveillance or autonomous weapons development, according to a statement from Pete Hegseth, the US Secretary of War. Anthropic said Claude has been rolled out throughout the US government's classified information networks, deployed at national nuclear laboratories, and does intelligence analysis directly for the Department of War. Meanwhile, the Department of War signed a contract with OpenAI to integrate "advanced AI systems in classified environments," hours after the Anthropic deal was scrapped. "I think the Department of War will be looking at a challenging transition, but at the same time, it is not an impossible task" to replace Claude with a new AI system, Verdi said. The intelligence collected and provided by Claude will likely stay with the department and could be used by the next provider, he said.
[34]
How the US might be using AI in Iran
In the week leading up to President Donald Trump's war in Iran, the Pentagon was waging a different battle: a fight with the AI company Anthropic over its flagship AI model, Claude. That conflict came to a head on Friday, when Trump said that the federal government would immediately stop using Anthropic's AI tools. Nonetheless, according to a report in the Wall Street Journal, the Pentagon made use of those tools when it launched strikes against Iran on Saturday morning. Were experts surprised to see Claude on the front lines? "Not at all," Paul Scharre, executive vice president at the Center for a New American Security and author of Four Battlegrounds: Power in the Age of Artificial Intelligence, told Vox. According to Scharre: "We've seen, for almost a decade now, the military using narrow AI systems like image classifiers to identify objects in drone and video feeds. What's newer are large-language models like ChatGPT and Anthropic's Claude that it's been reported the military is using in operations in Iran." Scharre spoke with Today, Explained co-host Sean Rameswaram about how AI and the military are becoming increasingly intertwined -- and what that combination could mean for the future of warfare. Below is an excerpt of their conversation, edited for length and clarity. There's much more in the full episode, so listen to Today, Explained wherever you get podcasts, including Apple Podcasts, Pandora, and Spotify. The people want to know how Claude or ChatGPT might be fighting this war. Do we know? We don't know yet. We can make some educated guesses based on what the technology could do. AI technology is really great at processing large amounts of information, and the US military has hit over a thousand targets in Iran. 
They need to then find ways to process information about those targets -- satellite imagery, for example, of the targets they've hit -- looking at new potential targets, prioritizing those, processing information, and using AI to do that at machine speed rather than human speed. Do we know any more about how the military may have used AI in, say, Venezuela on the attack that brought Nicolas Maduro to Brooklyn, of all places? Because we've recently found out that AI was used there, too. What we do know is that Anthropic's AI tools have been integrated into the US military's classified networks. They can process classified information to process intelligence, to help plan operations. We've had this sort of tantalizing detail that these tools were used in the Maduro raid. We don't know exactly how. We've seen AI technology in a broad sense used in other conflicts, as well -- in Ukraine, in Israel's operations in Gaza, to do a couple different things. One of the ways that AI is being used in Ukraine in a different kind of context is putting autonomy onto drones themselves. When I was in Ukraine, one of the things that I saw Ukrainian drone operators and engineers demonstrate is a little box, like the size of a pack of cigarettes, that you could put onto a small drone. Once the human locks onto a target, the drone can then carry out the attack all on its own. And that has been used in a small way. We're seeing AI begin to creep into all of these aspects of military operations in intelligence, in planning, in logistics, but also right at the edge in terms of being used where drones are completing attacks. How about with Israel and Gaza? 
There's been some reporting about how the Israel Defense Forces have used AI in Gaza -- not necessarily large-language models, but machine-learning systems that can synthesize and fuse large amounts of information (geolocation data, cell phone data and connections, social media data) very quickly to develop targeting packages, particularly in the early phases of Israel's operations. But it raises thorny questions about human involvement in these decisions. And one of the criticisms that came up was that humans were still approving these targets, but that the volume of strikes and the amount of information that needed to be processed was such that human oversight in some cases may have been more of a rubber stamp. The question is: Where does this go? Are we on a trajectory where, over time, humans get pushed out of the loop, and we see, down the road, fully autonomous weapons making their own decisions about whom to kill on the battlefield? That's the direction things are headed. No one's unleashing the swarm of killer robots today, but the trajectory is in that direction. We saw reports that a school was bombed in Iran, where [175 people] were killed -- a lot of them young girls, children. Presumably that was a mistake made by a human. Do we think that autonomous weapons will be capable of making that same mistake, or will they be better at war than we are? This question of "will autonomous weapons be better than humans" is one of the core issues of the debate surrounding this technology. Proponents of autonomous weapons will say people make mistakes all the time, and machines might be able to do better. Part of that depends on how hard the militaries using this technology are trying to avoid mistakes.
If militaries don't care about civilian casualties, then AI can allow militaries to simply strike targets faster, in some cases even commit atrocities faster, if that's what militaries are trying to do. I think there is this really important potential here to use the technology to be more precise. And if you look at the long arc of precision-guided weapons, let's say over the last century or so, it's pointed towards much more precision. If you look at the example of the US strikes in Iran right now, it's worth contrasting this with the widespread aerial bombing campaigns against cities that we saw in World War II, for example, where whole cities were devastated in Europe and Asia because the bombs weren't precise at all, and air forces dropped massive amounts of ordnance to try to hit even a single factory. The possibility here is that AI could, over time, make it easier for militaries to hit military targets and avoid civilian casualties. Now, if the data is wrong, and they've got the wrong target on the list, they're going to hit the wrong thing very precisely. And AI is not necessarily going to fix that. On the other hand, I saw a piece of reporting in New Scientist that was rather alarming. The headline was, "AIs can't stop recommending nuclear strikes in war game simulations." They wrote about a study in which models from OpenAI, Anthropic, and Google opted to use nuclear weapons in simulated war games in 95 percent of cases, which I think is slightly more often than we humans typically resort to nuclear weapons. Should that be freaking us out? It's a little concerning. Happily, as near as I could tell, no one is connecting large-language models to decisions about using nuclear weapons. But I think it points to some of the strange failure modes of AI systems. They tend toward sycophancy. They tend to simply agree with everything that you say.
They can do it to the point of absurdity sometimes where, you know, "that's brilliant," the model will tell you, "that's a genius thing." And you're like, "I don't think so." And that's a real problem when you're talking about intelligence analysis. Do we think ChatGPT is telling Pete Hegseth that right now? I hope not, but his people might be telling him that. You start with this ultimate "yes men" phenomenon with these tools, where it's not just that they're prone to hallucinations, which is a fancy way of saying they make things up sometimes, but also the models could really be used in ways that either reinforce existing human biases, that reinforce biases in the data, or that people just trust them. There's this veneer of, "the AI said this, so it must be the right thing to do." And people put faith in it, and we really shouldn't. We should be more skeptical.
[35]
Anthropic's Claude AI being used in Iran war by U.S. military, sources say
Camilla Schick is a British journalist in D.C. and CBS News' foreign affairs producer, covering U.S. foreign relations, the State Department and national security. Two sources familiar with the U.S. military's use of artificial intelligence confirm that the U.S. used Anthropic's Claude AI model over the weekend for the attack on Iran -- and is still using it. The Pentagon has not said exactly how the AI tool is being deployed, but it is in use despite a government-wide ban on the technology imposed after the company's dispute last week with the Pentagon. The conflict centered on Anthropic's push for guardrails that would explicitly prevent the military from using Claude to conduct mass surveillance on Americans or to power fully autonomous weapons. The Pentagon demanded the ability to use Claude for "all lawful purposes" and contended that Anthropic's usage concerns were not material because it's already illegal for the Pentagon to conduct mass surveillance of Americans, and internal policies restrict the military from using fully autonomous weapons. President Trump announced Friday that he's ordering federal agencies to stop using Anthropic's technology, allowing them six months to phase it out, and Defense Secretary Pete Hegseth has declared the company a supply chain risk. National security news site Defense One, citing multiple sources familiar with the DoD's spat with Anthropic, reported it could take three months or longer for the Pentagon to replace Claude's capabilities with another AI platform. Pentagon chief technology officer Emil Michael told CBS News the Defense Department uses Claude for synthesizing documents and making logistics and supply chains more efficient, among other tasks.
[36]
Trump tells US govt to 'immediately' stop using Anthropic AI tech
Washington (United States) (AFP) - President Donald Trump told the US government Friday to "immediately" stop using Anthropic's technology after the AI startup rejected the Pentagon's demand that it agree to unconditional military use of its Claude models. Anthropic insists its technology should not be used for the mass surveillance of US citizens or deployed in fully autonomous weapons systems, while the Pentagon says it operates within the law and that contracted suppliers cannot set terms on how their products are employed. "I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology. We don't need it, we don't want it, and will not do business with them again!" Trump said in a post on his Truth Social platform. "There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic's products, at various levels," the US president said, referring to the Department of Defense. "Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow," Trump added. Anthropic did not immediately reply to a request for comment. The Pentagon had said Anthropic must agree to comply with its demand by 5:01 pm (22:01 GMT) Friday or face compulsion under the Defense Production Act. The Cold War-era law, last invoked during the Covid pandemic, grants the federal government sweeping powers to direct private industry toward national security priorities. The Pentagon also threatened to designate Anthropic a supply chain risk -- a label typically reserved for companies from adversary nations -- which could severely damage its ability to work with the US government and harm its broader reputation. 
'Stand together' Amid the dispute, Anthropic chief executive Dario Amodei said Thursday that "these threats do not change our position: we cannot in good conscience accede to their request." The conflict has drawn a show of solidarity from others in the industry, with hundreds of employees from AI giants Google DeepMind and OpenAI urging their companies to rally behind Anthropic in an open letter titled "We Will Not Be Divided." "We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight," the letter said. "They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand," it added. OpenAI CEO Sam Altman told employees Thursday that he too was seeking an agreement with the Pentagon that would include red lines similar to Anthropic's, and that he hoped to help broker a resolution. "We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions," he wrote in a memo to employees, according to US media. Industry representatives in Washington had pressed hard for a negotiated outcome to the Anthropic-Pentagon dispute, warning that the confrontation risks damaging the AI sector as a whole. "Decisions about military AI cannot be settled through ad hoc standoffs between the Pentagon and individual firms," said Daniel Castro, vice president of the Information Technology and Innovation Foundation. "If certain AI capabilities are deemed essential for national defense, those expectations should be debated openly and written into law."
[37]
Trump bans Anthropic from government use
After months of increasingly heated rhetoric between the Defense Department and leading AI company Anthropic over the military's use of its systems, President Donald Trump announced Friday afternoon that he was banning federal agencies from using Anthropic's services. "I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology. We don't need it, we don't want it, and will not do business with them again!" Trump wrote in a post on Truth Social. Anthropic did not immediately reply to a request for comment. The company, led by CEO Dario Amodei, has made clear in months of contract negotiations with the Pentagon that it will not allow its AI systems to be harnessed for domestic surveillance or direct use in lethal autonomous weapons. The Pentagon has maintained that it must be allowed to employ its AI systems for "any lawful use," which may violate Anthropic's red lines. "I believe deeply in the existential importance of using AI to defend the United States and other democracies," Amodei wrote in a statement Thursday night, but "using these systems for mass domestic surveillance is incompatible with democratic values." Amodei added that "today, frontier AI systems are simply not reliable enough to power fully autonomous weapons." In a series of tweets late Thursday night, Undersecretary of Defense Emil Michael wrote on X that Amodei "is a liar and has a God-complex. He wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk." Earlier Thursday, Pentagon Chief Spokesperson Sean Parnell wrote on X that the Pentagon's desire to use Anthropic's model for all lawful purposes "is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations." 
Anthropic currently has a contract worth up to $200 million with the Pentagon to "advance responsible AI in defense operations" and works with data analytics company Palantir to provide its AI services on classified defense and intelligence networks. Throughout Friday, a growing chorus of lawmakers had called on the parties to deescalate their feud and come to an amicable solution, contrasting with the relative silence from Anthropic and the Pentagon in the hours before the deadline. In a letter to Defense Secretary Pete Hegseth made public Friday afternoon, Sens. Ed Markey, D-Mass., and Chris Van Hollen, D-Md., said the Pentagon's "threats to punish an American AI company for refusing to surrender basic safeguards on the use of its AI model represent a chilling abuse of government power." Rep. George Whitesides, D-Ca., told Hegseth he was "concerned that your threats to compel changes to safety policies on an accelerated timeline could push the Department toward broader deployment without sufficient guardrails" in a letter released Friday morning. Unlike many major defense technologies, today's leading AI systems have been developed primarily in the private sector, by companies like Anthropic, OpenAI, and Google. The increasing capabilities of these systems have forced the Pentagon to bargain with Anthropic over its usage policies or opt for less proven services. Until this week, Anthropic was the only leading AI company that had been cleared to offer services on classified networks. In a memo sent to employees Thursday evening and viewed by NBC News, OpenAI CEO Sam Altman said that his company would largely follow Anthropic's approach if it were in the same position with the Pentagon. "We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines," he wrote.
Altman added that "this is no longer just an issue between Anthropic and the DoW; this is an issue for the whole industry and it is important to clarify our stance." It's unclear how other leading AI companies would respond. Google, Meta and xAI did not respond to a request for comment.
[38]
US military reportedly used Claude in Iran strikes despite Trump's ban
Trump calls Anthropic a 'Radical Left AI company run by people who have no idea what the real World is all about' The US military reportedly used Claude, Anthropic's AI model, to inform its attack on Iran despite Donald Trump's decision, announced hours earlier, to sever all ties with the company and its artificial intelligence tools. The use of Claude during the massive joint US-Israel bombardment of Iran that began on Saturday was reported by the Wall Street Journal and Axios. It underlines the complexity of the US military withdrawing powerful AI tools from its missions when the technology is already intricately embedded in operations. According to the Journal, US military command used the tools for intelligence purposes, as well as to help select targets and carry out battlefield simulations. On Friday, just hours before the Iran attack began, Trump ordered all federal agencies to stop using Claude immediately. He denounced Anthropic on Truth Social as a "Radical Left AI company run by people who have no idea what the real World is all about". The flaming row was triggered by the use of Claude by the US military in its raid to capture the president of Venezuela, Nicolás Maduro, in January. Anthropic objected, pointing to its terms of use which do not allow Claude to be applied for violent ends, to develop weapons, or for surveillance. Since then relations between Trump, the Pentagon and the AI company have steadily worsened. In a lengthy post on X on Friday, the defense secretary Pete Hegseth accused Anthropic of "arrogance and betrayal", adding that "America's warfighters will never be held hostage by the ideological whims of Big Tech". Hegseth demanded full and unrestricted access to all Anthropic's AI models for every lawful purpose. But the defense secretary also gave a nod to the difficulty of rapidly detaching military systems from the AI tool, given how widely used they have become. 
He said that Anthropic would continue to provide services "for a period of no more than six months to allow for a seamless transition to a better and more patriotic service". Since the break with Anthropic, the rival company OpenAI has stepped into the breach. Sam Altman, OpenAI's CEO, said he had reached an agreement with the Pentagon for the use of the company's tools, which include ChatGPT, on its classified network.
[39]
Former Trump official: Anthropic order "attempted corporate murder"
Why it matters: It's a recognition of the stakes in the administration's effort to cull what Trump calls a "woke" company -- one that happens to have lately dominated the field. Driving the news: Anthropic rejected the Pentagon's demand to lift all safeguards on the military's use of its model, Claude, due to its concerns about the use of AI for mass domestic surveillance and the development of weapons that fire without human involvement. * President Trump posted on Truth Social about the move, calling Anthropic a "radical left" company. * The move to classify Anthropic as a "supply chain risk" could require companies to cut ties with the AI lab. What they're saying: "Nvidia, Amazon, Google will have to divest from Anthropic if Hegseth gets his way. This is simply attempted corporate murder," Dean Ball, former AI adviser in the Trump administration, said in a post on X. * "I could not possibly recommend investing in American AI to any investor; I could not possibly recommend starting an AI company in the United States," he added. * "This is obviously a psychotic power grab. It is almost surely illegal," he said in another post. For the record: Amazon, Google and Nvidia did not immediately respond to Axios requests for comment. Zoom in: Any potential forced cutting of ties with Anthropic could hinder the cash flows and AI ambitions of the top tech companies that are currently powering the U.S. stock market. * Amazon, Google and Nvidia all have deals and partnerships with Anthropic and expect returns from their investments. * Amazon, for example, an early Anthropic investor, has turned an initial $8 billion investment into what now amounts to nearly $70 billion. Zoom out: The pushback from the administration flies in the face of what Wall Street wants for tech companies: freedom. * "The United States federal government is now, by an extremely wide margin, the most aggressive regulator of artificial intelligence in the world," Ball added. 
* Investors have long argued that the U.S. is the best country in the world to invest in AI companies because of the perceived benefits of capitalism and lack of strict tech regulation. The bottom line: AI investors and companies seeking a pro-business, low-regulation backdrop for their technological ambitions may be facing a harsh new reality.
[40]
Trump orders federal agencies to stop using Anthropic products - SiliconANGLE
Trump orders federal agencies to stop using Anthropic products U.S. President Donald Trump today ordered federal agencies to stop using technology from Anthropic PBC. The ban will also have to be upheld by military suppliers, U.S. Defense Secretary Pete Hegseth announced separately on X. Both moves are related to disagreements over the safety guardrails that Anthropic ships with its models. Trump wrote on Truth Social that "I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology." Hegseth's X post, meanwhile, states that the Defense Department has designated the company as a supply chain risk to national security. That means U.S. military contractors, suppliers and partners may not conduct "any commercial activity" with Anthropic. The saga began last June when the company won a $200 million artificial intelligence contract from the Pentagon. At the time, Anthropic stated that it would collaborate with defense officials to explore national security use cases for AI. Rival OpenAI Group PBC won a similar contract around the same time. Last month, Reuters reported that Anthropic and the Pentagon had clashed over Claude's safety guardrails. Anthropic doesn't permit its AI to be used for mass surveillance of Americans or the development of autonomous weapons. The Pentagon took issue with the company's policy. Officials reportedly demanded that Anthropic make Claude available for "all lawful purposes." Earlier this week, the Pentagon stated that it had offered concessions to resolve the matter. Officials invited Anthropic to the Defense Department's ethics board and proposed taking certain other steps. The AI provider didn't accept the offer, stating that "new language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will." The disagreement came to a head on Tuesday when Hegseth summoned Anthropic chief executive Dario Amodei to a meeting. 
According to Axios, Hegseth gave the company until today to revise its policy. He reportedly stated that the Defense Department could designate Anthropic as a supply chain risk or order it to adapt its models' guardrails under the Defense Production Act. Now that the Pentagon has opted for the former option, Anthropic's models will be phased out of government networks within six months. In a blog post published ahead of the move, Amodei wrote that "we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required."
[41]
US Military Using Claude to Select Targets in Iran Strikes
The ongoing attacks on the Islamic Republic of Iran, launched by a joint coalition of US and Israeli military forces, have so far claimed 555 Iranian lives, including 165 deaths from an attack on an elementary school in Southern Iran. As the Wall Street Journal reported while the attacks unfolded, the military strike force selected some of its targets with help from Anthropic's Claude chatbot. According to the paper, Anthropic's large language model, Claude, is the key "AI tool" used by US Central Command in the Middle East. Its tasks include assessing intelligence, running simulated war games, and even identifying military targets -- in short, helping military leaders plan attacks that have already claimed hundreds of lives. Anthropic's role in the devastating attacks might come as news to anyone who thought the company's ethical red lines precluded it from any military work whatsoever. The company and its CEO, Dario Amodei, have been embroiled in a messy conflict with the Trump administration over two particular moral boundaries: the use of Claude for surveillance of US citizens, and for fully autonomous lethal weaponry. It appears that using Claude to select targets, though, isn't brushing up against the bot's ethical guardrails. That's striking, because Anthropic has spent the latter part of February locked in conflict with the Pentagon over the use of Claude. Last week, the Pentagon -- which currently uses Claude throughout its classified systems -- set a deadline for Anthropic to drop those dual red lines of surveillance and fully autonomous weaponry. Anthropic let that deadline pass without caving, establishing what many understood as a principled stance against the Trump administration's militarism. Yet as Pulitzer Prize-winning national security journalist Spencer Ackerman observed, it's important to note what Anthropic's ethical lines ignored when it inked its deal with the military in the first place. 
"Amodei, it is highly conspicuous, doesn't register building a surveillance panopticon of foreigners as a problem," Ackerman wrote. "The time to worry about everything ostensibly concerning Amodei was before signing the contract that Amodei didn't wish to abandon. America is in such steep decline that we don't even make Oppenheimers like we used to." "When you take Doctor Doom's money to provide him a lathe to construct components for anthropomorphic robots," Ackerman scathed, "do you not understand that he is going to build Doombots?"
[42]
US reportedly used Anthropic's AI for its attack on Iran
* President Trump ordered a halt to all federal use of Anthropic technology.
* The US conducted a major air attack on Iran using Anthropic AI tools hours after the order.
* The ban highlights a conflict between immediate operational needs and policy decisions regarding AI vendors.
* The Department of Defense is moving to replace Anthropic with competitors despite the transition requiring months.
* The order followed strong disagreements between the Department of Defense and the AI company.
* President Trump stated there is a six-month phase-out period for agencies like the Department of War.
* According to The Wall Street Journal, the US military used Anthropic's AI for the operation against Iran.
* The military previously used the company's AI for the capture of Venezuelan president Nicolás Maduro.
* The Department of Defense has reached deals with xAI and OpenAI to use their models.
* Federal agencies are expected to eventually move away from using Claude.
[43]
US Military Used Anthropic AI in Iran Strike Despite Trump Ban: Report
The US military reportedly used Anthropic during a major air strike on Iran, only hours after President Donald Trump ordered federal agencies to halt use of the company's systems. Military commands, including US Central Command (CENTCOM) in the Middle East, used Anthropic's Claude AI model for operational support, according to people familiar with the matter cited by The Wall Street Journal. The tool has reportedly assisted with intelligence analysis, identifying potential targets and running battlefield simulations. The incident shows how deeply advanced AI systems have become embedded in defense operations. Even as the administration moved to sever ties with the company, Claude remained integrated into military workflows. On Friday, the Trump administration instructed agencies to stop working with the company and directed the Defense Department to treat it as a potential security risk. The order came after contract talks broke down, with Anthropic refusing to grant unrestricted military use of its AI for any lawful scenario requested by defense officials. Anthropic had previously secured a multiyear Pentagon contract worth up to $200 million alongside several major AI labs. Through partnerships involving Palantir and Amazon Web Services, Claude became approved for classified intelligence and operational workflows. The system was reportedly also involved in earlier operations, including a January mission in Venezuela that resulted in the capture of President Nicolás Maduro. Tensions intensified after Defense Secretary Pete Hegseth demanded the company permit unrestricted military use of its models. Anthropic CEO Dario Amodei rejected the request, describing certain applications as ethical boundaries the company would not cross, even if it meant losing government business. 
In response, the Pentagon began lining up replacement providers, reaching an agreement with OpenAI to deploy its AI models on classified military networks. During an interview on Saturday, Anthropic CEO Dario Amodei said the company opposes the use of its AI models for mass domestic surveillance and fully autonomous weapons, responding to a US government directive that labeled the firm a defense "supply chain risk" and barred contractors from using its products. He argued that certain applications cross fundamental boundaries, emphasizing that military decisions should remain under human control rather than be delegated entirely to machines.
[44]
Trump orders federal agencies to stop using Anthropic
Washington | President Donald Trump said on Friday (Saturday AEDT) he was ordering all federal agencies to phase out use of Anthropic technology after the company's unusually public dispute with the Pentagon over artificial intelligence safety. Trump's comments came just over an hour before the Pentagon's deadline for Anthropic to allow unrestricted military use of its AI technology or face consequences -- and nearly 24 hours after CEO Dario Amodei said his company "cannot in good conscience accede" to the Defence Department's demands.
[45]
Anthropic CEO Dario Amodei calls White House's actions "retaliatory and punitive"
Faris Tanyos is a news editor for CBSNews.com, where he writes and edits stories and tracks breaking news. He previously worked as a digital news producer at several local news stations up and down the West Coast. Anthropic CEO Dario Amodei said in an exclusive interview with CBS News Friday that the Pentagon's decision to designate the AI company a supply chain risk was "retaliatory and punitive." Hours earlier, Defense Secretary Pete Hegseth deemed the AI company a "supply chain risk to national security," which restricts military contractors from doing business with Anthropic. The move came after Anthropic refused to give the military unfettered access to its AI model. Amodei told CBS News senior business and technology correspondent Jo Ling Kent that the actions were "unprecedented" and that "it was made very clear in some of their statements, in some of their language, that this was retaliatory and punitive." He noted that Hegseth's designation was the first ever issued for a U.S. company. He said the company sought to draw "red lines" in the government's use of its technology because "we believe that crossing those lines is contrary to American values, and we wanted to stand up for American values." "Disagreeing with the government is the most American thing in the world," Amodei said. "And we are patriots. In everything we have done here, we have stood up for the values of this country." The CEO said that if he could speak to President Trump directly about the dispute, he would emphasize that Anthropic is made up of "patriotic Americans." "Everything we have done has been for the sake of this country, for the sake of supporting U.S. national security," Amodei said. "Our leaning forward in deploying our models with the military was done because we believe in this country." In July 2025, the Pentagon awarded Anthropic a $200 million contract to develop AI capabilities that would advance U.S. national security. 
However, the Defense Department and the artificial intelligence company have been in negotiations over concerns from the AI startup that its model, Claude, could potentially be used for surveillance of Americans and for the development and use of autonomous weapons. Anthropic sought certain guardrails over the Pentagon's use of Claude, which it said were rejected. On Friday, Mr. Trump in a social media post ordered all federal agencies to "immediately" halt their use of Anthropic's technology. Mr. Trump said some agencies, including the Defense Department, would have six months to phase out their use of Anthropic. Hegseth followed that up later Friday by declaring in his own social media post that Anthropic would be deemed a "supply chain risk to national security," and said that no contractor who does business with the Pentagon can continue to conduct commercial activity with Anthropic. The Pentagon had earlier this week given Anthropic a deadline of 5:01 p.m. to either reach a deal or lose its government contracts. Emil Michael, the Pentagon's chief technology officer, told CBS News on Thursday that the military had "made some very good concessions" to Anthropic. He later added that "at some level, you have to trust your military to do the right thing."
[46]
Trump news at a glance: president blasts AI company for standing firm on safety guardrails US military wants lifted
Anthropic has resisted allowing its Claude AI system to be used for mass surveillance or autonomous weapons systems - key US politics stories from Friday, 27 February at a glance Donald Trump said Friday he will direct all federal agencies to "IMMEDIATELY CEASE" all use of artificial intelligence technology from Anthropic. The US Department of Defense and Anthropic hit an impasse with neither side backing down as a deadline for an agreement lapsed on Friday afternoon. The Pentagon had demanded the AI company loosen ethical guidelines on its systems or face severe consequences. US defense officials have for weeks been pushing for unfettered access to the company's Claude AI system's capabilities that they say can help protect the country, while Anthropic has resisted allowing its product to be used for mass surveillance or autonomous weapons systems that can kill people without human input. "Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner," Anthropic chief executive Dario Amodei said in a statement this week. "However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values." Trump evidently did not appreciate Anthropic's red line. "WE will decide the fate of our Country - NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about," Trump wrote on Truth Social.
[47]
Trump moves to blacklist Anthropic AI from all government work
The big picture: Anthropic rebuffed the Pentagon's demand to lift all safeguards on the military's use of its model, Claude, due to its concerns about the use of AI for mass domestic surveillance and the development of weapons that fire without human involvement. Driving the news: The Pentagon is now following through on its threat to declare Anthropic a "supply chain risk," a penalty usually reserved for companies from adversarial countries, such as Chinese tech giant Huawei. * The department plans to sever its contract with Anthropic, valued at up to $200 million, and require companies it works with to certify they don't use Claude in those workflows. Anthropic will also be barred from all other government work. * There will be a six-month wind down period to allow the Pentagon, its customers and other government agencies to onboard alternatives to Claude. * "I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology," Trump wrote on Truth Social. The decision is particularly extraordinary because Claude is the only AI model currently used in the military's classified systems. * It was used in the operation to capture Nicolás Maduro and could conceivably be used in a potential military operation in Iran. * Defense officials praised Claude's capabilities in conversations with Axios, with one admitting it would be a "huge pain in the ass" to disentangle. * The decision is also complicated for AI software firm Palantir, which uses Claude to power its most sensitive work with the military and will likely now need to strike a deal with one of Anthropic's competitors. Trump wrote that the U.S. would "NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS." * "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution," he wrote. 
The other side: Around 24 hours earlier, Anthropic CEO Dario Amodei had rejected what the Pentagon had called its "best and final offer" in a public statement laying out the company's concerns and stating "we cannot in good conscience accede to their request." * Emil Michael, the senior Pentagon official who has been steering the negotiations with Anthropic and other AI firms, responded by calling Amodei a "liar" with a "God complex" who was "ok putting our nation's safety at risk." * In his statement, Amodei said that if the Pentagon should "choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions." What to watch: Anthropic has not yet said whether it will attempt to fight the designation in court. * The company has hundreds of millions of dollars on the line, given the possibility that many firms that work with the government, or hope to in future, will steer clear of Claude. * Anthropic has been growing at a staggering rate and establishing dominance in some key enterprise use cases for AI. * Amodei and his company have garnered widespread praise in the past 24 hours for their principled stand. It's not yet clear how expensive that stand will be. The other side: The Pentagon argues that there are many gray areas around what constitutes mass surveillance or autonomous weaponry, and that it's unworkable to have to litigate individual cases with a private company. * Their argument is that once the military buys a tool, it has its own standards and procedures to determine whether and how to use it. They therefore demanded all AI firms make their models available for "all lawful purposes." * Defense Secretary Pete Hegseth has railed against "woke AI," and the Trump administration has grown increasingly antagonistic toward Anthropic in particular, even as the military has grown more reliant on its model.
* "The only reason we're still talking to these people is we need them and we need them now. The problem for these guys is they are that good," a Defense official told Axios ahead of the tense meeting between Hegseth and Amodei on Tuesday. What's next: Elon Musk's xAI recently signed an agreement to let the military use its model, Grok, in classified systems. Sources say it's unlikely to be a like-for-like replacement for Claude. * Google's Gemini and OpenAI's ChatGPT are both available in unclassified systems, and the Pentagon is accelerating conversations about bringing them into the classified space. * Hundreds of employees from Google and OpenAI have signed a petition in the past 24 hours calling on their companies to mirror Anthropic's position. What to watch: OpenAI CEO Sam Altman said in a memo to staff on Thursday night that the company will uphold the same red lines as Anthropic on surveillance and autonomous weapons, but still hopes to strike a deal with the Pentagon.
[48]
Anthropic AI Aided U.S. Attack in Iran, Despite Trump Ban
The U.S. Central Command (CENTCOM) in the Middle East reportedly deployed Claude to assess intelligence, identify targets, and simulate potential battle plans, according to insiders. Observers say Claude's utilization in the wake of the proclaimed ban demonstrates that AI is so deeply rooted in military maneuvers that weeding it out may take far longer than six months, the deadline imposed by Trump. This marks the latest salvo in the contentious relationship between the Defense Department and Anthropic. The company's $200 million contract with the Pentagon, signed last summer, included a provision that prohibits the military from using Claude to guide lethal, fully autonomous weapons or conduct surreptitious surveillance of Americans on a wide scale. Defense Secretary Pete Hegseth, however, demanded full access to Anthropic's AI technology and said the company has no right to tell the military how to conduct its affairs. The confrontation came to a head on Friday, when Hegseth announced his plan to designate Anthropic a "supply-chain risk to national security," a move that would effectively prohibit all federal offices from working with the company. "America's warfighters will never be held hostage by the ideological whims of Big Tech," Hegseth said in a post on X. "This decision is final."
[49]
Trump directs federal agencies to stop using Anthropic's AI tech
WASHINGTON - President Donald Trump said Feb. 27 that he was directing every federal agency to immediately cease work with artificial intelligence lab Anthropic, adding there would be a six-month phaseout for the Defense Department and other agencies that use the company's products. "I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology. We don't need it, we don't want it, and will not do business with them again!" Trump said in a post on Truth Social. Trump's directive came during a weeks-long feud between the Pentagon and the San Francisco-based startup over concerns about how the military could use AI at war. Spokespeople for Anthropic, which has a $200 million contract with the Pentagon, did not immediately respond to a request for comment. Trump's decision stopped short of threats issued by the Pentagon, including that it could invoke the Defense Production Act to require Anthropic's compliance. Pentagon chief Pete Hegseth said on Feb. 27 that he would direct the Department of War to designate Anthropic a supply-chain risk for national security, a step previously used against businesses tied to foreign adversaries. But Trump vowed further action if Anthropic did not cooperate with the phaseout. The setback comes as AI leader Anthropic raced to win a fierce competition selling novel technology to businesses and government, particularly for national security, ahead of its widely expected initial public offering. The company has said it has not finalized an IPO decision. At the same time, the battle over technological guardrails had raised concerns that the Department of Defense would follow U.S. law but little other constraint when deploying AI for national-security missions, regardless of the safety or ethics terms of service embraced by the technology's developers.
Anthropic had sought guarantees that its AI would not be used for fully autonomous weapons or for mass domestic surveillance - applications in which the Pentagon has said it had no interest. (Reporting by Ryan Patrick Jones in Toronto, Andrea Shalal in Washington, Ismail Shakil in Toronto and Jeffrey Dastin in San Francisco; Writing by Daphne Psaledakis; Editing by Caitlin Webber and Matthew Lewis)
[50]
Trump bans Anthropic from government use after company refuses to lift AI safeguards
President Trump fired off an angry missive today in a Truth Social post, which was then reposted by US Secretary of War Pete Hegseth on X.com. The President directed every US government agency to immediately cease using any AI tech from Anthropic (the makers of Claude). The move comes after a week-long standoff between Trump's administration and Anthropic, in which the former wanted the latter to allow its technology to enable two things: mass domestic surveillance of Americans and the use of fully autonomous military weapons without human involvement. According to CBS News, the Pentagon wanted full access to Claude's abilities without restrictions for all lawful purposes. Anthropic's CEO Dario Amodei rejected the offer the day before Friday's 5pm federal deadline in a long post on Anthropic's site, saying, "we believe AI can undermine, rather than defend, democratic values," and calling out two specific "red lines" that it would not allow its technology to cross: mass domestic surveillance and fully autonomous weapons. The CEO also pointed out that surveillance using AI systems is "a practice the Intelligence Community has acknowledged raises privacy concerns, and that has generated bipartisan opposition in Congress." President Trump's response, then, said, "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY."
The President continued, essentially firing Anthropic from the federal government. "Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology," he said in the post. "We don't need it, we don't want it, and will not do business with them again!" President Trump noted a six-month phase-out plan for agencies like the Department of War, and then warned, "Anthropic better get their act together, and be helpful during this phase-out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow." Complicating the ban, the data analytics firm Palantir relies on Claude in its sensitive classified work with the military, which could create even more ripple effects, as noted by Axios. OpenAI's CEO told CNBC in an interview before the President's Truth Social post that he also shares Anthropic's views on surveillance and autonomous weapons, as reported by NPR. This story is developing.
[51]
Anthropic calls supply chain risk designation 'unprecedented,' 'legally unsound'
Anthropic called the Pentagon's decision Friday to designate the AI company as a supply chain risk an "unprecedented action," arguing it is "legally unsound" and would "set a dangerous precedent." "Designating Anthropic as a supply chain risk would be an unprecedented action -- one historically reserved for US adversaries, never before publicly applied to an American company," Anthropic said in a statement. "We are deeply saddened by these developments." "We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government," it added. Defense Secretary Pete Hegseth said Friday that the Pentagon was labeling Anthropic as a supply chain risk following a dispute over the company's terms of use for its AI models. The Defense Department (DOD) pushed for language that would allow it to use the technology for "all lawful purposes," while Anthropic argued that there needed to be restrictions on mass domestic surveillance and fully autonomous lethal weapons. Hegseth, who made the announcement on the social platform X, said that "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic" as a result of the designation. His announcement came shortly after President Trump said he was directing federal agencies to "immediately cease" using the company's technology. "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution," the president wrote in a post on Truth Social. However, Anthropic noted in its statement late Friday that it had "not yet received direct communication" from the Pentagon or the White House about the status of negotiations. 
"We have tried in good faith to reach an agreement with the Department of War, making clear that we support all lawful uses of AI for national security aside from the two narrow exceptions above," the company said. "To the best of our knowledge, these exceptions have not affected a single government mission to date." The AI firm added that it plans to challenge any supply chain risk designation in court. Despite Hegseth's comments, it also argued that the Defense secretary "does not have the statutory authority" to bar anyone who does business with the military from doing business with Anthropic. Instead, it suggested the law can only extend to the use of its AI models as part of DOD contracts and "cannot affect how contractors use Claude to serve other customers." "No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons," Anthropic added.
[52]
Trump Says He Is Directing Federal Agencies to Cease Use of Anthropic Technology
WASHINGTON, Feb 27 (Reuters) - U.S. President Donald Trump on Friday said he was directing every federal agency to immediately cease all use of Anthropic's technology, adding there would be a six-month phase out for agencies such as the Defense Department that use the company's products. "I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology. We don't need it, we don't want it, and will not do business with them again!" Trump said in a post on Truth Social. Trump's directive comes amid a feud between the Pentagon and top artificial intelligence lab Anthropic over concerns about how the military could use AI at war. Spokespeople for Anthropic did not immediately respond to a request for comment. (Reporting by Ryan Jones, Ismail Shakil and Jeffrey Dastin; Writing by Daphne Psaledakis; Editing by Caitlin Webber)
[53]
Hegseth declares Anthropic a supply chain risk, barring military contractors from doing business with AI giant
Joe Walsh is a senior editor for digital politics at CBS News. Defense Secretary Pete Hegseth deemed artificial intelligence firm Anthropic a "supply chain risk to national security" on Friday, following days of increasingly heated public conflict over the company's effort to place guardrails on the Pentagon's use of its technology. Hegseth declared on X that effective immediately, "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." The decision could have a wide-ranging impact, given the sheer number of companies that contract with the Pentagon. "America's warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final," Hegseth wrote. President Trump announced earlier Friday that all federal agencies must "immediately" stop using Anthropic, though the Defense Department and certain other agencies can continue using its AI technology for up to six months while transitioning to other services. CBS News has reached out to Anthropic for comment. The decision to cut off Anthropic came after a dispute with the Pentagon that highlighted sweeping disagreements about the role of AI in national security and the potential risks that the powerful technology could pose. The company -- which is the only AI firm whose model is deployed on the Pentagon's classified networks -- has sought guardrails that prevent its technology from being used to conduct mass surveillance of Americans or carry out military operations without human approval. But the Pentagon insisted any deal should allow use of Anthropic's Claude model for "all lawful purposes." The Pentagon had given Anthropic a deadline of Friday at 5:01 p.m. to either reach an agreement or lose out on its lucrative contracts with the military.
The military's position is that it's already illegal for the Pentagon to conduct mass surveillance of Americans, and internal policies restrict the military from using fully autonomous weapons. As talks between the two sides broke down this week, Pentagon officials publicly accused the company of seeking to impose its own views onto the military. Hegseth called Anthropic "sanctimonious" and arrogant on Friday, and accused it of trying to "strong-arm the United States military into submission." "Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable," Hegseth alleged. But Anthropic CEO Dario Amodei has argued that guardrails are necessary because Claude is not reliable enough to power fully autonomous weapons and a powerful AI model could raise serious privacy concerns. He says the company understands that military decisions are made by the Pentagon and has never tried to limit the use of its technology "in an ad hoc manner." "However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values," Amodei said in a statement Thursday. "Some uses are also simply outside the bounds of what today's technology can safely and reliably do." Amodei has been outspoken for years about the potential risks posed by unchecked AI technology, and has backed calls for safety and transparency regulations. On Thursday, the eve of the military's deadline to reach a deal, the Pentagon's chief technology officer Emil Michael told CBS News that the Pentagon had made concessions, offering written acknowledgements of the federal laws and internal military policies that restrict mass surveillance and autonomous weapons. "At some level, you have to trust your military to do the right thing," said Michael, who also noted, "we'll never say that we're not going to be able to defend ourselves in writing to a company." Anthropic called that offer inadequate.
A company spokesperson said the new language was "paired with legalese that would allow those safeguards to be disregarded at will."
[54]
The new kill chain: America is using AI to bomb targets in Iran
Iran war: The American military's engagement in Iran serves as a proving ground for cutting-edge artificial intelligence. By enhancing intelligence analysis and optimising target selection, AI enables commanders to act decisively and swiftly. Nonetheless, the ultimate authority rests with human leaders, who oversee and validate each strike. When the US launched its military campaign against Iran, called Operation Epic Fury, the conflict quickly became something more than a conventional war. The operation is emerging as one of the most consequential real-world tests of artificial intelligence (AI) in modern warfare. From the earliest stages of planning to the execution of strikes on the battlefield, US forces are relying on AI systems to process intelligence, recommend targets and compress what used to be weeks of operational planning into hours or even minutes. The result, according to multiple reports, is a radically accelerated military decision cycle in which algorithms help commanders analyse data and guide operations at unprecedented speed. US Central Command used Anthropic's Claude AI for "intelligence assessments, target identification and simulating battle scenarios" during the strikes on the country, according to a report in the Wall Street Journal. In the Iran campaign, AI technology has played a critical role by supporting the initial screening of incoming data, allowing human analysts to focus on higher-level analysis and verification, according to Captain Timothy Hawkins, a Central Command spokesperson. "Centcom uses a variety of AI tools, and that is exactly what they are, tools, to assist human experts in a rigorous process aligned with US policy, military doctrine and the law," Hawkins said in an interview with Bloomberg News. He declined to name the tools or the companies that provide them to the military. 
Trae Stephens, co-founder and executive chairman of Anduril Industries Inc., told Bloomberg TV that his company's technology was being used in the conflict. "We have all sorts of primarily counter-air systems that are present in conflict zones," he said, adding that the company was "actively working day to day" with the Defense Department on ongoing operations. "Obviously I can't give a whole lot of details beyond that," he said. The defense technology company has said it aims to remake the military of the US and its allies with AI, speedy manufacturing and other technology. One of the most significant roles AI is playing in Operation Epic Fury is during the planning phase of military operations. According to Bloomberg, the US military is using AI systems to process enormous volumes of intelligence gathered from satellites, drones, electronic intercepts and battlefield reports. These tools are designed to identify patterns, flag potential threats and recommend targets for commanders. The central objective is to shorten what military planners call the "kill chain", the sequence from detecting a target to deciding on a strike and carrying it out. Bloomberg reported that AI tools are enabling analysts to process information far more quickly than traditional intelligence workflows allowed. Tasks that once required days or weeks of manual analysis can now be completed almost in real time as algorithms sift through vast data streams. Some of these capabilities are built around Project Maven, a Pentagon initiative that applies machine learning to imagery and sensor data. In practice, this means that algorithms scan drone and satellite footage to identify military assets such as missile launchers, ships and radar systems. Analysts then review the AI-generated findings and incorporate them into operational planning.
Bloomberg reported that these systems are being used to generate lists of potential targets and rank them by threat level. Military planners can then decide which targets should be struck first and what assets should be used to carry out the attack. The speed of this process is central to the Pentagon's strategy. By accelerating intelligence analysis and target selection, AI allows commanders to respond to changing battlefield conditions much more rapidly than traditional command structures. Beyond identifying potential targets, AI systems are also being used to recommend how those targets should be attacked. According to The Times, software platforms built with data-integration tools from Palantir and linked to Project Maven are helping commanders determine which units and weapons should be deployed against particular targets. These systems aggregate intelligence from numerous sources and convert it into operational recommendations. Commanders receive AI-generated suggestions about which aircraft, missiles or other assets might be best suited for a specific strike. The Times described the system as functioning almost like a real-time logistics engine for war that matches available military resources with targets. The software continuously updates its recommendations as new intelligence flows in from drones, satellites and reconnaissance units. This approach allows planners to coordinate large numbers of strikes while maintaining a smaller operational staff. The Times reported that AI systems have dramatically reduced the manpower required to manage complex operations. One of the most striking operational changes is the reduction in personnel required to manage battlefield operations. According to The Times, AI-driven analysis and automation mean that a small team of about 20 US personnel can now perform tasks that previously required roughly 2,000 staff.
The technology handles large portions of data analysis, intelligence fusion and logistical coordination that once demanded large teams of analysts and planners. This shift is not merely about efficiency. It also reflects the growing role of algorithmic systems in military decision making. Instead of hundreds of analysts manually reviewing imagery and intelligence feeds, AI tools scan incoming data continuously and highlight items that require human attention. The Times reported that these systems have already been used to help guide hundreds of strikes during the campaign's early phase. While human commanders still authorise the final decision to launch attacks, the AI tools provide the analytical backbone that allows operations to scale rapidly. AI is also playing a growing role in intelligence analysis during the campaign. Bloomberg reported that the US military has experimented with using advanced AI models to process large volumes of intelligence data and help analysts interpret it more quickly. These systems are capable of analyzing multiple streams of information simultaneously. Satellite imagery, drone video feeds and intercepted communications can all be processed by algorithms that identify patterns and anomalies that might indicate the presence of military targets. Bloomberg reported that the technology has been used to support intelligence analysts by rapidly sorting through large datasets and flagging information that might otherwise be overlooked. Rather than replacing human analysts, the AI systems act as accelerators for the intelligence process. This approach reflects a broader shift in military doctrine toward what defense officials sometimes call "algorithmic warfare." In this model, human decision makers remain in charge but rely heavily on machine learning systems to interpret data and recommend actions. The scale of Operation Epic Fury means the conflict is effectively becoming a testing ground for AI-enabled warfare. 
Bloomberg reported that the campaign represents one of the most extensive real-world deployments of AI tools in military operations to date. Unlike earlier conflicts where AI applications were limited to narrow tasks, the systems used in this campaign appear to span the entire operational cycle. They assist with intelligence gathering, target identification, strike planning and post-strike assessment. This integration allows commanders to respond quickly as the battlefield evolves. New intelligence can be processed immediately, targets reassessed and strike plans updated within a single operational cycle. The result is a warfighting model in which the speed of decision making becomes a decisive advantage. By reducing the time between detection and action, the US military hopes to maintain operational momentum and disrupt Iranian forces before they can reposition or respond. Despite the extensive role of AI, human commanders remain responsible for authorizing strikes. AI systems generate recommendations and prioritize targets, but final decisions about whether to carry out attacks still rest with military personnel. Analysts review algorithmic outputs and commanders weigh the operational and political implications before approving any action. This "human-in-the-loop" approach reflects ongoing concerns about the risks of fully autonomous weapons. Military officials have emphasized that AI is intended to assist decision makers rather than replace them. Hawkins, the Central Command spokesperson, told Bloomberg that artificial intelligence helps analysts whittle down what they need to focus on, generating so-called points of interest and helping personnel make "smart" decisions in the Iran operations. AI is also helping to pull data within systems and organize information to provide clarity, he said. "Bottom line, these tools help leaders -- humans -- make smarter decisions faster. 
The tools do not replace them or make targeting decisions," said Hawkins, adding that target selection relies on a very specific, rigorous, legal process that involves commanders and leaders.
[55]
Trump Orders All Federal Agencies To Phase Out Use Of Anthropic Technology
Anthropic said it sought narrow assurances from the Pentagon that Claude won't be used for mass surveillance of Americans or in fully autonomous weapons. But after months of private talks exploded into public debate, it said in a Thursday statement that new contract language "framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will." That was after Sean Parnell, the Pentagon's top spokesman, posted on social media that the military "has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement." He emphasized that the Pentagon wants to "use Anthropic's model for all lawful purposes," but he and other officials haven't detailed how they want to use the technology.
[56]
Anthropic Just Got Fired by the U.S. Government. It's the Best Thing That Ever Happened to Its Brand
President Donald Trump announced on Friday that every federal agency must immediately cease using the company's AI models, including its flagship Claude system. The Department of Defense and other agencies have a six-month timeline to stop using the AI company's technology. Trump also warned of potential civil and criminal consequences if Anthropic isn't "helpful" during the transition. Defense Secretary Pete Hegseth went further, designating the company a "supply-chain risk," which restricts its ability to work with other government contractors. On its face, that's a pretty weird reaction, if you think about it. Anthropic's technology was so good, and the government wanted so badly to use it, that it was trying to force the company to allow unfettered access to its technology to do as the government wished. Then, when rebuffed, the Department of Defense labeled it a massive security risk. It seems pretty clear that this is simple retaliation by someone who didn't get what they wanted. The thing is, they still don't get what they wanted, so I'm not really sure who this is good for.
[57]
Hegseth says Pentagon designating Anthropic as supply chain risk after Trump bans AI firm
Defense Secretary Pete Hegseth said the Pentagon is designating Anthropic as a supply chain risk shortly after President Trump directed all federal agencies to cease using the company's technology amid an ongoing feud with the Defense Department (DOD). "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic," Hegseth said in a post on X. The Pentagon chief added that the company will continue to provide its services to the Pentagon for no more than half a year to "allow for a seamless transition to a better and more patriotic service." "Anthropic's stance is fundamentally incompatible with American principles," Hegseth said. "Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered." "America's warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final," he added.
[58]
Trump orders US agencies to stop use of Anthropic technology amid dispute over ethics of AI
Department of Defense and artificial intelligence company were unable to reach agreement before deadline
Donald Trump said Friday he will direct all federal agencies to "IMMEDIATELY CEASE" all use of Anthropic technology. The Department of Defense and Anthropic have hit an impasse with neither side backing down as a deadline for an agreement hits Friday afternoon. The Pentagon is demanding that the artificial intelligence company loosen ethical guidelines on its AI systems or, the government says, face severe consequences. Trump weighed in just an hour before the deadline, saying on Truth Social: "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution." "WE will decide the fate of our Country - NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about," Trump wrote. Late Thursday evening, Sean Parnell, the Pentagon's top spokesperson, posted on social media that "we will not let ANY company dictate the terms regarding how we make operational decisions". He added that Anthropic has "until 5:01PM ET on Friday to decide". This came after Anthropic CEO Dario Amodei said in a statement earlier in the day that his company "cannot in good conscience accede" to the Pentagon's demand for unrestricted use of its AI tools. The public showdown began earlier this week when the Department of Defense and Anthropic entered into discussions about the military's use of the company's Claude AI system. But the talks broke down as both sides appeared to be unable to come to agreement over safety guardrails. Anthropic, which presents itself as the most safety-forward of the leading AI companies, has been mired in months of disagreement with the Pentagon even before the public discussions began this week.
US defense officials have pushed for unfettered access to Claude's capabilities that they say can help protect the country, while Anthropic has resisted allowing its product to be used for mass surveillance or autonomous weapons systems that can kill people without human input. "Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner," Amodei said Thursday. "However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values." The Department of Defense has already integrated Claude into its operations, but with the breakdown in talks it is now threatening to sever the relationship. Not only has the Pentagon said it will cancel a $200m contract with the company, but military officials have also warned they will deem Anthropic a "supply chain risk". This type of designation is normally used on foreign adversaries and could endanger the company's partnerships with other businesses. Parnell said the Defense Department "has no interest" in using AI for mass surveillance or to develop autonomous weapons. "This narrative is fake and being peddled by leftists in the media," he said. In Silicon Valley, Anthropic has drawn support from its most fierce rivals. Top executives at AI companies have publicly sided with Anthropic, including OpenAI CEO Sam Altman who indicated in a CNBC interview on Friday that OpenAI shares the same red lines as Anthropic. Nearly 500 OpenAI and Google employees have also signed onto an open letter saying "we will not be divided". Both OpenAI and Google also have contracts with the military. "The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused," reads the letter. "They're trying to divide each company with fear that the other will give in." 
Amodei said in his statement on Thursday that he hopes the Pentagon will reconsider. "Our strong preference is to continue to serve the Department and our warfighters", he said. But, if the government does offboard Anthropic, he said the company "will work to enable a smooth transition to another provider".
[59]
Apparent use of AI in Iran war raises daunting questions, expert says
Suspected widespread use of artificial intelligence to select targets and launch attacks on Iran raises many questions and fears that human control of war machinery could be slipping, a leading expert said Wednesday. The United States and Israel have carried out thousands of strikes across Iran since launching their offensive, including one that killed Iran's supreme leader, Ayatollah Ali Khamenei, on Saturday, the first day of the war. Peter Asaro, an expert on artificial intelligence and robotics, said it appeared likely the two countries had used AI to identify targets in Iran, pointing to what seemed to be a very short planning phase and large number of targets.
[60]
Pentagon Used Anthropic's Claude AI During Iran Strike Hours After Trump Ordered Ban: Report
U.S. Central Command reportedly used Anthropic's Claude AI during the Trump administration's major air operation against Iran, just hours after the president ordered federal agencies to stop using the company's technology.
Claude Deeply Embedded Across Combat Operations
The Wall Street Journal, citing sources, reported that the ban was announced just hours before the military operation was launched. Claude was used by the U.S. Central Command in the Middle East for intelligence assessments, target identification, and simulating battle scenarios. Despite ongoing tensions between Anthropic and the Pentagon, the AI tool remains deeply embedded in military operations. The military has previously used Claude in high-profile missions, including the operation that captured Venezuelan President Nicolas Maduro.
OpenAI Moves In
Just hours after the ban was declared on Anthropic, OpenAI announced a deal to deploy its AI tools in the Pentagon's classified systems. The feud stems from Anthropic's refusal to allow unrestricted Pentagon use during contract negotiations, compounded by its lobbying against the administration's AI policy.
[61]
Trump orders federal ban on Anthropic software over 'leftist' restrictions on military AI use
US President Donald Trump ordered on Friday that all federal agencies stop using software developed by Anthropic, the company responsible for creating the Claude AI model. "I am directing every federal agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology. We don't need it, we don't want it, and will not do business with them again!" Trump said in a Truth Social post. "There will be a six-month phase-out period for agencies like the Department of War who are using Anthropic's products, at various levels. Anthropic better get their act together, and be helpful during this phase-out period, or I will use the full power of the presidency to make them comply, with major civil and criminal consequences to follow," he added. Trump justified his decision over "Anthropic's leftism," calling the company "left-wing nut jobs" that tried to "strong-arm the Department of War, and force them to obey their Terms of Service instead of our Constitution." "Their selfishness is putting American lives at risk, our troops in danger, and our national security in jeopardy," he added.
Anthropic-US Gov't clash over AI limited use
The tension has been building up between Anthropic and the US government, with the company wanting to ensure that its software was not being used to pursue mass surveillance of Americans and the development of fully autonomous weaponry. According to a report by Axios, the Pentagon demanded a new agreement with the AI company to deploy it for "all lawful purposes," ranging from weapons development and intelligence collection to direct battlefield operations. This led to disagreements between the Pentagon and Anthropic, as Claude is not intended for offensive use, per Anthropic's public usage policies.
The main conflict arose after a report by The Wall Street Journal in which Claude was allegedly utilized, through Anthropic's existing partnership with the software firm Palantir, in the operation to capture former Venezuelan president Nicolas Maduro. "Everything's on the table, including dialing back the partnership with Anthropic or severing it entirely," a Pentagon official said, according to Axios. "But there'll have to be an orderly replacement for them, if we think that's the right answer." The Pentagon was now working with OpenAI, Alphabet (the developer of Google's Gemini), and xAI (developer of GrokAI) to establish these more permissive contracts, the Axios report added. The Jerusalem Post Staff contributed to this report.
[62]
Claude AI helped bomb Iran. But how exactly?
The US military has employed Anthropic's Claude AI for critical functions like intelligence assessments and target identification during strikes on Iran. Despite a presidential order to halt its use, the AI's deep integration means it will take months to remove. This deployment highlights the alarming lack of transparency and regulation surrounding AI's use in warfare, even with known error rates. The same artificial-intelligence model that can help you draft a marketing email or a quick dinner recipe has also been used to attack Iran. US Central Command used Anthropic's Claude AI for "intelligence assessments, target identification and simulating battle scenarios" during the strikes on the country, according to a report in the Wall Street Journal. Hours earlier, US President Donald Trump had ordered federal agencies to stop using Claude after a dispute with its maker, but the tool was so deeply baked into the Pentagon's systems that it would take months to untangle in favor of a more compliant rival. It was used, too, in the January operation that led to the capture of Nicolás Maduro. But what do "intelligence assessments" and "target identification" mean in practice? Was Claude flagging locations to strike or making casualty estimates? Nobody has made that disclosure and, alarmingly, no one has an obligation to. Artificial intelligence has long been used in warfare for things like analyzing satellite imagery, detecting cyber threats and guiding missile-defense systems. But chatbots -- the same underlying technology that billions use for mundane tasks like writing emails -- are now being used on the battlefield.
Last November, Anthropic partnered with Palantir Technologies Inc., a data-analytics company that does a lot of work for the Pentagon, turning its large language model Claude into the reasoning engine inside a decision-support system for the military. Then, in January, Anthropic submitted a $100 million proposal to the Pentagon to develop voice-controlled autonomous drone swarming technology, Bloomberg News reported. The company's pitch: Use Claude to translate a commander's intent into digital instructions to coordinate a fleet of drones. Its bid was rejected, but the contest called for much more than just summarizing intelligence reports, as you might expect a chatbot to do. This contract was to develop "target-related awareness and sharing," and "launch to termination" for potentially lethal drone swarms. Remarkably, all of this has been happening in a regulatory vacuum and with technology that is known to make errors. Hallucinations by large language models are a result of their training, when they are rewarded for grasping for an answer instead of admitting uncertainty. Some scientists say the persistent challenge of AI confabulation may never be fixed. This would not be the first time unreliable AI systems have been used in warfare. Lavender was an AI-driven database used to help identify military targets associated with Hamas in Gaza. It was not a large language model but analyzed vast amounts of surveillance data, such as social connections and location history, to assign each individual a score from 1 to 100. When someone's score passed a certain threshold, Lavender flagged them as a military target. The problem was that Lavender was wrong 10% of the time, according to an investigative report published by the Israeli-Palestinian outlet +972. "Around 3,600 people were targeted by mistake," Mariarosaria Taddeo, a professor of digital ethics and defense technology at the Oxford Internet Institute, tells me. 
"There are such incredible vulnerabilities in these systems and such extreme unreliability... for something so dynamic, sensitive and human as warfare," says Elke Schwarz, a professor in political theory at Queen Mary University London and author of Death Machines: The Ethics of Violent Technologies. Schwarz points out that AI is often used in war to speed things up, a recipe for unwanted outcomes. Faster decisions are made at a greater scale and with less human scrutiny. The last decade and a half has seen military use of AI become even more opaque, she says. And secrecy is baked into how AI labs operate even before the warfare applications. These companies refuse to disclose what data their models are trained on or how their systems reach conclusions. Of course, military operations often have to be kept under wraps to protect combatants and keep enemies off the scent. But defense is heavily regulated by international humanitarian law and weapons testing standards, which in theory should also address the use of artificial intelligence. Yet such standards are missing or woefully inadequate. Taddeo notes that Article 36 of the Geneva Convention requires new weapons systems to be tested before deployment, but an AI system that learns from its environment becomes a new system every time it updates. That makes it almost impossible to apply the rule. In an ideal world, governments like the US would disclose how these systems are used on the battlefield, and there is a precedent. The Americans started using armed drones after 9/11 and expanded their use under the Barack Obama administration, refusing to acknowledge that such a program existed. It took nearly 15 years of leaked documents, sustained pressure from the press and lawsuits from the American Civil Liberties Union before the Obama White House finally published in 2016 the casualty numbers from drone strikes.
They were widely seen as under-counting, but they allowed the public, Congress and media to hold the government accountable for the first time. Policing AI will be harder still, requiring even more public and legislative pressure to force a recalcitrant Trump administration to create a similar kind of reporting framework. The goal wouldn't be to disclose exactly how Claude was used in something like Operation Epic Fury, but to release the broad contours, according to Schwarz. And, especially, to disclose when something goes wrong. The current public debate about the Anthropic-Pentagon feud -- about what is legal and ethical for AI when it comes to the mass surveillance of Americans or creating fully autonomous weapons -- is missing the bigger question about the lack of visibility of how the technology is already being used in war. With such new and untested systems prone to making mistakes, this is sorely needed. "We haven't decided as a society if we're fine with a machine deciding if a human being should be killed or not," says Taddeo. Pushing for that transparency is critical before AI in warfare becomes so routine that nobody thinks to ask anymore. Otherwise we may find ourselves waiting for a catastrophic mistake, and imposing transparency only after the damage is done.
[63]
Trump orders federal agencies to stop using Anthropic's AI technology
Washington -- President Trump announced Friday that he is ordering all federal agencies to "immediately" stop using Anthropic's AI technology, as the company nears a Pentagon deadline to drop its guardrails over the military's use of its AI. "I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology," Mr. Trump wrote on Truth Social. "We don't need it, we don't want it, and will not do business with them again!" The president said he will give agencies six months to phase out its use of Anthropic's products and threatened to take additional action against the company if it does not assist during that period. "Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow," he wrote.
[64]
Donald Trump Dumps Anthropic From U.S. Government: 'We Don't Want It'
In a lengthy post on Truth Social, President Donald Trump raged at Anthropic, the company behind the Claude family of AI models, for opposing demands from the Department of War to allow their AI to autonomously operate lethal weapons and surveil Americans. Trump directed all federal agencies to "immediately cease all use of Anthropic's technology." This week, Anthropic and the Trump administration have been negotiating a contract; Claude is one of only two AI models approved to work in classified settings. At the core of these negotiations are two positions that Anthropic has adopted: that its AI shouldn't be used to empower deadly machines that can fire without human interaction, and that it shouldn't be used for mass domestic surveillance of Americans. The Pentagon wants Anthropic, which Inc. named its Co-Founder of the Year, to agree to Claude being used for "any lawful use" without creating carveouts for Anthropic's two opposed use cases. On Thursday, February 27, Anthropic CEO Dario Amodei wrote in a statement that Anthropic would not "accede" to the Pentagon's proposed terms.
[65]
Trump orders federal agencies to 'immediately cease' using Anthropic technology
President Trump on Friday directed federal agencies to "immediately cease" using Anthropic technology amid an escalating feud between the AI company and the Pentagon. "THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military," Trump wrote in a post on Truth Social. "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution," he continued. "Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY." "We don't need it, we don't want it, and will not do business with them again!" he added. Trump's post comes as the Pentagon and Anthropic have been locked in negotiations over the AI firm's terms of service, with the Department of Defense (DOD) threatening to cancel its contract with the company if it does not agree to its terms by Friday afternoon. The president said there would be a six month phase out period for agencies using the company's products.
[66]
Trump directs U.S. agencies to toss Anthropic's AI as Pentagon calls startup a supply risk
U.S. President Donald Trump said Friday he is directing the government to stop work with Anthropic and the Pentagon said it will declare the startup a supply-chain risk, dealing a major blow to the artificial intelligence lab after a showdown about technology guardrails. Trump added there would be a six-month phase-out for the Defense Department and other agencies that use the company's products. If Anthropic does not help with the transition, Trump said, he would use "the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow." Spokespeople for Anthropic, which won up to $200 million from the Pentagon in a contract last year, did not immediately respond to a request for comment. The planned designation as a supply-chain risk, announced by Defense Secretary Pete Hegseth, means that contractors could be barred from deploying Anthropic's AI as part of work for the Pentagon.
[67]
Pentagon casts cloud of doubt over Anthropic's AI business
Anthropic PBC started this year on a winning streak, with surging sales, multiple viral products and a large funding round all giving the startup a big advantage in the costly global AI race. On Friday, the Trump administration handed down back-to-back directives that threaten to curb growth for one of the country's most successful artificial intelligence firms. First, President Donald Trump ordered federal agencies to stop using Anthropic's software, which has become popular particularly as a programming assistant. Shortly after, the Pentagon declared the AI developer a supply-chain risk -- a designation typically reserved for companies from countries the US views as adversaries. The moves, which followed a tense showdown between the San Francisco-based startup and the Pentagon over AI safeguards, aim not only to cut off Anthropic's sales to the US government, but also its business with numerous other firms. "No contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic," Defense Secretary Pete Hegseth wrote in a social media post late Friday. The full impact on the company -- and the AI ecosystem -- remains to be seen. At minimum, rivals such as OpenAI, Alphabet Inc.'s Google and Elon Musk's xAI now have an opportunity to take on government work that had previously gone to Anthropic. But some legal and policy experts warned the fallout could be far more dire if the Pentagon follows through on its declaration. Barring Anthropic from working with corporate customers that do business with the Defense Department would be "a death blow" to Anthropic's business, said Charlie Bullock, a lawyer and senior research fellow at the Institute for Law & AI, a Boston-based think tank.
As laid out in Hegseth's post, the Pentagon's policy would effectively prevent Anthropic from working with some of its biggest partners, such as Amazon.com Inc., Bullock said. Dean Ball, a former White House adviser who helped create the Trump administration's AI Action Plan, echoed the sentiment. "Nvidia, Amazon, Google will have to divest from Anthropic if Hegseth gets his way," he wrote in a post on X. "This is simply attempted corporate murder." The uncertainty hits Anthropic at a pivotal moment. The company, which was founded in 2021 by several former employees of OpenAI, is widely expected to be preparing for an initial public offering as soon as this year. With its popular Claude chatbot, Anthropic has been racing to persuade more businesses to pay for its software to help offset the immense cost of developing AI and justify its lofty $380 billion valuation. In a statement Friday Anthropic called the move "legally unsound" and "a dangerous precedent." The company also set the stage for a legal battle over its software. "No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons," it wrote. "We will challenge any supply chain risk designation in court." Some backers are concerned Anthropic's refusal to concede to the Trump administration's demands could harm the company's brand, making the startup seem hostile and anti-American, according to an Anthropic investor who spoke to Bloomberg on the condition of anonymity. Still, some investors are wary of complaining publicly or even pushing back in private. Anthropic is the crown jewel in many of their portfolios, and Chief Executive Officer Dario Amodei's resolute control over the company's direction has led many VCs to bite their tongues even if they disagree with his choice, multiple Anthropic investors said. 
Other investors supported Anthropic whether it decided to work with the Pentagon or not, with some noting that the government provides minimal revenue for the model maker. The move has also generated considerable support for Anthropic within the tech community, with some CEOs lauding its stance. Hegseth had given the company until 5:01 p.m. on Friday to allow the Pentagon to use Claude for any purpose within legal limits -- but without any usage restrictions from Anthropic. The startup has insisted that the chatbot not be used for mass surveillance against Americans or in fully autonomous weapons operations. Trump's decision to order agencies to ditch Anthropic posed some initial risk to the firm, though one that's limited in scope for a company with a revenue run rate of $14 billion. Anthropic inked an agreement in July with the Defense Department worth up to $200 million, but Bloomberg Government contracting records show the Pentagon paid only $2 million to Anthropic last year. This month, Anthropic signed its first deal for the State Department to use Claude, valued at just $19,000. The company also struck a broad deal with the General Services Administration for federal government agencies to use Claude for a nominal $1 fee last year. Hegseth has set a six-month maximum for Anthropic's services to be handed over to another AI provider. The goals of the Defense Department's actions are ultimately much broader, treating Anthropic similarly to Chinese firms that the US perceives to be a security threat. However, Bullock said the legal authority Hegseth is relying on is actually quite narrow, allowing the agency to prohibit its contractors from tapping Anthropic's products for procurements related to defense contracting -- but not necessarily for things like using Claude in their businesses.
Hegseth is likely to lean on the Federal Acquisitions Security Council, established in Trump's first term, to enact the policy, said Peter Harrell, a former Biden administration official and visiting scholar at Georgetown Law School. "The limit here, and where I think Hegseth is misleading at least as a legal matter, is that this should only apply to contractors in their DoD contracts," Harrell said. If the Pentagon tried to bar companies that contract with it from having other unrelated business with Anthropic, "the courts will throw that out quite quickly," but "I can't say they won't try it," Harrell said. "They seem to be willing to try things that are overturned in the courts." The Pentagon did not immediately respond to a request for comment. If Anthropic launches a court challenge, that could ultimately buy it time, Bullock said. For example, a court could grant the company a temporary restraining order or preliminary injunction, he said. "But we'll have to wait and see." The Pentagon's decision has quickly sent shockwaves throughout the AI community, both for its implications in the wider battle over how to deploy a powerful technology safely and because of the broad popularity of Claude Code for software development. Virtually all companies that build software do business with Anthropic, according to one AI startup executive who spoke on condition of anonymity. Not being able to use Claude Code would be disastrous for both the industry and US competitiveness, the executive said.
[68]
US Military Used Claude AI After Trump's Anthropic Ban, Claims Report
The command used Anthropic's AI for intelligence assessments, target identification, and simulating battle scenarios. Prior to the Iran attack, Claude AI was also used by the Pentagon during the capture of Venezuelan president Nicolás Maduro. In a Truth Social post about ending the deal with Anthropic, US President Donald Trump had gone on to call the company 'leftwing nut jobs' and 'woke' while claiming that 'their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY.' Trump had directed all federal agencies in the US to 'immediately cease' using Anthropic technology. "We don't need it, we don't want it, and will not do business with them again! There will be a six-month phase-out period for agencies like the Department of War who are using Anthropic's products at various levels," he wrote.
[69]
Trump orders all federal agencies to phase out use of Anthropic technology
WASHINGTON (AP) -- President Donald Trump said Friday he was ordering all federal agencies to phase out the use of Anthropic technology after the company's unusually public dispute with the Pentagon over artificial intelligence safety. Trump's comments came just over an hour before the Pentagon's deadline for Anthropic to allow unrestricted military use of its AI technology or face consequences -- and nearly 24 hours after CEO Dario Amodei said his company "cannot in good conscience accede" to the Defense Department's demands. Trump said most agencies must immediately cease using Anthropic technology, but gave the Pentagon a 6-month period to phase out the technology that is already embedded in military platforms. "We don't need it, we don't want it, and will not do business with them again!" Trump wrote. Anthropic didn't immediately reply to a request for comment on Trump's remarks. At issue in the defense contract was a clash over AI's role in national security and concerns about how increasingly capable machines could be used in high-stakes situations involving lethal force, sensitive information or government surveillance. The move is likely to benefit Elon Musk's competing chatbot, Grok, which the Pentagon plans to give access to classified military networks, and could serve as a warning to two other competitors, Google and OpenAI, that also have contracts to supply their AI tools to the military. Anthropic, maker of the chatbot Claude, could afford to lose the contract. But the ultimatum this week from Defense Secretary Pete Hegseth posed broader risks at the peak of the company's meteoric rise from a little-known computer science research lab in San Francisco to one of the world's most valuable startups. 
Anthropic spurned Pentagon's latest proposal over its safeguards
If Amodei did not budge, military officials said, they would not just pull Anthropic's contract but also "deem them a supply chain risk," a designation typically stamped on foreign adversaries that could derail the company's critical partnerships with other businesses. Trump didn't make such a designation in his announcement Friday, but said Anthropic could face "major civil and criminal consequences" if it's not helpful in the phase-out period. And if Amodei were to cave, he risked losing trust in the booming AI industry, particularly from top talent drawn to the company for its promises of responsibly building better-than-human AI that, without safeguards, could pose catastrophic dangers. Anthropic had said it sought narrow assurances from the Pentagon that Claude wouldn't be used for mass surveillance of Americans or in fully autonomous weapons. But after months of private talks exploded into public debate, it said in a Thursday statement that new contract language "framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will." That was after Sean Parnell, the Pentagon's top spokesman, posted on social media that the military "has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement." He emphasized that the Pentagon wants to "use Anthropic's model for all lawful purposes," but he and other officials haven't detailed how they want to use the technology.
Dispute further polarizes the tech industry
Emil Michael, the defense undersecretary for research and engineering, later lashed out at Amodei, alleging on X that he "has a God-complex" and "wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk."
That message hasn't resonated in much of Silicon Valley, where a growing number of tech workers from Anthropic's top rivals, OpenAI and Google, voiced support for Amodei's stand late Thursday in an open letter. OpenAI and Google, along with Elon Musk's xAI, also have contracts to supply their AI models to the military. Musk sided with Trump's Republican administration on Friday, saying on his social media platform X that "Anthropic hates Western Civilization" after Michael drew attention to a previous version of Claude's guiding principles that encouraged "consideration of non-Western perspectives." All of the leading AI models, including Musk's Grok and OpenAI's ChatGPT, are programmed with a set of instructions that guide a chatbot's values and behavior. Anthropic calls that guidance a constitution. While some Trump-allied tech leaders have joined the fray -- including Musk and Palmer Luckey, co-founder of defense contractor Anduril -- the polarizing debate over "woke AI" has put others in a difficult position. "The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused," the open letter from some OpenAI and Google employees says. "They're trying to divide each company with fear that the other will give in." But in a surprise move from one of Amodei's fiercest rivals, OpenAI CEO Sam Altman on Friday sided with Anthropic and questioned the Pentagon's "threatening" move in a CNBC interview, suggesting that OpenAI and most of the AI field share the same red lines. Amodei once worked for OpenAI before he and other OpenAI leaders quit to form Anthropic in 2021. "For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety," Altman told CNBC. "I've been happy that they've been supporting our warfighters. I'm not sure where this is going to go." 
Also raising concerns about the Pentagon's approach were Republican and Democratic lawmakers and a former leader of the Defense Department's AI initiatives. "Painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end," wrote retired Air Force Gen. Jack Shanahan in a social media post. Shanahan faced a different wave of tech worker opposition during the first Trump administration when he led Maven, a project to use AI technology to analyze drone footage and target weapons. So many Google employees protested its participation in Project Maven at the time that the tech giant declined to renew the contract and then pledged not to use AI in weaponry. "Since I was square in the middle of Project Maven & Google, it's reasonable to assume I would take the Pentagon's side here," Shanahan wrote Thursday on social media. "Yet I'm sympathetic to Anthropic's position. More so than I was to Google's in 2018." He said Claude is already being widely used across the government, including in classified settings, and Anthropic's red lines are "reasonable." He said the AI large language models that power chatbots like Claude are also "not ready for prime time in national security settings," particularly not for fully autonomous weapons. "They're not trying to play cute here," he wrote.
Pentagon ready to punish Anthropic if it doesn't compromise
Parnell asserted Thursday that opening up use of the technology would prevent the company from "jeopardizing critical military operations." "We will not let ANY company dictate the terms regarding how we make operational decisions," Parnell wrote. Anthropic had "until 5:01 p.m. ET on Friday to decide" whether it would meet the demands or face consequences.
When Hegseth and Amodei met on Tuesday, military officials warned that they could designate Anthropic as a supply chain risk, cancel its contract or invoke a Cold War-era law called the Defense Production Act to give the military more sweeping authority to use its products, even if the company doesn't approve. Amodei said Thursday that "those latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security." He said he hopes the Pentagon will reconsider given Claude's value to the military, but, if not, Anthropic "will work to enable a smooth transition to another provider."
[70]
Trump orders federal agencies to stop using Anthropic's AI models,...
President Trump said Friday that he has directed all federal agencies to "immediately cease" working with Anthropic, blasting the AI company's leadership as "leftwing nut jobs." The scathing announcement from Trump brought an abrupt end to a major dispute between Anthropic and the Pentagon, which had given the company until 5:01 p.m. Eastern Time Friday to remove safeguards on how its Claude chatbot could be used by the US military. "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution," Trump wrote in a Truth Social post. "Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY." "Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology," the president continued. "We don't need it, we don't want it, and will not do business with them again!" The Pentagon declined to comment. Anthropic representatives did not immediately return a request for comment. The president said there "will be a six-month phase-out period" for the Pentagon and other agencies that are using Anthropic's models. The company signed a $200 million contract with the Pentagon just last July. Trump said he would use the "Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow" if Anthropic was uncooperative during the transition period. Anthropic and its CEO Dario Amodei have objected to any use of their technology that would enable mass surveillance of Americans or power weapons capable of firing without human oversight. Claude is currently the only AI model used by the US military in classified situations. As the deadline to comply with the Pentagon's request loomed, Amodei said Thursday that the company "cannot in good conscience accede" to the Pentagon's demands.
The Pentagon, meanwhile, has said the dispute was never about Anthropic's "red lines" and that the US military has only given lawful orders when using AI. Elon Musk's Grok recently received approval to be used in classified settings, while a senior Pentagon official said OpenAI and Google were "close" to getting permission. OpenAI's Sam Altman said Thursday that the company shared the same red lines as Anthropic, but suggested his firm could reach common ground with the Pentagon on how models should be used. The feud between the two sides escalated in January after Claude was used in the operation to arrest Venezuela's Nicolás Maduro. During a meeting with Amodei on Tuesday, Defense Secretary Pete Hegseth referenced the Pentagon's claim, first reported by Axios earlier this month, that Anthropic had complained to fellow contractor Palantir about how its technology was used in the Maduro raid. Amodei denied that anyone at Anthropic had complained to Palantir. The Post first reported in November that Anthropic's ties to the cultlike Effective Altruism movement and Democratic megadonors like LinkedIn cofounder Reid Hoffman were on the Trump administration's radar and were complicating its efforts to work with the government.
[71]
Trump orders US agencies to stop using Anthropic technology in clash over AI safety
The Trump administration on Friday ordered all U.S. agencies to stop using Anthropic's artificial intelligence technology and imposed other major penalties, culminating an unusually public clash between the government and the company over AI safety. President Donald Trump, Defense Secretary Pete Hegseth and other officials took to social media to chastise Anthropic for failing to allow the military unrestricted use of its AI technology by a Friday deadline, accusing it of endangering national security after CEO Dario Amodei refused to back down over concerns the company's products could be used in ways that would violate its safeguards. "We don't need it, we don't want it, and will not do business with them again!" Trump said on social media. Hegseth also deemed the company a "supply chain risk," a designation typically stamped on foreign adversaries that could derail the company's critical partnerships with other businesses. Anthropic had said it sought narrow assurances from the Pentagon that its AI chatbot Claude would not be used for mass surveillance of Americans or in fully autonomous weapons. The Pentagon said it was not interested in such uses and would only deploy the technology in legal ways, but it also insisted on access without any limitations. The government's effort to assert dominance over the internal decision-making of the company comes amid a wider clash over AI's role in national security and concerns about how increasingly capable machines could be used in high-stakes situations involving lethal force, sensitive information or government surveillance.
Trump and others lash out at Anthropic
Trump said Anthropic made a mistake trying to strong-arm the Pentagon. He wrote on Truth Social that most agencies must immediately stop using Anthropic's AI but gave the Pentagon a six-month period to phase out the technology that is already embedded in military platforms.
"The United States of America will never allow a radical left, woke company to dictate how our great military fights and wins wars!" he wrote in all caps. After months of private talks exploded into public debate this week, Anthropic said Thursday that the government's new contract language would allow "safeguards to be disregarded at will." Amodei said his company "cannot in good conscience accede" to the demands. Anthropic can afford to lose the contract. But the government's actions posed broader risks at the peak of the company's meteoric rise from a little-known computer science research lab in San Francisco to one of the world's most valuable startups. The president's decision was preceded by hours of top Trump appointees from the Pentagon and the State Department taking to social media to criticize Anthropic, but their complaints posed contradictions. Top Pentagon spokesman Sean Parnell said on social media Thursday that Anthropic's unwillingness to go along with the military's demands was "jeopardizing critical military operations and potentially putting our warfighters at risk." Hegseth said Friday that the Pentagon "must have full, unrestricted access to Anthropic's models for every LAWFUL purpose in defense of the Republic." Trump's social media post also mandated the company "better get their act together, and be helpful" during a six-month phase-out period or there would be "major civil and criminal consequences to follow." However, Hegseth's choice to designate Anthropic a supply chain risk uses an administrative tool that has been designed for companies owned by U.S. adversaries to prevent them from selling products that are harmful to American interests. Virginia Sen. Mark Warner, the top Democrat on the Senate Intelligence Committee, noted that this dynamic, "combined with inflammatory rhetoric attacking that company, raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations." 
Anthropic didn't immediately reply to a request for comment on the Trump administration's actions.
Dispute shakes up Silicon Valley
The dispute stunned AI developers in Silicon Valley, where venture capitalists, prominent AI scientists and a large number of workers from Anthropic's top rivals, OpenAI and Google, voiced support for Amodei's stand in open letters and other forums. The move is likely to benefit Elon Musk's competing chatbot, Grok, which the Pentagon plans to give access to classified military networks, and could serve as a warning to two other competitors, Google and OpenAI, that have still-evolving contracts to supply their AI tools to the military. Musk sided with Trump's administration, saying on his social media platform X that "Anthropic hates Western Civilization." But one of Amodei's fiercest rivals, OpenAI CEO Sam Altman, sided with Anthropic and questioned the Pentagon's "threatening" move in a CNBC interview and a letter to employees that said OpenAI shared the same red lines. Amodei once worked for OpenAI before he and other OpenAI leaders quit to form Anthropic in 2021. "For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety," Altman told CNBC, hours before he gathered employees for an all-hands meeting Friday. Retired Air Force Gen. Jack Shanahan, a former leader of the Pentagon's AI initiatives, wrote on social media this week that "painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end." Shanahan said Claude is already being widely used across the government, including in classified settings, and Anthropic's red lines were "reasonable." He said the AI large language models that power chatbots like Claude, Grok and ChatGPT are also "not ready for prime time in national security settings," particularly not for fully autonomous weapons. Anthropic is "not trying to play cute here," he wrote Thursday on LinkedIn.
"You won't find a system with wider & deeper reach across the military."
[72]
Trump says he is directing federal agencies to cease use of Anthropic technology
U.S. President Donald Trump on Friday said he was directing every federal agency to immediately cease all use of Anthropic's technology, adding there would be a six-month phase out for agencies such as the Defence Department who use the company's products. "I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology. We don't need it, we don't want it, and will not do business with them again!" Trump said in a post on Truth Social. Trump's directive comes amid a feud between the Pentagon and top artificial intelligence lab Anthropic over concerns about how the military could use AI at war. Spokespeople for Anthropic did not immediately respond to a request for comment.
[73]
Trump says he is directing federal agencies to cease use of Anthropic technology
WASHINGTON, Feb 27 (Reuters) - U.S. President Donald Trump on Friday said he was directing every federal agency to immediately cease all use of Anthropic's technology, adding there would be a six-month phase-out for agencies such as the Defense Department who use the company's products. "I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology. We don't need it, we don't want it, and will not do business with them again!" Trump said in a post on Truth Social. Trump's directive comes amid a feud between the Pentagon and top artificial intelligence lab Anthropic over concerns about how the military could use AI at war. Spokespeople for Anthropic did not immediately respond to a request for comment. (Reporting by Ryan Jones, Ismail Shakil and Jeffrey Dastin; Writing by Daphne Psaledakis; Editing by Caitlin Webber)
[74]
US-Israel strike on Iran relied on Anthropic AI despite Trump's ban: Report
OpenAI later secured a Pentagon agreement to deploy AI systems in classified military networks. The US military reportedly deployed AI software developed by Anthropic during a massive air operation against Iran, even after President Donald Trump had directed federal agencies to discontinue use of the company's tools. This follows a Wall Street Journal report on the use of Anthropic's technology in planning the attacks on Iran with Israel. According to the report, which cited sources familiar with the matter, military commands worldwide, including US Central Command, rely on Anthropic's Claude model for intelligence reviews, target identification and battlefield simulations. The report added that the command has declined to discuss the specific technologies used in ongoing operations. The reported deployment came despite an order from the administration telling agencies to stop collaborating with the company and urging the Pentagon to classify it as a potential risk to the defense supply chain. The dispute follows months of negotiations over contract terms, particularly around the scope of permissible military use. According to the report, Anthropic resisted granting blanket authorisation for all lawful applications that the Defence Department might pursue. The standoff led defence officials to consider alternatives, resulting in agreements with the developers of OpenAI's ChatGPT and xAI's models for classified environments. Tensions rose after Defence Secretary Pete Hegseth set a deadline for unrestricted use of AI tools for any lawful military objective. Anthropic CEO Dario Amodei publicly rejected the demand, citing certain applications as ethical boundaries the company would not cross, even at the cost of federal contracts. Sam Altman's OpenAI subsequently partnered with the Pentagon to deploy artificial intelligence on classified networks.
"Yesterday we reached an agreement with the Pentagon for deploying advanced AI systems in classified environments, which we requested they also make available to all AI companies," the company stated on its blog post.
Anthropic refused to allow its Claude AI to be used for mass surveillance or fully autonomous weapons, leading President Trump to order federal agencies to stop using the company's technology. Defense Secretary Pete Hegseth designated Anthropic a supply-chain risk, barring military contractors from working with the firm. The dispute highlights mounting tensions between AI companies and government over ethical boundaries in military applications.
A high-stakes standoff between Anthropic and the US military has exposed deep divisions over how AI should be deployed in warfare. The conflict erupted when Anthropic refused to remove safeguards preventing its Claude AI from being used for mass surveillance of Americans or to guide autonomous weapons without human oversight [1]. CEO Dario Amodei stated the company "cannot in good conscience" comply with the Department of Defense demand that AI models be available for "any lawful use" without constraints [5].
The disagreement centers on a $200-million contract under which Claude AI has powered the Maven Smart System since 2024 [1]. In January, the Department of Defense issued a memo requiring all AI procurement contracts to permit unrestricted lawful use. When Anthropic refused to comply by the Friday deadline set by Defense Secretary Pete Hegseth, President Donald Trump directed federal agencies to "immediately cease" using Anthropic products, allowing a six-month phase-out period [2].
Hegseth escalated the dispute by designating Anthropic a supply-chain risk to national security, effectively barring any contractor, supplier, or partner doing business with the US military from conducting commercial activity with the company [3]. The designation sent shockwaves through Silicon Valley, with companies scrambling to understand whether they must sever ties with one of the industry's most popular AI models [3]. Anthropic responded by announcing it would "challenge any supply chain risk designation in court," arguing such action would "set a dangerous precedent for any American company that negotiates with the government" [3]. The company maintained it received no direct communication from the Pentagon or White House regarding negotiations. Legal experts questioned whether Hegseth possessed statutory authority for such sweeping restrictions, with federal contract specialists unable to determine which Anthropic customers must cut ties [3].
Following the rupture, the US military quickly signed a deal with OpenAI to deploy its AI models in classified environments [3]. OpenAI CEO Sam Altman announced the agreement includes similar protections against domestic mass surveillance and requires human responsibility for use of force, including for autonomous weapon systems. In an internal memo, Altman reportedly told employees OpenAI maintains the same red lines as Anthropic but believed these guardrails could be managed through technical requirements [5]. The Maven Smart System, which uses AI models for applications including image processing and tactical support, speeds up attack capabilities by suggesting and prioritizing targets [1]. The system has been used in previous conflicts and reportedly in recent attacks on Iran. As of March 5, Dario Amodei was reportedly back in talks with the department, suggesting potential resolution remains possible [1].
The dispute underscores broader concerns about military applications of AI technology. While AI's precision targeting could theoretically reduce civilian casualties, ongoing conflicts in Ukraine and Gaza where AI assists target identification have seen high civilian death tolls [1]. Political geographer Craig Jones notes "there is no evidence that AI lowers civilian deaths or wrongful targeting decisions and it may be that the opposite is true" [1].
Academics and legal experts met in Geneva this week to discuss lethal autonomous weapons systems as part of long-running efforts toward international agreement on ethical or legal uses of AI in warfare [1]. Political scientist Michael Horowitz at the University of Pennsylvania observes that rapid technological development is outpacing slow international discussions. LLMs powering fully autonomous weapons without human oversight are not currently reliable and do not comply with international laws requiring the ability to distinguish between military and civilian targets [1].
Employees at Google and OpenAI circulated a petition calling on company leaders to refuse permission for AI models to be used for domestic mass surveillance or to autonomously kill people without human oversight [1]. The petition argued the Pentagon is "trying to divide each company with fear that the other will give in" [5]. A project led by political scientist Toni Erskine concluded that "fully autonomous weapons systems without a human in the loop are ethically untenable and should be banned internationally," while noting non-autonomous systems also carry risks requiring regulation [1].
Michael Pastor, dean for technology law programs at New York Law School, said Anthropic is "right to press hard on what 'for lawful purposes' means," noting that if the Pentagon is unwilling to clarify whether it would use the technology for mass domestic surveillance, "that raises flags Anthropic seems justified in waving" [5]. The escalating conflict between AI companies and government may determine what leverage each holds when their views on appropriate technology use clash, with significant ramifications for future defense contracts [5].
[2]