2 Sources
[1]
Trump Says He Fired Anthropic 'Like Dogs'
President Donald Trump took credit for decisive action against Anthropic on Thursday, as the AI company still waits to be formally designated as a "supply chain risk" by the Pentagon. And he used one of his favorite phrases in the process.

"Well, I fired Anthropic. Anthropic is in trouble because I fired [them] like dogs, because they shouldn't have done that," Trump said in a new interview with Politico.

Trump has a long history of saying that such-and-such person died like a dog (ISIS leader Abu Bakr al-Baghdadi) or lied like a dog (Ted Cruz) or "cheated on him like a dog" (Kristen Stewart). And while it often makes no sense, that's typically his definitive statement on the matter. The question is whether we should read much into it this time.

The tech world is waiting to see how the Pentagon's battle with Anthropic shakes out, given the stakes for the future of AI. The Department of Defense had demanded that Anthropic drop guardrails on its AI model Claude that prohibit its use in mass domestic surveillance and fully autonomous weapons. Anthropic refused, and that didn't go over well with Defense Secretary Pete Hegseth, who wrote on X that he was designating Anthropic as a supply chain risk. That designation, at least according to Hegseth, means that other businesses that work with the U.S. government can't work with Anthropic, a move that's been described as a form of corporate murder.

Anthropic instantly promised to file suit if the Pentagon was going to do something so drastic. But Anthropic has yet to acknowledge being given formal notice, leaving everything in a state of limbo. Bloomberg reported Thursday that it had given the company notice, but the source is an anonymous official who "didn't say when or by what means the Pentagon informed Anthropic of the designation."
The latest reporting from the Financial Times and Bloomberg indicates Anthropic CEO Dario Amodei is currently in talks with the Pentagon, suggesting that some form of agreement could still be reached in the near future. The discussions are being held with Emil Michael, Under-Secretary of Defense for Research and Engineering, a man who last week tweeted that Amodei was a "liar" with a "god-complex." So it remains to be seen how that will go, especially since it seems unlikely Anthropic can compromise after making such a big deal about ethics. On the other hand, Anthropic's corporate survival is on the line.

And it's not like the company is some bastion of moral behavior right now. The U.S. military is currently using Claude to help choose targets in the bombing of Iran, according to the Wall Street Journal. Specifically, the military is using Claude as it's embedded into Palantir's Maven Smart System, according to the Washington Post. From the Post:

As planning for a potential strike in Iran was underway, Maven, powered by Claude, suggested hundreds of targets, issued precise location coordinates, and prioritized those targets according to importance, said two of the people. The pairing of Maven and Claude has created a tool that is speeding the pace of the campaign, reducing Iran's ability to counterstrike and turning weeks-long battle planning into real-time operations, said one of the people. The AI tools also evaluate a strike after it is initiated, the person said.

It's unclear whether the U.S. or Israel was the country that bombed a school in Minab, Iran, that killed 168 people, most of them children. But there's speculation that it was the U.S. (Minab is in the south, territory the U.S. is primarily bombing) and that it could have been using Claude. The school was built on an old Revolutionary Guard base that closed about 15 years ago, according to NBC News. Did the U.S. military use AI to choose the target, and did it have old information that it failed to properly check? Plenty of people are talking about that possibility right now, and it all seems too plausible.
[2]
Trump says he fired Anthropic 'like dogs' as negotiations with Pentagon reportedly restart
Donald Trump boasted about severing the ties between the US military and Anthropic the same day multiple reports said that negotiations between the Department of Defense and the AI startup had resumed.

"Well, I fired Anthropic. Anthropic is in trouble because I fired [them] like dogs, because they shouldn't have done that," Trump told Politico.

Trump on Friday ordered the entire federal government to cease using Anthropic's tech, which the state and treasury departments have begun to do, according to their respective heads. The termination may not be permanent, though. Negotiations have restarted between the Pentagon and Anthropic over the military's use of the company's AI and the contract between the two, according to the Financial Times and Bloomberg.

Anthropic's products, which include the popular Claude chatbot and coding assistant, are integrated into Palantir's Maven system, a newly vital tool of military intelligence that was used in recent strikes on Iran, according to The Washington Post. Anthropic's CEO, Dario Amodei, has been discussing the Pentagon's contract with Emil Michael, the undersecretary of defense for research and engineering and a former Uber executive, per Bloomberg. The two strongly dislike one another, the New York Times reported.

On Thursday, Amodei sent a scorching message to his employees disparaging the Trump administration and rival OpenAI, which late last week jumped into the void left by the government and Anthropic's breakup and announced its own deal with the military. Amodei called OpenAI CEO Sam Altman "mendacious" and said Altman's claims that the Pentagon would abide by safeguards amounted to nothing more than "safety theater". For his part, Altman conceded to employees in an internal message this week that the company would have no control over how the military used OpenAI's technology.
Anthropic last week refused a deal with the Pentagon over concerns its model could be used for domestic mass surveillance or fully autonomous weapons. Pete Hegseth, the US defense secretary, declared the company a "supply-chain risk" in response, a designation that prevents all government contractors from using its technology and which has never been used before against a US company. Silicon Valley, including OpenAI, has backed Anthropic in the fight over the designation. Amodei said his company will sue over the label. The retributive blacklisting of the company could cause significant financial harm if formally enacted. Anthropic's most recent round of financing, some $60bn, is in jeopardy over the fracas with the Pentagon, per Axios. As the defense secretary vowed punitive measures against Anthropic, his agency also announced a deal with OpenAI to use its technology for military operations in its classified network. The timing as well as questions over ethics elicited backlash that propelled Claude's app to the top of the download charts in the US. In the following days, Altman said he would amend the agreement with the department and admitted his company's conduct appeared "opportunistic and sloppy".
Donald Trump claimed he fired Anthropic over its refusal to drop AI guardrails for military use, even as reports emerged that negotiations between the Pentagon and the AI startup have restarted. The dispute centers on whether Claude can be used for mass domestic surveillance and fully autonomous weapons, with Anthropic's $60 billion financing round now in jeopardy.
Donald Trump declared he "fired Anthropic like dogs" in a Thursday interview with Politico, using one of his characteristic phrases to describe his administration's decisive action against the AI company [1]. The statement came as the tech world watches the Pentagon's unprecedented confrontation with Anthropic unfold, with massive implications for how military AI will be developed and deployed in the United States. Trump on Friday ordered the entire federal government to cease using Anthropic's technology, a directive that the state and treasury departments have already begun implementing, according to their respective heads [2].
Source: Gizmodo
The conflict erupted when the Department of Defense demanded that Anthropic drop guardrails on its AI model Claude that prohibit its use in mass domestic surveillance and fully autonomous weapons [1]. Defense Secretary Pete Hegseth responded to Anthropic's refusal by threatening to designate the company as a supply chain risk, a label that would prevent all government contractors from using its technology and has never before been applied to a US company [2]. This designation amounts to what industry observers have described as corporate murder, potentially devastating Anthropic's business model and competitive position.

Despite Trump's inflammatory rhetoric, negotiations have restarted between the Pentagon and Anthropic over the military's use of the company's AI and the contract between the two, according to reports from the Financial Times and Bloomberg [2]. Anthropic CEO Dario Amodei is currently in talks with Emil Michael, Under-Secretary of Defense for Research and Engineering, though the discussions face significant obstacles given that Michael last week tweeted that Amodei was a "liar" with a "god-complex" [1]. The company has yet to acknowledge receiving formal notice of the supply chain risk designation, leaving the entire situation in limbo.

The irony of the dispute is that Claude is already being used in active military operations through its integration into Palantir's Maven Smart System [2]. The US military is currently using Claude to help choose targets in bombing campaigns against Iran, according to the Wall Street Journal. As planning for strikes in Iran was underway, Maven, powered by Claude, suggested hundreds of targets, issued precise location coordinates, and prioritized those targets according to importance, turning weeks-long battle planning into real-time operations [1]. This use of AI in defense raises troubling questions about target-selection accuracy, particularly after a school in Minab, Iran, built on an old Revolutionary Guard base that closed 15 years ago, was bombed, killing 168 people, most of them children.
Anthropic's corporate survival hangs in the balance as its most recent financing round, valued at some $60 billion, is now in jeopardy over the fracas with the Pentagon, according to Axios [2]. The retributive blacklisting could cause significant financial harm if formally enacted. Meanwhile, OpenAI seized the opportunity created by the severing of Anthropic's government contracts, announcing its own deal with the military late last week. However, this move elicited backlash that propelled Claude's app to the top of the download charts in the US, and OpenAI CEO Sam Altman later admitted the conduct appeared "opportunistic and sloppy" [2]. Amodei sent a scorching message to employees calling Altman "mendacious" and dismissing his safety claims as "safety theater," while Altman conceded his company would have no control over how the military used OpenAI's technology.

The standoff highlights fundamental tensions around the ethical implications of AI and how tech companies navigate demands from their most powerful potential customer. Silicon Valley, including OpenAI, has backed Anthropic in the fight over the designation, and Amodei has vowed the company will sue over the label [2]. Whether Anthropic can maintain its ethical stance after making such a public stand remains uncertain, especially given that compromise seems unlikely and the company's survival depends on finding a resolution. The tech industry is now watching closely to see whether principled opposition to military AI applications can coexist with commercial viability, or whether government pressure will force even the most ethics-focused companies to capitulate.