65 Sources
[1]
Anthropic to challenge DOD's supply chain label in court | TechCrunch
Dario Amodei said Thursday that Anthropic plans to challenge the Defense Department's decision to label the AI firm a supply chain risk in court, a designation he has called "legally unsound." The statement comes a few hours after the Department officially designated Anthropic a supply chain risk following a weeks-long dispute over how much control the military should have over AI systems. A supply chain risk designation can bar a company from working with the Pentagon and its contractors. Amodei drew a firm line that Anthropic's AI should not be used for mass surveillance of Americans or for fully autonomous weapons, but the Pentagon believed it should have unrestricted access for "all lawful purposes." In his statement, Amodei said the vast majority of Anthropic's customers are unaffected by the supply chain risk designation. "With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts," he said. As a preview of what Anthropic will likely argue in court, Amodei said the Department's letter labeling the firm a supply chain risk is narrow in scope. "It exists to protect the government rather than to punish a supplier; in fact, the law requires the Secretary of War to use the least restrictive means necessary to accomplish the goal of protecting the supply chain," Amodei said. "Even for Department of War contractors, the supply chain risk designation doesn't (and can't) limit uses of Claude or business relationships with Anthropic if those are unrelated to their specific Department of War contracts." Amodei reiterated that Anthropic had been having productive conversations with the Department over the last several days, conversations that some suspect got derailed when an internal memo he sent to staff was leaked. In it, Amodei characterized rival OpenAI's dealings with the Department of Defense as "safety theater." 
OpenAI has signed a deal to work with the Defense Department in Anthropic's place, a move that has sparked backlash among OpenAI staff. Amodei apologized for the leak in his Thursday statement, claiming that the company did not intentionally share the memo or direct anyone else to do so. "It is not in our interest to escalate the situation," he said. Amodei said the memo was written within "a few hours" of a series of announcements, including a presidential Truth Social post saying Anthropic would be removed from federal systems, then Defense Secretary Hegseth's supply chain risk designation, and finally the Pentagon's deal announcement with OpenAI. He apologized for the tone, calling it "a difficult day for the company" and said the memo didn't reflect his "careful or considered views." Written six days ago, he added, it's now an "out-of-date assessment." He finished by saying Anthropic's top priority is to ensure American soldiers and national security experts maintain access to important tools in the middle of ongoing major combat operations. Anthropic is currently supporting some of the U.S.'s operations in Iran, and Amodei said the company would continue to provide its models to the Defense Department at "nominal cost" for "as long as necessary to make that transition." Anthropic could challenge the designation in federal court, likely in Washington, but the law behind the decision makes it harder to contest because it limits the usual ways companies can challenge government procurement decisions and gives the Pentagon broad discretion on national security matters. Or as Dean Ball -- a former Trump-era White House advisor on AI who has spoken out against Hegseth's treatment of Anthropic -- put it: "Courts are pretty reluctant to second-guess the government on what is and is not a national security issue...There's a very high bar that one needs to clear in order to do that. But it's not impossible."
[2]
It's official: The Pentagon has labeled Anthropic a supply chain risk | TechCrunch
The Department of Defense has officially notified Anthropic leadership that the company and its products have been designated a supply chain risk, Bloomberg reports, citing a senior department official. The designation comes after weeks of conflict between the AI lab and the DOD. Anthropic CEO Dario Amodei has refused to allow the military to use its AI systems for mass surveillance of Americans or to power fully autonomous weapons with no humans assisting in the targeting or firing decisions. The Department has argued that its use of AI should not be limited by a private contractor. Supply chain risk designations are typically reserved for foreign adversaries. The label requires any company or agency that does work with the Pentagon to certify that it doesn't use Anthropic's models. The Pentagon's finding threatens to disrupt both the company and its own operations. Anthropic has been the only frontier AI lab with classified-ready systems. The U.S. military is currently relying on Claude in its Iran campaign, where American forces are using AI tools to quickly manage the data for their operations. Claude is one of the main tools installed in Palantir's Maven Smart System, which military operators in the Middle East rely on, according to Bloomberg. Labeling Anthropic a supply chain risk over this disagreement is an unprecedented move from the Department, several critics say. Dean Ball, a former Trump White House AI advisor, has referred to the designation as a "death rattle" of the American republic, arguing the government has abandoned strategic clarity and respect in favor of "thuggish" tribalism that treats domestic innovators worse than foreign adversaries. Hundreds of employees from OpenAI and Google have urged the DOD to withdraw its designation and called on Congress to push back on what could be perceived as an inappropriate use of authority against an American technology company. 
They have also urged their leaders to stand together to continue to refuse the DOD's demands to use their AI models for domestic mass surveillance and "autonomously killing people without human oversight." TechCrunch has reached out to Anthropic for comment. In the midst of the dispute, OpenAI forged its own deal with the Department to allow the military to use its AI systems for "all lawful purposes." Some of the company's own employees have expressed concern about the ambiguous phrasing of the deal, which could lead to exactly the type of uses Anthropic was trying to avoid. Amodei has called the actions of the DOD "retaliatory and punitive," and reportedly said his refusal to praise or donate to President Trump contributed to the dispute with the Pentagon. OpenAI President Greg Brockman has been a staunch backer of Trump, recently donating $25 million to the MAGA Inc. Super PAC.
[3]
The Pentagon formally labels Anthropic a supply-chain risk
The decision, first reported by The Wall Street Journal on Thursday, citing a source familiar with the matter, will bar defense contractors from working with the government if they use Claude, Anthropic's AI program, in their products. Though the designation is typically applied to foreign companies with ties to adversarial governments, this is the first time that an American company has publicly received this label. At the heart of the conflict is Anthropic's refusal to allow the Pentagon to use Claude for two purposes: autonomous lethal weapons without human oversight, and mass surveillance. The Pentagon has argued that Anthropic's demands for control over government usage would place too much power in the hands of a private company, while Anthropic was not reassured that the government would respect its red lines. The negotiations grew ugly, however, as the Pentagon increasingly threatened to use the supply-chain risk designation should Anthropic refuse to comply with its demands. After Anthropic announced last Thursday that it would not comply, the Pentagon made good on that threat. (The Pentagon did not comment on the record. Anthropic did not immediately return a request for comment.)
[4]
Pentagon Gives Anthropic Supply-Chain Risk Label, CEO Confirms Court Challenge
The next step in a dispute between AI company Anthropic and the US Department of Defense has seen the former designated a supply-chain risk, meaning it will be cut off from partnering with other brands that work with the Pentagon. This comes after a week of failed negotiations between the makers of Claude and the US government. Anthropic refused to agree to new guidelines requiring that its AI models be available for any lawful use case. Anthropic said it wouldn't agree to the new rules without assurances that its AI models can't be used to power fully autonomous weapons or enable mass domestic surveillance of US citizens. Defense Secretary Pete Hegseth said last week that he would give Anthropic a supply-chain risk designation if it continued to refuse to implement the changes to the guidelines. A senior Pentagon official confirmed Anthropic's new label to The Wall Street Journal on Thursday. Anthropic CEO Dario Amodei posted on the company's blog, saying the brand sees "no choice but to challenge it in court." Amodei also says Anthropic doesn't foresee problems for most Claude customers stemming from the new designation. He says, "The language used by the Department of War in the letter (even supposing it was legally sound) matches our statement on Friday that the vast majority of our customers are unaffected by a supply chain risk designation." "With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts." Claude AI user numbers continue to grow amid the disruption, with the app sitting at the top of the free charts on the Google Play Store and Apple App Store in the US and in some other countries. Anthropic's chief product officer said on Thursday that the brand is seeing more than a million new customers a day. 
Amodei also apologized for a post sent to company staff last week, which has since leaked and been published by The Information. The memo quoted Amodei as saying, "We haven't given dictator-style praise to Trump (while Sam has)," referring to OpenAI CEO Sam Altman. OpenAI has agreed to the US government's changes, announcing that GPT models can be used in the Department of Defense's classified networks. Disclosure: Ziff Davis, PCMag's parent company, filed a lawsuit against OpenAI in April 2025, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
[5]
Anthropic Tells Judge It Could Lose Billions If US Shuns AI Tool
Anthropic PBC told a judge it could lose billions of dollars in revenue this year and urged quick action on its request to block the Trump administration's declaration of the company as a US supply-chain risk after a blowup with the Pentagon over artificial intelligence safety issues. The startup made a case for urgency to US District Judge Rita F. Lin at a hearing in San Francisco a day after Anthropic sued the Defense Department over the supply-risk designation. The dispute is over the startup's demand for assurances that its AI wouldn't be used for mass surveillance of Americans or autonomous weapons deployment.
[6]
US defense department told Anthropic it is a supply chain risk, CEO says
March 5 (Reuters) - The U.S. defense department has informed Anthropic that the artificial intelligence lab is a supply chain risk, Anthropic CEO Dario Amodei said on Thursday. "Yesterday (March 4) Anthropic received a letter from the Department of War confirming that we have been designated as a supply chain risk to America's national security," Amodei said. "We do not believe this action is legally sound, and we see no choice but to challenge it in court," he said. Reporting by Mrinmay Dey and Chris Thomas in Mexico City; Editing by Christopher Cushing
[7]
Microsoft backs Anthropic, urging a judge to halt Pentagon's actions against AI company
SAN FRANCISCO (AP) -- Microsoft is throwing its weight behind Anthropic in asking a federal court to block the Trump administration's designation of the artificial intelligence company as a supply chain risk. Microsoft, in a legal filing, is challenging Defense Secretary Pete Hegseth's action last week to shut Anthropic out of military work by labeling its AI products as a national security threat. The Pentagon took the action against Anthropic after an unusually public dispute over the company's refusal to allow unrestricted military use of its AI model Claude. President Donald Trump also said he was ordering all federal agencies to stop using Claude. "The use of a supply chain risk designation to address a contract dispute may bring severe economic effects that are not in the public interest," Microsoft, a major government contractor, said in its Tuesday filing in the San Francisco federal court, where Anthropic sued the Trump administration on Monday. The Pentagon's action "forces government contractors to comply with vague and ill-defined directions that have never before been publicly wielded against a U.S. company," Microsoft's legal brief says. It asks for a judge to order a temporary lifting of the designation to allow for more "reasoned discussion." The Pentagon declined to comment, saying it does not remark on matters in litigation. Microsoft also sided with Anthropic's two ethical red lines that were a sticking point in the contract negotiations. "Microsoft also believes that American AI should not be used to conduct domestic mass surveillance or start a war without human control," Microsoft said. "This position is consistent with the law and broadly supported by American society, as the government acknowledges."
[8]
Microsoft says Anthropic's products can remain available to customers after security risk designation
Microsoft said Thursday that it will keep startup Anthropic's artificial intelligence technology embedded in its products for clients, excluding the U.S. Department of War. The statement comes on the same day the federal agency informed Anthropic that it would label the company a supply-chain risk. Anthropic subsequently said it intends to challenge the move in court. "Our lawyers have studied the designation and have concluded that Anthropic products, including Claude, can remain available to our customers -- other than the Department of War -- through platforms such as M365, GitHub, and Microsoft's AI Foundry and that we can continue to work with Anthropic on non-defense related projects," a Microsoft spokesperson told CNBC in an email. Microsoft supplies its technology to a variety of U.S. government agencies. The Microsoft 365 productivity software is widely used inside the Department of War. In September Microsoft said it was integrating Anthropic's generative artificial intelligence models into the Microsoft 365 Copilot add-on for Microsoft 365 subscriptions, alongside models from OpenAI. Many software engineers have adopted Anthropic's Claude models for drafting source code, and they are available inside GitHub Copilot, as are OpenAI's competing Codex models.
[9]
Anthropic says it will challenge Defense Department's supply chain risk designation in court
In a new blog post, Anthropic CEO Dario Amodei has confirmed that the company received a letter from the Defense Department, officially labeling it a supply chain risk. He said he doesn't "believe this action is legally sound," and that his company sees "no choice" but to challenge it in court. Hours before Amodei published the post, the Pentagon announced that it notified the company that its "products are deemed a supply chain risk, effective immediately." If you'll recall, the Defense Department (called the Department of War under the current administration) threatened to give the company the designation typically reserved for firms from adversaries like China if it didn't agree to remove its safeguards over mass surveillance and autonomous weapons. President Trump then ordered federal agencies to stop using Anthropic's tech. Amodei explained that the designation has a narrow scope, because it only exists to protect the government. That is why the general public, and even Defense Department contractors, can still use Anthropic's Claude chatbot and its AI technologies. Microsoft told CNBC that it will continue using Claude after its lawyers concluded that it can keep working with Anthropic on non-defense related projects. The CEO also said that his company had "productive conversations" with the department over the past few days. He said that they were looking at ways to serve the Pentagon that adhere to its two exceptions, namely that its technology not be used for mass surveillance or the development of fully autonomous weapons, and at ways to "ensure a smooth transition if that is not possible." That confirms reports that Anthropic is back in talks with the agency in an effort to reach a new deal. In addition, he apologized for a leaked internal memo, wherein he reportedly said that OpenAI's messaging about its own deal with the department is "just straight up lies."
[10]
Anthropic says it will sue Pentagon over supply chain risk label
A hot potato: Anthropic's battle with the American government has taken a new turn. The US has now officially designated the AI company as a supply chain risk. In response, Anthropic has promised to sue the Pentagon over the decision. The clash centers on two red lines Anthropic refused to drop during negotiations with the Department of Defense: using Claude for mass domestic surveillance of Americans and for fully autonomous weapons. Anthropic says those carveouts are narrow, reasonable, and have not affected any government mission to date. The Pentagon disagreed. A senior Pentagon official said the supply chain risk designation was "effective immediately." In a statement published after Defense Secretary Pete Hegseth said he was directing the department to apply the designation, Anthropic called the move legally unsound, and said it would set a dangerous precedent for any US company negotiating with the government. The firm added that it would fight any such designation in court rather than cave on the two restrictions. "Well, I fired Anthropic. Anthropic is in trouble because I fired [them] like dogs, because they shouldn't have done that," President Trump said in a new interview with Politico. Anthropic doesn't have a history of being anti-defense. The company says it has supported US government work on classified networks since June 2024 and remains committed to national security projects. But it seems there are limits it won't cross. Anthropic CEO Dario Amodei wrote that the practical impact of the designation is narrower than officials have implied. He said the relevant law would only allow the restriction to apply to Claude's use in Defense Department contract work, not to every commercial relationship involving contractors that also happen to do business with the military. 
Essentially, Anthropic believes the Pentagon is trying to stretch the law well beyond its intended scope. The Information Technology Industry Council, whose members include several of the biggest names in US tech, warned that using emergency supply-chain powers in a procurement dispute could inject uncertainty into the market and make it harder for the government to access best-in-class tools and services. Anthropic says this is the first time such a measure has been publicly aimed at an American company rather than a foreign adversary or similarly high-risk entity. "From the very beginning, this has been about one fundamental principle: the military being able to use technology for all lawful purposes," a Pentagon official said. "The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk." Anthropic had been in talks with the DoD recently, but the conversations came to nothing. According to a person familiar with the matter, the failure was partly due to how President Donald Trump and other members of his administration publicly berated the company. Amodei is one of the few big tech names not to have made large donations to Trump or publicly praised him, which some believe has contributed to the current situation. Microsoft has confirmed that it will continue working with Anthropic on non-defense-related projects. The Redmond giant will continue to embed Anthropic tech in products for clients - except for the DoD, of course.
[11]
Anthropic officially designated a supply chain risk by Pentagon
The US has officially deemed artificial intelligence (AI) firm Anthropic a supply chain risk -- the first time the government has given that label to a US firm. The Pentagon's designation is the latest escalation in a clash over Anthropic's refusal to give the government unfettered access to its AI tools over concerns they would be used for mass surveillance and autonomous weapons. In a statement, a senior Pentagon official said the supply chain risk designation was "effective immediately." Anthropic has previously said it would sue the Pentagon over such a designation. The AI developer had been in talks with the Department of Defense in recent days, even after their negotiations spilled into public view last week, sources familiar with discussions have told the BBC. A Pentagon official said: "From the very beginning, this has been about one fundamental principle: the military being able to use technology for all lawful purposes." "The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk." Last week, US President Donald Trump said he would direct every federal agency to immediately stop using technology from AI developer Anthropic. "We don't need it, we don't want it, and will not do business with them again!" Trump wrote in a Truth Social post on Friday.
[12]
Pentagon Official Sees Little Chance to Revive Anthropic AI Deal
A top Pentagon official sees little chance of resuming negotiations with Anthropic PBC over military use of its artificial intelligence tools following the company's legal challenge of an unprecedented government move to declare the firm a supply-chain risk. Emil Michael, the under secretary of defense for research and engineering, said Monday that Anthropic's lawsuit was an "expected reaction" and that the company's attempt to reverse the supply-chain declaration wouldn't alter the Pentagon's decision. "I don't think there's a scenario where this gets resolved in that way," Michael said in an interview with Bloomberg News. Michael spoke hours after Anthropic sued to block the Pentagon and other agencies from declaring the firm a threat to the US supply chain and barring it from government contracts. In court filings earlier Monday, Anthropic claimed the moves had violated the company's rights to free speech and due process under the US Constitution. An Anthropic spokesperson had no immediate comment when asked Monday about Michael's remarks. A former Uber Technologies Inc. executive now overseeing a Pentagon effort to accelerate AI adoption, Michael had held weeks of tense negotiations with Anthropic Chief Executive Officer Dario Amodei over terms for using the firm's AI tools. Talks broke down roughly two weeks ago, after the company demanded assurances that its AI wouldn't be used for mass surveillance of Americans or autonomous weapons deployment. That prompted the Pentagon to declare San Francisco-based Anthropic a supply-chain risk, a move normally reserved for companies from adversarial nations. Until recently, Anthropic had provided the only AI system that could operate in the Pentagon's classified cloud, and its Claude Gov tool has become a favored option among defense personnel for its ease of use. 
The decision puts at risk a $200 million contract for Anthropic to provide the Pentagon with classified AI tools, and it could preclude the firm from partnering with other companies on their military work. Defense Secretary Pete Hegseth and President Donald Trump have outlined a six-month period for the military and US agencies to shift from Anthropic to other AI providers. In one of its court filings, Anthropic expressed concern that the government's actions have already hit its work with other federal contractors. It said that one vendor it has partnered with on custom applications warned that it may "suspend work or even remove Claude from existing deployments." Other contractors, Anthropic said, "are raising concerns, pausing collaborations, and considering terminating contracts." On Monday, Michael reiterated a key Pentagon complaint that Anthropic's public positions calling for safeguards on military use indicated that the company wanted to have input on the chain of command and potentially operational control. The company has rejected that characterization and insisted that all military decisions rest with the Pentagon. Amodei has also said that Anthropic would like to keep working with the military but wants those two usage restrictions to be honored in any defense contract. Asked whether there could be any return to negotiations, Michael was blunt: "The talks are over. We're moving on."
[13]
Pentagon labels Anthropic a supply-chain risk
The US Department of War has issued the first such designation ever applied to an American company, demanding that defence contractors certify they do not use Claude. Anthropic says the action is unlawful and retaliatory. For months, the conversations between Anthropic and the US Department of War were framed, at least publicly, as a negotiation. The San Francisco AI company wanted written assurances that its Claude models would not be used for mass domestic surveillance of Americans or for fully autonomous weapons with no human involvement in targeting decisions. The Pentagon wanted unrestricted access to what it saw as a critical technology asset, and no private contractor setting the terms. On March 5, 2026, that negotiation ended. The Department of War formally informed Anthropic that it and its products had been designated a supply-chain risk, effective immediately. In doing so, the government applied, for the first time in the programme's history, a designation previously reserved for companies from adversarial nations, most notably China's Huawei, to an American firm founded in San Francisco. "DOW officially informed Anthropic leadership the company and its products are deemed a supply chain risk, effective immediately," a senior department official told Bloomberg News, using the acronym for the Department of War, the name that Defense Secretary Pete Hegseth now favours for the Department of Defense. The practical consequences are immediate. Under 10 USC 3252, the supply-chain risk designation requires defence vendors and contractors to certify that they do not use Anthropic's Claude models in their work with the Pentagon. The provision was written to cut off technology from foreign adversaries, not domestic innovators. Anthropic's position was never that the military should not use AI. Its public statement on March 5 was precise about the two specific uses it sought to exclude: "the mass domestic surveillance of Americans and fully autonomous weapons." 
The company argued these were reasonable limits on any AI deployment, not just its own. The Pentagon's counter-argument was equally direct: it contended that mass surveillance of Americans is already illegal under existing law, and that fully autonomous weapons are already restricted by internal Defence Department policy. There was, in the government's view, no need to enshrine limits in a commercial contract, and Anthropic's insistence on doing so was read as an attempt to constrain military decision-making through a supplier relationship. "No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons," Anthropic said in its public statement. CEO Dario Amodei confirmed separately that he sees "no choice" but to challenge the designation in court, adding: "We do not believe this action is legally sound." Amodei has also indicated that his refusal to publicly praise or financially support President Donald Trump contributed to the deterioration of the relationship with the Pentagon. Even as the designation was announced, the US military was relying on Claude in its ongoing operations in Iran. According to reporting by CNBC on March 5, Claude is one of the primary tools installed in Palantir's Maven Smart System, which military operators in the Middle East use to manage data for their operations. The contradiction, blacklisting a company's AI while using that same AI in active conflict, was not lost on observers. President Trump directed federal agencies to "immediately cease" all use of Anthropic's technology, though it remains unclear how that directive would interact with active deployments through third-party contractors like Palantir. The response from the broader tech industry was swift and divided. Hundreds of employees at Google and OpenAI signed an open letter urging their companies to support Anthropic in its standoff. 
Elon Musk, meanwhile, sided with the Trump administration, claiming on his social platform that "Anthropic hates Western Civilization." OpenAI, for its part, announced its own deal with the Department of War hours after Anthropic was blacklisted. Under the terms of that agreement, the military can use OpenAI models for "all lawful purposes," phrasing that some OpenAI employees have since described as deliberately ambiguous and potentially encompassing exactly the uses Anthropic had sought to prohibit. Dean Ball, a former Trump White House AI adviser, described the supply-chain risk designation as a "death rattle" of American strategic coherence, arguing that treating a domestic company worse than a foreign adversary reflected "thuggish tribalism" over sound policy. Anthropic has stated publicly that the designation under 10 USC 3252 can only extend to Claude's use in Department of War contracts; it cannot, the company argues, affect its commercial customers or other government agencies. The legal challenge, when filed, will test that reading against the government's interpretation. Amodei has also indicated, in remarks reported by CBS News, that he is attempting to "deescalate" and reach "some agreement that works for us and works for them." Bloomberg reported that talks had been quietly reopened even as the formal designation was announced. Whether those talks produce anything before a court battle begins is, at the moment of publishing, unknown. What is certain is the precedent. For the first time, an American AI company built on the argument that safety and usefulness are complementary, that responsible AI is better AI, has been formally classified alongside China's Huawei by the government it sought to serve.
[14]
Former Military Officials, Academics, and Tech Policy Leaders Denounce Pentagon's Tactics Against Anthropic
"The future of American innovation in AI, the rule of law, and the constitutional boundaries of executive power are all on the line, and they are yours to defend." Over two dozen former defense and intelligence officials, tech policy leaders, and academics have signed on to a letter addressed to members of Congress over the Pentagon's recent decision to list Anthropic as a supply chain risk. The letter was signed by former high-ranking officials and current tech experts from across the political spectrum and calls on Congress to establish clear policies governing the use of AI for domestic surveillance and autonomous lethal weapons systems, the two issues at the center of the conflict. Anthropic had refused to loosen those guardrails for the military, setting off Defense Secretary Pete Hegseth and President Donald Trump, who have tried to blacklist the AI company, demanding that other firms with government contracts no longer do business with them. The letter calls the federal government's designation of the AI company as a supply chain risk an "inappropriate use of executive authority against Anthropic." Brad Carson, president of Americans for Responsible Innovation and former Under Secretary of the Army, told Gizmodo in a statement that it was a dangerous precedent. "The use of this authority against a domestic American company is a profound departure from its intended purpose and sets a dangerous precedent," the letter reads. "Supply chain risk designations exist to protect the United States from infiltration by foreign adversaries -- from companies beholden to Beijing or Moscow, not from American innovators operating transparently under the rule of law." The signatories to the letter include former CIA director Michael Hayden, retired Vice Admiral of the Navy Donald Arthur and former Deputy Assistant Secretary of Defense Diana Banks Thompson, among a host of other members of the military. 
Tech and education experts Lawrence Lessig and Randi Weingarten are also on the list, along with members of various tech-focused think tanks. The letter notes that concerns about fully autonomous weapons and mass surveillance of Americans are mainstream: they are not fringe positions. The prohibition on fully autonomous lethal weapons is consistent with the laws of armed conflict, including principles of distinction and proportionality codified in the Geneva Conventions. The prohibition on mass domestic surveillance is grounded in the Fourth Amendment and in binding U.S. treaty obligations under the International Covenant on Civil and Political Rights. It also points out that blacklisting an American company weakens U.S. competitiveness, warning this is "not a marketplace any serious entrepreneur or investor can build around." The letter is addressed to members of both the House and Senate Armed Services Committees, including Republicans Sen. Roger Wicker and Rep. Mike Rogers as well as Democrats Sen. Jack Reed and Rep. Adam Smith. Anthropic's future is still in doubt. Hegseth still hasn't formally given Anthropic notice that it's a supply chain risk (aside from a tweet) and the latest reporting from CBS News suggests the AI company is still trying to work out a deal with the Pentagon.
[15]
Pentagon's chief tech officer says he clashed with AI company Anthropic over autonomous warfare
A top Pentagon official said Anthropic's dispute with the government over the use of its artificial intelligence technology in fully autonomous weapons came after a debate over how AI could be used in President Donald Trump's future Golden Dome missile defense program, which aims to put U.S. weapons in space. U.S. Defense Undersecretary Emil Michael, the Pentagon's chief technology officer, said he came to view the AI company's ethical restrictions on the use of its chatbot Claude as an irrational obstacle as the U.S. military pursues giving greater autonomy to swarms of armed drones, underwater vehicles and other machines to compete with rivals like China that could do the same. "I need a reliable, steady partner that gives me something, that'll work with me on autonomous, because someday it'll be real and we're starting to see earlier versions of that," Michael said in a podcast aired Friday. "I need someone who's not going to wig out in the middle." The comments came after the Pentagon formally designated San Francisco-based Anthropic a supply chain risk, cutting off its defense work using a rule designed to prevent foreign adversaries from harming national security systems. Anthropic has vowed to sue over the designation, which affects its business partnerships with other military contractors. Trump has also ordered federal agencies to immediately stop using Claude, though the Republican president gave the Pentagon six months to phase out a product that's deeply embedded in classified military systems, including those used in the Iran war. Anthropic said it only sought to restrict its technology from being used in two high-level areas: mass surveillance of Americans and fully autonomous weapons. Michael, a former Uber executive, revealed his side of months-long talks with Anthropic CEO Dario Amodei in a lengthy conversation with Silicon Valley venture capitalists Jason Calacanis, David Friedberg and Chamath Palihapitiya, co-hosts of the "All-In" podcast. 
A fourth co-host, former PayPal executive David Sacks, is now Trump's AI czar and was not present for the episode but has been a vocal critic of Anthropic, including for its hiring of former Biden administration officials shortly after Trump returned to the White House last year. As talks hit an impasse last week, Michael lashed out at Amodei on social media, saying he "has a God-complex" and "wants nothing more than to try to personally control" the military. In the podcast, however, he positioned the dispute as part of a broader military shift toward using AI. Michael said the military is developing procedures for enabling different levels of autonomy in warfare depending on the risk posed. "This is part of the debate I had with Anthropic, which is we need AI for things like Golden Dome," Michael said, sharing a hypothetical scenario of the U.S. having only 90 seconds to respond to a Chinese hypersonic missile. A human anti-missile operator "may not be able to discriminate with their own eyes what they're going after," but an autonomous counterattack would be a low risk "because it's in space and you're just trying to hit something that's trying to get you." In another scenario, he said, "who could oppose if you have a military base, you have a bunch of soldiers sleeping, that you have a laser that can take down drones autonomously?" In response to the podcast comments, Anthropic pointed to an earlier Amodei statement saying "Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner." Michael, the defense undersecretary for research and engineering, was sworn in last May and said he took over the military's "AI portfolio" in August. That's when he said he began scrutinizing Anthropic's contracts -- some of which dated from President Joe Biden's Democratic administration. 
Michael said he questioned Anthropic over terms of use that he deemed too restrictive. "I need to have the terms of service be rational relative to our mission set," he said. "So we started these negotiations. It took three months and I had to sort of give them scenarios, like this Chinese hypersonic missile example. They're like, 'OK, we'll give you an exception for that.' Well, how about this drone swarm? 'We'll give an exception for that.' And I was like, exceptions doesn't work. I can't predict for the next 20 years what (are) all the things we might use AI for." That's when the Pentagon began insisting Anthropic and other AI companies allow for "all lawful use" of their technology, Michael said. Anthropic resisted that change, while its competitors -- Google, OpenAI and Elon Musk's xAI -- agreed to it, though some still have to get their infrastructure prepared for classified military work, Michael said. The other sticking point for Anthropic was not allowing any mass surveillance of Americans. "They didn't want us to bulk-collect public information on people using their AI system," Michael said, describing the negotiations as "interminable." Anthropic has disputed parts of Michael's version of the talks and emphasized that the protections it sought were narrow and not based on any existing uses of Claude. The next stage of the dispute will likely happen in court.
[16]
Anthropic CEO says 'no choice' but to challenge Trump admin's supply chain risk designation in court
Anthropic CEO Dario Amodei confirmed that the U.S. government declared his company a supply chain risk on Thursday and said it has "no choice" but to challenge the designation in court. The startup has been at odds with the Department of Defense in recent weeks over how its artificial intelligence models, known as Claude, can be used. Anthropic wanted assurance that its technology would not be tapped for fully autonomous weapons or domestic mass surveillance, but the DOD wanted Anthropic to grant the agency unfettered access to Claude across all lawful purposes. "As we stated last Friday, we do not believe, and have never believed, that it is the role of Anthropic or any private company to be involved in operational decision-making -- that is the role of the military," Amodei wrote.
[17]
Pentagon labels AI company Anthropic a supply chain risk
Pages from the Anthropic website and the company's logo are displayed on a computer screen in New York on Thursday, Feb. 26, 2026. (Patrick Sison/AP) The Trump administration is following through with its threat to designate artificial intelligence company Anthropic as a supply chain risk in an unprecedented move that could force other government contractors to stop using the AI chatbot Claude. The Pentagon said in a statement Thursday that it has "officially informed Anthropic leadership the company and its products are deemed a supply chain risk, effective immediately." The decision appeared to shut down the opportunity for further negotiation with Anthropic, nearly a week after President Donald Trump and Defense Secretary Pete Hegseth accused the company of endangering national security. Trump and Hegseth announced a series of threatened punishments last Friday, on the eve of the Iran war, after Anthropic CEO Dario Amodei refused to back down over concerns the company's products could be used for mass surveillance of Americans or autonomous weapons. Amodei said in a statement Thursday that "we do not believe this action is legally sound, and we see no choice but to challenge it in court." The Pentagon statement said, "this has been about one fundamental principle: the military being able to use technology for all lawful purposes. The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk." Amodei countered that the narrow exceptions Anthropic sought to limit surveillance and autonomous weapons "relate to high-level usage areas, and not operational decision-making." He said there were "productive conversations" with the Pentagon in recent days over whether it could keep using Claude or establish a "smooth transition" if no agreement was reached. 
Trump gave the military six months to phase out Claude, which is already widely embedded in military and national security platforms. Amodei said it's a priority to make sure warfighters won't be "deprived of important tools in the middle of major combat operations." Some military contractors were already cutting ties with Anthropic, a rising star in the tech industry that sells Claude to a variety of businesses and government agencies. Lockheed Martin said it will "follow the President's and the Department of War's direction" and look to other providers of large language models. "We expect minimal impacts as Lockheed Martin is not dependent on any single LLM vendor for any portion of our work," the company said. How the Defense Department will interpret the scope of the risk designation is unclear. Amodei said a notification Anthropic received from the Pentagon on Wednesday shows it only applies to Claude's use by customers as a "direct part of" their military contracts. Microsoft said its lawyers studied the rule and the company "can continue to work with Anthropic on non-defense related projects." The Pentagon's decision to apply a rule designed to address supply threats posed by foreign adversaries was met with broad criticism. Federal codes have defined supply chain risk as a "risk that an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert" a system in order to disrupt, degrade or spy on it. U.S. Sen. Kirsten Gillibrand, a New York Democrat and member of the Senate Armed Services Committee and Senate Intelligence Committee, called it "a dangerous misuse of a tool meant to address adversary-controlled technology." "This reckless action is shortsighted, self-destructive, and a gift to our adversaries," she said in a written statement Thursday. 
Neil Chilson, a Republican former chief technologist for the Federal Trade Commission who now leads AI policy at the Abundance Institute, said the decision looks like "massive overreach that would hurt both the U.S. AI sector and the military's ability to acquire the best technology for the U.S. warfighter." Earlier in the day, a group of former defense and national security officials sent a letter to U.S. lawmakers expressing "serious concern" about the designation. "The use of this authority against a domestic American company is a profound departure from its intended purpose and sets a dangerous precedent," said the letter from former officials and policy experts, including former CIA director Michael Hayden and retired Air Force, Army and Navy leaders. They added that such a designation is meant to "protect the United States from infiltration by foreign adversaries -- from companies beholden to Beijing or Moscow, not from American innovators operating transparently under the rule of law. Applying this tool to penalize a U.S. firm for declining to remove safeguards against mass domestic surveillance and fully autonomous weapons is a category error with consequences that extend far beyond this dispute." While losing big partnerships with defense contractors, Anthropic experienced a surge of consumer downloads over the past week due to people siding with its moral stance. More than a million people signed up for Claude each day this week, the company said, lifting it past OpenAI's ChatGPT and Google's Gemini as the top AI app in more than 20 countries in Apple's app store. The dispute with the Pentagon has also further deepened Anthropic's bitter rivalry with OpenAI that started when ex-OpenAI leaders, including Amodei, started Anthropic in 2021. Hours after the Pentagon punished Anthropic last Friday, OpenAI announced a deal to effectively replace Anthropic with ChatGPT in classified military environments. 
OpenAI said it sought similar protections against domestic surveillance and fully autonomous weapons but later had to amend its agreements, leading CEO Sam Altman to say he shouldn't have rushed a deal that "looked opportunistic and sloppy." Amodei also expressed regret about his own part in that "difficult day for the company," saying Thursday he wanted to "directly apologize" for an internal note he sent to Anthropic staff that attacked OpenAI's behavior and suggested Anthropic was being punished for not giving "dictator-like praise" to Trump.
[18]
Pentagon Turns to Ex-Uber Executive in Anthropic Feud Over AI
Emil Michael made his name in Silicon Valley a decade ago as an aggressive dealmaker for a startup -- Uber Technologies Inc. -- as it wrangled with governments in pursuit of market domination. Now, Michael has switched sides in a battle involving a different startup -- this time taking a leading role in the Pentagon's dispute with artificial intelligence pioneer Anthropic PBC. As US Under Secretary of Defense for Research and Engineering, Michael has been negotiating with Anthropic and its chief executive officer, Dario Amodei, over how the defense department can use its AI models. The discussions, centered on Anthropic's aim to keep its technology from being used for mass surveillance of Americans and to power fully autonomous weapons, are at an impasse. The Pentagon formally notified Anthropic this week that it had determined the company to be a supply chain risk -- a designation typically used only for foreign adversaries. The episode has allowed Michael to reprise some of the hardball tactics that defined his four-year tenure as chief business officer at Uber. The standoff has pitted the Defense Department against Anthropic, a leader in the industry, as well as a broad and vocal contingent of technologists worried over the use of AI in weapons. Even as he spars with Anthropic, Michael is simultaneously trying to build positive relationships with tech companies, reaching out to potential partners to accelerate the military's adoption of AI. Since he took the position in May, Michael has met with hundreds of tech companies, according to a department official. Part of the goal is to get the best AI technology into the hands of the government, work closely with a handful of leading players, and expand the universe of contractors the Defense Department typically deals with, the official said. 
Michael has also kept up his direct relationships with investors -- including some that back Anthropic, whom he has talked with in recent days, according to a person familiar with the matter who asked not to be identified discussing private conversations. During their chats, he has shared his perspective on the negotiations from the government's side, they added. Michael has been publicly critical of Anthropic, calling Amodei a "liar" with "a God-complex" in an X post last week. At Andreessen Horowitz's American Dynamism Summit on Tuesday, Michael said that issues with an unnamed model vendor went "well beyond what you've been hearing in the press in the last couple of weeks." He also said that the company had pushed for "dozens of restrictions. And yet these AI models were baked into some of the most sensitive and important places in the US military." His fiery personality in government is in keeping with his reputation at Uber, where he served as former CEO Travis Kalanick's right-hand man and was a fixture of its early successes. During his four years at the company, he helped transform Uber from a scrappy startup on tenuous regulatory footing into a household name and mainstay of global transportation. He also assisted with raising more than $10 billion. He oversaw Uber's expansion into international markets such as China, and eventually Uber's sale of its Chinese operations to rival Didi Chuxing. 
Michael's streak of victories at the company was punctuated by controversies. He was ultimately ousted in 2017 following an investigation into the ride-sharing company's workplace culture helmed by former US Attorney General Eric Holder. Holder's report recommended Michael's removal from the company among other leadership changes, Bloomberg reported at the time. Michael had previously been involved in other high-profile scandals at Uber, including reports that he and other executives visited an escort-karaoke bar. He also suggested in 2014 that Uber could pay to dig up dirt on journalists critical of the company. He denies that he went after a reporter and in a statement at the time said he regretted the incident. Still, some Michael allies are glad to see a seasoned business operator in government. "You want someone in the Pentagon who really understands technology and knows how to navigate the technology world," said Joe Lonsdale, a conservative investor and a co-founder of Palantir Technologies Inc. And someone "who's young enough that they're still working 100 hours a week, super intensely." A former college Republican at Harvard University, Michael has prior government experience, too. Before joining Uber, Michael was a White House fellow under President Barack Obama and served as a special assistant to former Defense Secretary Robert Gates. During his tenure at Uber, he joined the Defense Business Board to lend his tech expertise to policy recommendations. In the years following his departure from Uber, and before his Defense Department appointment, Michael was the CEO of a special purpose acquisition company called DPCM Capital. His political donations, while limited, have crossed party lines. Most recently, he gave $1 million in 2024 to MAGA Inc., President Donald Trump's super political action committee, Federal Election Commission records show. 
Earlier, Michael contributed $2,700 to Hillary Clinton's 2016 presidential campaign.
[19]
Anthropic fights designation from Department of War as AI dispute escalates
The Department of Defense, known under the Trump administration as the Department of War, has just officially designated AI company Anthropic a "supply-chain risk" to national security. But Anthropic isn't buying it. "We do not believe this action is legally sound, and we see no choice but to challenge it in court," Anthropic CEO Dario Amodei wrote in a statement responding to the ongoing dispute. Amodei also emphasized that the designation does not affect the majority of Anthropic customers. Anthropic is "proud of the work" it has done alongside the federal government in "supporting frontline warfighters with applications such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more," Amodei said. The dispute began over the potential use of Anthropic's AI technology to carry out mass domestic surveillance, and to power autonomous weapons like drones. The relationship between Anthropic and the U.S. military deteriorated last week, after the AI company won a $200 million contract from the federal government -- but sought guarantees that its technology would not be used for surveillance, or weapons that can fire without humans in the loop. The U.S. government would not agree to Anthropic's terms and threatened to designate the company a supply-chain risk, which it has now done. Trump also issued an executive order that tells every federal agency to stop using Anthropic's AI. Amodei apologized for the leak of an internal memo, and confirmed recent reports that Anthropic and the Department of Defense have re-entered negotiations. If the two sides can't come to terms, Anthropic will assist the government in a transitional period, Amodei said. The CEO also referenced the U.S. government's deal with OpenAI, made in the wake of the Anthropic dispute. Even OpenAI "characterized" its deal with the U.S. government as "confusing," he said. 
OpenAI CEO Sam Altman was forced to address the deal after receiving significant blowback from users. "Our most important priority right now is making sure that our warfighters and national security experts are not deprived of important tools in the middle of major combat operations," Amodei wrote. "Anthropic will provide our models to the Department of War and national security community, at nominal cost and with continuing support from our engineers, for as long as is necessary to make that transition, and for as long as we are permitted to do so." As the Wall Street Journal previously reported, the U.S. military has already used Anthropic's Claude models to aid in carrying out strikes in Iran.
[20]
More than a million users a day are signing up for Claude -- as Anthropic hits out at its 'legally unsound' US government ban
* The US government has labeled Anthropic a supply chain risk * Anthropic says it will challenge the decision in the courts * Claude users shouldn't be affected -- and usage numbers are up The US government has now officially designated Anthropic as a supply chain risk over the AI company's unwillingness to sign an intelligence deal with the Pentagon, a move that Anthropic is planning to challenge in the courts. In an official blog post, Anthropic CEO Dario Amodei has described the decision as "legally unsound". It follows a turbulent week of reaction to Anthropic's decision to step away from partnership discussions with the US military, over concerns around mass surveillance and fully autonomous weapons. A company is declared to be a supply chain risk when US authorities believe that doing business with that company would compromise national security. It's the first time a US company has been officially given the label. The move shouldn't have much effect on Claude's users -- as Amodei says, the supply chain risk designation "exists to protect the government rather than to punish a supplier", and doesn't cover anything outside of official government usage. A boost in users: While Anthropic and the White House try to settle their differences (and there are some indications that a Pentagon deal could still be signed), Claude is enjoying a healthy boost in user numbers -- perhaps due to its ethical stance over AI in the military. According to Anthropic's Mike Krieger, more than a million people are now signing up for Claude every day. Anthropic doesn't publish Claude's user numbers, but the service was thought to have around 20 million active monthly users at the start of the year. A significant proportion of those new arrivals might be fleeing from ChatGPT. Its developer OpenAI has signed a deal with the US military after Anthropic pulled out, a move that has been met with a wave of criticism from ChatGPT users. 
Based on a leaked internal memo, Anthropic's Amodei thinks that the ChatGPT deal is mostly "safety theater", and OpenAI CEO Sam Altman has himself described it as "rushed". It's likely that there's more to come from this story over the coming days.
[21]
Scoop: Anthropic CEO apologizes for leaked memo criticizing Trump
Why it matters: The dispute has raised fundamental questions over AI governance and cast a shadow over the industry's relationship with Washington. * Amodei won a legion of fans -- and Anthropic's Claude a flood of new users -- for his initial strong stance in a dispute over how AI could be used by the military. State of play: Despite the apology, Anthropic still plans to sue over the Pentagon's designation of the company as a supply chain risk, which Anthropic says is narrow and only restricts certain activities. "It was a difficult day for the company, and I apologize for the tone of the post," a new blog post from Amodei said Thursday, referring to an explosive internal memo to staff that put negotiations in jeopardy. * "It does not reflect my careful or considered views. It was also written six days ago and is an out of date assessment of the current situation," Amodei said in the post, a copy of which was obtained by Axios. * Amodei says Anthropic did not leak the post or ask anyone else to do it. * The company's "most important" goal now, he added, "is making sure that our war fighters and national security experts are not deprived of the important tools in the middle of war." The other side: "DOW officially informed Anthropic leadership the company and its products are deemed a supply chain risk, effective immediately," a senior Pentagon official said in a statement. * "The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk." Tension point: As of Thursday night, the Pentagon was still actively using Claude to provide support for military operations, including in Iran, according to a source familiar. Behind the scenes: The Pentagon's deadline for Anthropic to adhere to its "all lawful purposes" standard came and went last Friday at 5:01pm, but days passed and no formal designation of a supply chain risk had been sent.
[22]
The Most Disruptive Company in the World
Meanwhile, OpenAI's own military contract spurred a grassroots boycott. For some at OpenAI, trust had been breached. A top OpenAI researcher announced he was jumping to Anthropic. OpenAI's robotics team lead resigned, citing the new government contract. Altman, OpenAI's CEO, wrote he had been wrong to rush to get a Pentagon deal by Friday. "The issues are super complex, and demand clear communication." By Monday, Altman had conceded his actions the previous Friday had looked "opportunistic"; OpenAI said it had amended its deal to more clearly adopt the same red lines Anthropic wanted -- though legal experts say that without seeing the full contract, it is impossible to know if that's true. On March 4, Anthropic received a letter from the Department of Defense, confirming its designation as a supply-chain risk to national security. Anthropic said the letter was narrower than Hegseth's post suggested, barring contractors from using Claude only in defense contracts. But a second letter, addressed to Senate Intelligence Committee chair Tom Cotton and reviewed by TIME, reveals the department has also invoked a separate statute -- one that would empower agencies beyond the Pentagon to bar Anthropic from their contracts and supply chains. To take effect, it requires sign-off from senior Pentagon officials and gives Anthropic 30 days to respond. The fight with Anthropic will reverberate through the industry. "Some people in the Trump Administration will feel muscular and good about themselves, and they'll massage their biceps when they go home at night," says Dean Ball, who drafted Trump's AI action plan before joining the think tank Foundation for American Innovation. But it may dissuade companies from working with the Pentagon, or push them abroad, he says. "In the end, this is not good for the U.S. as a stable business environment," Ball says, "and that's the thing we depend on." 
Anthropic's leaders believe Claude will help build AI systems so powerful they prove decisive in determining the global balance of power. If that's the case, the stakes of its fight with the Pentagon may pale in comparison with what's to come. -- With reporting by Leslie Dickstein and Simmone Shah
[23]
Pentagon officially defines Anthropic as 'supply chain risk' | Fortune
Amodei also expressed regret about his own part in that "difficult day for the company," saying Thursday he wanted to "directly apologize" for an internal note he sent to Anthropic staff that attacked OpenAI's behavior and suggested Anthropic was being punished for not giving "dictator-like praise" to Trump.
[24]
Pentagon says it is labeling AI company Anthropic a supply chain risk 'effective immediately'
The Trump administration is following through with its threat to designate artificial intelligence company Anthropic as a supply chain risk in an unprecedented move that could force other government contractors to stop using the AI chatbot Claude. The Pentagon said in a statement Thursday that it has "officially informed Anthropic leadership the company and its products are deemed a supply chain risk, effective immediately." The decision appeared to shut down the opportunity for further negotiation with Anthropic, nearly a week after President Donald Trump and Defense Secretary Pete Hegseth accused the company of endangering national security. Trump and Hegseth announced a series of threatened punishments last Friday, on the eve of the Iran war, after Anthropic CEO Dario Amodei refused to back down over concerns the company's products could be used for mass surveillance of Americans or autonomous weapons. The San Francisco-based company didn't immediately respond to a request for comment Thursday. It has previously vowed to sue if the Pentagon pursued what the company described as a "legally unsound" action "never before publicly applied to an American company." The Pentagon didn't reply to questions in time for publication. Some military contractors were already cutting ties with Anthropic, a rising star in the tech industry that sells Claude to a variety of businesses and government agencies. Lockheed Martin said it will "follow the President's and the Department of War's direction" and look to other providers of large language models. "We expect minimal impacts as Lockheed Martin is not dependent on any single LLM vendor for any portion of our work," the company said. It's not yet clear if the designation aims to block Anthropic's use by all federal government contractors or just those that partner with the military. 
The Pentagon's decision to apply a rule designed to address supply threats posed by foreign adversaries was quickly met with criticism from both opponents and some supporters of Trump's Republican administration. Federal codes have defined supply chain risk as a "risk that an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert" a system in order to disrupt, degrade or spy on it. U.S. Sen. Kirsten Gillibrand, a New York Democrat and member of the Senate Armed Services Committee and Senate Intelligence Committee, called it "a dangerous misuse of a tool meant to address adversary-controlled technology." "This reckless action is shortsighted, self-destructive, and a gift to our adversaries," she said in a written statement Thursday. Neil Chilson, a Republican former chief technologist for the Federal Trade Commission who now leads AI policy at the Abundance Institute, said the decision looks like "massive overreach that would hurt both the U.S. AI sector and the military's ability to acquire the best technology for the U.S. warfighter." Earlier in the day, a group of former defense and national security officials sent a letter to U.S. lawmakers expressing "serious concern" about the designation. "The use of this authority against a domestic American company is a profound departure from its intended purpose and sets a dangerous precedent," said the letter from former officials and policy experts, including former CIA director Michael Hayden and retired Air Force, Army and Navy leaders. They added that such a designation is meant to "protect the United States from infiltration by foreign adversaries -- from companies beholden to Beijing or Moscow, not from American innovators operating transparently under the rule of law. Applying this tool to penalize a U.S. firm for declining to remove safeguards against mass domestic surveillance and fully autonomous weapons is a category error with consequences that extend far beyond this dispute." 
While losing its big partnerships with defense contractors, Anthropic experienced a surge of consumer downloads over the past week due to people siding with its moral stance. Anthropic has boasted of more than a million people signing up for Claude each day this week, lifting it past OpenAI's ChatGPT and Google's Gemini as the top AI app in more than 20 countries in Apple's app store. The dispute with the Pentagon has also further deepened Anthropic's bitter rivalry with OpenAI, which announced a deal with the Pentagon on Friday to effectively replace Anthropic with ChatGPT in classified environments. OpenAI CEO Sam Altman later said he shouldn't have rushed a deal that "looked opportunistic and sloppy."
[25]
Anthropic officially told by DOD that it's a supply chain risk even as Claude used in Iran
The Department of Defense has officially informed Anthropic's leadership that the company and its products have been designated a supply chain risk, effective immediately, according to a senior department official. "From the very beginning, this has been about one fundamental principle: the military being able to use technology for all lawful purposes," the official told CNBC. "The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk." Anthropic is the only American company ever to be publicly named a supply chain risk, as the designation has traditionally been used against foreign adversaries. The formal declaration will require defense vendors and contractors to certify that they don't use Anthropic's models in their work with the Pentagon. The formal designation marks the latest development in the ongoing clash between Anthropic and the Pentagon, which have been at odds over how the startup's artificial intelligence models, known as Claude, can be used.
[26]
Anthropic vows court fight in Pentagon row
Washington (United States) (AFP) - Anthropic chief executive Dario Amodei has said the company has "no choice" but to challenge in court the Pentagon's formal designation of the artificial intelligence firm as a risk to US national security. The CEO, writing in a blog post on Thursday, insisted however that the ruling's practical scope is narrower than initially suggested, signaling that the designation would not have a catastrophic effect on the company. Amodei said the Department of War -- the name preferred by the Trump administration for the Department of Defense -- confirmed in a letter that Anthropic and its products, including its widely-used Claude AI model, have been deemed a supply chain risk. It is the first time a US company has ever been publicly given such a designation, a label typically reserved for organizations from foreign adversary countries, like Chinese tech company Huawei. Amodei, in his blog post, said the company disputes the legal basis of the action but sought to reassure customers. "It plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts," he wrote. The designation will require defense vendors and contractors to certify that they don't use Anthropic's models in their work with the Pentagon. But Amodei argued that under the relevant statute, the intention is "to protect the government rather than to punish a supplier" and requires the Department of Defense to use "the least restrictive means necessary." Microsoft, one of Anthropic's biggest partners, agreed with that reading, telling US media its lawyers studied the designation and concluded that Anthropic products, including Claude, can remain available to its customers other than the Department of War. 
'Sloppy' The dispute erupted after Anthropic infuriated Pentagon chief Pete Hegseth by insisting its technology should not be used for mass surveillance or fully autonomous weapons systems. Washington hit back, saying the Pentagon operates within the law and that contracted suppliers cannot dictate terms on how their products are used. Amodei also used the statement to apologize for an internal company memo leaked to the press this week, in which he told staff the actions against the company were politically motivated. "The real reasons" the Trump administration "do not like us is that we haven't donated to Trump (while OpenAI/Greg have donated a lot)," Amodei said, referring to Greg Brockman, the president of ChatGPT-maker OpenAI, who has donated $25 million to Trump. Amodei called the memo an "out-of-date assessment of the current situation," written under duress on a day that saw his company under extreme pressure from the government. OpenAI initially swooped in to replace Anthropic in its contract with the US military, but that move backfired when senior OpenAI staff expressed discomfort with the deal. OpenAI CEO Sam Altman later said the deal was "sloppy" and that he was working to revise it. The standoff with the Pentagon has had some silver lining for Anthropic, which was founded in 2021 by former staffers of OpenAI, with a focus on AI safety. The conflict has helped propel the Claude app to the top of download rankings on Apple and Google smartphones. Anthropic also indicated to AFP that the number of paying users of its Claude model had doubled since the beginning of the year and that its app is currently downloaded more than a million times a day.
[27]
US designates Anthropic a supply chain risk after AI military dispute
The Trump administration's unprecedented move against Anthropic over AI safeguards is forcing government contractors to reconsider their use of the company's chatbot Claude. The US administration is following through with its threat to designate artificial intelligence (AI) company Anthropic as a supply chain risk in an unprecedented move that could force other government contractors to stop using the AI chatbot, Claude. The Pentagon said in a statement Thursday that it has "officially informed Anthropic leadership that the company and its products are deemed a supply chain risk, effective immediately." The decision appeared to shut down the opportunity for further negotiation with Anthropic, nearly a week after President Donald Trump and defence secretary Pete Hegseth accused the company of endangering national security. Trump and Hegseth announced a series of threatened punishments last Friday, on the eve of the Iran war, after Anthropic CEO Dario Amodei refused to back down over concerns the company's products could be used for mass surveillance of Americans or autonomous weapons. Amodei said in a statement Thursday that "we do not believe this action is legally sound, and we see no choice but to challenge it in court." "This has been about one fundamental principle: the military being able to use technology for all lawful purposes," the Pentagon statement said. "The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk," it added. Amodei countered that the narrow exceptions Anthropic sought to limit surveillance and autonomous weapons "relate to high-level usage areas, and not operational decision-making". He added there were "productive conversations" with the Pentagon in recent days over whether it could keep using Claude or establish a "smooth transition" if no agreement was reached. 
Trump gave the military six months to phase out Claude, which is already widely embedded in military and national security platforms. Amodei said it's a priority to make sure warfighters won't be "deprived of important tools in the middle of major combat operations." Some military contractors were already cutting ties with Anthropic, a rising star in the tech industry that sells Claude to a variety of businesses and government agencies. Defence company Lockheed Martin said it will "follow the President's and the Department of War's direction" and look to other providers of large language models. "We expect minimal impacts as Lockheed Martin is not dependent on any single LLM vendor for any portion of our work," the company said. How the US Defence Department will interpret the scope of the risk designation is unclear. Amodei said a notification Anthropic received from the Pentagon on Wednesday shows it only applies to Claude's use by customers as a "direct part of" their military contracts. Microsoft said its lawyers studied the rule and the company "can continue to work with Anthropic on non-defence related projects." Pentagon draws criticism for its decision The Pentagon's decision to apply a rule designed to address supply threats posed by foreign adversaries was met with broad criticism. Federal codes have defined supply chain risk as a "risk that an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert" a system in order to disrupt, degrade or spy on it. US Senator Kirsten Gillibrand, a New York Democrat and member of the Senate Armed Services Committee and Senate Intelligence Committee, called it "a dangerous misuse of a tool meant to address adversary-controlled technology." "This reckless action is shortsighted, self-destructive, and a gift to our adversaries," she said in a written statement Thursday. 
Neil Chilson, a Republican former chief technologist for the Federal Trade Commission who now leads AI policy at the Abundance Institute, said the decision looks like "massive overreach that would hurt both the US AI sector and the military's ability to acquire the best technology for the US warfighter." Earlier in the day, a group of former defence and national security officials sent a letter to US lawmakers expressing "serious concern" about the designation. "The use of this authority against a domestic American company is a profound departure from its intended purpose and sets a dangerous precedent," said the letter from former officials and policy experts, including former CIA director Michael Hayden and retired Air Force, Army and Navy leaders. They added that such a designation is meant to "protect the United States from infiltration by foreign adversaries -- from companies beholden to Beijing or Moscow, not from American innovators operating transparently under the rule of law. Applying this tool to penalise a US firm for declining to remove safeguards against mass domestic surveillance and fully autonomous weapons is a category error with consequences that extend far beyond this dispute." Anthropic sees boost in consumer downloads While losing big partnerships with defence contractors, Anthropic experienced a surge of consumer downloads over the past week due to people siding with its moral stance. More than a million people signed up for Claude each day this week, the company said, lifting it past OpenAI's ChatGPT and Google's Gemini as the top AI app in more than 20 countries in Apple's app store. The dispute with the Pentagon has also further deepened Anthropic's bitter rivalry with OpenAI, which started when ex-OpenAI leaders, including Amodei, started Anthropic in 2021. Hours after the Pentagon punished Anthropic last Friday, OpenAI announced a deal to effectively replace Anthropic with ChatGPT in classified military environments. 
OpenAI said it sought similar protections against domestic surveillance and fully autonomous weapons, but later had to amend its agreements, leading CEO Sam Altman to say he shouldn't have rushed a deal that "looked opportunistic and sloppy." Amodei also expressed regret about his own part in that "difficult day for the company," saying Thursday he wanted to "directly apologise" for an internal note he sent to Anthropic staff that attacked OpenAI's behaviour and suggested Anthropic was being punished for not giving "dictator-like praise" to Trump.
[28]
Anthropic says that the Pentagon has declared it a national security risk
Anthropic said Thursday that it has been designated a threat to national security by the Defense Department, a striking move that bans the company from doing business with the U.S. military and could send shockwaves through America's AI industry. The designation, which the company said it received on Wednesday and specifically labels Anthropic a "supply-chain risk to national security," requires the Pentagon and its contractors to stop using Anthropic's AI services for all defense business. Defense Secretary Pete Hegseth first telegraphed the move Friday evening in a post on X. The move comes after months of increasingly tense negotiations over how the military should be able to use Anthropic's Claude AI systems. Though a relatively new technology, generative AI models like Claude have quickly been embraced by the Trump administration, including for military use. For the past several months, the Pentagon has been negotiating new contract terms with Anthropic, along with other leading American AI companies, to allow for more expansive military use of AI. While the Pentagon has sought to harness powerful AI systems for "any lawful use," Anthropic CEO Dario Amodei had wanted stronger guarantees the Pentagon would not use its AI technology for deadly autonomous weapons or mass domestic surveillance. Amodei confirmed the supply-chain risk label in a statement Thursday night and said the company did not agree with it, writing "we do not believe this action is legally sound, and we see no choice but to challenge it in court." "Anthropic has much more in common with the Department of War than we have differences," he wrote in the statement. "We both are committed to advancing US national security and defending the American people, and agree on the urgency of applying AI across the government. All our future decisions will flow from that shared premise." 
Until last week, Anthropic was the only AI company whose services were cleared for use on the Defense Department's classified networks. Hours after Hegseth announced he would seek to label Anthropic as a supply chain risk last week, OpenAI CEO Sam Altman announced his company had reached a new agreement with the Pentagon to use OpenAI's services in classified settings, which could allow OpenAI to replace much of Anthropic's current business with the Pentagon. Elon Musk's xAI and its Grok AI systems also struck a deal with the Pentagon last week to be cleared for use on classified networks. In the statement posted on its website Thursday evening, Amodei emphasized that the ban on Anthropic's business with the military did not apply to contracts with military suppliers for non-defense-related purposes. Anthropic has extensive business deals with many of America's leading tech companies, including Amazon and Microsoft, many of which also have large contracts with the Pentagon. A senior Defense Department official confirmed the supply-chain risk determination was effective immediately. "From the very beginning," the official told NBC News on Thursday, "this has been about one fundamental principle: the military being able to use technology for all lawful purposes. The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk." In his post announcing the move last Friday, Hegseth wrote that Anthropic would "continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service." "Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon," Hegseth said in the post. 
"Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic's models for every LAWFUL purpose in defense of the Republic." In a statement last week during the strained contract negotiations and before Hegseth announced the move, Amodei noted that the supply-chain risk label, usually reserved for foreign adversaries and associated businesses, had "never before applied to an American company." Several legal observers have said the designation would likely not hold up legally and is instead meant to warn other companies to toe the Pentagon's line. The mere threat of such a designation already roiled Washington and the tech industry. Fearing fallout from the potential supply-chain-risk decision, defense experts, Anthropic rival OpenAI and members of Congress had tried to cool tensions between Anthropic and the Pentagon throughout this week. An influential tech advocacy group, whose members include Nvidia and Apple, sent a letter to Hegseth on Wednesday urging him to refrain from officially applying the supply chain risk label. Many industry investors fear that by targeting one of America's largest and most successful AI companies, the Defense Department is setting a dangerous precedent that will scare away investment and chill America's AI industry. Last Friday, just over an hour before a 5 p.m. ET deadline to reach an agreement set by Hegseth earlier in the week, President Donald Trump said he would move to bar Anthropic from other federal agencies. "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution," Trump wrote. The Pentagon already uses Anthropic's Claude systems as part of an agreement with data analytics company Palantir. 
According to recent reports from The Washington Post and The Wall Street Journal, Anthropic's AI systems have been used to help forces assess intelligence and identify targets in the ongoing war in Iran. NBC News has not confirmed those reports. Anthropic struck its first deal with Palantir in 2024, allowing the Defense Department to use Anthropic's services on classified networks, and had been awarded another $200 million contract in July to further "prototype frontier AI capabilities that advance U.S. national security." In earlier rounds of negotiation, Anthropic had agreed to let the Pentagon use its AI systems for cyber and missile defense purposes. Some experts noted an apparent disconnect between labeling one of America's largest AI companies a supply chain risk to national security while refraining from applying the same label to DeepSeek, a leading Chinese AI company that has been accused of unfair practices. DeepSeek did not respond at the time to requests from several news organizations for comment on the issue. "We're treating an American AI company worse than we're treating a Chinese Communist Party-controlled AI company," said Michael Sobolik, an expert on AI and China issues and senior fellow at the Hudson Institute. "We cannot hobble the most innovative, successful American companies for asking quintessentially American questions about military use and privacy." "The U.S. government risks cutting off the legs of one of our best AI companies in the early years of this AI race," Sobolik continued. "If we do that, where America's frontier models are qualitatively and quantitatively better than China's, it does seem like cutting off our nose to spite our own face." Tim Fist, director of emerging technology at the Washington, D.C.-based Institute for Progress think tank, said the new designation would be counterproductive for America's AI aspirations. 
"The supply chain risk designation, normally used on foreign adversaries, is both hurting one of America's top AI companies and making other companies much more hesitant to work with the federal government," Fist said in written comments. "The designation hurts the AI industry and thus US national security for essentially no gain."
[29]
Pentagon formally designates Anthropic a supply chain risk amid feud over AI guardrails
Eleanor Watson is a CBS News multi-platform reporter and producer covering the Pentagon. The U.S. military has formally designated artificial intelligence firm Anthropic a supply chain risk, a senior Pentagon official and a source directly familiar with the situation told CBS News, a sweeping move that could cut it off from military contracts. The Trump administration and Anthropic -- the only AI company deployed on the Pentagon's classified networks -- are at an impasse over Anthropic's push for guardrails that would explicitly ban the U.S. military from using its Claude model to conduct mass surveillance on Americans or power fully autonomous weapons. The Pentagon has said it needs the ability to use Claude for "all lawful purposes." Defense Secretary Pete Hegseth announced last week that Anthropic would be cut off from its government contracts and designated a supply chain risk, but Anthropic had not received formal notification of that step until Thursday. Hegseth said the military will phase out Anthropic over six months.
[30]
Anthropic "supply chain risk" designation could favor China
Why it matters: By penalizing a domestic leader for its safety standards, the U.S. is creating a market opening for cheaper, unregulated models from competing countries. State of play: Secretary of War Pete Hegseth said on X that he was directing the DoW to label Anthropic a supply chain risk, a designation typically reserved for foreign adversaries. * President Trump instructed every federal agency to stop using Anthropic's technology "IMMEDIATELY" in a Truth Social post. * The designation effectively bars any company doing business with the Department of War from using Anthropic's technology, potentially wiping out dozens of major enterprise customers. The other side: DeepSeek, despite its ties to China, is not designated a supply chain risk. It's also rumored to release its latest model this week. * Its popularity is growing: Per Sensor Tower data, DeepSeek's U.S. downloads grew 20% in one day after OpenAI landed its Department of War contract. What they're saying: "Anthropic is being villainized in a way that these Chinese open source labs aren't," Brexton Pham, global co-head of AI Infrastructure at Cantor Fitzgerald, told Axios. * "This very public kind of blow up or fallout will result in the United States government not being able to leverage some of the best models that the United States has," Cole McFaul, senior research analyst at CSET, told Axios. Zoom in: The Pentagon has previously described Anthropic's models as superior to alternatives. * Anthropic's AI tools remain deeply embedded in military deployment, most recently used during the U.S. intervention in Iran. * Because the Pentagon has used Claude for longer, McFaul says, training context and efficiency is now at risk. * "This is a failure for the United States," he added. Zoom out: "Chinese models have already taken over the market," May Habib, CEO of AI firm Writer, told Axios, as American AI startups are increasingly leaning on cheaper, open source Chinese models to fine tune their own.
* This allows enterprise users to avoid the high price tag that comes with relying on leading U.S. AI companies. * It's not just startups: Airbnb CEO Brian Chesky has confirmed the company is using Chinese models because they are cheaper. Between the lines: If corporations and the government can use Chinese models, but can't use the top U.S. AI models, that could give China a competitive advantage. * The U.S. government's treatment of Anthropic could cause a "chilling effect to innovation" leaving an opening for Chinese labs with different ethical standards, Jennifer Huddleston, senior fellow in technology policy at Cato told Axios. * The spread of DeepSeek "provides a channel for promoting Chinese propaganda in the West," according to a report from the Estonian Foreign Intelligence Service. * DeepSeek's privacy policy also states that it stores user data on Chinese servers, governed under Chinese law, including government requests for data. Flashback: Anthropic has argued that Chinese labs are actively trying to extract insights from U.S. frontier models to improve their own. * If the Pentagon limits access to a domestic lab while foreign rivals continue advancing, critics say that could widen the competitive gap Washington is trying to avoid. Yes, but: The military has made clear in several other ways that it doesn't want employees using Chinese models. * The U.S. Navy banned the use of DeepSeek, and so did the Pentagon after discovering employees were using the chatbot on work computers. * A supply chain risk designation is separate from banning use of a company's product: A supply chain risk weaponizes procurement, forcing any company with government ties to separate from firms deemed supply chain risks. A ban simply prevents use. The bottom line: By restricting Anthropic, a leading U.S. AI firm, the government could give Chinese competitors -- already expanding in the U.S. market -- a strategic opening.
[31]
Anthropic at Risk of Huawei-Like Ban After Pentagon Punishment
Anthropic's CEO vowed to fight the designation in court, saying the company does not believe the action is legally sound, and the company has drawn support from tech groups across Silicon Valley. Anthropic PBC runs the risk of losing a wide range of US government business after the Defense Department declared it a supply-chain risk, a rare designation that until now has only been assigned to companies from adversary nations like China's Huawei Technologies Co. Such a penalty has never been imposed on an American company, let alone one at the leading edge of new technology that the government has declared a priority, according to contracting and national security specialists. The Pentagon's decision, they said, risks setting a dangerous precedent for companies seeking to innovate in areas like artificial intelligence, where Anthropic is a leader. "Using this tool against a domestic AI firm sends a troubling signal that could chill innovation and weaken the very technology ecosystem the United States needs to stay competitive," said Morgan Plummer, Vice President of Policy at Americans for Responsible Innovation. "These authorities were designed to keep foreign adversaries out of our supply chains, not to punish American companies for building safeguards into their technology." The move threatens to unravel Anthropic's $200 million contract to provide the Pentagon with classified AI tools, and could bar it from partnering with other companies on defense work. While that's a fraction of the $20 billion in revenue the firm has projected for 2026, the supply-chain risk label threatens to cast a pall over the company, whose AI tools have quickly gained favor in the corporate world. The decision culminated weeks of tense negotiations over access to the company's technology. 
Talks broke down last week after the firm demanded assurances that its AI wouldn't be used for mass surveillance of Americans or autonomous weapons deployment, prompting President Donald Trump to order US agencies to cease work with Anthropic and Defense Secretary Pete Hegseth to threaten the rarely invoked supply-chain exclusion. To implement its finding, the Pentagon is relying on a measure known as section 3252 of the law governing the US armed forces that permits the Defense Department to bar a company as a contractor if it's found to imperil the supply chain. It defines risk as the potential that "an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert" the technology or service being provided. The provision requires the US defense secretary to follow procedural steps that include demonstrating the supply-chain risk and showing that less-intrusive measures weren't available. Hegseth has informed Congress of his decision in letters to top Republicans and Democrats on the House and Senate committees for armed services, appropriations and intelligence, according to correspondence seen by Bloomberg. "This determination is based in part on a risk analysis by the DoW and input from senior DoW personnel that the Covered Entity's restrictions on the use of its products and services introduces national security risks to the DoW's supply chain," Hegseth wrote, referring to Anthropic and using an acronym for the Department of War, the name he now favors for the Department of Defense. In a blog post Thursday, Anthropic Chief Executive Officer Dario Amodei vowed to fight the designation in court, saying "we do not believe this action is legally sound." Alan Rozenshtein, a professor at the University of Minnesota Law School, said that a supply-chain risk designation under section 3252 shouldn't even apply in Anthropic's case. 
In writing the law, he said, Congress was taking aim at foreign companies to address "malware or back doors or sabotage into government systems," not target an American business like San Francisco-based Anthropic. "If this counted as a supply chain risk, then anytime the government disagreed with any US company about any contract terms, it could call that company a supply chain risk and destroy it," Rozenshtein said. "I don't think that'd be constitutional." Trump and Hegseth have spelled out a six-month transition period to shift AI work to other providers, leaving a door open to more talks. The president often takes hard-line public stances on issues -- his frequent threats on tariffs, for example -- that he later softens. It's possible that the supply-chain designation is also a negotiating tactic, aimed at forcing Anthropic to ease the conditions it's seeking to impose on use of AI for warfare. For now, though, talks have stalled. In his post Thursday, Amodei said he'd been holding "productive" conversations with the Pentagon regarding the company's concerns. Yet Emil Michael, the US under secretary of defense for research and engineering who had been negotiating with Amodei over the past several weeks, said in an X post late Thursday that conversations with the company were over. 
"I want to end all speculation: there is no active @DeptofWar negotiation with @AnthropicAI," Michael wrote. Already, other US government agencies including the Treasury Department and General Services Administration have said they're dropping Anthropic in the wake of Trump's order. The company's archrival OpenAI announced an agreement of its own for classified Pentagon AI work hours after Trump and Hegseth demanded Anthropic's exit from government. Against Huawei, the US government moved nearly a decade ago to declare the Shenzhen, China-based telecommunications equipment maker a supply-chain risk and bar it from government procurement, then gradually escalated restrictions with measures from agencies including the Federal Communications Commission to block it from working with any US company. While Hegseth had threatened last week to bar other military contractors or their partners from conducting "any commercial activity with Anthropic," the Pentagon's official designation stopped short of that. Amodei said that the statute invoked is narrowly tailored enough to keep it from affecting other Anthropic business that's unrelated to specific Pentagon contracts. That offered some reassurance for customers and investors who feared the company could lose the ability to do any business with companies that worked with the Pentagon. Spokespeople for Microsoft Corp. and Alphabet Inc.'s Google said their companies had concluded that they can continue to work with Anthropic on non-defense projects. Even so, the designation means the company has to stop working with Palantir Technologies Inc., another military contractor. That includes Palantir's use of Anthropic's Claude in the digital mission control platform known as Maven Smart System, which has been deployed in the US military's Iran campaign. The decision also threatens to slow a broader Pentagon effort to accelerate adoption of AI across the US military. 
Until recently, Anthropic provided the only AI system that could operate in the Pentagon's classified cloud, and its Claude Gov tool has become a favored option among defense personnel for its ease of use. "It's a good capability" and removing it is "going to be painful for all involved," said Lauren Kahn, a senior research analyst at Georgetown's Center for Security and Emerging Technology. "I really think ultimately, who suffers is the war fighter." As the Pentagon fight escalated, Anthropic has drawn support across Silicon Valley. Tech groups representing major companies including Google and Apple Inc. have urged Trump to reconsider designating Anthropic a national security risk, arguing that would cause detrimental ripple effects for the rest of the industry. "While this is specifically about Anthropic, the results of this are going to be broader than Anthropic because it's sending an overall message to the AI community about how the Pentagon is viewing these products and standards," said Jennifer Huddleston, senior fellow in technology policy at Cato Institute. "What does this say about the government's willingness to intervene and take that choice away from these American companies?"
[32]
With multi-billions of dollars of revenue at risk, Anthropic sends in the lawyers against Trump 2.0's ban. What next?
As Anthropic admits that a Pentagon blacklisting of the company and its tech could cost the business "multiple billions of dollars" in revenue, the firm has made good on its threat to lawyer up against Trump 2.0. Anthropic has filed suit against the US Department of Defense and other Federal agencies after being labelled a "supply chain risk" following its refusal to remove ethics clauses barring use of its AI for mass surveillance of domestic citizens and autonomous weapons launches. In a 48-page complaint filed in the US District Court for the Northern District of California, with a parallel filing in the US Court of Appeals for the DC Circuit, Anthropic calls the Pentagon's reprisal for its refusal to abandon its conduct clauses "unprecedented and unlawful." The first of these claims seems easily provable: the Pentagon's formal designation last Thursday marks the first time the blacklisting tool has been used against a US company, never mind one that only 24 hours earlier had been the only AI tech cleared by it for use in its most sensitive systems. Usually the black mark is saved for Chinese firms. As for unlawful, clearly that's one for the courts to decide. The risk designation's impact extends further, with President Trump ordering that all Federal Government agencies stop using Anthropic tech within six months, and that any other firm wanting to do business with the US Government break its ties with Anthropic ASAP. On the face of it, that would appear to include a lot of leading tech firms, the likes of Oracle, NVIDIA, Google, Microsoft, Amazon, Salesforce etc, although several of these suppliers have moved to assure customers that the prohibition only applies to military/defense work, not general commercial activities. 
For its part, Anthropic has taken a similar tack, assuring customers: Legally, a supply chain risk designation under 10 USC 3252 can only extend to the use of Claude as part of Department of War contracts -- it cannot affect how contractors use Claude to serve other customers. In practice, this means: If you are an individual customer or hold a commercial contract with Anthropic, your access to Claude -- through our API, claude.ai, or any of our products -- is completely unaffected. If you are a Department of War contractor, this designation -- if formally adopted -- would only affect your use of Claude on Department of War contract work. Your use for any other purpose is unaffected. Again, presumably, the courts will rule on that. But for now, Anthropic says in its court filing that the Pentagon move is "harming Anthropic irreparably". As per the suit: Defendants are seeking to destroy the economic value created by one of the world's fastest-growing private companies, which is a leader in responsibly developing an emergent technology of vital significance to our Nation. What will be interesting to see, if this makes it to a courtroom without a TACO moment on the part of the Administration in Washington, is how that sits when compared to comments from Anthropic CEO Dario Amodei on CBS News earlier this month, when he declared that "the impact of this designation is fairly small" and his company was "gonna be fine". On the other hand, CFO Krishna Rao warns in the filings that Anthropic faces losing between 50% and 100% of its US defense sector revenues and suffering damage that would be "almost impossible to reverse", noting: Across Anthropic's entire business, and adjusting for how likely any given customer is to take a maximal reading, the government's actions could reduce Anthropic's 2026 revenue by multiple billions of dollars. 
To date, negotiations with financial sector customers, worth roughly $180 million combined, have been disrupted, while one fintech customer is said to have halved its pre-agreed budget commitment. Over a hundred enterprise customers are said to have contacted the firm with concerns about whether they can still use its tech. There's one other interesting angle emerging from the Anthropic legal filings: an argument that the Government blacklisting violates the 1st Amendment by restricting Constitutional freedom of speech protections. Given that Trump 2.0 has been accusing other governments of similar things whenever they attempt to regulate the activities of US tech firms abroad, it's an ironic accusation to lay at the door of the White House. But Anthropic argues: The Constitution does not allow the Government to wield its enormous power to punish a company for its protected speech. No Federal statute authorizes the actions taken here. Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive's unlawful campaign of retaliation. Free speech advocates have jumped on the case as well, with FIRE (Foundation for Individual Rights and Expression) teaming up with The Electronic Frontier Foundation, the Cato Institute, Chamber of Progress, and the First Amendment Lawyers Association to file a friend-of-the-court brief, arguing the Department of War's designation of Anthropic as a supply chain risk does indeed violate the First Amendment: This potentially ruinous sanction threatens not only Anthropic's business but also that of its partners and customers. If left in place, that sanction imposes a culture of coercion, complicity, and silence, in which the public understands that the government will use any means at its disposal to punish those who dare to disagree. The Pentagon's temper tantrum is a textbook violation of Anthropic's First Amendment rights. 
It adds: By bullying Anthropic, the Pentagon is violating the First Amendment...Permitting the government to dictate Anthropic's speech would violate the First Amendment, chill the rights of not just Anthropic but business leaders, innovators, and entrepreneurs across the country, and stifle the marketplace of ideas about AI as a transformative technology, how to harness it responsibly, and the privacy and civil liberties risks associated with its use by the national security apparatus. Elsewhere, Silicon Valley voices have been raised in Anthropic's support as staffers from some of its biggest rivals, including OpenAI and Google DeepMind, made public their signing of an amicus brief. An amicus brief is a legal filing submitted by third parties who are not directly involved in a court case but can be said to have expertise directly relevant to the subject of the litigation. Now this does not, as some headlines have suggested, mean that the companies themselves have made this move; employees in this case have signed in a personal capacity and do not represent the corporate views of their companies. But there are enough big hitters who've gone public to date to make this a potentially significant intervention, including Google DeepMind Chief Scientist Jeff Dean. The signatories argue that the "supply chain risk" designation is improper retaliation that harms the public interest, and that the concerns outlined by Anthropic around surveillance and autonomous weapons are legitimate and require a better response. And the crisis has even thrown up some unlikely bedfellows in its wake. After earlier insisting that it isn't appropriate for Anthropic to dictate public policy, one US Republican Senator, the current Chairman of the Senate Commerce Committee and a long-time die-hard supporter of Donald Trump, is now on record as saying: I'll confess -- I have not seen a basis laid out for why the Government would be prohibited from using Anthropic. 
Claude is one of the many AI tools that can be very helpful...I don't think government should be picking winners and losers. Others around the administration appear less willing to adjust their positions. While Anthropic continues to say that it is willing to "pursue every path toward resolution, including dialogue with the government", Emil Michael, Under-Secretary of Defense for Research and Engineering, is adamant when he tells Bloomberg News: I don't think there's a scenario where this gets resolved in that way. The next major development, barring one of those TACO moments that his critics accuse President Trump of, looks likely to come on the 24th of this month, in the form of a preliminary hearing on whether to grant Anthropic's request for an injunction temporarily stopping the Pentagon from taking action. In a status conference held yesterday in California, Anthropic argued for an earlier date, citing fears that the administration would escalate things by invoking the US Defense Production Act, which would allow it to commandeer the firm's tech and compel it to work to order.
[33]
Anthropic says it's in 'productive' discussions with Pentagon about Claude but still plans to sue on ban - SiliconANGLE
Anthropic PBC has shared an update about its artificial intelligence safety feud with the Pentagon. In a Thursday blog post, Chief Executive Dario Amodei stated that the company has been "having productive conversations" with defense officials. He also addressed several other aspects of the disagreement. Last June, Anthropic won a contract to provide the Pentagon with access to its AI models. The company stated that it wouldn't allow its software to be used for domestic surveillance or the development of fully autonomous weapons. In response, U.S. President Donald Trump ordered federal agencies to stop using Claude. The first focus of Amodei's Thursday blog post was not Trump's order, but rather a related move by U.S. Defense Secretary Pete Hegseth. Hegseth instructed the Pentagon to designate Anthropic as a supply chain risk. Such a designation limits the manner in which U.S. military suppliers can use the affected company's products. Amodei wrote in the Thursday blog post that the move won't affect the "vast majority" of Claude users. Furthermore, it's expected to have a limited impact on the customers who do have to make adjustments in response to the directive. "The law requires the Secretary of War to use the least restrictive means necessary to accomplish the goal of protecting the supply chain," Amodei wrote. "Even for Department of War contractors, the supply chain risk designation doesn't (and can't) limit uses of Claude or business relationships with Anthropic if those are unrelated to their specific Department of War contracts." The affected suppliers include Microsoft Corp. and Google LLC. The companies have integrated Claude with several of their products, some of which are used by federal agencies. The companies stated this week that they will continue offering Claude access to customers other than the Pentagon. 
Anthropic plans to challenge its designation as a supply chain risk in court. The company stated last Friday that implementing the designation "would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government." In addition to Anthropic's regulatory status, Amodei's blog post addressed a leaked internal memo he sent on Tuesday. In the note, the executive attributed the company's feud with the Pentagon to his refusal to give "dictator-style praise to Trump." He also criticized rival OpenAI, which announced a contract with the Pentagon shortly after Anthropic was designated a supply chain risk. "It was a difficult day for the company, and I apologize for the tone of the post," Amodei wrote. The executive added that Anthropic has been having "productive conversations" with defense officials in recent days. Amodei wrote that the talks are focused on potential ways the company could continue working with the Pentagon in compliance with its AI safety policy. Additionally, Anthropic is working with officials to "ensure a smooth transition" in the event that Claude is phased out of public sector networks. The Trump administration has given federal agencies six months to switch to a different AI provider.
[34]
Anthropic will fight US 'supply chain risk' designation in court
Anthropic confirms it has been designated a 'supply chain risk' by the US administration, and says it has no choice but to challenge it in the courts. Despite ongoing talks between Anthropic and the US Department of Defense, Anthropic confirmed last night it had received a letter from defense secretary Pete Hegseth confirming the 'supply chain risk' designation that had been threatened. "Yesterday (March 4) Anthropic received a letter from the [Defense Department] confirming that we have been designated as a supply chain risk to America's national security," wrote co-founder and CEO Dario Amodei last night in an official statement. "We do not believe this action is legally sound, and we see no choice but to challenge it in court." Amodei was quick to point out that "even supposing it was legally sound", the limited application of the designation means the "vast majority" of its customers will be unaffected. He said the restriction clearly only applied to the use of Claude by customers as a direct part of contracts with the US defense department, "not all use of Claude by customers who have such contracts". "The Department's letter has a narrow scope, and this is because the relevant statute is narrow, too," wrote Amodei. "It exists to protect the government rather than to punish a supplier." As with previous statements, Amodei strikes a conciliatory tone, saying Anthropic is committed to US national security and will offer continuing support from its engineers to ensure a smooth transition "for as long as we are permitted to do so". Anthropic drew the ire of the US administration after a standoff with the Pentagon, where Anthropic refused to change its safeguards related to using its AI for fully autonomous weapons, or for mass surveillance of US citizens. 
With many in Silicon Valley supporting its relatively principled stand, and general users sending it to the top of Apple's US App Store free downloads chart in recent days - beating OpenAI's ChatGPT for the first time - its flagship Claude.ai and Claude Code apps went down for around three hours on 2 March due to "unprecedented demand". Claude Cowork in particular was already becoming the darling of AI enthusiasts in the professional world, and Bloomberg reported on Wednesday that Anthropic was on track to generate annual revenue of almost $20bn, more than double its run rate from late 2025, signalling the rapid growth at the AI company, which is today valued at around $380bn.
[35]
Defense experts defend Anthropic in letter to Congress, slam DoD for setting 'dangerous precedent'
Anthropic CEO Dario Amodei looks on after a meeting with French President Emmanuel Macron during the AI Impact Summit in New Delhi on February 19, 2026. A group of former defense and intelligence officials and policy experts sent a letter on Thursday to Congress calling for an investigation into the Pentagon's decision to designate Anthropic a supply chain risk. The bipartisan coalition of 30 people said in the letter, shared with CNBC, that the purpose of deeming an entity a supply chain risk is "to protect the United States from infiltration by foreign adversaries." The group characterized Defense Secretary Pete Hegseth's decision last Friday against Anthropic as a "profound departure" that "sets a dangerous precedent." Hegseth announced the directive on X, after President Donald Trump ordered federal agencies to stop using Anthropic, whose Claude models and services have skyrocketed in popularity, largely in the enterprise world. "Applying this tool to penalize a U.S. firm for declining to remove safeguards against mass domestic surveillance and fully autonomous weapons is a category error with consequences that extend far beyond this dispute," the group said in the letter, addressed to the chairs and ranking members of the Senate and House Armed Services Committees. Signatories include retired U.S. Navy Vice Admiral Donald Arthur, former deputy assistant secretary of defense Diana Banks Thompson, former Andreessen Horowitz general partner John O'Farrell, Kat Duffy of the Council on Foreign Relations and Inflection AI CEO Sean White. "For national security, the United States is in an AI race it cannot afford to lose," the letter said. "Blacklisting one of America's leading AI companies -- and requiring its thousands of contractors and partners to sever ties as well -- does not strengthen our competitive position. It weakens it." 
The group is urging Congress to "exercise its oversight authority against this inappropriate use of executive authority" and to implement legal guardrails "protecting the United States from foreign threats, not disciplining American companies for disagreeing with the executive branch." The letter lands a day after the Information Technology Industry Council (ITI), whose members include Nvidia, Google and Anthropic, sent a letter to Hegseth expressing similar concerns. "Contract disputes should be resolved through continued negotiation between the parties, or by the Department selecting alternate providers through established procurement channels," ITI said in the letter. "Emergency authorities such as supply chain risk designations exist for genuine emergencies and are typically reserved for entities that have been designated as foreign adversaries." Several defense tech companies have told their workforce to stop using Anthropic's Claude service following the White House's orders, CNBC reported.
[36]
Pentagon's chief tech officer says he clashed with AI company Anthropic over autonomous warfare
A top Pentagon official said Anthropic's dispute with the government over the use of its artificial intelligence technology in fully autonomous weapons came after a debate over how AI could be used in President Donald Trump's future Golden Dome missile defense program, which aims to put U.S. weapons in space. U.S. Defense Undersecretary Emil Michael, the Pentagon's chief technology officer, said he came to view the AI company's ethical restrictions on the use of its chatbot Claude as an irrational obstacle as the U.S. military pursues giving greater autonomy to swarms of armed drones, underwater vehicles and other machines to compete with rivals like China that could do the same. "I need a reliable, steady partner that gives me something, that'll work with me on autonomous, because someday it'll be real and we're starting to see earlier versions of that," Michael said in a podcast aired Friday. "I need someone who's not going to wig out in the middle." The comments came after the Pentagon formally designated San Francisco-based Anthropic a supply chain risk, cutting off its defense work using a rule designed to prevent foreign adversaries from harming national security systems. Anthropic has vowed to sue over the designation, which affects its business partnerships with other military contractors. Trump has also ordered federal agencies to immediately stop using Claude, though the Republican president gave the Pentagon six months to phase out a product that's deeply embedded in classified military systems, including those used in the Iran war. Anthropic said it only sought to restrict its technology from being used for two high-level use cases: mass surveillance of Americans and fully autonomous weapons. Michael, a former Uber executive, revealed his side of months-long talks with Anthropic CEO Dario Amodei in a lengthy conversation with Silicon Valley venture capitalists Jason Calacanis, David Friedberg and Chamath Palihapitiya, co-hosts of the "All-In" podcast. 
A fourth co-host, former PayPal executive David Sacks, is now Trump's AI czar and was not present for the episode but has been a vocal critic of Anthropic, including for its hiring of former Biden administration officials shortly after Trump returned to the White House last year. As talks hit an impasse last week, Michael lashed out at Amodei on social media, saying he "has a God-complex" and "wants nothing more than to try to personally control" the military. In the podcast, however, he positioned the dispute as part of a broader military shift toward using AI. Michael said the military is developing procedures for enabling different levels of autonomy in warfare depending on the risk posed. "This is part of the debate I had with Anthropic, which is we need AI for things like Golden Dome," Michael said, sharing a hypothetical scenario of the U.S. having only 90 seconds to respond to a Chinese hypersonic missile. A human anti-missile operator "may not be able to discriminate with their own eyes what they're going after," but an autonomous counterattack would be a low risk "because it's in space and you're just trying to hit something that's trying to get you." In another scenario, he said, "who could oppose if you have a military base, you have a bunch of soldiers sleeping, that you have a laser that can take down drones autonomously?" In response to the podcast comments, Anthropic pointed to an earlier Amodei statement saying "Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner." Michael, the defense undersecretary for research and engineering, was sworn in last May and said he took over the military's "AI portfolio" in August. That's when he said he began scrutinizing Anthropic's contracts -- some of which dated from President Joe Biden's Democratic administration. 
Michael said he questioned Anthropic over terms of use that he deemed too restrictive. "I need to have the terms of service be rational relative to our mission set," he said. "So we started these negotiations. It took three months and I had to sort of give them scenarios, like this Chinese hypersonic missile example. They're like, 'OK, we'll give you an exception for that.' Well, how about this drone swarm? 'We'll give an exception for that.' And I was like, exceptions doesn't work. I can't predict for the next 20 years what (are) all the things we might use AI for." That's when the Pentagon began insisting Anthropic and other AI companies allow for "all lawful use" of their technology, Michael said. Anthropic resisted that change, while its competitors -- Google, OpenAI and Elon Musk's xAI -- agreed to them, though some still have to get their infrastructure prepared for classified military work, Michael said. The other sticking point for Anthropic was not allowing any mass surveillance of Americans. "They didn't want us to bulk-collect public information on people using their AI system," Michael said, describing the negotiations as "interminable." Anthropic has disputed parts of Michael's version of the talks and emphasized that the protections it sought were narrow and not based on any existing uses of Claude. The next stage of the dispute will likely happen in court.
[37]
Anthropic Reopens Pentagon Talks as Trump Weighs Supply Chain Risk Label
Anthropic previously secured a $200 million Pentagon contract, and its AI has been used in classified operations, including support for US air strikes on Iran, the Financial Times reports. Anthropic CEO Dario Amodei has reportedly reopened negotiations with the US Department of Defense in a last-minute effort to secure continued access to Pentagon contracts as the company faces the possibility of being labeled a supply chain risk by the Trump administration. Amodei has been holding discussions with Emil Michael, the US undersecretary of defense for research and engineering, to finalize terms governing the military's use of Anthropic's artificial intelligence models, the Financial Times reported, citing people familiar with the matter. A new agreement would allow the Pentagon to keep using the company's technology and could prevent a formal designation that would force contractors in the defense supply chain to cut ties with the AI developer, per the report. The talks follow a sharp breakdown in negotiations last week. Michael reportedly accused Amodei of being a "liar" with a "God complex," while discussions collapsed after the two sides failed to agree on language Anthropic said was necessary to prevent misuse of its technology. In an internal memo to staff seen by the FT, Amodei reportedly wrote that near the end of negotiations, the Pentagon offered to accept Anthropic's broader terms if the company removed a clause restricting the "analysis of bulk acquired data." He said this phrase was meant to guard against potential mass domestic surveillance, a scenario Anthropic treats as a red line, alongside the use of AI in lethal autonomous weapons. The dispute escalated after Defense Secretary Pete Hegseth warned that Anthropic could be designated a supply chain risk, a move that would effectively freeze the company out of US military procurement networks. 
The standoff comes despite Anthropic's existing ties to the defense sector. The company was awarded a contract worth up to $200 million by the US Defense Department in July 2025 and became the first AI provider whose models were used in classified environments and by national security agencies. As Cointelegraph reported, the US military even used Anthropic's Claude AI model to support a major air strike on Iran hours after President Donald Trump ordered federal agencies to stop using the company's systems. Meanwhile, in a Wednesday letter to Trump, tech groups warned that labeling a domestic AI firm a supply chain risk could undermine US leadership in AI. The groups argued that treating a US technology company "as a foreign adversary, rather than an asset," could discourage innovation and weaken America's ability to compete with China in the global AI race. Signatories included the Software & Information Industry Association, TechNet, the Computer & Communications Industry Association and the Business Software Alliance. These organizations represent hundreds of American tech companies, including AI chipmaker Nvidia, Alphabet's Google and Apple.
[38]
How the Anthropic-Pentagon Dispute Over AI Safeguards Escalated
March 11 (Reuters) - A standoff erupted between the U.S. Department of Defense and Anthropic in January after the AI lab refused to loosen safety guardrails on its systems, prompting the Pentagon to label it a "supply-chain risk" and putting the company's government contracts in jeopardy. The Claude maker's executives warned that the designation, which they view as retaliation for opposing the use of their technology in autonomous weapons and domestic surveillance, could slash their 2026 revenue by billions of dollars. Legal experts suggest the government's case may be undermined by a mismatch between the law invoked and Anthropic's conduct, internal contradictions in the Pentagon's behavior, and evidence that its decision may have been driven by animus rather than security. Here is a timeline of the ongoing conflict:

January 29 - The Pentagon and Anthropic clash over eliminating safeguards that could allow the government to use its technology to target weapons autonomously and conduct U.S. domestic surveillance.
February 11 - The Pentagon pushes AI companies, including Anthropic, to make their AI tools available in classified settings without many of the standard restrictions that the companies apply to other users.
February 14 - The Pentagon considers ending its ties with Anthropic over the AI lab's insistence on keeping some limits on how the U.S. military uses its models.
February 23 - U.S. Defense Secretary Pete Hegseth summons Anthropic CEO Dario Amodei to the Pentagon for talks on the military use of Claude.
February 24 - The Pentagon asks Anthropic to get on board or risk consequences, including being labeled a supply-chain risk.
February 25 - The Pentagon asks defense contractors, including Boeing and Lockheed Martin, to assess their reliance on Anthropic.
February 26 - Pentagon spokesperson Sean Parnell asks Anthropic to allow the Pentagon to use its technology for all lawful purposes, giving the company until 5:01 p.m. ET on February 27 to decide.
February 26 - Anthropic says it will not accede to the Pentagon's request to eliminate safeguards from its AI systems.
February 27 - U.S. President Donald Trump directs every federal agency to immediately cease all use of Anthropic's technology.
February 27 - Hegseth directs the U.S. DoD to designate Anthropic a "supply-chain risk to national security."
February 27 - Anthropic says it will challenge the Pentagon's decision in court.
February 27 - OpenAI announces a deal to deploy its technology in the DoD's classified network.
February 28 - OpenAI says its latest agreement with the Pentagon includes three red lines: its technology cannot be used for mass domestic surveillance, to direct autonomous weapons systems or for any high-stakes automated decisions.
March 2 - The U.S. Departments of State, Treasury and Health and Human Services move to cease using Anthropic's Claude.
March 3 - Lockheed Martin pledges to follow the DoD's direction, signaling a likely exodus of defense contractors removing Anthropic's tools from their supply chains to protect their federal contracts, legal experts say.
March 4 - U.S. Treasury Secretary Scott Bessent tells CNBC that the agency will remove Anthropic from its government systems within days.
March 4 - A big tech industry group pushes to de-escalate the clash, saying a supply-chain risk designation creates uncertainty for companies and could threaten the military's access to the best products and services.
March 5 - The U.S. DoD formally designates Anthropic as a supply-chain risk.
March 6 - Amazon says it is helping customers transition DoD workloads to alternative models on its cloud, while customers and partners could continue using Claude for all non-Pentagon workloads.
March 6 - The U.S. General Services Administration draws up strict rules for civilian artificial-intelligence contracts and terminates Anthropic's OneGov deal, which made Claude available to the federal government.
March 9 - Anthropic sues to block the Pentagon from placing it on a national security blacklist, saying the designation is unlawful and violates its free speech and due process rights.
March 9 - Anthropic executives say the U.S. government's blacklisting of the AI firm could cut its 2026 revenue by multiple billions of dollars and cause reputational harm.
March 10 - Microsoft files a brief backing Anthropic's lawsuit, saying the DoD designation directly affects it and that a temporary restraining order is needed to avoid costly supplier disruptions and rushed rebuilding of products that depend on Anthropic.

(Reporting by Anhata Rooprai in Bengaluru; Editing by Arun Koyyur)
[39]
Military commanders given deadline to remove Anthropic products from systems
Senior Pentagon leadership and military commanders have been given a 180-day deadline to remove all of Anthropic's AI products from their systems, with the Defense Department alleging the company's technology poses an "unacceptable supply chain risk for use in all systems and networks." The order was issued in a March 6 internal memo, which was first obtained by CBS News and signed by Pentagon Chief Information Officer Kirsten Davies. The two-page memo, which was confirmed to The Hill by a senior Pentagon official Wednesday morning, said that U.S. adversaries can "exploit vulnerabilities" in the Department of Defense's (DOD) daily operations and could cause "potential catastrophic risks to the warfighter." The memo also directs any other company with business ties to the DOD to stop using Anthropic's products on work tied to Pentagon contracts within 180 days, or by Sept. 2. Davies said in the memo that she is the only official who can provide an exemption. "Exemptions will only be considered for mission-critical activities directly supporting national security operations where no viable alternative exists, and the requesting Component must submit a comprehensive risk mitigation plan for approval," she wrote. The memo was issued a day after the Pentagon formally designated Anthropic as a supply chain risk, a move the company is challenging in court. Anthropic declined to comment on the DOD directive. The company has been locked in a high-profile feud with the Pentagon over safety guardrails on its AI models. After negotiations broke down late last month, President Trump directed federal agencies to stop using its technology, and Defense Secretary Pete Hegseth said he was labeling the AI firm a supply chain risk. The designation, which has typically been reserved for foreign adversaries, bars defense contractors from using Anthropic's products. 
After the Pentagon officially notified Anthropic of the designation last week, the company sued the Trump administration Monday, arguing both the label and Trump's order are "unprecedented and unlawful." The AI firm is challenging the administration's actions under the First Amendment, contending that the Constitution "does not allow the government to wield its enormous power to punish a company for its protected speech."
[40]
Pentagon Says It's Told Anthropic the Firm Is Supply-Chain Risk
The Pentagon said it has formally notified Anthropic PBC that it's deemed the artificial intelligence company and its products a risk to the US supply chain, according to a senior defense official. "DOW officially informed Anthropic leadership the company and its products are deemed a supply chain risk, effective immediately," the official told Bloomberg on Thursday, using an acronym for the Department of War, the name that Defense Secretary Pete Hegseth now favors for the Department of Defense. Spokespeople for Anthropic had no immediate comment. The defense official didn't say when or by what means the Pentagon informed Anthropic of the designation. The Pentagon's finding risks disrupting both the company and the military, which has relied heavily on Anthropic's tools. Until recently, Anthropic provided the only AI system that could operate in the Pentagon's classified cloud. Its Claude Gov tool has become a favored option among defense personnel for its ease of use. Anthropic Chief Executive Officer Dario Amodei had been negotiating for weeks with Emil Michael, under-secretary of defense for research and engineering, to hammer out a contract governing the Pentagon's access to Anthropic's technology. But talks broke down last week after the startup demanded assurances that its AI wouldn't be used for mass surveillance of Americans or autonomous weapons deployment. Hegseth then declared Friday in a post on X that Anthropic posed a supply-chain risk, a designation typically reserved for US adversaries. "From the very beginning, this has been about one fundamental principle: the military being able to use technology for all lawful purposes," the defense official said Thursday. "The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk."
[41]
Tech industry group expresses 'concern' to Pete Hegseth over supply chain risk label
The letter, written by the Information Technology Industry Council (ITI), doesn't name Anthropic, though the artificial intelligence company was given that label on Friday after failing to come to terms with the Defense Department. "We are concerned by recent reports regarding the Department of War's consideration of imposing a supply chain risk designation in response to a procurement dispute," ITI wrote in the letter. The ITI's other members include Microsoft, Apple and Amazon. "Contract disputes should be resolved through continued negotiation between the parties, or by the Department selecting alternate providers through established procurement channels," ITI said. "Emergency authorities such as supply chain risk designations exist for genuine emergencies and are typically reserved for entities that have been designated as foreign adversaries." Hegseth announced on X on Friday that the Pentagon would be labeling Anthropic a "supply chain risk to national security," shortly after President Donald Trump ordered every U.S. government agency to immediately stop using the company's technology.
[42]
Anthropic has strong case against Pentagon blacklisting, legal experts say - The Economic Times
Anthropic's lawsuit challenging its Pentagon blacklisting is likely to test the reach of an obscure law aimed at guarding military systems against sabotage, and legal experts say the artificial intelligence lab appears to have a strong case that President Donald Trump's administration overstepped. Anthropic said in its lawsuit filed on Monday that the Defense Department's decision to exclude the company from military contracts by designating it as a supply chain risk violated its free speech and due process rights and was aimed at punishing the company for its views on AI safety in warfare. The designation could cut Anthropic's 2026 revenue by multiple billions of dollars and cause reputational harm, company executives said Tuesday. In labeling Anthropic as a supply chain risk, the Pentagon invoked a rarely used law that allows it to bar companies from certain contracts if they risk exposing military information systems to enemy infiltration. The law has never been tested in court or used against a U.S. company, according to a Reuters review of legal databases. Courts often defer to the executive branch's judgment on national security, and that will likely be the centerpiece of the government's defense. But five national security law experts told Reuters the Pentagon may have overstepped. "It's not at all clear that the statute can even apply to an American company where there's no foreign entanglement," said University of Minnesota Law School professor Alan Rozenshtein. The Defense Department said it does not comment on pending litigation.

'Exquisite technology'

Anthropic, which is incorporated and headquartered in the U.S., said it is not an "adversary," which the Trump administration has defined in executive orders to mean China, Russia, Iran, North Korea, Cuba and Venezuela, according to Anthropic's lawsuit. The company also said Defense Secretary Pete Hegseth gave no explanation for how Anthropic's Claude AI tool constituted a supply chain risk despite its continued use by the U.S. military and Hegseth's own praise of Claude as "exquisite" technology that the Defense Department would "love" to work with during a February 24 meeting with Anthropic cited in the lawsuit. The military has used Claude as recently as last month during strikes on Iran, according to Reuters reports. Hegseth designated Anthropic a national security supply chain risk on March 3 after the company refused to lift restrictions on Claude that prohibit the military from using it for autonomous weapons or domestic surveillance. In a February 27 social media post announcing the designation, Hegseth accused Anthropic of cloaking itself in the "sanctimonious rhetoric of 'effective altruism'" to "strong-arm the United States military into submission." Anthropic has said AI is not reliable enough for autonomous weapons and that it opposes domestic surveillance as a matter of principle. The Pentagon has said that Anthropic's restrictions could endanger American lives.

'Surveil, deny, disrupt'

Under U.S. law, a supply chain risk is a threat that an adversary could sabotage, infiltrate or disrupt a federal information technology system or network. The law invoked by the Pentagon, known as Section 3252, allows the defense secretary to exclude companies from certain contracts to guard against the risk that an "adversary" may "sabotage, maliciously introduce unwanted function" or otherwise "subvert" a military information system to "surveil, deny, disrupt, or otherwise degrade" its function. The Pentagon separately designated Anthropic a supply chain risk under a similar law that could eventually broaden contract exclusions to the civilian government. Anthropic filed a separate legal challenge to that designation on Monday. The Pentagon can only exclude companies under 3252 as a last resort, and other defense contractors are not required to stop working with them entirely. Reuters could not identify any other companies that have been publicly designated supply chain risks under 3252, although the obscure government procurement statute does not require public disclosure of designations.

'Bad blood'

Amos Toh, a national security law expert at the left-leaning Brennan Center for Justice, said nothing about Claude's usage policies appeared to pose foreign sabotage or subversion risks. "These are basically safety protocols. You can debate whether these protocols are acceptable or not, but they run directly counter to the risk that the law is designed to regulate," Toh said. Anthropic's lawsuit said that the supply chain risk designation punishes the company for its views on AI safety in violation of the First Amendment of the U.S. Constitution, which protects free speech and expression. Legal experts said Trump and Hegseth's public attacks on Anthropic, including a social media post in which Trump called it a "RADICAL LEFT WOKE COMPANY," could bolster this argument. "A lot of things Hegseth has said and the Pentagon has done undermine their case and suggest there was personal animus and bad blood between the parties, and that the Pentagon had it out for Anthropic," said Joel Dodge, a law expert at Vanderbilt University. Anthropic also said Hegseth's supply chain risk order violated its Fifth Amendment right to due process by imposing "draconian punishments" without any "meaningful process," factual findings or opportunity for Anthropic to challenge them.

Second-guessing the President

Courts are generally reluctant to question federal agency decisions but are especially deferential to the executive branch's judgment on national security matters. This would likely be the centerpiece of the government's defense, according to legal experts, who said Justice Department lawyers could cite a long line of cases where courts have found that it is generally not the place of judges to second-guess how the president and the military he commands defend the country. The government could similarly argue that the president and his cabinet secretaries have broad authority to choose suppliers and that the military cannot rely on a vendor whose usage policies constrain military action. The Justice Department could also cite legal precedent stating that contract decisions are not First Amendment violations if they are supported by legitimate policy or operational reasons. Eric Crusius, an attorney and government contract specialist who is not involved in the case, said the government is trying to impose the "death penalty" on Anthropic and will need to show "there was no alternative and that they meticulously considered other options prior to pulling the trigger."

'Arbitrary and capricious'

Anthropic said in its lawsuit that Hegseth's decision violated the Administrative Procedure Act, a law that allows courts to overturn actions that are found to be "arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law." Legal experts said the apparent contradictions in the government's position are strong evidence that Hegseth's decision was arbitrary. "The government was simultaneously threatening to use the (Defense Production Act) to force Anthropic to sell its services, using its services in active military operations, and saying it's too dangerous to use them in government contracts," said University of Minnesota Law School professor Alan Rozenshtein. "Not all of these things can be true," he said.
[44]
Anthropic: Pentagon put billions of dollars at stake with supply chain risk designation
Lawyers for Anthropic warned a judge Tuesday that the Pentagon's decision to label its products a supply chain risk could cost the artificial intelligence (AI) firm billions of dollars in revenue. Michael Mongan, an attorney for Anthropic, said during Tuesday's status conference that the company is suffering an "irreparable injury" as a result of the designation, urging U.S. District Judge Rita F. Lin to grant a speedy hearing on its request for a temporary block on the supply chain risk designation. Anthropic sued the Trump administration earlier this week over the designation and President Trump's order for federal agencies to stop using Anthropic products. The designation, which is typically reserved for foreign adversaries, came after Anthropic and the Pentagon failed to come to an agreement on the use of AI for mass surveillance or autonomous weapons. Anthropic argues the Trump administration is retaliating against the company for its "protected viewpoint" on AI safety and the limitations of its own AI models. Mongan told Lin that more than 100 enterprise customers contacted Anthropic in recent days with concerns about working with the AI firm in the wake of the public clash. Anthropic's chief financial officer estimated that the harm to 2026 revenue could range from hundreds of millions of dollars to billions of dollars, Mongan said. Justice Department lawyer James Harlow pushed back against the request to move the hearing up, pointing to the complexity of a case with nearly 20 agencies named as defendants. Mongan separately noted that news reports have circulated claiming the White House is preparing an executive order formally instructing the federal government to remove Anthropic's AI from its operations. Asked whether the government could commit to refraining from retaliatory actions in the meantime, Harlow said he is not "prepared to offer any commitments." Ultimately, Lin agreed to move the hearing on the temporary block from April 3 to March 24. 
Hours ahead of the status conference, technology giant Microsoft threw its support behind Anthropic, filing an amicus brief urging the court to grant a temporary block of the designation. A spokesperson for Microsoft told The Hill that the company needs "time and a process to find common ground" on the issue. Anthropic has argued the designation cannot bar companies that do business with the military from doing business with the AI firm on work unrelated to Pentagon contracts. Microsoft, along with Google, Amazon and Apple, said last week that Anthropic's AI tools will still be available on their platforms for work that does not involve the Pentagon. Anthropic also filed suit in the D.C. federal appeals court, requesting a review of the Pentagon's determination.
[45]
Pentagon Tech Chief Emil Michael Slams Anthropic As AI Weapons And Drone Autonomy Fight Escalates: 'Need Someone Who's Not Going To Wig Out'
The Pentagon's top technology official has criticized Anthropic for imposing restrictions on how its chatbot Claude can be used in military systems. On Friday, during an appearance on the "All-In" podcast, Emil Michael, the Pentagon's chief technology officer, said, "I need a reliable, steady partner that gives me something that'll work with me on autonomous." Michael, who is also the defense undersecretary, added, "I need someone who's not going to wig out in the middle." The clash between the U.S. Department of War and Anthropic has escalated after the Pentagon designated the San Francisco-based AI firm a supply chain risk. The dispute centers on Anthropic's policies limiting how its AI model Claude can be used. The company has said it does not want its technology deployed for fully autonomous weapons or mass surveillance of Americans. However, Pentagon officials argue such restrictions could hinder the military's ability to deploy AI systems in future conflicts. AI's Role In Future Warfare And Missile Defense Michael said the U.S. military is increasingly exploring AI-driven capabilities, including autonomous drone swarms, underwater vehicles, and automated defense systems, as rivals such as China invest heavily in similar technologies. He also pointed to potential AI use in the Golden Dome missile defense program, an initiative backed by President Donald Trump that aims to deploy space-based defenses against advanced missile threats. In a hypothetical scenario involving a Chinese hypersonic missile, Michael said the U.S. could have less than 90 seconds to respond, leaving little time for human decision-making. He argued that autonomous responses could sometimes present a lower operational risk. Anthropic Pushes Back, Legal Fight Looms Anthropic has disputed parts of the Pentagon's account and said its safeguards are narrowly focused on preventing risky uses of AI. 
CEO Dario Amodei earlier this week said that Anthropic intends to challenge the Pentagon's designation in court. Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.
[46]
Pentagon officially labels Anthropic a supply chain risk "effective immediately"
The Pentagon has designated US artificial intelligence company Anthropic as a supply chain risk "effective immediately," a decision that could force defense contractors to stop using its Claude chatbot. The decision follows a dispute between the Trump administration and Anthropic leadership over restrictions on how the military could use its AI systems. The conflict escalated after CEO Dario Amodei refused to remove safeguards designed to prevent the technology from being used for mass surveillance of Americans or fully autonomous weapons. President Donald Trump and Defense Secretary Pete Hegseth argued those limits could endanger national security and hinder military operations. Some contractors, including Lockheed Martin, have already begun distancing themselves from Anthropic. Meanwhile, rival OpenAI announced a new deal with the Pentagon to deploy ChatGPT in classified military environments, potentially replacing Anthropic's technology...
[47]
Big tech group backs Anthropic in fight with Pentagon over AI safeguards
A big tech industry group whose members include major Anthropic backers Amazon and Nvidia on Wednesday expressed concern over the Pentagon's decision to declare the artificial intelligence company a supply-chain risk, as other investors raced to contain fallout from the lab's fight with the U.S. Defense Department. In a letter dated Wednesday, the Information Technology Industry Council, whose members include Nvidia, Amazon, Apple and OpenAI, said, "We are concerned by recent reports regarding the Department of War's consideration of imposing a supply-chain risk designation in response to a procurement dispute." The letter does not name Anthropic. In recent days, CEO Dario Amodei has discussed the matter with some of Anthropic's major investors and partners, including Amazon CEO Andy Jassy, two of the people said. Venture capital firms including Lightspeed and Iconiq have also been in contact with Anthropic executives, two sources said.
[48]
Tech Groups Urge Trump to Drop Anthropic Supply-Chain Risk Label
Tech groups representing major companies including Alphabet Inc.'s Google and Apple Inc. are urging President Donald Trump to reconsider designating Anthropic PBC a national security risk, arguing the move would cause detrimental ripple effects for the rest of the industry. The tech industry's support brings new strength to the push against the Pentagon's decision to declare Anthropic a "supply-chain risk," after the company pushed for restrictions on military use of its AI tools for surveillance and autonomous weaponry.
[49]
Pentagon informed Anthropic it is a supply chain risk
The Pentagon slapped a formal supply-chain risk designation on artificial intelligence lab Anthropic on Thursday, limiting use of a technology that a source said was being used for military operations in Iran. The "supply-chain risk" label, confirmed by Anthropic in a statement, takes effect immediately and bars government contractors from using Anthropic's technology in their work for the US military. But companies can still use Anthropic's Claude on other projects unrelated to the Pentagon, CEO Dario Amodei wrote in the statement, adding that the restrictions apply only to the use of Anthropic AI in Pentagon contracts. The risk designation follows a months-long dispute over the company's insistence on safeguards that the Defense Department, which the Trump administration calls the Department of War, said went too far. In his statement, Amodei reiterated that the company would challenge the designation in court. In recent days, Anthropic and the Pentagon have discussed possible plans for the Pentagon to stop using Claude, Amodei said in the Thursday statement. The two sides have talked about how Anthropic might still work with the military without dismantling its safeguards, he added. Amodei also apologized for an internal memo published Wednesday by the tech news site The Information. In the memo, originally written last Friday, Amodei said Pentagon officials didn't like the company in part because "we haven't given dictator-style praise to Trump." The internal memo's publication came as Anthropic's investors were racing to contain the damage caused by the company's fallout with the Pentagon. The Defense Department did not immediately return requests for comment.
US rebuke of tech company that had early ties with the Pentagon
The action represented an extraordinary rebuke by the United States against an American tech company that had, earlier than its rivals, worked with the Pentagon.
The action comes as the department continues to rely on Anthropic's technology to provide support for military operations, including in Iran, according to a person familiar with the matter. Claude is likely being used to analyze intelligence and assist with operational planning. A Microsoft spokesperson said that the company's lawyers have studied the designation and concluded that "Anthropic products, including Claude, can remain available to our customers -- other than the Department of War -- through platforms such as M365, GitHub, and Microsoft's AI Foundry." Microsoft can continue to work with Anthropic on non-defense-related projects, the spokesperson added. Amazon, an investor in Anthropic and a significant customer of the company's Claude model, did not immediately respond to a request for comment outside regular business hours. Palantir's Maven Smart Systems - a software platform that supplies militaries with intelligence analysis and weapons targeting - uses multiple prompts and workflows built with Anthropic's Claude Code, Reuters earlier reported. Anthropic was more aggressive than its rivals in courting US national-security officials. But the company and the Pentagon have been at odds for months over how the military can use its technology on the battlefield. This conflict came to public attention earlier this year. Anthropic has refused to back down on bans for its Claude AI to power autonomous weapons and mass US surveillance. The Pentagon has pushed back, saying it should be able to use this technology as needed, so long as it complies with US law. The "supply-chain risk" label now gives Anthropic a status that Washington has typically used for foreign adversaries. Similar US action was taken to remove Chinese tech giant Huawei from the Pentagon's supply chains.
[50]
Pentagon official says AI debate over Trump's Golden Dome missile defence program led to dispute with Anthropic
A top Pentagon official said Anthropic's dispute with the government over the use of its artificial intelligence technology in fully autonomous weapons came after a debate over how AI could be used in President Donald Trump's future Golden Dome missile defence program, which aims to put US weapons in space. U.S. Defence Undersecretary Emil Michael, the Pentagon's chief technology officer, said he came to view the AI company's ethical restrictions on the use of its chatbot Claude as an irrational obstacle as the US military pursues giving greater autonomy to swarms of armed drones, underwater vehicles and other machines to compete with rivals like China that could do the same. "I need a reliable, steady partner that gives me something, that'll work with me on autonomous, because someday it'll be real, and we're starting to see earlier versions of that," Michael said in a podcast aired Friday. "I need someone who's not going to wig out in the middle." The comments came after the Pentagon formally designated San Francisco-based Anthropic a supply chain risk, cutting off its defence work using a rule designed to prevent foreign adversaries from harming national security systems. Anthropic has vowed to sue over the designation, which affects its business partnerships with other military contractors. Trump has also ordered federal agencies to immediately stop using Claude, though the Republican president gave the Pentagon six months to phase out a product that's deeply embedded in classified military systems, including those used in the Iran war. Anthropic said it only sought to restrict its technology from being used for two high-level usages: mass surveillance of Americans or fully autonomous weapons. Michael, a former Uber executive, revealed his side of months-long talks with Anthropic CEO Dario Amodei in a lengthy conversation with Silicon Valley venture capitalists Jason Calacanis, David Friedberg and Chamath Palihapitiya, co-hosts of the "All-In" podcast. 
A fourth co-host, former PayPal executive David Sacks, is now Trump's AI czar and was not present for the episode but has been a vocal critic of Anthropic, including for its hiring of former Biden administration officials shortly after Trump returned to the White House last year. As talks hit an impasse last week, Michael lashed out at Amodei on social media, saying he "has a God-complex" and "wants nothing more than to try to personally control" the military. In the podcast, however, he positioned the dispute as part of a broader military shift toward using AI. Michael said the military is developing procedures for enabling different levels of autonomy in warfare depending on the risk posed. "This is part of the debate I had with Anthropic, which is we need AI for things like Golden Dome," Michael said, sharing a hypothetical scenario of the US having only 90 seconds to respond to a Chinese hypersonic missile. A human anti-missile operator "may not be able to discriminate with their own eyes what they're going after," but an autonomous counterattack would be a low risk "because it's in space and you're just trying to hit something that's trying to get you." In another scenario, he said, "who could oppose if you have a military base, you have a bunch of soldiers sleeping, that you have a laser that can take down drones autonomously?" In response to the podcast comments, Anthropic pointed to an earlier Amodei statement saying, "Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner." Michael, the defence undersecretary for research and engineering, was sworn in last May and said he took over the military's "AI portfolio" in August. That's when he said he began scrutinising Anthropic's contracts - some of which dated from President Joe Biden's Democratic administration. 
Michael said he questioned Anthropic over terms of use that he deemed too restrictive. "I need to have the terms of service be rational relative to our mission set," he said. "So we started these negotiations. It took three months, and I had to sort of give them scenarios, like this Chinese hypersonic missile example. They're like, 'OK, we'll give you an exception for that.' Well, how about this drone swarm? 'We'll give an exception for that.' And I was like, exceptions doesn't work. I can't predict for the next 20 years what (are) all the things we might use AI for." That's when the Pentagon began insisting Anthropic and other AI companies allow for "all lawful use" of their technology, Michael said. Anthropic resisted that change, while its competitors - Google, OpenAI and Elon Musk's xAI - agreed to it, though some still have to get their infrastructure prepared for classified military work, Michael said. The other sticking point was Anthropic's refusal to allow any mass surveillance of Americans. "They didn't want us to bulk-collect public information on people using their AI system," Michael said, describing the negotiations as "interminable." Anthropic has disputed parts of Michael's version of the talks and emphasised that the protections it sought were narrow and not based on any existing uses of Claude. The next stage of the dispute will likely happen in court.
[51]
Anthropic Sees Clients Step Back Following Government Ban | PYMNTS.com
The artificial intelligence (AI) startup made that argument in a court hearing Tuesday (March 10) as it tried to get a judge to lift the Pentagon's supply-chain risk designation on the company, Bloomberg News reported. That designation came last month amid a dispute between Anthropic and the military after the company sought assurances that its tech would not be used for mass domestic surveillance or autonomous weapons. The supply-chain risk classification means that companies that wish to do business with the government are barred from working with Anthropic. According to Bloomberg, Anthropic attorney Michael Mongan told the court that the government's actions had caused more than 100 customers to express concerns about continuing to engage with his client. He said a financial services firm had halted its talks with Anthropic over a $50 million contract, while a pharmaceutical company sought to shave 10 months off the duration of its contract. And a FinTech company "explicitly tied" cutting its $10 million contract in half to Anthropic's dispute with the federal government. Anthropic's chief financial officer has calculated that the harm to the company's revenue could range from hundreds of millions of dollars to billions of dollars, Mongan said. The report said the judge has scheduled a March 24 hearing on whether to remove the supply-chain risk designation. In the meantime, Bloomberg added, Mongan sought a commitment from the government that it would not take further actions against the company, something Justice Department lawyer James Harlow would not agree to.
Anthropic has won the backing of the larger tech industry in its fight, the report said, with dozens of AI scientists and researchers submitting a letter to the judge on its behalf. Microsoft, which has become one of Anthropic's top customers, filed a motion in support of the company's case earlier this week. The company argued in its filing that a restraining order barring the designation would allow for a more orderly transition, avoid disrupting the military's use of AI and keep tech firms from having to immediately alter their product and contract configuration. A spokesperson for Microsoft told Reuters: "We believe everyone involved shares common goals, and we need time and a process to find common ground."
[52]
Pentagon Says It Is Labeling AI Company Anthropic a Supply Chain Risk 'Effective Immediately'
The Trump administration is following through with its threat to designate artificial intelligence company Anthropic as a supply chain risk in an unprecedented move that could force other government contractors to stop using the AI chatbot Claude. The Pentagon said in a statement Thursday that it has "officially informed Anthropic leadership the company and its products are deemed a supply chain risk, effective immediately." The decision appeared to shut down the opportunity for further negotiation with Anthropic, nearly a week after President Donald Trump and Defense Secretary Pete Hegseth accused the company of endangering national security. Trump and Hegseth announced a series of threatened punishments last Friday, on the eve of the Iran war, after Anthropic CEO Dario Amodei refused to back down over concerns the company's products could be used for mass surveillance of Americans or autonomous weapons. The San Francisco-based company didn't immediately respond to a request for comment Thursday. It has previously vowed to sue if the Pentagon pursued what the company described as a "legally unsound" action "never before publicly applied to an American company." The Pentagon didn't reply to questions in time for publication. Some military contractors were already cutting ties with Anthropic, a rising star in the tech industry that sells Claude to a variety of businesses and government agencies. Lockheed Martin said it will "follow the President's and the Department of War's direction" and look to other providers of large language models. "We expect minimal impacts as Lockheed Martin is not dependent on any single LLM vendor for any portion of our work," the company said. It's not yet clear if the designation aims to block Anthropic's use by all federal government contractors or just those that partner with the military. 
The Pentagon's decision to apply a rule designed to address supply threats posed by foreign adversaries was quickly met with criticism from both opponents and some supporters of Trump's Republican administration. Federal codes have defined supply chain risk as a "risk that an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert" a system in order to disrupt, degrade or spy on it. U.S. Sen. Kirsten Gillibrand, a New York Democrat and member of the Senate Armed Services Committee and Senate Intelligence Committee, called it "a dangerous misuse of a tool meant to address adversary-controlled technology." "This reckless action is shortsighted, self-destructive, and a gift to our adversaries," she said in a written statement Thursday. Neil Chilson, a Republican former chief technologist for the Federal Trade Commission who now leads AI policy at the Abundance Institute, said the decision looks like "massive overreach that would hurt both the U.S. AI sector and the military's ability to acquire the best technology for the U.S. warfighter." Earlier in the day, a group of former defense and national security officials sent a letter to U.S. lawmakers expressing "serious concern" about the designation. "The use of this authority against a domestic American company is a profound departure from its intended purpose and sets a dangerous precedent," said the letter from former officials and policy experts, including former CIA director Michael Hayden and retired Air Force, Army and Navy leaders. They added that such a designation is meant to "protect the United States from infiltration by foreign adversaries -- from companies beholden to Beijing or Moscow, not from American innovators operating transparently under the rule of law. Applying this tool to penalize a U.S. firm for declining to remove safeguards against mass domestic surveillance and fully autonomous weapons is a category error with consequences that extend far beyond this dispute." 
While losing its big partnerships with defense contractors, Anthropic experienced a surge of consumer downloads over the past week as people sided with its moral stance. Anthropic has boasted of more than a million people signing up for Claude each day this week, lifting it past OpenAI's ChatGPT and Google's Gemini as the top AI app in more than 20 countries in Apple's app store. The dispute with the Pentagon has also further deepened Anthropic's bitter rivalry with OpenAI, which announced a deal Friday with the Pentagon to effectively replace Anthropic with ChatGPT in classified environments. OpenAI CEO Sam Altman later acknowledged he shouldn't have rushed a deal that "looked opportunistic and sloppy."
[53]
Anthropic CEO vows to fight risk designation, insists 'vast majority' of customers won't be affected
Anthropic CEO Dario Amodei said Thursday that the company would fight the Pentagon's supply chain risk designation in court, while suggesting the "vast majority" of its customers would be unaffected. The Pentagon confirmed the designation, which Defense Secretary Pete Hegseth announced last week, in a letter to the AI firm Wednesday, Amodei noted. "As we wrote on Friday, we do not believe this action is legally sound, and we see no choice but to challenge it in court," the Anthropic CEO wrote in a statement Thursday night. The supply chain risk designation, which has typically been reserved for foreign adversaries, restricts defense contractors from using the company's products. However, despite Hegseth's initial comments, Anthropic has argued the restrictions cannot bar anyone who does business with the military from doing business with the AI firm. "The language used by the Department of War in the letter (even supposing it was legally sound) matches our statement on Friday that the vast majority of our customers are unaffected by a supply chain risk designation," Amodei added. "With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts," he continued. A Microsoft spokesperson similarly said Thursday that the company's lawyers have "concluded that Anthropic products, including Claude, can remain available to our customers -- other than the Department of War" and that it "can continue to work with Anthropic on non-defense related projects." This could be critical for Anthropic. Experts have warned the supply chain risk designation could have wide-reaching consequences for the company's business. 
Amodei also underscored Thursday that Anthropic has "been having productive conversations" with the Pentagon over the last several days about how it can continue to work with the department in ways that "adhere to our two narrow exceptions, and ways for us to ensure a smooth transition if that is not possible." The company has argued for two restrictions on the military's use of its technology -- bans on mass domestic surveillance and fully autonomous lethal weapons. The Defense Department, by contrast, has sought language that would allow it to use the firm's AI tools for "all lawful purposes." The Anthropic CEO also apologized for an internal memo sent to employees last Friday, which suggested that the "real reasons" the Pentagon and Trump administration do not like the company are that it has not donated to the president. "Anthropic did not leak this post nor direct anyone else to do so -- it is not in our interest to escalate this situation," he said. He noted that the message was written within hours of last Friday's events, in which President Trump directed federal agencies to stop using the company's tools, Hegseth announced the designation and OpenAI unveiled its own deal with the Pentagon. OpenAI's agreement to bring its AI tools to the military's classified systems appeared to have the same two restrictions that Anthropic had requested. In the internal memo, Amodei dismissed this agreement as "'safety theater' for the benefit of employees." Following broad pushback to the ChatGPT maker's initial terms, it announced additional protections Monday. "It was a difficult day for the company, and I apologize for the tone of the post," Amodei said Thursday. "It does not reflect my careful or considered views. It was also written six days ago, and is an out-of-date assessment of the current situation."
[54]
Department Of War Says 'No Active Negotiation' With Anthropic: 'End All Speculation'
The US Department of War has denied any ongoing negotiations with Anthropic despite the AI startup's plans to challenge the government's designation of it as a national security "supply chain risk." Under Secretary of War Emil Michael took to X to put an end to all speculation, stating that there are no ongoing negotiations with Anthropic. This announcement comes after the Pentagon formally notified Anthropic that its AI products pose a risk to the U.S. supply chain.
Anthropic Faces Pentagon Scrutiny
Despite the Trump administration ordering federal agencies to stop using Anthropic's technology, it was reportedly used by U.S. Central Command during a major air operation against Iran just hours later. Dario Amodei, CEO of the privately held AI company, has been pushing to revive a Pentagon contract after talks collapsed. This was followed by Amodei's announcement that the company plans to challenge the U.S. government in court after the Pentagon labeled the AI startup a national security "supply chain risk." Anthropic, which created Claude, its flagship family of large language models (LLMs) and conversational AI assistants, is the only American company ever to be publicly named a supply chain risk.
[55]
Microsoft backs Anthropic despite US 'supply-chain risk' label - The Economic Times
Microsoft has come out in support of the artificial intelligence (AI) company Anthropic, which is under scrutiny by the US Department of War. The Dario Amodei-led company was labelled a "supply chain risk" by the military body earlier today. Microsoft said it will deploy Anthropic's AI models in its products even after the US-based company's feud with the Pentagon. CNBC reported on Friday that the AI major's products, including Claude, continue to remain accessible to customers. "Our lawyers have studied the designation and have concluded that Anthropic products, including Claude, can remain available to our customers other than the Department of War through platforms such as M365, GitHub, and Microsoft's AI Foundry," a Microsoft spokesperson told CNBC. The designation, media reports said, appears to have ended recent talks between Anthropic and the Pentagon on how its AI models could be used in defence settings. CEO Dario Amodei said the company will challenge the decision in court.
Amodei-Trump feud
In an internal message to employees, Amodei said the administration was targeting Anthropic because it refused to offer what he described as "dictator-style praise" to President Donald Trump, contrasting the company's stance with that of other AI firms, such as rival OpenAI. Later, Amodei apologised for how he handled a crisis after an internal memo strained the company's relationship with the US government. In an interview with The Economist, he said events involving Donald Trump, the Department of War, and another AI provider unfolded within hours. A day before, OpenAI chief executive Sam Altman took a swipe at rival AI firm Anthropic, saying it would be harmful for society if companies abandoned democratic norms simply because they disagreed with the political leadership in power. Altman added that governments should ultimately hold more authority than private companies and warned against corporate decisions that undermine democratic processes.
Microsoft-Anthropic partnership
Anthropic raised $30 billion in its latest funding round, more than doubling the Claude chatbot maker's valuation to $380 billion. Microsoft was reported to be one of the investors, the others being D E Shaw Ventures, ICONIQ, Nvidia, and MGX, among others. Anthropic has rapidly built its revenue base -- the company said its current revenue run rate is $14 billion. For Claude Code alone, the revenue run rate has grown to over $2.5 billion, more than doubling since the beginning of 2026. Anthropic's investors, such as Amazon and Nvidia, have earlier raised concerns over the Pentagon's move to potentially designate the AI startup as a supply-chain risk. In a letter, the Information Technology Industry Council warned that such a label could have wider implications for the tech sector, though it did not directly name Anthropic.
[56]
Anthropic CEO Dario Amodei Says 'No Choice' But To Challenge US Government In Court After Claude Provider Labeled 'Supply Chain Risk' - Lockheed Martin (NYSE:LMT), Microsoft (NASDAQ:MSFT)
Anthropic plans to challenge the U.S. government in court after the Pentagon designated the AI startup a national security "supply chain risk," according to CEO Dario Amodei.
Pentagon Designates Anthropic A National Security Risk
In a statement on Thursday, Amodei said the company received an official notice from the Department of War confirming the designation. "Yesterday (March 4) Anthropic received a letter from the Department of War confirming that we have been designated as a supply chain risk to America's national security," he wrote. The label is significant because it typically requires defense contractors and vendors to certify they are not using the company's technology in work connected to the Pentagon. Historically, similar designations have largely targeted foreign companies, including Chinese telecom giant Huawei Technologies. Anthropic said it intends to contest the decision in court. "As we wrote on Friday, we do not believe this action is legally sound, and we see no choice but to challenge it in court," Amodei stated. In the statement, Amodei also said the company did not leak a reported internal memo criticizing the government, adding, "It is not in our interest to escalate this situation." The Pentagon did not immediately respond to Benzinga's request for comment.
Dispute Centers On Military Use Of AI
The conflict stems from disagreements between Anthropic and the Pentagon over how the company's AI models -- known as Claude -- could be used by the military. Anthropic reportedly sought guarantees that its systems would not be used for fully autonomous weapons or mass domestic surveillance. The Defense Department, however, wanted broad access to the technology for lawful military purposes.
Rising Tensions And Industry Fallout
The dispute comes despite a $200 million Defense Department contract awarded to Anthropic last year, which made it one of the first AI labs to deploy models on classified government networks.
Meanwhile, competitors such as OpenAI -- led by CEO Sam Altman -- and xAI, founded by Elon Musk, have moved to deepen their own partnerships with the Pentagon.
Microsoft Reviews Pentagon Label On Anthropic
Price Action: Shares of Microsoft closed at $410.68 on Thursday, up 1.35%, and edged higher to $411.15 in after-hours trading, gaining another 0.11%, according to Benzinga Pro. According to Benzinga's Edge Stock Rankings, Microsoft is showing a downward price trend across the short, medium and long-term timeframes, while its Quality score ranks in the 89th percentile.
[57]
Pentagon labels Anthropic supply-chain risk days after CEO lashed out...
The Pentagon has reportedly told Anthropic it will be officially labeled a supply-chain risk - just days after the AI firm's CEO Dario Amodei circulated a letter to staffers bashing the Trump administration's efforts against the company. The Department of War on Thursday formally notified Anthropic executives that the company and its AI tools - including chatbot Claude - pose serious security risks, the Wall Street Journal reported, citing a senior Pentagon official. The official said the military would not let a vendor put members of the armed services at risk by restricting the lawful use of its technology - after Anthropic sought to bar the government from using its AI for mass domestic surveillance and fully autonomous weapons. Secretary of War Pete Hegseth has argued the exceptions Anthropic is seeking are too restrictive, insisting that the Pentagon should be allowed to use AI tools for "all lawful purposes." Amodei was reportedly holding talks with Emil Michael, the War Department's undersecretary for research and engineering, as part of a "last-ditch effort" to reach a contract - days after the exec accused rival OpenAI of spouting "just straight up lies" about Anthropic's dispute with the government. It's one of the first times an American company will be slapped with the supply-chain risk label, which could potentially ban any organization working with the military from partnering with Anthropic - possibly impacting investors like Lockheed Martin, Amazon and Google. The Pentagon, Anthropic and OpenAI did not immediately respond to The Post's requests for comment. In his 1,600-word memo to staffers Friday, Amodei said he believes the government has been targeting Anthropic because it opposed the White House's AI agenda and hasn't "given dictator-style praise to Trump." 
Amodei also said Anthropic was being punished because he didn't "donate to Trump" - "while OpenAI/Greg have donated a lot," referring to OpenAI president Greg Brockman, the Information reported. Amodei - who donated to Democratic former Vice President Kamala Harris' failed presidential campaign - blasted OpenAI and the Pentagon for allegedly smearing his company's name. "I want to be very clear on the messaging that is coming from OpenAI, and the mendacious nature of it," he wrote in the note. He added that "a lot of OpenAI and [Department of War] messaging just straight up lies about these issues or tries to confuse them," insisting that OpenAI's contract terms, for example, were never offered to Anthropic. Meanwhile, OpenAI announced Friday it would step in to provide AI services to the Pentagon - stoking backlash from AI workers concerned about the risk of new tech being used for surveillance of citizens. Altman was "presenting himself as someone who wants to 'set the same contract for everyone in the industry,'" while "behind the scenes" working with the Department of War to replace Anthropic "the instant we are designated a supply chain risk," Amodei wrote. OpenAI's deal includes safeguards that are "maybe 20% real and 80% safety theater," he added. During a Morgan Stanley technology conference on Thursday, Altman pushed back on the criticism - and took a few jabs at Anthropic. "The government is supposed to be more powerful than private companies," he said, adding that it's "bad for society" if companies start abandoning their commitment to the democratic process because "some people don't like the person or people currently in charge." Altman acknowledged, however, that the timing of OpenAI's deal - which came just hours after talks with Anthropic fell apart - "looked opportunistic and sloppy."
[58]
Pentagon Clash With Anthropic Raises AI Control Questions and NVIDIA Risk
Pentagon-Anthropic Standoff Shows How AI Policy Can Disrupt Chip Supply Chains
The Pentagon's dispute with Anthropic has become a major test of how the United States may govern powerful artificial intelligence systems. The conflict moved beyond contract terms and raised broader questions about AI oversight, government authority, and commercial risk across the AI sector. It also created uncertainty for NVIDIA, which supplies the chips that support Anthropic's model training and deployment. The issue centers on a Pentagon order linked to a "supply chain risk" designation. The order barred military contractors, suppliers, and partners from commercial activity with Anthropic. This language triggered concern because Anthropic depends on commercial relationships for cloud infrastructure, enterprise sales, and funding. It also depends on NVIDIA GPUs to train and run models at scale.
[59]
Anthropic to challenge Pentagon's 'supply chain risk' label
CEO Dario Amodei said that Anthropic has received a letter from the US Department of War stating the AI company is considered a supply chain risk to America's national security. Amodei said the firm believes the action is not legally sound and plans to challenge it in court. The Dario Amodei-led Anthropic is taking its battle with the US Department of War to court. On Friday, Anthropic revealed that it had received a formal notice designating it a supply chain risk, a notice it now aims to challenge legally. "Anthropic received a letter from the Department of War confirming that we have been designated as a supply chain risk to America's national security. As we wrote on Friday, we do not believe this action is legally sound, and we see no choice but to challenge it in court," Amodei said in his press statement. "The Department's letter has a narrow scope, and this is because the relevant statute (10 USC 3252) is narrow, too. It exists to protect the government rather than to punish a supplier; in fact, the law requires the Secretary of War to use the least restrictive means necessary to accomplish the goal of protecting the supply chain. Even for Department of War contractors, the supply chain risk designation doesn't (and can't) limit uses of Claude or business relationships with Anthropic if those are unrelated to their specific Department of War contracts," he said. Amodei stated that his concerns remain on domestic surveillance and fully autonomous weapons, but claimed to have had productive conversations with the Department of War recently. "I would like to reiterate that we had been having productive conversations with the Department of War over the last several days, both about ways we could serve the Department that adhere to our two narrow exceptions, and ways for us to ensure a smooth transition if that is not possible. 
As we wrote on Thursday, we are very proud of the work we have done together with the Department, supporting frontline war fighters with applications such as intelligence analysis, modelling and simulation, operational planning, cyber operations, and more. As we stated last Friday, we do not believe, and have never believed, that it is the role of Anthropic or any private company to be involved in operational decision-making--that is the role of the military. Our only concerns have been our exceptions on fully autonomous weapons and mass domestic surveillance, which relate to high-level usage areas, and not operational decision-making," he wrote. Amodei said that his priority is to ensure that the national security apparatus is not deprived of AI tools from Anthropic in the middle of the West Asia operations. "Our most important priority right now is making sure that our war fighters and national security experts are not deprived of important tools in the middle of major combat operations. Anthropic will provide our models to the Department of War and national security community, at nominal cost and with continuing support from our engineers, for as long as is necessary to make that transition, and for as long as we are permitted to do so. Anthropic has much more in common with the Department of War than we have differences. We both are committed to advancing US national security and defending the American people, and agree on the urgency of applying AI across the government. All our future decisions will flow from that shared premise," he said. Meanwhile, Anthropic's tools continue to be used during the US' ongoing operation in the Middle East. A Reuters report suggested that the US used an array of weapons in the strikes conducted against Iran as a part of Operation Epic Fury, which included artificial intelligence services from Anthropic. 
According to the report the Pentagon used artificial intelligence services from Anthropic, including its Claude tools, during its attack on Iran.
[60]
Anthropic Is A Supply-Chain Threat, Pentagon Says As Trump Rift Deepens
Anthropic AI Under Fire: Pentagon Labels It Supply-Chain Threat, Trump Rift Deepens Anthropic PBC has been formally notified by the Pentagon that its AI products pose a risk to the U.S. supply chain, according to a senior defense official who spoke to Bloomberg. The official confirmed that the company and its products have been classified as a supply chain risk, effective immediately, though the specifics of the notification's timing and method remain unclear. Dario Amodei, CEO of Anthropic, engaged in weeks of discussions with Emil Michael, the under-secretary of defense for research and engineering, to establish a contract detailing how the Pentagon could utilize Anthropic's technology. It is unclear what the next steps may be for both the company and the military. U.S. Central Command reportedly used Anthropic's Claude AI during the Trump administration's major air operation against Iran, hours after the president ordered federal agencies to stop using Anthropic's technology. The military also used Claude in the mission that captured Venezuelan President Nicolas Maduro. Anthropic's Claude Gov platform was the only AI system capable of running in the Pentagon's classified cloud. The platform has been gaining popularity in the defense department given its user-friendly design. Anthropic did not respond to a request for comment.
[61]
How the Anthropic-Pentagon dispute over AI safeguards escalated
March 11 (Reuters) - A standoff erupted between the U.S. Department of Defense and Anthropic in January after the AI lab refused to loosen safety guardrails on its systems, prompting the Pentagon to label it a 'supply-chain risk,' and putting the company's government contracts in jeopardy. The Claude maker's executives warned that the designation, which they view as retaliation for opposing the use of their technology in autonomous weapons and domestic surveillance, could slash their 2026 revenue by billions of dollars. Legal experts suggest the government's case may be undermined by a mismatch between the law invoked and Anthropic's conduct, internal contradictions in the Pentagon's behavior and evidence that its decision may have been driven by animus rather than security. (Reporting by Anhata Rooprai in Bengaluru; Editing by Arun Koyyur)
[62]
Pentagon says it is labeling AI company Anthropic a supply chain risk 'effective immediately' - The Economic Times
The Pentagon has labelled Anthropic a supply-chain risk after its CEO Dario Amodei refused military use that could enable surveillance or autonomous weapons. Some contractors may stop using Claude. Critics call the action an overreach, while downloads surge and rivalry with OpenAI intensifies. The Trump administration is following through with its threat to designate artificial intelligence company Anthropic as a supply chain risk in an unprecedented move that could force other government contractors to stop using the AI chatbot Claude. The Pentagon said in a statement Thursday that it has "officially informed Anthropic leadership the company and its products are deemed a supply chain risk, effective immediately." The decision appeared to shut down the opportunity for further negotiation with Anthropic, nearly a week after President Donald Trump and Defense Secretary Pete Hegseth accused the company of endangering national security. Trump and Hegseth announced a series of threatened punishments last Friday, on the eve of the Iran war, after Anthropic CEO Dario Amodei refused to back down over concerns the company's products could be used for mass surveillance of Americans or autonomous weapons. The San Francisco-based company didn't immediately respond to a request for comment Thursday. It has previously vowed to sue if the Pentagon pursued what the company described as a "legally unsound" action "never before publicly applied to an American company." The Pentagon statement said "this has been about one fundamental principle: the military being able to use technology for all lawful purposes. The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk." Some military contractors were already cutting ties with Anthropic, a rising star in the tech industry that sells Claude to a variety of businesses and government agencies. 
Lockheed Martin said it will "follow the President's and the Department of War's direction" and look to other providers of large language models. "We expect minimal impacts as Lockheed Martin is not dependent on any single LLM vendor for any portion of our work," the company said. It's not yet clear if the designation aims to block Anthropic's use by all federal government contractors or just those that partner with the military. The Pentagon's decision to apply a rule designed to address supply threats posed by foreign adversaries was quickly met with criticism from both opponents and some supporters of Trump's Republican administration. Federal codes have defined supply chain risk as a "risk that an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert" a system in order to disrupt, degrade or spy on it. U.S. Sen. Kirsten Gillibrand, a New York Democrat and member of the Senate Armed Services Committee and Senate Intelligence Committee, called it "a dangerous misuse of a tool meant to address adversary-controlled technology." "This reckless action is shortsighted, self-destructive, and a gift to our adversaries," she said in a written statement Thursday. Neil Chilson, a Republican former chief technologist for the Federal Trade Commission who now leads AI policy at the Abundance Institute, said the decision looks like "massive overreach that would hurt both the U.S. AI sector and the military's ability to acquire the best technology for the U.S. warfighter." Earlier in the day, a group of former defense and national security officials sent a letter to U.S. lawmakers expressing "serious concern" about the designation. "The use of this authority against a domestic American company is a profound departure from its intended purpose and sets a dangerous precedent," said the letter from former officials and policy experts, including former CIA director Michael Hayden and retired Air Force, Army and Navy leaders. 
They added that such a designation is meant to "protect the United States from infiltration by foreign adversaries - from companies beholden to Beijing or Moscow, not from American innovators operating transparently under the rule of law. Applying this tool to penalise a U.S. firm for declining to remove safeguards against mass domestic surveillance and fully autonomous weapons is a category error with consequences that extend far beyond this dispute." While losing its big partnerships with defense contractors, Anthropic experienced a surge of consumer downloads over the past week due to people siding with its moral stance. Anthropic has boasted of more than a million people signing up for Claude each day this week, lifting it past OpenAI's ChatGPT and Google's Gemini as the top AI app in more than 20 countries in Apple's app store. The dispute with the Pentagon has also further deepened Anthropic's bitter rivalry with OpenAI, which announced a Friday deal with the Pentagon to effectively replace Anthropic with ChatGPT in classified military environments. OpenAI said it sought similar protections against domestic surveillance and fully autonomous weapons but later had to amend its agreements, leading CEO Sam Altman to say he shouldn't have rushed a deal that "looked opportunistic and sloppy."
[63]
Anthropic has strong case against Pentagon blacklisting, legal experts say
NEW YORK, March 11 (Reuters) - Anthropic's lawsuit challenging its Pentagon blacklisting is likely to test the reach of an obscure law aimed at guarding military systems against sabotage, and legal experts say the artificial intelligence lab appears to have a strong case that President Donald Trump's administration overstepped. Anthropic said in its lawsuit filed on Monday that the Defense Department's decision to exclude the company from military contracts by designating it as a supply chain risk violated its free speech and due process rights and was aimed at punishing the company for its views on AI safety in warfare. The designation could cut Anthropic's 2026 revenue by multiple billions of dollars and cause reputational harm, company executives said Tuesday. In labeling Anthropic as a supply chain risk, the Pentagon invoked a rarely used law that allows it to bar companies from certain contracts if they risk exposing military information systems to enemy infiltration. The law has never been tested in court or used against a U.S. company, according to a Reuters review of legal databases. Courts often defer to the executive branch's judgment on national security, and that will likely be the centerpiece of the government's defense. But five national security law experts told Reuters the Pentagon may have overstepped. "It's not at all clear that the statute can even apply to an American company where there's no foreign entanglement," said University of Minnesota Law School professor Alan Rozenshtein. The Defense Department said it does not comment on pending litigation. 'EXQUISITE' TECHNOLOGY Anthropic, which is incorporated and headquartered in the U.S., said it is not an "adversary," which the Trump administration has defined in executive orders to mean China, Russia, Iran, North Korea, Cuba and Venezuela, according to Anthropic's lawsuit. 
The company also said Defense Secretary Pete Hegseth gave no explanation for how Anthropic's Claude AI tool constituted a supply chain risk despite its continued use by the U.S. military and Hegseth's own praise of Claude as "exquisite" technology that the Defense Department would "love" to work with during a February 24 meeting with Anthropic cited in the lawsuit. The military has used Claude as recently as last month during strikes on Iran, according to Reuters reports. Hegseth designated Anthropic a national security supply chain risk on March 3 after the company refused to lift restrictions on Claude that prohibit the military from using it for autonomous weapons or domestic surveillance. In a February 27 social media post announcing the designation, Hegseth accused Anthropic of cloaking itself in the "sanctimonious rhetoric of 'effective altruism'" to "strong-arm the United States military into submission." Anthropic has said AI is not reliable enough for autonomous weapons and that it opposes domestic surveillance as a matter of principle. The Pentagon has said that Anthropic's restrictions could endanger American lives. 'SURVEIL, DENY, DISRUPT' Under U.S. law, a supply chain risk is a threat that an adversary could sabotage, infiltrate or disrupt a federal information technology system or network. The law, known as Section 3252, invoked by the Pentagon, allows the defense secretary to exclude companies from certain contracts to guard against the risk that an "adversary" may "sabotage, maliciously introduce unwanted function" or otherwise "subvert" a military information system to "surveil, deny, disrupt, or otherwise degrade" its function. The Pentagon separately designated Anthropic a supply chain risk under a similar law that could eventually broaden contract exclusions to the civilian government. Anthropic filed a separate legal challenge to that designation on Monday. 
The Pentagon can only exclude companies under 3252 as a last resort, and other defense contractors are not required to stop working with them entirely. Reuters could not identify any other companies that have been publicly designated supply chain risks under 3252, although the obscure government procurement statute does not require public disclosure of designations. 'BAD BLOOD' Amos Toh, a national security law expert at the left-leaning Brennan Center for Justice, said nothing about Claude's usage policies appeared to pose foreign sabotage or subversion risks. "These are basically safety protocols. You can debate whether these protocols are acceptable or not, but they run directly counter to the risk that the law is designed to regulate," Toh said. Anthropic's lawsuit said that the supply chain risk designation punishes the company for its views on AI safety in violation of the First Amendment of the U.S. Constitution, which protects free speech and expression. Legal experts said Trump and Hegseth's public attacks on Anthropic, including a social media post in which Trump called it a "RADICAL LEFT WOKE COMPANY," could bolster this argument. "A lot of things Hegseth has said and the Pentagon has done undermine their case and suggest there was personal animus and bad blood between the parties, and that the Pentagon had it out for Anthropic," said Joel Dodge, a law expert at Vanderbilt University. Anthropic also said Hegseth's supply chain risk order violated its Fifth Amendment right to due process by imposing "draconian punishments" without any "meaningful process," factual findings or opportunity for Anthropic to challenge them. SECOND-GUESSING THE PRESIDENT Courts are generally reluctant to question federal agency decisions but are especially deferential to the executive branch's judgment on national security matters. 
This would likely be the centerpiece of the government's defense, according to legal experts, who said Justice Department lawyers could cite a long line of cases where courts have found that it is generally not the place of judges to second-guess how the president and the military he commands defend the country. The government could similarly argue that the president and his cabinet secretaries have broad authority to choose suppliers and that the military cannot rely on a vendor whose usage policies constrain military action. The Justice Department could also cite legal precedent stating that contract decisions are not First Amendment violations if they are supported by legitimate policy or operational reasons. Eric Crusius, an attorney and government contract specialist who is not involved in the case, said the government is trying to impose the "death penalty" on Anthropic and will need to show "there was no alternative and that they meticulously considered other options prior to pulling the trigger." 'ARBITRARY AND CAPRICIOUS' Anthropic said in its lawsuit that Hegseth's decision violated the Administrative Procedure Act, a law that allows courts to overturn actions that are found to be "arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law." Legal experts said the apparent contradictions in the government's position are strong evidence that Hegseth's decision was arbitrary. "The government was simultaneously threatening to use the (Defense Production Act) to force Anthropic to sell its services, using its services in active military operations, and saying it's too dangerous to use them in government contracts," said University of Minnesota Law School professor Alan Rozenshtein. (Reporting by Jack Queen in New York; Editing by Noeleen Walder and Christopher Cushing)
[64]
Big Tech group tells Pentagon's Hegseth they are 'concerned' about Anthropic supply-chain risk designation
March 4 (Reuters) - A Big Tech industry group on Wednesday expressed concern to U.S. Secretary of Defense Pete Hegseth about his declaring AI company Anthropic a supply-chain risk, saying such a designation creates uncertainty for companies that could threaten the military's access to the best products and services. "We are concerned by recent reports regarding the Department of War's consideration of imposing a supply chain risk designation in response to a procurement dispute," the Information Technology Industry Council, whose members include Nvidia, Amazon.com and Apple, said in a letter dated Wednesday. The letter did not directly name Anthropic, instead focusing on the designation and its potential consequences. Last week, capping off a heated weeks-long dispute with Anthropic over technology guardrails on Claude tools used by the military, President Donald Trump announced a federal agency-wide ban on the company with a six-month phaseout period and Hegseth ordered Pentagon suppliers to purge Anthropic's prized AI tools from their supply chains. The letter, sent to Hegseth on Wednesday, also stated that the declaration threatens "to undermine the government's access to the best-in-class products and services from American companies that serve all agencies and components of the federal government," according to a copy seen by Reuters. The Department of Defense, which the Trump administration has renamed the Department of War, said it "will respond directly to the authors as appropriate," as it does with all correspondence. The letter is the first significant support Anthropic has received from the tech industry, a group of companies that includes its investors, suppliers and customers. In the letter, ITI CEO Jason Oxman said that contract disputes should be resolved through continued negotiation or selecting alternate providers through established channels. 
"Emergency authorities such as supply chain risk designations exist for genuine emergencies and are typically reserved for entities that have been designated as foreign adversaries," Oxman wrote. He said the department should work through the Federal Acquisition Security Council, which was created to evaluate risk in federal procurement, when considering whether private companies present legitimate supply chain risks. Many of the council's members have long partnered with the federal government and provide "mission-critical capabilities" to the Pentagon, Oxman added, suggesting the designation would be disruptive. "Our member companies strive to provide best-in-class solutions to meet the needs of U.S. departments and agencies," Oxman wrote in the letter, which was also copied to other parts of the government. "Removing parts of these solutions, as would be required based on recent reports, will be a complex endeavor." (Reporting by Karen Freifeld in New York; Editing by Chris Sanders and Matthew Lewis)
[65]
Anthropic CEO says company having productive conversations with Pentagon despite ban
Anthropic CEO also made it clear that the company may take legal action if needed. The ongoing tension between AI startup Anthropic and the United States Department of Defense has taken a new turn. Even after the Pentagon officially labelled the company as a supply chain risk, Anthropic CEO Dario Amodei says the company has been having productive talks with the department. The CEO also claimed that the company could sue the department. 'I would like to reiterate that we had been having productive conversations with the Department of War over the last several days, both about ways we could serve the Department that adhere to our two narrow exceptions, and ways for us to ensure a smooth transition if that is not possible,' Amodei wrote in a lengthy post. The comments came after the Pentagon confirmed that it had formally labelled Anthropic as a supply chain risk. The AI company is pushing back against how broadly the restriction is being interpreted. Amodei said the company believes the designation only applies to direct work related to defence contracts. 'With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts,' he wrote. Amodei also made it clear that the company may take legal action if needed. 'We do not believe this action is legally sound, and we see no choice but to challenge it in court.' Tech giant Microsoft confirmed that its customers can still access Anthropic's products, including Claude, through platforms such as Microsoft 365, GitHub and Microsoft's AI Foundry. 
'Our lawyers have studied the designation and have concluded that Anthropic products, including Claude, can remain available to our customers - other than the Department of War - through platforms such as M365, GitHub, and Microsoft's AI Foundry, and that we can continue to work with Anthropic on non-defense related projects,' a Microsoft spokesperson said in a statement to Business Insider. In his statement, Amodei also addressed a controversy involving a leaked internal message in which he criticised the White House after negotiations with the Pentagon fell apart. 'It was a difficult day for the company, and I apologise for the tone of the post. It does not reflect my careful or considered views,' Amodei wrote. 'It was also written six days ago, and is an out-of-date assessment of the current situation.' Despite the tensions, Amodei's post suggested that the company still wants to maintain a working relationship with the defence department. 'Anthropic has much more in common with the Department of War than we have differences,' he wrote.
The Pentagon officially designated Anthropic a supply chain risk after the AI company refused to allow its Claude system for autonomous weapons and mass surveillance. CEO Dario Amodei announced plans to challenge the unprecedented designation in court, warning the company could lose billions in revenue while arguing the label is legally unsound and overly punitive.
The Pentagon has officially labeled AI company Anthropic a supply chain risk, marking the first time an American technology firm has publicly received a designation typically reserved for foreign adversaries [2]. The decision follows weeks of failed negotiations between Anthropic and the US Defense Department over how much control the military should have over AI systems. CEO Dario Amodei drew firm red lines, refusing to allow Claude to be used for mass surveillance of Americans or fully autonomous weapons without human oversight [1].
The supply chain risk designation bars defense contractors from working with the Pentagon if they use Claude in their products [3]. This creates immediate operational challenges, as Anthropic has been the only frontier AI lab with classified-ready systems currently supporting U.S. military operations in Iran, where Claude serves as one of the main tools installed in Palantir's Maven Smart System [2]. Dario Amodei announced that Anthropic sees "no choice but to challenge it in court," calling the Pentagon's actions "legally unsound" and "retaliatory and punitive" [1][4]. In a hearing before US District Judge Rita F. Lin in San Francisco, Anthropic told the court it could lose billions of dollars in revenue this year and urged quick action on its request to block the Trump administration's declaration [5].
Amodei argued that the Department's letter is narrow in scope and exists to protect the government rather than punish a supplier. He emphasized that the law requires the Secretary of Defense to use the least restrictive means necessary, and that even for defense contractors, the designation doesn't limit uses of Claude unrelated to specific Pentagon contracts [1]. However, legal experts note that courts are reluctant to second-guess the government on national security matters, creating a high bar for Anthropic's court challenge [1].
As Anthropic faces restrictions, OpenAI forged its own deal with the Pentagon to allow the military to use its AI systems for "all lawful purposes" [2]. The timing and ambiguous phrasing have sparked concern among OpenAI's own employees about potential uses for autonomous weapons deployment without adequate safeguards. OpenAI President Greg Brockman has been a staunch backer of President Trump, recently donating $25 million to the MAGA Inc. Super PAC [2].
Amodei apologized for a leaked internal memo in which he characterized OpenAI's dealings as "safety theater" and noted "we haven't given dictator-style praise to Trump (while Sam has)" [4]. He claimed the memo, written within hours of the designation announcement, didn't reflect his "careful or considered views" and represented an "out-of-date assessment" [1].
Hundreds of employees from OpenAI and Google have urged the Pentagon to withdraw its designation and called on Congress to push back on what they view as inappropriate use of authority against an American technology company [2]. Dean Ball, a former Trump White House AI advisor, called the designation a "death rattle" of the American republic, arguing the government has abandoned strategic clarity in favor of "thuggish" tribalism that treats domestic innovators worse than foreign adversaries [2].
Despite the controversy, Claude AI user numbers continue to grow, with the app sitting at the top of free charts on the Google Play Store and Apple App Store in the US. Anthropic's chief product officer reported more than a million new customers a day [4]. Amodei emphasized that the vast majority of Anthropic's customers remain unaffected, as the designation applies only to Claude's use as a direct part of contracts with the Department of Defense [1]. The company has committed to continue providing its models to the Pentagon at "nominal cost" for ongoing operations while the transition occurs [1].
Summarized by Navi