38 Sources
[1]
New court filing reveals Pentagon told Anthropic the two sides were nearly aligned -- a week after Trump declared the relationship kaput | TechCrunch
Anthropic submitted two sworn declarations to a California federal court late Friday afternoon, pushing back on the Pentagon's assertion that the AI company poses an "unacceptable risk to national security" and arguing that the government's case relies on technical misunderstandings and claims that were never actually raised during the months of negotiations that preceded the dispute. The declarations were filed alongside Anthropic's reply brief in its lawsuit against the Department of Defense and come ahead of a hearing this coming Tuesday, March 24, before Judge Rita Lin in San Francisco. The dispute traces back to late February, when President Trump and Defense Secretary Pete Hegseth publicly declared they were cutting ties with Anthropic after the company refused to allow unrestricted military use of its AI technology. The two people who submitted the declarations are Sarah Heck, Anthropic's Head of Policy, and Thiyagu Ramasamy, the company's Head of Public Sector. Heck is a former National Security Council official who worked at the White House under the Obama administration before moving to Stripe and then Anthropic, where she runs the company's government relationships and policy work. She was personally present at the February 24 meeting where CEO Dario Amodei sat down with Defense Secretary Hegseth and the Pentagon's Under Secretary Emil Michael. In her declaration, Heck calls out what she describes as a central falsehood in the government's filings: that Anthropic demanded some kind of approval role over military operations. That claim, she says, simply isn't true. "At no time during Anthropic's negotiations with the Department did I or any other Anthropic employee state that the company wanted that kind of role," she wrote. She also points out that the Pentagon's concern about Anthropic potentially disabling or altering its technology mid-operation was never raised during negotiations. Instead, she says, it appeared for the first time in the government's court filings, which gave Anthropic no opportunity to respond. Another detail in Heck's declaration sure to draw attention is that on March 4 -- the day after the Pentagon formally finalized its supply-chain risk designation against Anthropic -- Under Secretary Michael emailed Amodei to say the two sides were "very close" on the two issues the government now cites as evidence that Anthropic is a national security threat: its positions on autonomous weapons and mass surveillance of Americans. The email, which Heck attaches as an exhibit to her declaration, is notable for another reason. Two days later, after Amodei said the company was having "productive conversations" with the Pentagon, Michael posted on X that "there is no active Department of War negotiation with Anthropic." A week after that, he told CNBC there was "no chance" of renewed talks. Heck's point appears to be: If Anthropic's stance on those two issues is what makes it a national security threat, why was the Pentagon's own official saying the two sides were nearly aligned on exactly those issues just days after the designation was finalized? Ramasamy brings a different kind of expertise to the case. Before joining Anthropic in 2025, he spent six years at Amazon Web Services managing AI deployments for government customers, including classified environments. At Anthropic, he's credited with building the team that brought its Claude models into national security and defense settings, including the $200 million contract with the Pentagon announced last summer. 
His declaration takes on the government's claim that Anthropic could theoretically interfere with military operations by disabling the technology or otherwise altering how it behaves, which Ramasamy says isn't technically possible. Per his telling, once Claude is deployed inside a government-secured, "air-gapped" system operated by a third-party contractor, Anthropic has no access to it; there is no remote kill switch, no backdoor, and no mechanism to push unauthorized updates. Any kind of "operational veto" is a fiction, he suggests, explaining that a change to the model would require the Pentagon's explicit approval and action to install. Anthropic, he says, can't even see what government users are typing into the system, let alone extract that data. Ramasamy also disputes the government's claim that Anthropic's hiring of foreign nationals makes the company a security risk. He notes that Anthropic employees have undergone U.S. government security clearance vetting -- the same background check process required for access to classified information, adding in his declaration that "to my knowledge," Anthropic is the only AI company where cleared personnel actually built the AI models designed to run in classified environments. Anthropic's lawsuit argues that the supply-chain risk designation -- the first ever applied to an American company -- amounts to government retaliation for the company's publicly stated views on AI safety, in violation of the First Amendment. The government, in a 40-page filing earlier this week, rejected that framing entirely, saying that Anthropic's refusal to allow all lawful military uses of its technology was a business decision, not protected speech, and that the designation was a straightforward national security call, not punishment for the company's views. The Pentagon case is not the only legal matter on Anthropic's docket this Tuesday. Separately on Friday, a federal judge tentatively ruled that Reddit's lawsuit against the company -- which accuses Anthropic of scraping its content without permission to train its AI -- should be sent back to state court, where Reddit originally filed it last June. A hearing to finalize that decision is also scheduled for Tuesday.
[2]
Pentagon's 'Attempt to Cripple' Anthropic Is Troublesome, Judge Says
The US Department of Defense appears to be illegally punishing Anthropic for trying to restrict the use of its AI tools by the military, US District Judge Rita Lin said during a court hearing on Tuesday. "It looks like an attempt to cripple Anthropic," Lin said of the Pentagon designating the company a supply-chain risk. "It looks like [the department] is punishing Anthropic for trying to bring public scrutiny to this contract dispute, which of course would be a violation of the First Amendment." Anthropic has filed two federal lawsuits alleging that the Trump administration's decision to designate the company a security risk amounted to illegal retaliation. The government slapped the label on Anthropic after it pushed for limitations on how its AI could be used by the military. Tuesday's hearing came in a case filed in San Francisco. Anthropic is seeking a temporary order to pause the designation. The relief, Anthropic hopes, would help convince some of the company's skittish customers to stick with it just a bit longer. Lin can issue a pause only if she determines that Anthropic is likely to win the overall case. Her ruling on the injunction is expected in the next few days. The dispute has sparked a broader public conversation about how artificial intelligence is being deployed by the armed forces and whether Silicon Valley companies should give deference to the government in determining how the technology they develop is used. The Department of Defense, which also goes by the Department of War or DoW, has argued that it followed procedures and appropriately determined that Anthropic's AI tools could no longer be relied upon to operate as expected during crucial moments. It has asked Lin not to second-guess its assessment of the threat it claims Anthropic poses to national security. "The worry is that Anthropic, instead of merely raising concerns and pushing back, will say we have a problem with what DoW is doing and will manipulate the software ... so it doesn't operate in the way DoW expects and wants it to," Trump administration attorney Eric Hamilton said during Tuesday's hearing. Lin said that it was Defense Secretary Pete Hegseth's role -- not hers -- to decide whether Anthropic is an appropriate vendor for the department. But Lin said it's up to her to determine whether Hegseth violated the law by taking steps beyond simply canceling Anthropic's government contracts. Lin said it was "troubling" to her that the security designation and directives more broadly limiting use of Anthropic's AI tool Claude by government contractors "don't seem to be tailored to stated national security concerns." As Anthropic's spat with the government escalated last month, Hegseth posted on X that "effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." But on Tuesday, Hamilton acknowledged that Hegseth has no legal authority to bar military contractors from using Anthropic for work unrelated to the Department of Defense. When asked by Lin why Hegseth would have posted that, Hamilton said, "I don't know." Lin further questioned Hamilton about whether the Pentagon had considered taking less punitive measures to move the department away from using Anthropic's tools. She described the supply-chain risk designation as a powerful authority typically reserved for foreign adversaries, terrorists, and other hostile actors.
Michael Mongan, a WilmerHale attorney representing Anthropic, said it was extraordinary for the government to go after a "stubborn" negotiating partner with the designation. The Pentagon has said it is working to replace Anthropic technologies over the coming months with alternatives from Google, OpenAI, and xAI. It also said it has put measures in place to prevent Anthropic from engaging in any tampering during the transition. Hamilton said he didn't know if it was even possible for Anthropic to update its AI models without permission from the Pentagon; the company says it is not. A ruling in the other case, at the federal appeals court in Washington, DC, is expected to come soon without a hearing.
[3]
Anthropic Denies It Could Sabotage AI Tools During War
Anthropic cannot manipulate its generative AI model Claude once the US military has it running, an executive wrote in a court filing on Friday. The statement was made in response to accusations from the Trump administration about the company potentially tampering with its AI tools during war. "Anthropic has never had the ability to cause Claude to stop working, alter its functionality, shut off access, or otherwise influence or imperil military operations," Thiyagu Ramasamy, Anthropic's head of public sector, wrote. "Anthropic does not have the access required to disable the technology or alter the model's behavior before or during ongoing operations." The Pentagon has been sparring with the leading AI lab for months over how its technology can be used for national security -- and what the limits on that usage should be. This month, Defense Secretary Pete Hegseth labeled Anthropic a supply-chain risk, a designation that will prevent the Department of Defense from using the company's software, including through contractors, over the coming months. Other federal agencies are also abandoning Claude. Anthropic filed two lawsuits challenging the constitutionality of the ban and is seeking an emergency order to reverse it. However, customers have already begun canceling deals. A hearing in one of the cases is scheduled for March 24 in federal district court in San Francisco. The judge could decide on a temporary reversal soon after. In a filing earlier this week, government attorneys wrote that the Department of Defense "is not required to tolerate the risk that critical military systems will be jeopardized at pivotal moments for national defense and active military operations." The Pentagon has been using Claude to analyze data, write memos, and help generate battle plans, WIRED reported. The government's argument is that Anthropic could disrupt active military operations by turning off access to Claude or pushing harmful updates if the company disapproves of certain uses. Ramasamy rejected that possibility. "Anthropic does not maintain any back door or remote 'kill switch,'" he wrote. "Anthropic personnel cannot, for example, log into a DoW system to modify or disable the models during an operation; the technology simply does not function that way." He went on to say that Anthropic would be able to provide updates only with the approval of the government and its cloud provider -- in this case Amazon Web Services, though Ramasamy did not name the provider. Ramasamy added that Anthropic cannot access the prompts or other data military users enter into Claude. Anthropic executives maintain in court filings that the company does not want veto power over military tactical decisions. Sarah Heck, head of policy, wrote in a court filing on Friday that Anthropic was willing to guarantee as much in a contract proposed March 4. "For the avoidance of doubt, [Anthropic] understands that this license does not grant or confer any right to control or veto lawful Department of War operational decision-making," the proposal stated, according to the filing, which referred to an alternative name for the Pentagon. The company was also ready to accept language that would address its concerns about Claude being used to help carry out deadly strikes without human supervision, Heck claimed. But negotiations ultimately broke down.
For the time being, the Defense Department has said in court filings that it "is taking additional measures to mitigate the supply chain risk" posed by the company by "working with third-party cloud service providers to ensure Anthropic leadership cannot make unilateral changes" to the Claude systems currently in place.
[4]
Judge Calls US Government Ban on Anthropic AI Tools 'Troubling'
A judge appeared skeptical of the Trump administration's rationale for banning the federal government's use of artificial intelligence technology from Anthropic PBC, a move that the Claude chatbot maker claims could cost it billions in lost revenue. During a hearing Tuesday in San Francisco, US District Judge Rita F. Lin said it was "troubling" that the Defense Department had designated Anthropic a supply-chain risk, a label normally only used to describe US adversaries. Lin said the decision by the government doesn't appear to be "tailored to the stated national security concern." Instead, the ban that followed the Pentagon's dispute with the company over AI safety concerns "looks like an attempt to cripple Anthropic," she said. The judge added that she's concerned about whether the government is punishing Anthropic for speaking out publicly about the conflict. Lin didn't rule on Anthropic's request to block the ban while its legal fight proceeds in court. The judge said she'd make a decision in the coming days. The case is Anthropic v. US Department of War, 26-cv-01996, US District Court, Northern District of California (San Francisco).
[5]
Pentagon appears to be 'punishing' Anthropic in violation of free speech, judge says
A US judge has said the Pentagon appears to have tried to punish Anthropic for going public with its contract dispute over the military use of AI, in violation of free speech protections. Judge Rita Lin, who is overseeing Anthropic's legal challenge to the defence department's move to brand the start-up a supply chain risk, said its actions looked like "an attempt to cripple" the AI lab. "It looks like [the defence department] is punishing Anthropic for trying to bring public scrutiny to this contracting dispute, which would, of course, be a violation of the First Amendment," Lin said in a hearing on Tuesday. Lin's comments suggest she is sceptical of the government's arguments as she considers whether to impose an injunction against the Pentagon. She is expected to issue a decision as soon as this week. She said the department's actions were "troubling" and "don't really seem to be tailored to the stated national security concern". The dispute centres on the use of Anthropic's AI model Claude by the military. It has already been deployed in classified operations, including in the war against Iran and in the capture of Venezuelan leader Nicolás Maduro. But negotiations over its contract with the Pentagon broke down after Anthropic refused to allow its technology to be used for lethal autonomous weapons and mass domestic surveillance. The Trump administration responded by labelling Anthropic a supply chain risk, a national security measure normally used for foreign companies based in adversary nations such as Russia or China. President Donald Trump told US agencies to stop using the start-up, while defence secretary Pete Hegseth posted on X that all military contractors must end commercial partnerships with Anthropic. After questions from Lin about the social media posts, a Pentagon lawyer told the court that Hegseth's post did not constitute a legal action and should not be interpreted broadly. Military contractors could still use Anthropic for work unrelated to the defence department, he added. Anthropic argued that the fracas has led to almost a month of "profound uncertainty" for its commercial partners as well as "irreparable and mounting" damages. A broad application of the ban might have cut Anthropic off from vital data centre infrastructure provided by companies such as Amazon and Microsoft and cut deeply into its revenues. Anthropic estimated that even a narrow view of the order could still mean hundreds of millions of dollars in annual revenue were at risk. The company has claimed that the designation violates the First Amendment of the US constitution and is retaliation for refusing to comply with the department's demands to loosen safeguards on its technology.
[6]
US judge to weigh Anthropic's bid to undo Pentagon blacklisting
March 24 (Reuters) - A U.S. judge is set to hear oral arguments on Tuesday in a lawsuit by Anthropic seeking to block the Pentagon's blacklisting of the artificial intelligence lab over its refusal to lift certain restrictions on its Claude AI model. Anthropic's lawsuit in California federal court alleges that Defense Secretary Pete Hegseth overstepped his authority when he designated Anthropic a national security supply chain risk. The government can apply that label to companies that expose military systems to potential infiltration or sabotage by adversaries. Hegseth's unprecedented move, which followed Anthropic's refusal to allow the military to use Claude for U.S. surveillance or autonomous weapons, blocks Anthropic from certain military contracts. It could cost the company billions of dollars this year in lost business and reputational harm, Anthropic executives said on March 9. The company says AI models are not reliable enough to be safely used in autonomous weapons and that it opposes domestic surveillance as a violation of rights. ANTHROPIC DESIGNATION FIRST FOR U.S. COMPANY U.S. District Judge Rita Lin in San Francisco, an appointee of former Democratic President Joe Biden, is set to hold a hearing at 1:30 p.m. PT (2030 GMT) over Anthropic's request for an initial order blocking the designation while the case plays out. Anthropic's designation was the first time a U.S. company has been publicly designated a supply chain risk under an obscure government-procurement statute aimed at protecting military systems from foreign sabotage. In its March 9 lawsuit, Anthropic alleged the government violated its right to free speech under the First Amendment of the Constitution by retaliating against its views on AI safety. The company said it was not given a chance to dispute the designation, in violation of its Fifth Amendment right to due process. The lawsuit says the decision was unlawful, unsupported by facts and inconsistent with the military's past praise of Claude. The Justice Department countered that Anthropic's refusal to lift the restrictions could cause uncertainty in the Pentagon over how it could use Claude and risk disabling military systems during operations, according to a court filing. The government said the designation stemmed from Anthropic's refusal to accept contractual terms, not its views on AI safety. Anthropic has a second lawsuit pending in Washington, D.C., over a separate Pentagon supply chain risk designation that could lead to its exclusion from civilian government contracts. Reporting by Jack Queen in New York; Editing by Noeleen Walder, Rod Nickel
[7]
Pentagon ban of Anthropic faces judge; Claude AI maker seeks injunction
If the preliminary injunction is awarded, the AI startup will be able to continue doing business with government contractors and federal agencies as its lawsuit against the Trump administration plays out in court. Without the injunction, the company has said, it could lose billions of dollars in business. The hearing on Anthropic's request, which will be conducted by U.S. District Judge Rita Lin, is set to begin at 4:30 p.m. ET. The hearing can be viewed via Zoom. Earlier in March, the Department of Defense designated Anthropic a so-called supply chain risk, meaning that use of the company's technology purportedly threatens U.S. national security. It was the first time an American company had been hit with that designation. The label, if allowed to continue, will require defense contractors, including Amazon, Microsoft, and Palantir, to certify that they do not use Claude in their work with the military. Palantir is continuing to use Claude in its work with the department as the legal battle plays out, CEO Alex Karp told CNBC on March 12. Anthropic's model is also being used in the war with Iran. Anthropic has argued that there is no basis to consider the company a supply chain risk. The company also said it is being unfairly retaliated against because it demanded that the DOD not use Claude for fully autonomous weapons or mass surveillance of Americans. The Pentagon insists it does not use the AI models for such purposes. Lin could issue a ruling from the bench about Anthropic's motion on Tuesday, or she could deliver a written ruling later.
[8]
Anthropic and Pentagon head to court as AI firm seeks end to 'stigmatizing' supply chain risk label
SAN FRANCISCO (AP) -- Artificial intelligence company Anthropic is asking a federal judge on Tuesday to temporarily halt the Pentagon's "unprecedented and stigmatizing" designation of the company as a supply chain risk. A hearing scheduled for Tuesday in a California federal court marks a critical step in the feud between Anthropic and the Trump administration over how the company's AI technology could be used in war. Anthropic sued earlier this month to stop the Trump administration from enforcing what the company calls an "unlawful campaign of retaliation" over its refusal to allow unrestricted military use of its technology. The company is asking U.S. District Judge Rita Lin for an emergency order that would temporarily reverse the Pentagon's decision to designate the AI company a "supply chain risk." Anthropic also seeks to undo President Donald Trump's order directing all federal employees, not just those in the military, to stop using its AI chatbot Claude. Lin is presiding over the case in federal court in San Francisco, where Anthropic is headquartered. The AI firm has also filed a separate and more narrow case in the federal appeals court in Washington, D.C. Lin sent both sides a number of questions she wants them to answer at a Tuesday hearing, including about discrepancies between Defense Secretary Pete Hegseth's formal directive declaring Anthropic a potential threat to national security, and what he posted about it on social media.
[9]
Anthropic showdown fractures Donald Trump's pact with Silicon Valley
The Pentagon's attack on Anthropic over the military use of AI has fractured the fragile truce that has reigned between Silicon Valley and Donald Trump's administration for much of the president's second term. The dissent began with Dean Ball, a former Trump official who last year wrote much of the administration's AI Action Plan. He excoriated the decision to brand Anthropic a supply chain risk, calling it "by a profoundly wide margin the most damaging policy move I have ever seen". Since then, Microsoft and trade groups representing Apple, Meta, OpenAI, Amazon and Google have written to the president or signed punchy legal briefs urging US courts to side with Anthropic after the start-up refused to give the American military unconstrained access to its models. They have been supported by an array of lobbyists and think-tanks, previously supportive of many of Trump's tech policies. "A lot of the tech industry is waking up and realising we have to draw a line in the sand here, before it affects the rest of us," said Alec Stapp, co-founder of the Institute for Progress, a pro-innovation think-tank in Washington, which co-signed one of the briefs. Big Tech's compact with the Trump administration had been based on an expectation that the president would champion American AI and clear the path for the industry to advance. But the Pentagon's punishment of Anthropic -- branding the $380bn start-up a "supply chain risk" akin to Chinese or Russian groups and pressuring its business partners to cut their ties -- has shattered that tacit agreement. Until recent weeks, Big Tech giants mostly bit their tongues as Trump upended their supply chains with tariffs, interfered with crucial work visas, tried to break them up with antitrust litigation and flip-flopped on export rules for AI chips. Groups such as Google and Meta donated to his inauguration fund or ballroom project, and their bosses flocked to Mar-a-Lago and the White House. Silicon Valley's confidence in speaking out has been bolstered by the apparent resilience of Anthropic's business despite its spat with the Pentagon. The company's chief financial officer, Krishna Rao, has warned the clash could weigh on revenue growth. But Anthropic's annualised revenues -- a projection based on the past four weeks -- shot from $9bn at the end of 2025 to $19bn earlier this month, according to an investor, and are continuing on the same trajectory. "Momentum has not been affected by the fight with the Pentagon. It's not impacted the business," the investor said. This pace would put the start-up within touching distance of arch-rival OpenAI, which hit $25bn last month. Anthropic declined to comment on its revenue growth. An executive at a rival lab noted that customers and clients had rewarded Anthropic for taking what has been seen as a stand on safety and ethics -- even if it is the only AI lab that has so far been used in US military operations. Even groups supportive of much of Trump's tech policy have come out to back Anthropic. The Foundation for American Innovation, a think-tank close to Silicon Valley's "tech right", helped organise a legal brief in the start-up's defence. It was the first such brief opposing the administration from a group whose alumni, including Ball, helped staff Trump's government and push his agenda in support of American AI, particularly in competition with China.
Trump has toured the world sealing blockbuster AI deals with tech executives in tow, fought to prevent state-level regulation of the industry and promised to fast-track vast data centre construction. "It is very hard to imagine [AI] technology scaling, as a business, as an industry, even as a scientific endeavour, if ultimately the power of a state can be used to . . . 'murder' a company," said Tim Hwang, FAI's general counsel. Some of Anthropic's venture capital investors have also been interceding on its behalf during the Pentagon negotiations, trying to broker a deal, according to multiple people with knowledge of the talks. The broad designation against Anthropic was "causing immediate and substantial harm to the technology industry," a slew of groups representing the sector wrote to a federal judge this week. "Hundreds if not thousands of companies are trying to parse the meaning of social media posts, inconsistent and shifting Administration statements, and vague directions," the tech associations wrote. Employees from OpenAI and Google, including DeepMind's chief scientist Jeff Dean, joined the chorus of condemnation, claiming that the Pentagon's move would "chill open deliberation in our field about the risks and benefits of today's AI systems". OpenAI, whose co-founder Greg Brockman was the single largest donor to the Trump-aligned Super Pac MAGA Inc last year, is also a member of business group TechNet, which protested the Anthropic decision. One person who signed a brief supporting Anthropic said their broader strategy was to get allies within the Trump administration to rein in defence secretary Pete Hegseth. The Pentagon, the person argued, could have simply cancelled Anthropic's contract, but instead went too far in trying to stop any company that does business with the US military from conducting "any commercial activity" with the start-up. The Institute for Progress's Stapp said there was still "a lot of disagreement" within tech circles about who should control how AI is used for things such as autonomous weapons and domestic surveillance. "But what you see a lot of agreement on is that [the retaliation against] Anthropic was excessive and uncalled for." The showdown has prompted even Anthropic's fiercest rival, OpenAI, to offer support. "We do not think Anthropic should be designated as a supply chain risk and we've made our position on this clear to the Department of War," OpenAI said last month. Alex Karp, the head of defence contractor Palantir, through which Anthropic's model is being used by the defence department, has said he believes the technology should never be used for domestic surveillance. Some prominent tech figures have taken the administration's side. Palmer Luckey, the founder of defence tech start-up Anduril, which has won several large contracts from the administration, last week said the Pentagon should have been even tougher. Investor Keith Rabois, whose venture capital firm Khosla Ventures was among the first to invest in OpenAI, called its rival Anthropic a "fraud" during the Pentagon negotiations. Others, including partners at Andreessen Horowitz, have criticised Dario Amodei for trying to dictate how a democratically elected government should use AI. But they are in the minority. In February, days before its clash with the Pentagon became public, Anthropic raised $30bn from at least 40 investors, including Microsoft, Nvidia and Silicon Valley behemoths such as Sequoia Capital and Peter Thiel's Founders Fund.
In Washington, representatives of large tech groups have warned of a slippery slope in which the government can suddenly turn on any company. "If you look at US procurement law . . . there is a great deal of process, transparency and business certainty," one person close to tech leaders said. "That's what differentiates the United States from other countries . . . where folks close to the regime get contracts."
[10]
Law expert: Anthropic suit is first major government-AI firm clash
Anthropic goes to federal court in San Francisco on Tuesday (March 24) to challenge the Pentagon's move to label it a "national security supply chain risk," a designation typically reserved for foreign adversaries. Joel Dodge, the director of industrial policy and economic security at Vanderbilt Policy Accelerator, told Reuters that the legal dispute was "the first major clash" in what he sees as a long-term power struggle between governments and AI companies.
[11]
Judge says government's Anthropic ban looks like punishment
A federal judge in San Francisco said on Tuesday the government's ban on Anthropic looked like punishment after the AI company went public with its dispute with the Pentagon over the military's potential uses of its artificial intelligence model, Claude. U.S. District Judge Rita F. Lin made the remark at the outset of a hearing about Anthropic's request for a preliminary injunction in one of its lawsuits against the Pentagon, which has designated the company a supply chain risk, effectively blacklisting it. "It looks like an attempt to cripple Anthropic," Lin said, adding she was concerned that the government might be punishing Anthropic for openly criticizing the government's position. Lin said she expected to make a ruling in the next few days on whether to temporarily pause the government's ban until the court decides on the merits of the case. The hearing in the U.S. District Court for the Northern District of California is the latest development in a spat between one of the leading AI companies and the Trump administration, and it has implications for how the government can use AI more broadly. Anthropic CEO Dario Amodei announced in late February that he would not allow the company's Claude AI model to be used for autonomous weapons, or to surveil American citizens. President Trump subsequently ordered all U.S. government agencies to stop using Anthropic's products. The Pentagon designated Anthropic as a "supply chain risk" earlier this month, citing national security concerns. That designation is normally reserved for entities deemed to be foreign adversaries that could potentially sabotage U.S. interests. Anthropic has filed two federal lawsuits alleging that this designation amounts to illegal retaliation against the company for its stance on AI safety. It argues that the label will cost it both customers and revenue, since it will bar Pentagon contractors from doing business with the company, as well. The lawsuits, filed in the U.S. District Court for the Northern District of California and the federal appeals court in Washington, D.C., allege the Trump administration violated the company's First Amendment right to speech and exceeded the scope of supply chain risk law. In Tuesday's hearing, lawyers for Anthropic said it was apparently the first time such a designation had been made against a U.S. company. Lin said the Pentagon has a right to decide what AI products it wants to use. But she questioned whether the government broke the law when it banned its agencies from using Anthropic, and when Defense Secretary Pete Hegseth announced that anyone seeking business with the Pentagon must cut relations with Anthropic. She said the actions were "troubling" because they did not seem to be tailored to the national security concerns in question, which could be addressed by the Pentagon simply ceasing to use Claude. Instead, she said, it looked like the government was trying to punish Anthropic. But a lawyer for the government argued that its actions were not retaliatory, and were based on Anthropic's disagreement with the government over how its AI model could be used -- not the company's decision to speak out about it. The government also argued that Anthropic is a risk because, theoretically, in the future the company could update Claude in a way that endangers national security.
Anthropic did not respond immediately to an emailed request for comment. A Pentagon spokesperson said that the agency's policy is not to comment on ongoing litigation.
[12]
Anthropic and Pentagon face off in court over ban on company's AI model
Anthropic is facing off against the Department of Defense in a federal court on Tuesday afternoon, as the artificial intelligence company seeks a temporary pause on the government's decision to bar the US military and any contractors from using its technology. The two sides have been locked in an escalating feud over Anthropic's refusal to allow its Claude AI chatbot to be used for domestic mass surveillance and fully autonomous lethal weapons. Donald Trump has ordered all US government agencies to stop using Anthropic's tools, which the company is also contesting. Representatives for the AI firm and the government are appearing in a northern California district court, where Judge Rita Lin is presiding over the hearing on a temporary injunction. The hearing is one of the first steps in Anthropic's lawsuit against the defense department, which it filed earlier this month after the secretary of defense, Pete Hegseth, declared the company a supply chain risk - a designation that Anthropic alleges will cause irreparable harm and cost hundreds of millions or more in revenue. Anthropic's suit and Judge Lin's decision will have widespread ramifications for both the company and the US government, which has come to extensively rely on Claude over the past year for a variety of uses, including in its military operations against Iran. The standoff between the defense department and Anthropic, especially the former's move to categorize a US company as a supply chain risk for the first time ever, has also created significant tension in Silicon Valley's close relationship with the Trump administration. Anthropic declined to comment on the lawsuit. The defense department has previously stated that as a matter of policy it does not comment on litigation. Anthropic alleges that the government violated the company's first amendment rights by designating it a supply chain risk, arguing that the decision was an attempt at punishing the company for displeasing the president and for not complying with the defense department's request to loosen safety guardrails on Claude. "These actions are unprecedented and unlawful. The constitution does not allow the government to wield its enormous power to punish a company for its protected speech," Anthropic stated in its California suit. Anthropic has argued that its AI model is not reliable enough to be used for the purposes of mass domestic surveillance or fully automated lethal weapons, while its CEO, Dario Amodei, has expressed concerns about AI being used in authoritarian ways. US defense officials and Trump have meanwhile framed the company's actions as a politically biased betrayal of the country, with Trump calling it "A RADICAL LEFT, WOKE COMPANY" in a post on his social media platform, Truth Social. Despite the defense department striking deals in recent weeks with rival firms OpenAI and Elon Musk's xAI to allow them to operate in a classified environment, disentangling federal agencies from their use of Claude is an enormous undertaking that would take months of disruption to complete. The company's technology is deeply intertwined with government operations, including in the military, where it is reportedly being used to select and analyze targets of missile strikes in Iran.
[13]
Judge questions Pentagon's "troubling" Anthropic actions
Why it matters: The Trump administration is looking to remove Claude from federal agencies and prevent companies that do business with the Pentagon from working with the AI lab. * Agencies have started to do so, and Anthropic says some companies are rethinking contracts. What they're saying: "I don't know if it's murder, but it looks like an attempt to cripple Anthropic," said U.S. District Judge Rita Lin. * Lin referred to three Trump administration actions: President Trump's ban on Anthropic, Defense Secretary Pete Hegseth's requirement that Pentagon contractors cut commercial ties with the company, and its designation as a supply chain risk. * "What is troubling to me about these three actions is that they don't really seem to be tailored to the stated national security concern. If the worry is about the integrity of the operational chain of command, [the Pentagon] could just stop using Claude," she said. Driving the news: Anthropic is seeking to pause the designation, block its enforcement by federal agencies, and roll back actions already taken. * Anthropic argues that the court should restore the status quo from before the designation occurred, while the case -- including First Amendment and procurement law claims -- is litigated. * The company says that would help ease ongoing reputational damage and provide commercial partners more certainty. Friction point: Essentially, the company argues, there should be a return to the status quo as of Feb. 26, before Trump and Hegseth said on social media that Anthropic would "immediately" be blacklisted. * The Pentagon's lawyer argued that the social media posts are not legally binding. * The judge said she found that argument "pretty surprising ... obviously the statement is front and center in this lawsuit." The Pentagon argues in court filings that Anthropic has asked for an "operational veto" of the Pentagon's decision-making and that Anthropic has full control over Claude's availability and performance. * Department officials say that would be inappropriate and dangerous in sensitive operations. * Anthropic denies it has operational control over the model once deployed in classified settings. What's next: Anthropic asked for a decision by March 26, but the court is not bound by that date.
[14]
Trump administration says Anthropic refusal was 'not protected speech' in US court
* In a new filing, the Trump administration backs Hegseth's designation
* Pentagon defends blacklisting Anthropic as lawful national security move
* Company's lawsuit claims designation violates free speech and due process
* Court battle looms as experts say Anthropic may have a strong case
The Trump administration said the Pentagon did not violate Anthropic's speech protections under the US Constitution's First Amendment when it blacklisted the AI company earlier this year. In a court filing submitted earlier this week, the administration essentially backed Defense Secretary Pete Hegseth's designation of Anthropic as a national security supply chain risk, and deemed the blacklisting justified and lawful, Reuters reported. In the last couple of months, Anthropic, the company behind the famed Claude artificial intelligence models, was in negotiations with the Pentagon over lucrative deals that would see Claude and other tools integrated into different US Department of Defense (DOD) projects.
Responding with a lawsuit
The negotiations allegedly broke down after Anthropic declined to remove the guardrails that were set up to protect the technology from being used for autonomous weapons or domestic surveillance. Soon after, the company was deemed a national security supply chain risk, to which Anthropic responded with a lawsuit. In the lawsuit filed on March 9, the AI company said the "unprecedented and unlawful" designation violated its free speech and due process rights. At the same time, it said the designation also broke federal law that requires agencies to follow certain procedures when making these kinds of decisions. "It was only when Anthropic refused to release the restrictions on the use of its products -- which refusal is conduct, not protected speech -- that the President directed all federal agencies to terminate their business relationships with Anthropic," the filing states. "No one has purported to restrict Anthropic's expressive activity," it adds. Anthropic asked the California federal court to block the Pentagon's decision until a ruling is made. Reuters says that "some legal experts" believe the company has a "strong case". The company responded to the filing saying "seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners." Via Reuters
[15]
Judge says Pentagon's Anthropic claims look like retaliation
On Tuesday, a federal judge signaled that the Trump administration's claim that Anthropic is a "supply chain risk" may not hold up in court, and may instead be seen as punishment for going public with its Pentagon dispute. The case erupted into public view late last month, after Anthropic apparently refused to let the Pentagon use its Claude AI without restrictions -- specifically, without contractual guardrails keeping the government from using Claude for warrantless mass domestic surveillance of Americans and for deployment in fully autonomous weapons systems. In response, the Pentagon moved to designate Anthropic a "supply chain risk," a label previously reserved for foreign adversaries like Chinese telecom firms -- even as Pentagon officials continued to negotiate the contract with the company. Pentagon CTO Emil Michael described the two sides as "very close" to reaching an agreement while the designation was being finalized and President Donald Trump and other officials were attacking the company on social media. Anthropic then sued, arguing the designation was unconstitutional retaliation for publicly disclosing the dispute. And on Tuesday, U.S. District Judge Rita Lin appeared to agree -- at least preliminarily. Lin's language echoes warnings legal experts have also raised. "If you give the government a license to kill companies, then companies are always going to be under threat of execution, and therefore they will always feel like they need to do what the government says," Matthew Seligman, founder of Grayhawk Law and a former Harvard Law lecturer, recently told Quartz. The worry is about that kind of power, and this administration's use of that kind of power. "If the [Department of Defense] walks up to a company and says, 'We want to use your technology, and if you don't let us, we're going to kill your company' -- that's a very unsettling place to be." The implications for investors are just as serious. "If you're an investor, and you know that any one of your portfolio companies could be killed at any time if they don't go along with whatever request the Department of Defense makes of them, that introduces a huge amount of risk," Seligman said -- particularly if you believe a current or future administration won't use that power with restraint. An unusually broad coalition of companies and organizations has filed amicus briefs siding with Anthropic, further underlining a widespread view that the government should not have the power to summarily execute companies. "Nearly all support Anthropic's position seeking an injunction of the supply-chain risk designation," Fortune reported, including Microsoft $MSFT, retired military personnel, and a host of engineers and developers at OpenAI and Google $GOOGL. Yet the Pentagon continues to use Claude to this day, including in its war on Iran. The judge has not yet issued an official ruling. Anthropic has requested a decision by March 26th, but the court isn't bound by that date. Still, the hearing landed well for Anthropic, whose lawyers argued the designation was "inconceivable" as a good-faith security finding. Deputy Assistant Attorney General Eric Hamilton acknowledged in court that the Defense Department failed to follow required protocol for the supply-chain designation -- including briefing Congress and exploring less restrictive alternatives.
[16]
'Attempted corporate murder' -- Judge calls on Anthropic and Department of War to explain dispute over supply chain risk | Fortune
The case -- which involves a historic first in that the Pentagon, renamed the Department of War (DOW), labeled a U.S.-based business as a supply-chain risk to national security -- is rooted in a contract negotiation that escalated quickly. The DOW wanted to add a blanket "all lawful use" clause to its contracts with the AI firm so the military could use Anthropic's Claude tool for any legal purpose. Anthropic balked at the military using Claude for lethal autonomous warfare and mass surveillance of Americans. Anthropic, led by founder Dario Amodei, said it hasn't thoroughly tested those uses and doesn't believe they work safely. The DOW claimed those guardrails were unacceptable and that military commanders need latitude to make determinations on missions. On Feb. 27, President Trump posted on Truth Social directing "EVERY" federal agency to "IMMEDIATELY CEASE" all use of Anthropic's tools. That same day in a post on X, Secretary of War Pete Hegseth labeled Anthropic a "supply-chain risk" and said "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." The risk label is usually reserved for nation states, foreign adversaries, and other threats. Anthropic followed with a lawsuit on March 9, alleging the government "retaliated against it" for expressing its views on safety guardrails and had violated the First Amendment in doing so. It also claimed the government violated the process laid out in the Administrative Procedure Act and the Fifth Amendment's right to due process. The government said the administration's actions were in response to Anthropic's refusal to implement those terms in the contract during the negotiation and argued free speech wasn't at issue in the case. Deputy Assistant Attorney General Eric Hamilton said the government has unrestricted power to determine which companies it will contract with. Hamilton said Anthropic's conduct had raised concerns that future software updates could be used as a "kill switch" in military operations. District Judge Rita F. Lin was skeptical and in her opening remarks described the case as a "fascinating public policy debate" over Anthropic's position versus the government's military needs, but said her role wasn't to "decide who is right in that debate." Rather, Lin said the real question to be decided by the court was whether the government "violated the law" when it went beyond simply declining to use Anthropic's AI services and finding a more permissive AI vendor to work with. "After Anthropic went public with this contracting dispute, defendants seemed to have a pretty big reaction to that," Lin said. The reactions included banning Anthropic from ever having a government contract -- barring even entities like the National Endowment for the Arts from using it to design a website; Hegseth's directive that anyone who wants to do business with the U.S. military sever their commercial relationship with Anthropic; and designating Anthropic as a supply-chain risk. "What is troubling to me about these reactions is that they don't really seem to be tailored to the stated national security concern," said Lin. If the concern is about chain of command, DOW could just stop using Claude and go on its way, she said. "One of the amicus briefs used the term attempted corporate murder," she added. "I don't know if it's murder, but it looks like an attempt to cripple Anthropic.
And specifically my concern is whether Anthropic is being punished for criticizing the government's contracting position in the press." The amicus, or friend-of-the-court, briefs in the case have drawn a variety of voices including from Microsoft, retired military officers, and engineers and researchers from OpenAI and Google. Nearly all support Anthropic's position seeking an injunction against the supply-chain risk designation. The brief Lin referred to came from investors and the "Freedom Economy Business Association." The brief referred to an X post written by Dean Ball, Trump's former senior policy advisor for AI and emerging tech. "Nvidia, Amazon, Google will have to divest from Anthropic if Hegseth gets his way," Ball wrote. "This is simply attempted corporate murder. I could not possibly recommend investing in American AI to any investor; I could not possibly recommend starting an AI company in the United States." The American Federation of Government Employees, a union of 800,000 federal workers, said in its amicus brief that the Trump administration had a pattern of using national security concerns as a pretext for retaliation against free speech. Microsoft wrote that a ban on Anthropic would hurt its own business, and could chill future defense-industry investment and engagement with AI. The Human Rights and Technology Justice Organization brief didn't take a position on who should win in court, but argued broadly against militarized AI, stating that its use could lead to catastrophic human rights risks.
[17]
'An attempt to cripple Anthropic': US judge questions Anthropic ban
"He's [President Trump] trying to teach the AI industry to fall into line like everybody else," Computer scientist Ben Goertzel tells Euronews. The US government's ban on Anthropic appears punitive, following the company's public dispute with the Pentagon over its refusal to allow unrestricted military use of its Claude AI model. Anthropic made its case before a San Francisco federal court on Tuesday, seeking an injunction against the US government's decision to blacklist it as a national security risk. The District Judge Rita F. Lin said at the outset of the hearing that "it looks like an attempt to cripple Anthropic," adding she was concerned the government could be punishing Anthropic for openly criticising the government's position, US media reported. US President Donald Trump and Defense Secretary Pete Hegseth publicly declared in February that it was cutting ties with the artificial intelligence (AI) company after it refused to allow unrestricted military use of its Claude AI model. The restrictions in dispute include the use of lethal autonomous weapons without human oversight and mass surveillance of Americans. In response, the US government labelled Anthropic a "supply chain risk to national security" and ordered federal agents to stop using Claude. On March 9, Anthropic filed two lawsuits against the government over its designation as a supply chain risk. One is a case for reconsideration of the supply chain risk and the other alleges the Trump administration violated the company's First Amendment right to speech. Lin told the courtroom that the Pentagon has a right to decide on the AI products it uses but she questioned whether the government broke the law by banning agencies from using Anthropic, and when Hegseth announced that those seeking relations with the Pentagon should cut ties with Anthropic, NPR reported. A lawyer for the government said the Pentagon's actions were not retaliatory and based on how Anthropic's AI model could be used and not on the company's decision to go public about the disagreement. NPR also reported that Anthropic could be at risk in the future because it could update its Claude AI model in a way that endangers national security. Euronews Next reached out to Anthropic for comment but did not receive a reply at the time of publication. What the ruling would mean for AI companies Being a supply chain risk usually only applies to foreign companies. "It seems inappropriate to apply that designation here," said Ben Goertzel, a computer scientist and CEO of SingularityNet and The Artificial Superintelligence Alliance. "It means the executive branch can just reinterpret words and laws however they feel like," he told Euronews Next. Goertzel added that if the strongest version of the supply chain risk designation were applied, and it meant Anthropic could not sell software to any company that did any government business, that would be "extremely bad for the company". He said Antropic would survive financially as there is a lot more business outside of government projects and "they would get a lot of boost among the percent of the country that's not a big fan of Donald Trump." But he said the immediate impact of upholding the supply chain risk designation is that it would "disincentivise other companies from standing up to the Trump administration". "He's [President Trump] trying to teach the AI industry to fall into line like everybody else," Goertzel said. 
Judge Lin said she expected to make a ruling in the coming days on whether to temporarily pause the government's ban while the court continues to examine the broader case.
[18]
Judge calls Pentagon's moves against AI firm Anthropic "troubling": "It looks like an attempt to cripple Anthropic"
A judge sharply questioned a lawyer for the federal government on Tuesday over the Pentagon's efforts to cut Anthropic out of its classified systems -- the latest development in a dispute between the company and the Trump administration over AI guardrails. The back-and-forth revolves around Anthropic's push to bar the military from using its AI model Claude to surveil Americans or power fully autonomous weapons. The Trump administration has said it needs the ability to use Claude for "all lawful purposes." When the two sides were unable to come to an agreement, the Pentagon designated Anthropic a "supply chain risk" and moved to stop private companies from using Claude on military contracts, leading Anthropic to sue. Anthropic, which argues the Pentagon's action was an unconstitutional attempt to punish it for speech, is asking the judge to block the supply chain risk designation, as well as President Trump's order for all federal agencies to stop using Anthropic. In a Tuesday afternoon hearing in San Francisco, U.S. District Judge Rita Lin appeared skeptical of the government's actions, calling them "troubling" and saying they "don't really seem to be tailored to the stated national security concern." "If the worry is about the integrity of the operational chain of command, DOW could just stop using Claude," Lin said, using the acronym for the Department of War, the administration's term for the Defense Department. "It looks like defendants went further than that because they were trying to punish Anthropic. One of the amicus briefs used the term 'attempted corporate murder.' I don't know if it's murder, but it looks like an attempt to cripple Anthropic." Early in the hearing, Justice Department attorney Eric Hamilton conceded that the supply chain risk designation does not stop companies that contract with the military from using Anthropic's model on non-military-related work -- a move Anthropic has argued would be illegal. Defense Secretary Pete Hegseth had written on social media last month that "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." Upon questioning from Lin about Hegseth's post, Hamilton confirmed that the Defense Department will not terminate any federal contractors because they have relationships with Anthropic that are separate from their work with the Pentagon. He also said he wasn't aware of a law that gives the department that kind of power. Anthropic's attorney, Michael Mongan, argued that Hegseth's post has still created "profound uncertainty" and harmed the business, noting that it has been viewed millions of times. The law that was used against Anthropic defines a supply chain risk as a "risk that an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert" a national security system. Hamilton said Tuesday the government decided to label Anthropic a supply chain risk because the company's negotiating position and discussions with military officials made the Pentagon unable to trust Anthropic, and sparked concerns about a "risk of future sabotage." He suggested the military is worried about Anthropic trying to "manipulate" its software or install a "kill switch."
Lin questioned that stance, and said the government appears to be saying that a company can be designated a supply chain risk because it is "stubborn" and "asks annoying questions." In Tuesday's hearing, Mongan denied that the company has the ability to change, shut off, surveil or otherwise influence its software once it is approved by the government and deployed for use. He also argued that if Anthropic poses a serious risk, it doesn't make sense that the government appeared open to striking a deal with the company until the very end. "A saboteur is not going to get into a public spat," Mongan said. "They're just going to accept the contractual term proposed by the government and then go and do ... nefarious things." Lin said Tuesday that she plans to rule on the matter in the coming days. The conflict between the government and Anthropic -- which was the only AI firm whose technology was deployed in classified U.S. military systems -- highlights a broader debate over acceptable uses of AI and how extensively the technology should be regulated. Anthropic CEO Dario Amodei has said he wants to work with the military, but he has vowed to stick to two "red lines" banning mass surveillance of Americans and fully autonomous weapons that can carry out strikes without human input. He argues that AI's potential to surveil people is "getting ahead of the law," and said "the reliability is not there yet" for autonomous weapons. "I think we are a good judge of what our models can do reliably and what they cannot do reliably," Amodei said in an interview last month with CBS News. The Pentagon has said it has no interest in using Anthropic's technology for mass surveillance or fully autonomous weapons, and argues those uses are already illegal and banned under existing military policies, respectively. But the military has said its decisions about lawful uses of AI technology shouldn't be up to private companies, and has accused Anthropic of trying to impose its own values onto the government. Pentagon Chief Technology Officer Emil Michael said last month that Amodei has a "God-complex" and "wants nothing more than to try to personally control the US Military." On Tuesday, Lin called that dispute a "fascinating public policy debate" but not the focus of the case, noting that both sides agree the Pentagon can choose not to use Anthropic. Instead, she said, she plans to focus on whether the government's moves to label Anthropic a supply chain risk are legal.
[19]
US govt says Anthropic AI an 'unacceptable risk' to military
San Francisco (United States) (AFP) - Artificial intelligence company Anthropic posed an "unacceptable risk" to military supply chains, the US government insisted Tuesday, as it defends against the tech firm's challenge to its designation as dangerous. Anthropic's Claude AI model has been in the spotlight in recent weeks both for its alleged use in identifying targets for US bombing in Iran and for the company's refusal to allow its systems to be used to power mass surveillance in the United States or lethal, fully autonomous weapons systems. Justifying its decision to cut ties with Anthropic in response to a legal complaint from the firm, the Pentagon -- dubbed the Department of War (DoW) by the Trump administration -- said it "became concerned that allowing Anthropic continued access to DoW's technical and operational warfighting infrastructure would introduce unacceptable risk into DoW supply chains," in a court document seen by AFP. "AI systems are acutely vulnerable to manipulation," the government added in the filing to a California federal court. "Anthropic could attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations, if Anthropic -- in its discretion -- feels that its corporate 'red lines' are being crossed," it said. Anthropic's refusal to agree that its AI tech could be deployed by the military for "any lawful use" therefore posed an "unacceptable risk to national security," the document read. "Anthropic's behavior more generally caused the Department to question whether Anthropic represented a trusted partner," the government said. Classification as a "supply chain risk," which Anthropic has challenged in a case against the Pentagon and other arms of the federal government, in theory means that all government suppliers would be barred from doing business with the company. The designation is typically reserved for organizations from foreign adversary countries, such as Chinese tech giant Huawei. Other major American tech firms such as Microsoft, which itself both uses Anthropic's Claude model and supplies the US military, have weighed in on the AI company's side. "This is not the time to put at risk the very AI ecosystem that the administration has helped to champion," Microsoft said in an amicus brief filed with the court last week.
[20]
The Secretary of War didn't really mean it, contends US government lawyer as Anthropic gets its first day in court over Trump 2.0's risk to the nation designation. So what did he mean?
So how did Anthropic's first day in court facing off against the US Department of War (DoW) go? Well, we won't find out for a few days yet, but presiding District Judge Rita Lin of the Northern District of California was very clear about what she sees as being at stake here - and made clear that she doesn't see her role as being "to decide who is right" in the "fascinating public policy debate" around ethical AI use. But there is still a lot riding on the outcome of her decision on whether to grant Anthropic an injunction to block the blanket ban put in place by Trump 2.0 on use of its tech, a ban which the supplier claims would cost it "multiple billions of dollars" and is an over-reach of government authority. Lin noted: One of the amicus briefs [filed by third parties in support of Anthropic] used the term 'attempted corporate murder'. I don't know if it's murder, but it looks like an attempt to cripple Anthropic. And specifically my concern is whether Anthropic is being punished for criticizing the government's contracting position in the press. She added: After Anthropic went public with this contracting dispute, defendants seemed to have a pretty big reaction to that...What is troubling to me about these reactions is that they don't really seem to be tailored to the stated national security concern. In court, Anthropic's lead counsel was Michael Mongan from law firm WilmerHale, while Deputy Assistant Attorney General Eric Hamilton appeared for the defence on behalf of the government. In what was a neat and concise hearing, Lin ran through a series of questions she wanted answers to, beginning with a tweet sent out by Secretary of War Pete Hegseth on 27 February that announced: Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic...This decision is final. Now, several weeks later, she wanted to know, does the DoW contend that this is a legitimate legal order? Hamilton performed a retreat on that one, admitting that Hegseth's comment had no legal authority and is not an agency action. It was just a social media post announcing that an action was being taken and that's how everyone ought to have understood it, he insisted. So when the Secretary of War said, without being prompted, that this was his final decision and was what is going to happen, this was not in fact a "correct statement of what the DoW is about to do", clarified Lin. No, it isn't, admitted Hamilton. That obviously begs the immediate question, which Lin was quick to ask, of how Anthropic or anyone else is supposed to know that, even though Hegseth said it in public, he didn't really mean it? The DoW line is that the comment was followed up by a letter several days later making clear what the official position was, while further information was provided in court filings, although, of course, timing-wise that only happened after Anthropic began legal action. So, does this mean that the US Government now concedes that Hegseth exceeded his authority with that social post, asked Lin: Why did Secretary Hegseth say this if it has no legal effect and he didn't intend it to happen? For the defence, Hamilton said he didn't know. President Donald Trump had weighed into the row between Anthropic and the DoW with a blanket ban on the US Government doing work with any third party using Anthropic tech, a sweeping statement that left a lot of questions in its wake about how far this applied up and down the supply chain.
Lin wanted her own clarification here: Suppose I'm a contractor who supplies toilet paper to DoW. I'm not going to be terminated for using Anthropic? The DoW still has concerns about sub-contractors using Anthropic tech for DoW work, but not non-DoW work, at this point, replied Hamilton. That being so, what does that mean for Anthropic's claim that the US Government's actions risk costing it billions as customers and partners hesitate to work with it? For the plaintiffs, Mongan said the row had caused "immediate harm to economic and reputational interests". While conceding that the DoW had indeed sent a letter to Anthropic to set out its actions in more detail, he pointed out that this did not arrive until 20 days after Hegseth's now deemed invalid public pronouncement. He also argued that one reason why Anthropic needs "injunctive relief" and "authoritative clarity" from the court is that Hegseth has not publicly withdrawn his comment. But when asked by Judge Lin what there is to stop the DoW from changing its view on this, Hamilton replied that DoW isn't going "to change its interpretation of the social media post". Attention also turned to whether the DoW's actions have been excessively punitive. Lin noted that there's a difference between the DoW stopping working with Anthropic and deeming it overnight to be a supply chain risk to the security of the nation, using language involving sabotage and adversaries usually reserved for terrorist organizations and hostile foreign powers. This is all hypothetical, admitted Hamilton, conceding that the DoW position here is based on concerns about what might happen at some future date rather than anything that actually has occurred. But, for example, what might happen when the tech needs upgrades, he wanted to know: Sabotage could be introduced through the updates. So, the implication here is that the DoW has acted as it has on a hunch? Or, as Lin returned to one of her most basic questions, is this really all about the DoW just being annoyed because Anthropic has been "stubborn" in not acceding to government demands to ditch its ethical use red lines and "asks annoying questions"? Acting stubbornly is not enough to justify such a risk designation, admitted Hamilton, but he insisted Anthropic is doing more than that by raising concerns about how DoW uses its technology. It all comes down to trust, he argued, and Anthropic has destroyed that trust and made it impossible to be regarded as "reliable and trustworthy partners" by raising questions around lawful uses of its technology: If Anthropic is pushing back now, what's going to happen in the future, when our warfighters need it most? That's unacceptable risk. The imposition of the sweeping supply chain risk designation was done for ease and efficiency, he went on, providing "one tool to address a risk" that could be used across agencies, rather than having to go through individual contracts and designate risk via what Hamilton called "a patchwork approach" that is "not a preferred option in the context of a national security risk". To which claim, Mongan hit back: This is a supply chain risk designation in search of a rationale. Whether Lin sees DoW's behavior as a 'sledgehammer to crack a nut' approach or a valid strategy seems likely to be at the kernel of her ruling later in the week. For now, we can only wait to see what happens next.
[21]
The next AI fight is First Amendment rights for the chatbots
The fight between Anthropic and the Pentagon looks at first like a fight about AI safety -- a principled tech company drawing ethical lines in the sand. It is at least partly that. But it's also a First Amendment case. It's a test of whether the executive branch can summarily execute its vendors for "noncompliance." It's an investor risk story for everyone who put hundreds of billions into AI companies on the assumption that the U.S. government would be a customer, not a corporate murderer. And it's a dress rehearsal for every painful question that humanity hasn't figured out how to answer about the most powerful information technology it has ever built. What's the legal status of AI? Who's in charge of it? When -- not if -- something goes wrong, who's responsible? In other words, this fight is even bigger than it looks. And it's even stranger than it seems. The Pentagon followed through in early March, effectively blacklisting Anthropic from government contracts. Anthropic sued, warning the designation could cost it billions. A hearing on whether to grant Anthropic temporary relief is scheduled for Tuesday. A more specific triggering incident has since been widely reported: After the January raid that captured the Venezuelan leader Nicolás Maduro, an Anthropic executive contacted Palantir -- through which Claude was integrated into Pentagon systems -- asking how its AI had been used. Palantir flagged the inquiry to Pentagon officials, who read it as disapproval of a classified operation, kicking off the failed negotiations that preceded the rift. Pentagon CTO Emil Michael confirmed many of the details to The Wall Street Journal. "There is no chance," he said. "There's no partnership that can be had." What Michael didn't say publicly was revealed in a court filing last Friday: Michael emailed Amodei on March 4 -- the day after the Pentagon finalized the supply-chain designation -- to say the two sides were "very close" on the exact two issues the government now cites as evidence that Anthropic poses a national security threat. The email has now become evidence, suggesting if not proving that the supply-chain threat designation was a bargaining chip, rather than a straightforward flagging of risk. If the two sides were "very close" even as the designation was being finalized, how much of a security risk could the Pentagon actually regard Anthropic to be? The core of that argument rests on what kind of machine an AI model actually is, said Matthew Seligman, founder of Grayhawk Law, who's taught at Harvard Law and was a fellow at Stanford Law School's Constitutional Law Center. "What Anthropic is arguing, in essence, is that they are different from a traditional defense contractor because what they are offering the government is a speech machine -- one whose outputs are information, not explosions," Seligman told Quartz. "The big question is whether Anthropic's technology is more appropriately analogized, for First Amendment purposes, to Lockheed Martin -- or to a defense analyst. And I think that really highlights the fact that these AI models don't fit comfortably into either of the traditional legal categories. "The law is going to have to develop an understanding of how to analyze them," he added, "as has happened many times over the centuries when the law has had to adapt to a new technology that made its old categories obsolete."
"If you give the government a license to kill companies, then companies are always going to be under threat of execution, and therefore they will always feel like they need to do what the government says," Seligman said. The worry is about that kind of power, and this administration's use of that kind of power. "If the [Department of Defense] walks up to a company and says, 'We want to use your technology, and if you don't let us, we're going to kill your company' -- that's a very unsettling place to be." The implications for investors are just as serious. "If you're an investor, and you know that any one of your portfolio companies could be killed at any time if they don't go along with whatever request the Department of Defense makes of them, that introduces a huge amount of risk," Seligman said -- particularly if you believe a current or future administration won't use that power with restraint. This context is poorly understood because it is genuinely dense and incredibly detailed, a web of First Amendment jurisprudence, interpretation, precedent, and case law. But experts point to a common theme: the potential for constitutional protections to emerge that shield the AI industry from regulation. Stephenie Brown, a lawyer who teaches business law and AI at Virginia Commonwealth University, put it plainly, telling Quartz that First Amendment protection is "the gold standard" for avoiding regulation. "It doesn't just shield against one lawsuit," she said. It limits large categories of potential regulation wholesale. State-level oversight becomes constitutionally fraught. Federal rules must clear a much higher bar. The protection, if it's granted, may be vast. While it may be dystopian to consider that AI could be granted some constitutional rights, it's also less of a reach than it might seem. In an interview, Mary Ann Franks, a professor in intellectual property, technology, and civil rights law at the George Washington University, traced the corporate capture of First Amendment doctrine, outlining how interpretation has been expanded to include some rights for corporations. This is, in part, the result of a strategic pivot on the part of Republicans, Franks said. "In the 80s and 90s, Republicans were like, 'Wait a minute, this is actually great for us -- because there's a version of free speech that isn't about letting the dirty hippies talk. It's about letting tobacco companies talk.'" The First Amendment, once associated with labor organizers and civil rights activists, became a deregulatory instrument. Tech companies, Franks said, are simply the latest beneficiaries. "It's a really sexy, catchy thing to do when you can disguise your profit motive or your selfishness as, 'Oh, no, we're respecting a very transcendental principle of freedom of expression.' It really works on people." The result, she argues, is a First Amendment now flipped, inverted in crucial ways. "If the First Amendment was supposed to protect the people from the government, all it is doing right now is protecting the government from the people," Franks said. The Trump administration has made its position clear, arguing that AI development should move fast, and that the federal government -- not state legislatures, not courts, and certainly not safety-minded contractors -- should set the terms. 
In December, Trump issued an executive order directing Attorney General Pam Bondi to establish a task force to legally challenge state AI laws deemed too restrictive, and instructing the Commerce Department to withhold federal funds from states that don't comply. The order explicitly moves away from the Biden administration's focus on safety and equity, instead prioritizing rapid development. Virginia, which stands to lose almost $1.5 billion in broadband funding, is already drafting its AI legislation with one eye on Washington. But it's just one state among many weighing the loss of federal funds. The irony? While the Trump administration is aggressively blocking formal AI regulation, it is simultaneously demonstrating, through the Anthropic case, exactly why such regulation is necessary. What Trump is pursuing isn't a hands-off approach to AI -- it's control of AI at the executive level, unchecked by either Congress or the courts. Seligman, the former Harvard law lecturer, described the Anthropic fight as an extension of the larger executive power grab. "This is by far the most aggressive administration in certainly recent and probably all of American history with respect to executive power," he said. "The fact that there are more challenges to executive action is a reflection of the fact that there's been more aggressive executive action." The companies that have sued, he noted, have largely prevailed. Meanwhile, even inside the AI industry, the absence of a coherent regulatory framework is widely recognized as a problem. Nick Tiger, associate general counsel at the AI company Pearl, argued that regulation is needed, and that the infrastructure for it already exists -- it just hasn't yet been marshalled. "You've got organizations already in place that can do this," he said. "The Consumer Financial Protection Bureau could get involved and give guidance and rules about what is a misleading AI customer experience, when you have to disclose. The FTC regulates all these commercial transactions." Tiger stopped short of calling for a new agency, though. "I don't necessarily think we need to create a Department of AI or something like that," he said. The major obstacle, Tiger argued, is a knowledge gap on both sides of the table. "There are regulators who don't understand the advances in technology, and there's a lot of misinformation when they've got constituents in their ear telling them things that aren't true. But then on the other side, AI engineers don't understand all of the public policy nuances." The result, he predicted, will necessarily be some kind of draw. "It's just going to be a push and pull until we eventually land somewhere in the middle." Right now, procurement is playing the role one might expect regulators to play, if not acting as actual regulators might act. The Defense Department remains the federal government's largest technology buyer, and its contract requirements effectively become industry standards -- spreading well beyond military systems into the broader commercial market. Which is why, following the "supply chain risk" designation, at least 100 of Anthropic's customers, from pharma to fintech, have already moved to pause or cancel their contracts. In other words, the Anthropic-Pentagon dispute, as consequential as it is, will not clear up all or even many of the First Amendment questions that AI raises, including the most existential ones.
"There's an understandable urge to have these big-picture questions resolved early and definitively by a single court case," he said. "But that's not going to happen."
[22]
Anthropic v. US Department of War: What to know about AI court battle
Anthropic heads to a San Francisco federal court on Tuesday to seek an injunction against the US government's decision to blacklist it as a national security risk. The standoff between Anthropic and the US government is expected to come to a head on Tuesday, when the artificial intelligence company will argue its case for a preliminary injunction against the Department of War and the White House in federal court. The move comes after US President Donald Trump and Defense Secretary Pete Hegseth publicly declared in February that they were cutting ties with the artificial intelligence (AI) company after it refused to allow unrestricted military use of its Claude AI model. The restrictions in dispute include the use of lethal autonomous weapons without human oversight and mass surveillance of Americans. In response, the US government labelled Anthropic a "supply chain risk to national security" and ordered federal agencies to stop using Claude. The case illustrates the ethical, business, and legal crossover of advanced AI and raises questions over who should determine the limits of the technology: the tech companies guided by internal safety principles or the public authorities that act in the name of national security and geopolitical interests? Here is everything to know about the hearing. The hearing will take place before US District Judge Rita Lin in San Francisco. The judge fast-tracked the hearing from an April 3 date. Anthropic is fighting against the designation of being a "supply chain risk," with the company's co-founder and CEO Dario Amodei saying that Anthropic has "no choice but to challenge it in court". On March 9, Anthropic filed two lawsuits against the government over its designation. One seeks reconsideration of the Pentagon's designation under the existing statute. Anthropic argues the blacklisting is "unprecedented and unlawful," as historically it had applied only to foreign adversaries such as Huawei and cannot be legally weaponised against a domestic company over a policy disagreement. The other lawsuit argues that the blacklisting raises concerns over the First Amendment, meaning the right of free speech and the right to protest. In 2025, Anthropic signed a $200 million contract with the Pentagon to deploy its technology within classified systems. In negotiations after the contract, Anthropic said it did not want its AI systems to be used for mass surveillance and that its technology was not ready to be used for weapons firing decisions. In a March 17 court filing, the Department of War said that it was concerned that Anthropic might "attempt to disable its technology or preemptively alter the behaviour of its model" before or during "warfighting operations" if the company "feels that its corporate 'red lines' are being crossed". Anthropic argued that this concern did not come up during negotiations and only appeared in the government's court filings. Euronews Next has reached out to Anthropic but did not receive a reply at the time of publication. Judge Lin will hear arguments on whether to grant Anthropic temporary relief. AI scientists and researchers, including those from the large companies OpenAI, Google, and Microsoft, as well as legal groups, have filed briefs in support of Anthropic. The Pentagon has instead shifted its attention to working with other AI companies, including OpenAI, xAI, and Google.
[23]
Judge says Pentagon's Anthropic ban looks like 'attempt to cripple' company
A federal judge in California hammered the Pentagon on Tuesday for its decision to label Anthropic a supply chain risk, signaling skepticism over what she described as a "troubling" move from the federal government. U.S. District Judge Rita Lin, based in San Francisco, suggested during Tuesday's hearing that the Defense Department's determination "looks like an attempt to cripple Anthropic." Lin added she is specifically concerned about whether the AI company is "being punished for criticizing the government's contracting position." The hearing centered on Anthropic's request to the court to temporarily halt the Pentagon's supply chain risk designation. The AI company argues the designation, typically reserved for foreign adversaries, will cause it "irreparable harm." Anthropic accused the Trump administration of retaliating against the company for what it believes are "protected viewpoints" regarding how its AI technology can safely and reliably be used. The Pentagon and Anthropic's contract negotiations broke down last month following a disagreement over guardrails on the use of AI for autonomous weapons or mass domestic surveillance. Lin in opening remarks noted it is not the court's role to determine what AI can safely be used for, but whether the government violated the law by labeling Anthropic a supply chain risk. The court is also considering the directive from President Trump, who ordered federal agencies to "immediately cease" using Anthropic's products through a social media post. "What is troubling to me about these reactions is that they don't really seem to be tailored to the national security concern," Lin said. "If the worry is about the integrity of the operational chain of command, DOW [Department of War] could just stop using Claude. It looks like defendants went further than that because they were trying to punish Anthropic." Lin went through most of the questions she gave the parties on Monday, including whether Defense Secretary Pete Hegseth's Feb. 27 social media post about the designation had any legal effect. In that post, Hegseth announced the designation, and that effective immediately, "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." Hegseth added in that post "this decision is final," but Eric Hamilton, deputy assistant attorney general for the Justice Department's Civil Division, appeared to contradict the secretary's wording. "No entity would face liability for noncompliance with the post," Hamilton told the court Tuesday. "This was a social media announcement that DOW would be taking action." Lin said she found the DOJ's position "pretty surprising," asking again whether Hegseth's statement is "not true" and a "false statement." The DOJ argued Hegseth's previous wording, which stated he was "directing" the Department of War to designate Anthropic a supply chain risk, implied pending action. "You're standing here saying 'we said it, but we didn't really mean it,'" Lin added. The DOJ said it clarified Hegseth's statement in court filings in California, as well as in the case before the D.C. appeals court. Anthropic's attorney, Michael Mongan, said he appreciated the "concessions" of the DOJ, but noted they came 25 days after Hegseth's directive was published. "It went out in a very public way," Mongan said. "Last time I looked, it was read over by 13 million people."
Mongan argued the average person, military contractor, or prospective consumer would take Hegseth's social media post as a "final decision" because that is "exactly what it says." The two sides clashed over whether the Pentagon met congressional notification requirements, the definition of the word "adversary," and why the Trump administration chose the designation over just cutting the contract. Lin requested Anthropic hand over evidence of federal civilian agencies terminating their use of Anthropic by later Tuesday night, and gave the DOJ one day to respond. Lin said she will have a ruling on the preliminary injunction in the coming days.
[24]
US Judge to Weigh Anthropic's Bid to Undo Pentagon Blacklisting
March 24 (Reuters) - A U.S. judge is set to hear oral arguments on Tuesday in a lawsuit by Anthropic seeking to block the Pentagon's blacklisting of the artificial intelligence lab over its refusal to lift certain restrictions on its Claude AI model. Anthropic's lawsuit in California federal court alleges that Defense Secretary Pete Hegseth overstepped his authority when he designated Anthropic a national security supply chain risk. The government can apply that label to companies that expose military systems to potential infiltration or sabotage by adversaries. Hegseth's unprecedented move, which followed Anthropic's refusal to allow the military to use Claude for U.S. surveillance or autonomous weapons, blocks Anthropic from certain military contracts. It could cost the company billions of dollars this year in lost business and reputational harm, Anthropic executives said on March 9. The company says AI models are not reliable enough to be safely used in autonomous weapons and that it opposes domestic surveillance as a violation of rights. ANTHROPIC DESIGNATION FIRST FOR U.S. COMPANY U.S. District Judge Rita Lin in San Francisco, an appointee of former Democratic President Joe Biden, is set to hold a hearing at 1:30 p.m. PT (2030 GMT) over Anthropic's request for an initial order blocking the designation while the case plays out. Anthropic's designation was the first time a U.S. company has been publicly designated a supply chain risk under an obscure government-procurement statute aimed at protecting military systems from foreign sabotage. In its March 9 lawsuit, Anthropic alleged the government violated its right to free speech under the First Amendment of the Constitution by retaliating against its views on AI safety. The company said it was not given a chance to dispute the designation, in violation of its Fifth Amendment right to due process. The lawsuit says the decision was unlawful, unsupported by facts and inconsistent with the military's past praise of Claude. The Justice Department countered that Anthropic's refusal to lift the restrictions could cause uncertainty in the Pentagon over how it could use Claude and risk disabling military systems during operations, according to a court filing. The government said the designation stemmed from Anthropic's refusal to accept contractual terms, not its views on AI safety. Anthropic has a second lawsuit pending in Washington, D.C., over a separate Pentagon supply chain risk designation that could lead to its exclusion from civilian government contracts. (Reporting by Jack Queen in New York; Editing by Noeleen Walder, Rod Nickel)
[25]
Anthropic's (first) day in court against Trump 2.0, and the Judge has some difficult questions for Secretary of War Pete Hegseth to answer...
As Anthropic and the US Government get into the courtroom today (the first of many?), the United States District Court of the Northern District of California has provided some insight into what the Department of War is going to have to do to justify its actions against the AI tech provider. The story so far - Anthropic was the only provider of AI tech that was mandated to have good enough security to be used on the most sensitive systems in the Pentagon. But the firm aired two ethical objections - firstly, its tech should never be used for mass domestic surveillance of US citizens; secondly, AI shouldn't be allowed to act by itself when it comes to making battlefield decisions and metaphorically pressing any big red buttons. This was all in the contract signed by the US Government, but last month the Department of War decided this wasn't good enough and demanded Anthropic let it do whatever it wanted with the tech, with no built-in ethical constraints, or else! The vendor stood its ground, although indicating publicly that it was ready to try to reach a workable compromise, but would not cast aside its ethical red lines. So on 27 February, Secretary of War Pete Hegseth posted on social media that: Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. This decision is final. He would later, on 3 March, announce that Anthropic, until then the only AI provider trusted at the highest levels of the services, was now to be considered "a supply chain risk" to the nation. Having eventually been served official notification rather than policy-by-X, Anthropic decided to lawyer up. Today's court hearing centers on a request for a preliminary injunction to guarantee the firm can continue doing business with Federal Government contractors and agencies, without which, it says, it risks losing billions of dollars of business. Anthropic's wider claim is that while the Department is within its rights to cancel a direct contract with any supplier, it has not acted properly or legally in terms of the actions taken by Hegseth, and endorsed by President Donald Trump, which include giving the whole of the US Federal Government six months' notice to kick Anthropic out as well as banning any government contractor from using its tech. There are certainly some areas of considerable uncertainty unaddressed following Hegseth's online pronouncements. For example, as a pre-hearing court filing from US District Judge Rita Lin noted yesterday: Under the Hegseth Directive, a law firm that gave advice to the Department of War would have to stop using Claude in unrelated matters for other clients. That would not be required by the supply chain risk designation. Hegseth's actions are likely to come under close scrutiny by the Court, which has warned it wants answers to several questions: Is the Hegseth Directive an accurate statement of the Department's immediate intended course of action? Do Defendants [US Government] agree that Secretary Hegseth lacked authority to enter a directive of this breadth under Section 3252 or any other statute? If Defendants concede that the Hegseth Directive has no legal effect, how does Anthropic still face irreparable harm from it? What, if any, legal authority supports that view?
The court filing goes on to note that US law does provide that Secretary Hegseth could designate Anthropic as a supply chain risk, but only after providing notice to various Congressional committees, and would require the notice to contain "a discussion of less intrusive measures that were considered and why they were not reasonably available to reduce supply chain risk." Did this happen, asks the Court, or: Do Defendants concede that Secretary Hegseth's letters to the congressional committees did not contain a discussion of those required topics? The Department of War is also called upon to explain in more detail just how far ranging it thinks its ban on using Anthropic will extend: Everyone agrees that the Department would be free to terminate any direct contract with Anthropic. However, Defendants contend that this would be insufficient to mitigate the risk, because the Department also needs to prohibit the use of Claude in its national security systems in situations where Anthropic is a sub-contractor. Does designating Anthropic as a supply chain risk sweep more broadly than that, though? For example, if a contractor for the Department uses Claude Code as a tool to write software for the Department's national security systems, would that contractor face termination as a result? And, perhaps most pertinently to the wider debate around the ethics and use of AI tech in combat scenarios, the filing notes that the term "supply chain risk" means "the risk that an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert" the operation of the Department of War's national security systems. That descriptor would usually be applied to terrorists or foreign hostiles, states the filing. That being so, what exactly is the US Government's beef with Anthropic, which has merely aired its wishes around two general areas of usage - mass domestic surveillance and autonomous engagement with launching missiles and the like? The court document asks: Do Defendants agree that usage restrictions that are publicly announced or directly communicated to the Department do not themselves constitute acts of "sabotag[ing], maliciously introduc[ing] unwanted function, or otherwise subvert[ing]" an IT system? What evidence in the record shows that Anthropic had ongoing access to or control over Claude after delivering it to the government, such that Anthropic could engage in such acts of sabotage or subversion? It adds: Presumably most IT vendors could, if they wanted to, update their systems or bury unwanted functions in their software without detection. Is it Defendants' view that Section 3252 allows the Department to designate an IT vendor a supply chain risk on the sole basis that the vendor acted stubbornly or refused to agree to contracting terms, causing the Department to question its trustworthiness? The first of what will inevitably be many, many days in court before this is cleared up - if it ever actually is. All rise for Judge Lin! Meanwhile the US Government is moving on regardless. OpenAI's Sam Altman was quick to move in to fill the ethics-shaped void left by his Anthropic counterpart by signing his firm up within hours of the crisis kicking off. Now it seems, according to a leaked 9 March memo from Deputy Secretary of Defense Steve Feinberg, Palantir's Maven command-and-control AI platform will become an official program of record for the Pentagon.
It's also being reported in the US that the Trump 2.0 Administration is overhauling procurement contract language to prevent future occurrences of vendors airing their consciences, specifying that systems will be able to be used "for any lawful government purpose".
[26]
US judge to weigh Anthropic's bid to undo Pentagon blacklisting
A legal battle is unfolding as AI firm Anthropic challenges the Pentagon's decision to label it a national security risk. The Defense Department blacklisted Anthropic for refusing to allow its Claude AI for military surveillance or autonomous weapons. Anthropic argues the designation is an overreach and violates its rights. A US judge is set to hear oral arguments on Tuesday in a lawsuit by Anthropic seeking to block the Pentagon's blacklisting of the artificial intelligence lab over its refusal to lift certain restrictions on its Claude AI model. Anthropic's lawsuit in California federal court alleges that Defense Secretary Pete Hegseth overstepped his authority when he designated Anthropic a national security supply chain risk. The government can apply that label to companies that expose military systems to potential infiltration or sabotage by adversaries. Hegseth's unprecedented move, which followed Anthropic's refusal to allow the military to use Claude for U.S. surveillance or autonomous weapons, blocks Anthropic from certain military contracts. It could cost the company billions of dollars this year in lost business and reputational harm, Anthropic executives said on March 9. The company says AI models are not reliable enough to be safely used in autonomous weapons and that it opposes domestic surveillance as a violation of rights. ANTHROPIC DESIGNATION FIRST FOR U.S. COMPANY US District Judge Rita Lin in San Francisco, an appointee of former Democratic President Joe Biden, is set to hold a hearing at 1:30 p.m. PT (2030 GMT) over Anthropic's request for an initial order blocking the designation while the case plays out. Anthropic's designation was the first time a U.S. company has been publicly designated a supply chain risk under an obscure government-procurement statute aimed at protecting military systems from foreign sabotage. In its March 9 lawsuit, Anthropic alleged the government violated its right to free speech under the First Amendment of the Constitution by retaliating against its views on AI safety. The company said it was not given a chance to dispute the designation, in violation of its Fifth Amendment right to due process. The lawsuit says the decision was unlawful, unsupported by facts and inconsistent with the military's past praise of Claude. The Justice Department countered that Anthropic's refusal to lift the restrictions could cause uncertainty in the Pentagon over how it could use Claude and risk disabling military systems during operations, according to a court filing. The government said the designation stemmed from Anthropic's refusal to accept contractual terms, not its views on AI safety. Anthropic has a second lawsuit pending in Washington, D.C., over a separate Pentagon supply chain risk designation that could lead to its exclusion from civilian government contracts.
[27]
Why cutting Anthropic from government could be harder than expected
The Trump administration will head to court this week to defend its decision to ban the federal government from using Anthropic products, but removing the technology itself may prove to be the bigger battle. The public clash between the Pentagon and Anthropic is forcing the federal government to confront just how deeply embedded AI has become in Washington -- from the top of the chain in agencies down to private contractors. Like most AI firms, Anthropic's presence in government work runs deeper than just contracts. Even as tech firms compete, they also routinely do business with each other, and industry observers warn that entanglement complicates efforts to scrub the technology from all government systems. "There are multiple reasons why these apparent competitors might have an interest in mutually supporting each other, because it is part of one big ecosystem," said Sarah Kreps, the director of the Tech Policy Institute in the Cornell Brooks School of Public Policy. "If one company fails, then there is a way in which the trust of the entire enterprise could be imperiled," Kreps added. In an unprecedented move, the Pentagon designated Anthropic a supply chain risk earlier this month, restricting the military and defense contractors from using the company's product. President Trump also directed federal civilian agencies to "immediately cease" using the technology via a social media post. Anthropic sued the Trump administration, asking a federal court in California to issue a temporary halt to the supply chain risk designation and Trump's directive. A judge will hear from both sides Tuesday in a hearing over the request. The contract at the heart of the Pentagon clash reflects the interdependent tech ecosystem the Trump administration finds itself trying to unwind by removing Anthropic. Nearly two years ago, Anthropic struck a partnership with longtime government contractor Palantir to host its AI models in government agencies, allowing the AI firm to skip the standard, lengthy security authorization process, known as FedRAMP. Palantir also has Anthropic integrated into its own systems, which are used across the government. CEO Alex Karp told CNBC earlier this month that the company plans to add other models amid the dispute, but that it is still using Anthropic. Claude is also offered on the Copilot platform hosted by Microsoft, one of the largest IT contractors for the government. Meanwhile, Google, which has its own series of AI models called Gemini, has also invested billions into Anthropic, owning at least 14 percent of the AI firm, while Amazon invested $8 billion into the company and hosts Anthropic in its Amazon Bedrock platform. Further, Anthropic has emerged as a standout AI tool for its abilities, including Claude Code, a coding tool that helps people or companies build features, fix bugs and automate tasks. "Claude Code, in tech circles, [is] all that people have been talking about for months now," said Michael Boyce, former director of the Department of Homeland Security's AI Corps program. "It's an amazing tool. While there are other strong competitors in the space, it continues to be field defining." "What Claude does is different from what any other platform or any other system has been able to do," Kreps added, stating the available alternatives are "not as good." Anthropic's request has received support from large swaths of the technology industry, which have their own products and interests to consider, too. 
Confusion remains over just how far down the designation will stretch. "Companies are stepping up because they don't want to be next," Franklin Turner, the co-chair of McCarter & English's Government Contracts practice group, told The Hill. "To allow this kind of thing to go unchallenged, I think a lot of folks believe [it] would be probably irresponsible from a corporate standpoint." Microsoft was one of the first tech firms to publicly back Anthropic's request to temporarily halt the supply chain risk designation. A spokesperson for the company said "everyone involved shares common goals" and time is needed to "find common ground." Other technology companies, think tanks and trade associations followed Microsoft's lead, submitting briefs in the California case, along with Anthropic's separate suit in the D.C. court of appeals. The App Association, a global trade association for small and mid-sized tech companies, cited various scenarios of concern raised by its members in its amicus brief. In one case, the association said a two-person startup selling logistics software to a Defense Department prime contractor used Claude Code to write its entire testing software, but it is "functionally indistinguishable from hand-written code." "It may be plausible for a small developer contracted to work with the Department of War to abstain from using Claude in its own processes, but is quite implausible for that developer to know whether any of the tools it uses were coded by others using Claude," the association wrote, using the Trump administration's preferred name for the Department of Defense. In another brief, dozens of workers with OpenAI and Google, including Google chief scientist Jeff Dean, argued the U.S.'s "thriving AI ecosystem leads the rest of the world largely due to the competition and flow of ideas between different AI companies." Some of this support has come through back-channel discussions, The New York Times reported, citing numerous unnamed current and former technology company employees. Emil Michael, undersecretary of Defense for research and engineering, acknowledged the complexities of eliminating Anthropic as a vendor from government systems. "You can't just rip out a system that's deeply embedded overnight," Michael told CNBC. The Pentagon set a 180-day deadline to remove all of Anthropic's AI products from its systems and directed any other company with business ties to the Defense Department to halt using any of the firm's products on work tied to defense contracts in the same timeline. Turner predicted companies will face "herculean hurdles" for the "supply chain cleansing exercise." "The upshot is it's going to be very hard for most contractors to certify that Anthropic is nowhere in their supply chain," Turner said, adding Anthropic "produces all sorts of things, and can be used to produce all sorts of things, including open source code, algorithmic open source code." Pentagon chief information officer Kirsten Davies said she will grant exceptions for "mission-critical activities directly supporting national security operations where no viable alternative exists," but it is not clear how flexible this will be. Some civilian agencies are already tackling the removal, including the Department of Health and Human Services (HHS), which was one of the first to disable Claude for thousands of employees. 
In a notice to agency AI leaders obtained by The Hill, the HHS's Office of the Chief Information Officer asked staff to list its current or planned use of Claude, including whether contractors or subcontractors use the models and if vendor systems embed the technology. "In every agency, there is some vendor being used, or subcontractor that is using a service that has an Anthropic model somewhere," one HHS leader told The Hill on the condition of anonymity.
[28]
AI Battle Heats Up: Judge Weighs Injunction In Anthropic V. Department Of Defense
San Francisco Federal Court District Judge Rita Lin has given Anthropic until 6 p.m. PST Tuesday to provide a declaration stating that multiple government agencies have terminated or ceased the use of Anthropic's Claude model following the U.S. Department of Defense's designation of the company as a national security risk. During the live virtual court hearing, it was noted that the Office of Personnel Management, the Nuclear Regulatory Commission, and several other agencies had terminated their use of Anthropic technology. Attorneys for the DOD have until 6 p.m. tomorrow to provide counter-evidence to this claim. Anthropic asked Lin to temporarily halt the DOD decision to blacklist its AI model, Claude, as well as President Donald Trump's order prohibiting federal agencies from using the technology, CNBC reported. The artificial intelligence company stated that the blacklist is causing "severe, immediate and irreparable financial and reputational harm to the company." Last week, in the ongoing lawsuit, the DOD flagged new national security risks tied to Anthropic's hiring of foreign personnel, including workers from China. Anthropic employs "a large number of foreign nationals to build and support its LLM products, including many from the People's Republic of China (PRC), which increases the degree of adversarial risk should those employees comply with the PRC's National Intelligence Law," the court filing stated. More than 30 employees from Google, OpenAI and Google DeepMind filed an amicus brief supporting Anthropic's lawsuit against the DOD. The DOD's attorneys have asked the court to deny the preliminary injunction on the basis that a private company should not be allowed to make any decisions regarding how military missions are conducted. "Anthropic revealed itself to be an untrustworthy and unreliable partner in recent negotiations," said Eric Hamilton, an attorney representing the DOD in the case. Hamilton further argued that if the court grants injunctive relief to Anthropic, it should immediately pause that order while an appeal is pursued; at a minimum, he requested a seven-day temporary stay to allow time to seek relief from a higher court, adding that Anthropic does not oppose this. He also urged the court to rule on the stay request at the same time it issues any injunction, and emphasized that any order should make clear the government is not obligated to continue using Anthropic's services and may end the relationship. Anthropic is requesting to return to the status quo of the morning of Feb. 27 and is asking the court to enter a preliminary injunction. (The date is significant because it marks the deadline from which the lawsuit stems, when Anthropic was required to remove safety restrictions on its AI model, Claude, for use by the DOD in autonomous weapons and domestic surveillance.) "There's a range of lawful actions that the defendants could have taken. What they can't do is engage in unconstitutional retaliation for our protected speech. They can't impose an immediate prospective debarment of Anthropic for all future government contracting that is not supported by any lawful executive authority," the attorney for Anthropic stated. Lin stated that she is taking the matter under submission and expects to deliver a ruling in the next few days. Anthropic did not respond to a request for comment. The DOD stated that "they do not comment on pending litigation."
[29]
Anthropic challenges Pentagon's national security risk claim in reply to suit - The Economic Times
The company has said that the government's concerns are based on misunderstandings and retaliation for its stance on AI safety. Anthropic has submitted sworn declarations in a California federal court challenging the Pentagon's claim that it poses an "unacceptable risk to national security." The filings, cited by TechCrunch, accompany the company's reply brief and argue that the government's case is based on technical misunderstandings and mischaracterisations of its position during earlier negotiations. A hearing is set for March 24 before Judge Rita Lin. The dispute began last month, when Donald Trump and Defense Secretary Pete Hegseth announced they were ending engagement with Anthropic. This followed the company's refusal to permit unrestricted military use of its artificial intelligence (AI) systems. Soon after, the Pentagon labelled Anthropic a supply-chain risk, the first time such a classification has been applied to a US-based AI firm. Two senior executives at Anthropic -- Sarah Heck, head of policy, and Thiyagu Ramasamy, head of public sector -- filed declarations disputing the government's account. According to the report, Heck strongly rejected a claim made by the Pentagon: that Anthropic had sought an approval role in military operations. She said this was incorrect and wrote, "At no time during Anthropic's negotiations with the Department did I or any other Anthropic employee state that the company wanted that kind of role." Heck also stated that concerns about Anthropic potentially altering or disabling its technology during active operations were never raised in discussions with officials. She said these issues appeared only later in court filings, leaving the company without an opportunity to address them at the time. Her declaration highlights another development. On March 4, a day after the Pentagon finalised its supply-chain risk designation, Under Secretary Emil Michael emailed Anthropic CEO Dario Amodei, saying both sides were "very close" to agreement on two key issues -- autonomous weapons and surveillance of Americans -- which are now being cited as grounds for the national security concern. In a separate filing, Ramasamy challenged the suggestion that Anthropic could interfere with military systems. He explained that once its Claude AI is deployed within secure, "air-gapped" government environments, the company has no access or control. There is no kill switch or backdoor, and any updates must be approved and installed by the Pentagon. He also noted that Anthropic personnel working on such projects are cleared through US government security processes, which he said distinguishes the company in classified settings. Anthropic's lawsuit argues that the Pentagon's decision is retaliation for its public stance on AI safety and violates its First Amendment rights. The government has rejected this argument, maintaining that the designation is purely a national security measure.
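Ramasamy's description amounts to an operator-controlled update path: nothing the vendor ships takes effect until the customer verifies and installs it inside the enclave. The sketch below illustrates that property only; the allowlist, digest, and function names are invented for illustration and are not Anthropic's or the Pentagon's actual mechanism.

```python
import hashlib
from pathlib import Path

# Hypothetical operator-side allowlist: artifacts already reviewed and
# approved, keyed by file name -> expected SHA-256 digest. In an air-gapped
# deployment this record lives inside the enclave, where the vendor has no access.
APPROVED = {
    "claude-update-v2.bin": "placeholder-digest-recorded-at-approval-time",
}

def sha256(path: Path) -> str:
    """Hash an artifact carried into the enclave on approved media."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def install_update(artifact: Path, install_dir: Path) -> Path:
    """Install an update only if the operator has approved this exact artifact.

    There is no network path back to the vendor, so an unapproved update,
    malicious or benign, simply never takes effect.
    """
    expected = APPROVED.get(artifact.name)
    if expected is None or sha256(artifact) != expected:
        raise PermissionError(f"{artifact.name} is not operator-approved")
    install_dir.mkdir(parents=True, exist_ok=True)
    dest = install_dir / artifact.name
    dest.write_bytes(artifact.read_bytes())
    return dest
```

The design point is that approval and installation are both actions only the operator can take, which is the substance of the "no kill switch or backdoor" claim.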
[30]
Trump administration argues Pentagon's Anthropic ban is justified, lawful
The Trump administration is doubling down on its decision to cut ties with Anthropic, arguing in a new court filing that the move is "lawful and reasonable" and not a violation of free speech, as the artificial intelligence (AI) firm alleges. The Department of Justice (DOJ), in a court filing Tuesday, urged a federal judge in California to reject Anthropic's request for a preliminary injunction on the Pentagon's labeling of the AI company as a supply chain risk. DOJ attorneys said Anthropic's terms of service "have become unacceptable to the executive branch," after the AI firm pressed for specific restrictions on the use of its technology for autonomous weapons and domestic mass surveillance. The Pentagon maintains the federal government can use its AI services for "any lawful purpose." "If it were any other way, an AI provider might gain influence over how DOW conducts operations and which missions it chooses," the DOJ wrote, adding that throughout negotiations, "Anthropic's behavior more generally caused the Department to question whether Anthropic represented a trusted partner with whom the department was willing to contract in this highly sensitive area." The DOJ suggested Anthropic could try to disable its technology or "preemptively alter" the behavior of its model during warfighting, stating the Pentagon sees that as an "unacceptable risk to national security." Anthropic CEO Dario Amodei said during negotiations that the company understands the DOD, "not private companies, makes military decisions." "We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner," Amodei said late last month. After negotiations fell apart earlier this month, Anthropic filed suit against the Trump administration for the supply chain risk designation, alleging the Pentagon retaliated against the company for its viewpoints on AI safety and the limitations of its AI models. The DOJ suggested Anthropic's First Amendment claim is unlikely to succeed, arguing the company's refusal to accept the government's contractual terms is conduct, not speech. "To conclude otherwise 'would extend First Amendment protection to every commercial transaction on the ground that it communicates to the customer information about a product or service,'" the DOJ filing stated. The federal government also maintained Anthropic's speech was not a "motivating factor" for the actions, as Anthropic argues. "Even assuming a retaliatory motive, the government would have acted the same," the filing stated, adding later, "'The challenged actions have a legitimate ground in national security concerns, quite apart from any retaliatory animus.'" Anthropic is asking a federal court in California to reverse the Pentagon's decision and an appeals court in D.C. to review the designation. The DOJ said Defense Secretary Pete Hegseth's determination is not contrary to law and falls within the scope of the secretary's authority. When reached for comment Wednesday, a company spokesperson told The Hill, "We are reviewing the government's filing and look forward to presenting our response to the court." "As we shared last week, seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners," the spokesperson added.
[31]
Anthropic Faces Pentagon Scrutiny Over Foreign Workforce Risks
The U.S. Department of Defense (DoD) has flagged new national security risks tied to Anthropic's hiring of foreign personnel, including workers from China.
What Are The Implications Of Foreign Hiring Risks?
Anthropic employs "a large number of foreign nationals to build and support its LLM products, including many from the People's Republic of China (PRC), which increases the degree of adversarial risk should those employees comply with the PRC's National Intelligence Law," the court filing stated. While other major U.S. artificial intelligence labs working with the DoD may have similar risks, their strong security practices and history of responsible, trustworthy behavior help reduce those risks. "Anthropic's case, however, is different," Pentagon undersecretary Emil Michael wrote in the declaration. "Anthropic's leadership demonstrated an intent to prevent the U.S. military's lawful use of their LLM product, Claude, despite the company's publicly stated knowledge that adversarial nation states have a practice of stealing Anthropic's LLM technology for their own unrestricted use," the document stated. The DoD also argues that Anthropic's leadership "insisted on imposing restrictions on DoW's [Department of War] lawful military capability development, operations and intelligence missions, even though it would impair the capabilities of the U.S. military relative to our adversaries." "Determinations about lawful military uses, however, must rest solely with the DoW and not with a private company," Michael's declaration noted. The DoD further alleges that Anthropic's leadership acted in "bad faith" by leaking unclassified information from private discussions with the DoD to the media.
The Legal Battle That Could Reshape AI Defense Dynamics
The declaration went on to state that if Anthropic were to interfere during a military operation, either by shutting off access to the AI model or altering its functionality, that interference could cause "serious harm to national security and loss of human life." "Anthropic leadership's adversarial behavior has elevated the supply chain risks to a saturation point," the declaration continued. The filing noted the Pentagon will discontinue all use of Anthropic's defense products within 180 days. Last week, more than 30 employees from Google, OpenAI and Google DeepMind filed an amicus brief supporting Anthropic's lawsuit against the DoD. The next hearing on the matter will be held on March 24. Anthropic and the Pentagon did not respond to a request for comment.
[32]
Hegseth wants Pentagon to dump Anthropic's Claude, but military users say it's not so easy
Pentagon staffers, former officials and IT contractors who work closely with the US military say they are reluctant to give up Anthropic's AI tools, which they view as superior to alternatives, despite orders to remove them. After a dispute between Anthropic and the Pentagon over guardrails for how the military could use its artificial intelligence tools, Defense Secretary Pete Hegseth designated the company a supply-chain risk on March 3, barring its use by the Pentagon and its contractors following a six-month phase-out. But the move is running into resistance, with some military users dragging their feet and others preparing to revert to Anthropic's platform in anticipation of the dispute being resolved. "Career IT people at DoD hate this move because they had finally gotten operators comfortable using AI," said one IT contractor. "They think it's stupid." The contractor said Anthropic's Claude AI model "is the best," while xAI's Grok often produced inconsistent answers to the same query.
Recertifying systems could take months
The complaints suggest uprooting Anthropic from the Pentagon's networks will be neither quick nor painless. One contractor said recertifying systems that run on Anthropic's products for military use could take months. Some Pentagon officials, staff and contractors spoke anonymously because they were not authorized to speak publicly. The Defense Department, Anthropic and xAI did not respond to requests for comment. AI tools have become essential for the US military, which uses them for tasks ranging from targeting weapons and helping plan operations to handling classified material and analyzing information. Anthropic announced a $200 million defense contract in July 2025 and quickly became embedded in the military's workflow. Claude became the first AI model approved to operate on classified military networks, and officials familiar with its use said adoption was strong. Within the federal government, Anthropic's models were widely viewed as more capable than rival offerings. Reuters has previously reported that the Pentagon used Claude tools to support US military operations during the conflict with Iran, and sources said the technology remains in use despite the blacklisting. One expert described that as "the clearest signal" of how highly the Pentagon values the tool. Furthermore, "It's a substantial cost to replace those models with alternatives," said Joe Saunders, the CEO of government contractor RunSafe Security. Saunders added that those alternative systems would go through a long process to recertify them for use on classified or military networks. In the case of an existing system being replaced with a new one, certification could take 12 to 18 months, he said. "It's not just costly, it's a loss of productivity," added Saunders, who helped the military incorporate AI chatbots. Orders to stop using Claude are filtering through the Pentagon. One official said staff are complying because "no one wants to end their career over this," but described the shift as wasteful. Tasks previously handled by Claude, such as querying large datasets for information, are in some cases now being done manually with tools such as Microsoft Excel, the official said.
Anthropic's Claude Code tool was widely used within the Pentagon to write software code, several of the people said. Losing that tool has left developers frustrated, another senior official said, but added they should not rely on a single tool.
Tough transition
Removing Claude will be a major undertaking. For example, Palantir's Maven Smart Systems - a software platform that supplies militaries with intelligence analysis and weapons targeting - uses multiple prompts and workflows that were built using Anthropic's Claude Code, according to two people familiar with the matter. Palantir, which holds Maven-related contracts with the Defense Department and other US national security agencies that have a potential value of more than $1 billion, will have to replace Claude with another AI model and rebuild parts of its software, one of the sources said. Some staff are "slow-rolling" their replacement of Claude because they are actively using it to create workflows, which are series of automated tasks, a Pentagon technologist said. Developers are frustrated because shifting to new AI agents would mean losing the agents they created to sift through vast amounts of data. The Defense Department has ordered contractors, including major defense firms, to assess and report their reliance on Anthropic products and to begin winding them down. Officials and contractors say they now face a strategic question: whether to pivot quickly to OpenAI, Google or xAI, or to unwind Anthropic in a way that allows for a rapid return if the Pentagon reinstates it. One chief information officer at a federal agency said it plans to slow-roll the phase-out, betting that the government and Anthropic will reach an agreement before the six-month deadline. "What we are seeing play out here is the tension of adoption, both inside the Pentagon as well as the political level," said Roger Zakheim, director of the Ronald Reagan Presidential Foundation and Institute.
[33]
'Nobody really knows:' Pentagon clash with Anthropic throws agencies into limbo
Federal agencies and their contractors have been left in limbo as the Trump administration moves to cut off Anthropic from government systems without formal orders amid a brewing legal battle with the AI company. As agency leaders grapple with informal directives from President Trump and the Pentagon, the situation is exposing the challenges and costs of removing a major AI vendor from federal supply chains after an aggressive push to embed the technology in the first place. Nearly three weeks have passed since Trump ordered federal agencies to "immediately cease" using Anthropic's technology, but various federal agencies have yet to receive formal guidance other than Trump's social media post on how to proceed, according to conversations with multiple federal technology leaders. In turn, the response has varied across the government, with agencies like the General Services Administration and the Department of Health and Human Services abruptly removing Claude within hours of Trump's directive. Other agencies say they are still reviewing Anthropic's use, but the product may still be available. Trump's directive followed a breakdown in negotiations between the Pentagon and the AI company earlier this month over disagreements on safety guardrails. Defense Secretary Pete Hegseth separately deemed Anthropic a supply chain risk, which will be fought over in federal courts later this month.
Federal employees get few answers
For staffers at agencies that already moved to eliminate Anthropic, the transition has been confusing and abrupt, federal tech leaders told The Hill. Anthropic was first approved for classified use in government agencies through a partnership with Palantir nearly two years ago. Since then, the Trump administration has pushed federal agencies to use AI in workflows, leading to a rapid adoption of technology, including Anthropic's Claude models, across defense and civilian spaces. At HHS, thousands of employees using Anthropic products had just a few hours to save their chats and coding projects, according to an agency leader. "Staff were really upset with how quickly" the shutdown happened, the leader said, adding "there was no spin-down time." "People lost their chats, people lost any coding that they were doing in any projects. Are there equivalent tools that they can use? Sure, but they had been working in a secure environment," the leader added. "It's a loss of a lot of work...it was a waste of government resources." AI leaders across HHS received notice about the pending elimination less than an hour after Trump posted his directive on Truth Social, according to a screenshot obtained by The Hill. That notice, sent by HHS Deputy Chief AI Officer Arma Sharma, said Claude Enterprise would be disabled "in alignment" with Trump's directive. Days later, another message clarified enterprise access to Claude was "temporarily disabled" at the agency, but HHS's office of the chief AI officer was "awaiting more detailed federal guidance regarding the future use of applications and systems that leverage Claude or other Anthropic technologies." Staff were told more direction would come pending "more definitive guidance." HHS confirmed ChatGPT Enterprise and Google Gemini remain available for staff. Reports circulated soon after that the White House is floating an executive order to eliminate Anthropic's AI from the government, though this hasn't come to fruition.
The General Services Administration, the agency responsible for most federal technology procurement, is also proposing a clause for existing and new GSA schedule contracts that would confirm the government's right to use an AI system "as necessary for any lawful Government purpose." The clause would apply to the AI firms, as well as subcontractors or vendors, and is similar to the demands of the Pentagon, which maintains it should be able to use AI technologies for "any lawful purpose" in the military. GSA also removed Anthropic from its governmentwide AI testing tool, USAi, and terminated its OneGov deal with the firm, which offered agencies the chance to use the company's tech at near-zero cost. At another civilian agency, one AI advisor told The Hill there was "a tremendous lack of information," and "nobody has clear answers" even as agency leaders told workers to stop using Anthropic's technology. The advisor, who spoke on the condition of anonymity to speak freely, compared the confusion to the chaotic takeover of Trump's so-called Department of Government Efficiency, which sparked more questions than answers for federal workers last year. Civilian AI leaders, according to the advisor, are still unsure whether the order applies to all of the federal government, including contractors who may use Anthropic in their own workflows but not directly in their work for agencies. "It's a lot of complicated questions that nobody really knows the answer to," the leader said, adding their agency told them to "stop using" Anthropic products and that they will "get back" to them with more details. One federal technology leader familiar with procurement suggested "some political [appointees] seem to be proactively ordering staff based on social media, but that's up to them."
Some agencies have yet to clarify
Meanwhile, some agencies are holding their breath, at least publicly. It is unclear how critical missions, such as nuclear weapons research, will be impacted by the situation. The Department of Energy's National Nuclear Security Administration and national labs have partnerships with Anthropic to work on nuclear weapon risk research and assist scientists, respectively. When asked how the agency plans to proceed, a DOE spokesperson said Tuesday the agency is "reviewing all existing contracts and uses of Anthropic technology," and is "committed to ensuring" the technology it uses "serves the public interest" and "protects America's energy and national security." Anthropic, which filed a suit against the Trump administration over the supply chain risk designation, argues the determination should only impact Claude customers with Department of Defense contracts, not all of the company's government customers. This question may be resolved in the court case, and Anthropic's lawyers noted in their complaint last week that agencies already took action despite the uncertainties. "Throughout, the federal government has never once expressed concerns about Anthropic's security or Claude's competencies," attorneys wrote, pointing to Anthropic's FedRAMP High authorization through Palantir. The Department of Treasury and the Secret Service also stopped the use of Claude, FedScoop reported last week. Other agencies including the Department of Veterans Affairs and the Office of Personnel Management, which listed Anthropic products on their 2025 AI use case inventories, did not respond by publication time on their plans, while NASA referred The Hill to the Justice Department.
OPM's updated inventory, posted last week, shows Anthropic was removed from an earlier version.
Concerns over cost
Technology leaders both in and outside of government are also sounding the alarm on the costs of this termination, and what it means for the taxpayer at the end of the day. Franklin Turner, the co-chair of McCarter & English's Government Contracts practice group, predicted there will be a cost impact to the government. Should a subcontractor say they are using Anthropic, agencies "would have to terminate that subcontract" and "go out and find a new one," Turner told The Hill. "That carries with it a cost and that's a cost that wasn't foreseen at the time you prepared and submitted your bid," he added. Chris Griesedieck, a government contracts attorney at Venable LLP, echoed this sentiment, telling The Hill contractors may also be willing to make the modification to comply, but "[the contractor] reserves the right to an equitable adjustment if this is going to cost me a bunch of extra money." The HHS leader, also speaking on the condition of anonymity, added the agency's abrupt removal of Anthropic "wasted taxpayer dollars." "Phasing out of it would have been annoying, but it wasn't, it was shut down immediately and everybody's work was lost," the leader said, adding, "Agencies who built programmatic systems on it, they're gonna have a ton of work."
[34]
Trump Defends Pentagon Ban On Anthropic, Calls It Legal And Justified
The Trump administration defended the Pentagon's move to blacklist Anthropic, arguing the decision was both legal and justified. In a court filing, the administration argued that Anthropic's First Amendment claims are "unlikely to succeed," saying the government's actions were driven by contract issues and national security considerations rather than any form of retaliatory conduct. "It was only when Anthropic refused to release the restrictions on the use of its products -- which refusal is conduct, not protected speech -- that the President directed all federal agencies to terminate their business relationships with Anthropic," the court document stated. "Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners," Anthropic wrote to Al Jazeera. Anthropic is understood to be reviewing the government's filing. Earlier today, it was reported that nearly 150 retired judges have stepped into the high-stakes legal battle, backing Anthropic as it challenges a U.S. defense designation that could damage its broader business. The judges emphasized that Anthropic is not seeking defense contracts. "No one is trying to force the Department to contract with Anthropic," they wrote. They added, "Instead, Anthropic is asking only that it not be punished on its way out the door." Earlier this month, U.S. Central Command reportedly used Anthropic's Claude AI to support intelligence and target planning in an air operation against Iran, despite the federal ban. The military has previously used Claude in high-profile missions, including the operation that captured Venezuelan President Nicolas Maduro.
[35]
US judge says Pentagon's blacklisting of Anthropic looks like punishment for its views on AI safety
March 24 (Reuters) - A U.S. judge said on Tuesday that the Pentagon's blacklisting of Anthropic looked like an effort to punish the artificial intelligence lab for going public with its concerns about AI safety in the military. Anthropic's lawsuit in California federal court alleges that Defense Secretary Pete Hegseth overstepped his authority when he designated Anthropic a national security supply-chain risk. The government can apply that label to companies that expose military systems to potential infiltration or sabotage by adversaries. U.S. District Judge Rita Lin in San Francisco, an appointee of former Democratic President Joe Biden, said during a court hearing that the designation "looks like an attempt to cripple Anthropic." "It looks like DOW is punishing Anthropic for trying to bring public scrutiny to this contract dispute," Lin said, using an acronym for the Department of War, President Donald Trump's new name for the Defense Department. The hearing concerns Anthropic's request for a temporary order blocking the designation while the case plays out. Lin said at the end of the hearing that she would rule in a written order within the next few days. The unprecedented designation, which followed Anthropic's refusal to allow the military to use its Claude AI software for U.S. surveillance or autonomous weapons, blocks Anthropic from certain military contracts. It could cost the company billions of dollars this year in lost business and reputational harm, Anthropic executives said on March 9. The company says AI models are not reliable enough to be safely used in autonomous weapons and that it opposes domestic surveillance as a violation of rights. UNPRECEDENTED SUPPLY-CHAIN LABEL Anthropic's designation was the first time a U.S. company has been publicly designated a supply-chain risk under an obscure government-procurement statute aimed at protecting military systems from foreign sabotage. In its March 9 lawsuit, Anthropic alleged the government violated its right to free speech under the First Amendment of the Constitution by retaliating against its views on AI safety. The company said it was not given a chance to dispute the designation, in violation of its Fifth Amendment right to due process. During Tuesday's hearing, a lawyer for Anthropic said the Pentagon was using a flawed interpretation of federal procurement law to retaliate against Anthropic for its negotiating position. "The logical implication of their position here is they can point to their frustrations in a contract negotiation, the stubbornness of the vendor, and say, 'because you're working in an area that touches national security, we're going to tell the world that we think you might come around in the future and sabotage our systems,'" said attorney Michael Mongan. Justice Department lawyer Eric Hamilton said Anthropic's pushback against lawful uses of its technology convinced the Defense Department that it could not rely on the company going forward and that the designation was appropriate to secure its systems. "What happens if Anthropic, through an update, installs a kill switch or installs functionality that allows it to change how the software is functioning when our warfighters need it most? That is an unacceptable risk," Hamilton said. Anthropic has a second lawsuit pending in Washington, D.C., over a separate Pentagon supply-chain risk designation that could lead to its exclusion from civilian government contracts. 
(Reporting by Jack Queen in New York;Editing by Noeleen Walder, Rod Nickel and Matthew Lewis)
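Hamilton's hypothetical, a kill switch arriving disguised as an update, is the mirror image of the guarantee Anthropic describes, and the conventional defensive posture against it is to pin the deployed build and fail closed on anything else. Below is a minimal, purely illustrative Python sketch of that posture; every identifier and value is invented, and this is not a description of any actual DOD or Anthropic system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CertifiedBuild:
    """The exact model build recorded when the system was certified."""
    model_id: str
    digest: str

# Placeholder values standing in for whatever was recorded at certification time.
PINNED = CertifiedBuild(model_id="model-build-2025.07", digest="digest-from-certification")

def load_model(model_id: str, digest: str) -> str:
    """Refuse to run any build other than the certified one (fail closed)."""
    if (model_id, digest) != (PINNED.model_id, PINNED.digest):
        # An unexpected build is treated as a supply-chain event, not loaded.
        raise RuntimeError("uncertified build offered; refusing to load")
    return f"loaded {model_id}"
```

Under a scheme like this, a vendor-pushed change cannot silently alter behavior at the moment "warfighters need it most"; it can only fail to load, which is the risk trade the two sides are effectively arguing about.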
[36]
US judge says Pentagon's Anthropic blacklisting looks like punishment for AI safety views
March 24 (Reuters) - A U.S. judge said on Tuesday that the Pentagon's blacklisting of Anthropic looked like an effort to punish the artificial intelligence lab for going public with its concerns about AI safety in the military. Anthropic's lawsuit in California federal court alleges that Defense Secretary Pete Hegseth overstepped his authority when he designated Anthropic a national security supply-chain risk. The government can apply that label to companies that expose military systems to potential infiltration or sabotage by adversaries. U.S. District Judge Rita Lin in San Francisco, an appointee of former Democratic President Joe Biden, said during a court hearing that the designation "looks like an attempt to cripple Anthropic." "It looks like DOW is punishing Anthropic for trying to bring public scrutiny to this contract dispute," Lin said, using an acronym for the Department of War, President Donald Trump's new name for the Defense Department. The hearing concerns Anthropic's request for a temporary order blocking the designation while the case plays out. The unprecedented designation, which followed Anthropic's refusal to allow the military to use its Claude AI software for U.S. surveillance or autonomous weapons, blocks Anthropic from certain military contracts. It could cost the company billions of dollars this year in lost business and reputational harm, Anthropic executives said on March 9. The company says AI models are not reliable enough to be safely used in autonomous weapons and that it opposes domestic surveillance as a violation of rights. ANTHROPIC DESIGNATION FIRST FOR U.S. COMPANY Anthropic's designation was the first time a U.S. company has been publicly designated a supply-chain risk under an obscure government-procurement statute aimed at protecting military systems from foreign sabotage. In its March 9 lawsuit, Anthropic alleged the government violated its right to free speech under the First Amendment of the Constitution by retaliating against its views on AI safety. The company said it was not given a chance to dispute the designation, in violation of its Fifth Amendment right to due process. The lawsuit says the decision was unlawful, unsupported by facts and inconsistent with the military's past praise of Claude. The Justice Department countered that Anthropic's refusal to lift the restrictions could cause uncertainty in the Pentagon over how it could use Claude and risk disabling military systems during operations, according to a court filing. The government said the designation stemmed from Anthropic's refusal to accept contractual terms, not its views on AI safety. Anthropic has a second lawsuit pending in Washington, D.C., over a separate Pentagon supply-chain risk designation that could lead to its exclusion from civilian government contracts. (Reporting by Jack Queen in New York;Editing by Noeleen Walder, Rod Nickel and Matthew Lewis)
[37]
US judge to weigh Anthropic's bid to undo Pentagon blacklisting
March 24 (Reuters) - A U.S. judge is set to hear oral arguments on Tuesday in a lawsuit by Anthropic seeking to block the Pentagon's blacklisting of the artificial intelligence lab over its refusal to lift certain restrictions on its Claude AI model. Anthropic's lawsuit in California federal court alleges that Defense Secretary Pete Hegseth overstepped his authority when he designated Anthropic a national security supply chain risk. The government can apply that label to companies that expose military systems to potential infiltration or sabotage by adversaries. Hegseth's unprecedented move, which followed Anthropic's refusal to allow the military to use Claude for U.S. surveillance or autonomous weapons, blocks Anthropic from certain military contracts. It could cost the company billions of dollars this year in lost business and reputational harm, Anthropic executives said on March 9. The company says AI models are not reliable enough to be safely used in autonomous weapons and that it opposes domestic surveillance as a violation of rights. ANTHROPIC DESIGNATION FIRST FOR U.S. COMPANY U.S. District Judge Rita Lin in San Francisco, an appointee of former Democratic President Joe Biden, is set to hold a hearing at 1:30 p.m. PT (2030 GMT) over Anthropic's request for an initial order blocking the designation while the case plays out. Anthropic's designation was the first time a U.S. company has been publicly designated a supply chain risk under an obscure government-procurement statute aimed at protecting military systems from foreign sabotage. In its March 9 lawsuit, Anthropic alleged the government violated its right to free speech under the First Amendment of the Constitution by retaliating against its views on AI safety. The company said it was not given a chance to dispute the designation, in violation of its Fifth Amendment right to due process. The lawsuit says the decision was unlawful, unsupported by facts and inconsistent with the military's past praise of Claude. The Justice Department countered that Anthropic's refusal to lift the restrictions could cause uncertainty in the Pentagon over how it could use Claude and risk disabling military systems during operations, according to a court filing. The government said the designation stemmed from Anthropic's refusal to accept contractual terms, not its views on AI safety. Anthropic has a second lawsuit pending in Washington, D.C., over a separate Pentagon supply chain risk designation that could lead to its exclusion from civilian government contracts. (Reporting by Jack Queen in New York;Editing by Noeleen Walder, Rod Nickel)
[38]
Hegseth wants Pentagon to dump Anthropic's Claude, but military users say it's not so easy
March 19 - Pentagon staffers, former officials and IT contractors who work closely with the U.S. military say they are reluctant to give up Anthropic's AI tools, which they view as superior to alternatives, despite orders to remove them. After a dispute between Anthropic and the Pentagon over guardrails for how the military could use its artificial intelligence tools, Defense Secretary Pete Hegseth designated the company a supply-chain risk on March 3, barring its use by the Pentagon and its contractors following a six-month phase-out. But the move is running into resistance, with some military users dragging their feet and others preparing to revert to Anthropic's platform in anticipation of the dispute being resolved. "Career IT people at DoD hate this move because they had finally gotten operators comfortable using AI," said one IT contractor. "They think it's stupid." The contractor said Anthropic's Claude AI model "is the best," while xAI's Grok often produced inconsistent answers to the same query. RECERTIFYING SYSTEMS COULD TAKE MONTHS The complaints suggest uprooting Anthropic from the Pentagon's networks will be neither quick nor painless. One contractor said recertifying systems that run on Anthropic's products for military use could take months. Some Pentagon officials, staff and contractors spoke anonymously because they were not authorized to speak publicly. The Defense Department, Anthropic and xAI did not respond to requests for comment. AI tools have become essential for the U.S. military, which uses them for tasks ranging from targeting weapons and helping plan operations to handling classified material and analyzing information. Anthropic announced a $200 million defense contract in July 2025 and quickly became embedded in the military's workflow. Claude became the first AI model approved to operate on classified military networks, and officials familiar with its use said adoption was strong. Within the federal government, Anthropic's models were widely viewed as more capable than rival offerings. Reuters has previously reported that the Pentagon used Claude tools to support U.S. military operations during the conflict with Iran, and sources said the technology remains in use despite the blacklisting. One expert described that as "the clearest signal" of how highly the Pentagon values the tool. Furthermore, "It's a substantial cost to replace those models with alternatives," said Joe Saunders, the CEO of government contractor RunSafe Security. Saunders added that those alternative systems would go through a long process to recertify them for use on classified or military networks. In the case of an existing system being replaced with a new one, certification could take 12 to 18 months, he said. "It's not just costly, it's a loss of productivity," added Saunders, who helped the military incorporate AI chatbots. Orders to stop using Claude are filtering through the Pentagon. One official said staff are complying because "no one wants to end their career over this," but described the shift as wasteful. Tasks previously handled by Claude, such as querying large datasets for information, are in some cases now being done manually with tools such as Microsoft Excel, the official said. Anthropic's Claude Code tool was widely used within the Pentagon to write software code, several of the people said. Losing that tool has left developers frustrated, another senior official said, but added they should not rely on a single tool. 
For example, Palantir's Maven Smart Systems - a software platform that supplies militaries with intelligence analysis and weapons targeting - uses multiple prompts and workflows that were built using Anthropic's Claude Code, according to two people familiar with the matter. Palantir, which holds Maven-related contracts with the Defense Department and other U.S. national security agencies that have a potential value of more than $1 billion, will have to replace Claude with another AI model and rebuild parts of its software, one of the sources said. Some staff are "slow-rolling" their replacement of Claude because they are actively using it to create workflows, which are series of automated tasks, a Pentagon technologist said. Developers are frustrated because shifting to new AI agents would mean losing the agents they created to sift through vast amounts of data. The Defense Department has ordered contractors, including major defense firms, to assess and report their reliance on Anthropic products and to begin winding them down. Officials and contractors say they now face a strategic question: whether to pivot quickly to OpenAI, Google or xAI, or to unwind Anthropic in a way that allows for a rapid return if the Pentagon reinstates it. One chief information officer at a federal agency said it plans to slow-roll the phase-out, betting that the government and Anthropic will reach an agreement before the six-month deadline. "What we are seeing play out here is the tension of adoption, both inside the Pentagon as well as the political level," said Roger Zakheim, director of the Ronald Reagan Presidential Foundation and Institute. (Reporting by Mike Stone, Alexandra Alper and Raphael Satter in Washington; Additional reporting by David Jeans in New York; Editing by Chris Sanders, Rod Nickel)
A federal judge expressed skepticism about the Pentagon's decision to label Anthropic a supply-chain risk, suggesting the Trump administration may be illegally punishing the AI company for refusing to allow unrestricted military use of its Claude technology. Judge Rita Lin said the ban appears to violate free speech protections and doesn't seem tailored to national security concerns.
US District Judge Rita Lin delivered sharp criticism of the Pentagon during a Tuesday hearing in San Francisco, suggesting the Department of Defense appears to be illegally punishing Anthropic for its stance on military use of AI. "It looks like an attempt to cripple Anthropic," Lin said of the supply-chain risk designation applied to the company [2]. The judge added that the government's actions look like it is "punishing Anthropic for trying to bring public scrutiny to this contract dispute, which of course would be a violation of the First Amendment" [2].
Source: Market Screener

The legal dispute with the Pentagon traces back to late February, when President Trump and Defense Secretary Pete Hegseth publicly declared they were cutting ties with Anthropic after the company refused to allow unrestricted military use of its Claude technology [1]. The Trump administration subsequently labeled the AI company a national security risk, a designation typically reserved for foreign adversaries and hostile actors [2].

Anthropic submitted two sworn declarations to the California federal court late Friday, featuring testimony from Sarah Heck, the company's Head of Policy, and Thiyagu Ramasamy, Head of Public Sector [1]. The court filings reveal a striking detail: on March 4, just one day after the Pentagon formally finalized its supply-chain risk designation against Anthropic, Under Secretary Emil Michael emailed CEO Dario Amodei saying the two sides were "very close" on issues regarding autonomous weapons and mass surveillance of Americans [1].

Heck directly challenged what she described as a central falsehood in the government's filings: that Anthropic demanded approval over military operations. "At no time during Anthropic's negotiations with the Department did I or any other Anthropic employee state that the company wanted that kind of role," she wrote [1]. She noted that concerns about Anthropic potentially disabling technology mid-operation were never raised during negotiations but appeared for the first time in court filings [1].

Ramasamy, who previously spent six years at Amazon Web Services managing AI deployments for government customers, directly addressed claims that Anthropic could sabotage AI tools during military operations. "Anthropic has never had the ability to cause Claude to stop working, alter its functionality, shut off access, or otherwise influence or imperil military operations," he wrote in his declaration [3]. Once the generative AI model Claude is deployed inside a government-secured, air-gapped system operated by a third-party contractor, Anthropic has no access to it [1].
Source: The Hill

"Anthropic does not maintain any back door or remote 'kill switch,'" Ramasamy explained, adding that the company cannot log into Department of Defense systems to modify or disable models during operations [3]. Any updates would require explicit approval from the Pentagon and action to install, and Anthropic cannot access prompts or data that military users enter into the system [1].

Judge Lin indicated she is considering whether the ban on Anthropic AI tools constitutes a First Amendment violation, describing the government's actions as "troubling" and noting they "don't seem to be tailored to stated national security concerns" [2]. During the hearing, Trump administration attorney Eric Hamilton acknowledged that Pete Hegseth has no legal authority to bar military contractors from using Anthropic for work unrelated to the Department of Defense, despite the Defense Secretary posting on X that "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic" [2].
Source: Quartz

Anthropic has argued that the dispute has led to almost a month of "profound uncertainty" for commercial partners and "irreparable and mounting" damages [5]. The company estimates that even a narrow interpretation of the order could put hundreds of millions of dollars in annual revenue at risk [5]. A broader application might cut the company off from vital data center infrastructure provided by companies like Amazon and Microsoft [5].

Judge Lin stated she would issue a decision on Anthropic's request for a temporary injunction within the coming days [4]. The judge can pause the designation only if she determines Anthropic is likely to win the overall case [2]. Meanwhile, the Pentagon has said it is working to replace Anthropic technologies over the coming months with alternatives from Google, OpenAI, and xAI [2]. A ruling in a second case filed at the federal appeals court in Washington, DC, is expected soon without a hearing [2].

The case has sparked broader conversation about how artificial intelligence is being deployed by armed forces and whether Silicon Valley companies should defer to the government in determining how technology they develop is used for military purposes [2]. Free speech protections and the limits of executive power in punishing companies for their public positions remain central questions as this unprecedented legal battle unfolds [5].

Summarized by Navi