6 Sources
[1]
Trump administration says Anthropic refusal was 'not protected speech' in US court
In a new filing, the Trump administration backs Hegseth's designation
* Pentagon defends blacklisting Anthropic as lawful national security move
* Company's lawsuit claims designation violates free speech and due process
* Court battle looms as experts say Anthropic may have a strong case

The Trump administration said the Pentagon did not violate Anthropic's speech protections under the US Constitution's First Amendment when it blacklisted the AI company earlier this year. In a court filing submitted earlier this week, the administration backed Defense Secretary Pete Hegseth's designation of Anthropic as a national security supply chain risk and deemed the blacklisting justified and lawful, Reuters reported. Over the last couple of months, Anthropic, the company behind the Claude artificial intelligence models, had been negotiating with the Pentagon over lucrative deals that would see Claude and other tools integrated into various US Department of Defense (DOD) projects.

Responding with a lawsuit

The negotiations allegedly broke down after Anthropic declined to remove the guardrails designed to prevent its technology from being used for autonomous weapons or domestic surveillance. Soon after, the company was designated a national security supply chain risk, and Anthropic responded with a lawsuit. In the suit, filed on March 9, the AI company said the "unprecedented and unlawful" designation violated its free speech and due process rights. It also argued that the designation broke federal law requiring agencies to follow certain procedures when making these kinds of decisions. "It was only when Anthropic refused to release the restrictions on the use of its products -- which refusal is conduct, not protected speech -- that the President directed all federal agencies to terminate their business relationships with Anthropic," the filing says.
"No one has purported to restrict Anthropic's expressive activity," the filing stated. Anthropic has asked the California federal court to block the Pentagon's decision until a ruling is made, and Reuters reports that some legal experts believe the company has a strong case. The company responded to the filing, saying: "Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners." Via Reuters
[2]
US govt says Anthropic AI an 'unacceptable risk' to military
San Francisco (United States) (AFP) - Artificial intelligence company Anthropic posed an "unacceptable risk" to military supply chains, the US government insisted Tuesday, as it defends against the tech firm's legal challenge to that designation. Anthropic's Claude AI model has been in the spotlight in recent weeks, both for its alleged use in identifying targets for US bombing in Iran and for the company's refusal to allow its systems to be used to power mass surveillance in the United States or lethal fully autonomous weapons systems. Justifying its decision to cut ties with Anthropic in response to a legal complaint from the firm, the Pentagon -- dubbed the Department of War (DoW) by the Trump administration -- said it "became concerned that allowing Anthropic continued access to DoW's technical and operational warfighting infrastructure would introduce unacceptable risk into DoW supply chains," in a court document seen by AFP. "AI systems are acutely vulnerable to manipulation," the government added in the filing to a California federal court. "Anthropic could attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations, if Anthropic -- in its discretion -- feels that its corporate 'red lines' are being crossed," it said. Anthropic's refusal to agree that its AI tech could be deployed by the military for "any lawful use" therefore posed an "unacceptable risk to national security," the document read. "Anthropic's behavior more generally caused the Department to question whether Anthropic represented a trusted partner," the government said. Classification as a "supply chain risk," which Anthropic has challenged in a case against the Pentagon and other arms of the federal government, in theory means that all government suppliers would be barred from doing business with the company.
The designation is typically reserved for organizations from foreign adversary countries, such as Chinese tech giant Huawei. Other major American tech firms such as Microsoft, which itself both uses Anthropic's Claude model and supplies the US military, have weighed in on the AI company's side. "This is not the time to put at risk the very AI ecosystem that the administration has helped to champion," Microsoft said in an amicus brief filed with the court last week.
[3]
Trump administration argues Pentagon's Anthropic ban is justified, lawful
The Trump administration is doubling down on its decision to cut ties with Anthropic, arguing in a new court filing that the move is "lawful and reasonable" and not a violation of free speech, as the artificial intelligence (AI) firm alleges. The Department of Justice (DOJ), in an anticipated court filing Tuesday, urged a federal judge in California to reject Anthropic's request for a preliminary injunction on the Pentagon's labeling of the AI company as a supply chain risk. DOJ attorneys said Anthropic's terms of service "have become unacceptable to the executive branch," after the AI firm pressed for specific restrictions on the use of its technology for autonomous weapons and domestic mass surveillance. The Pentagon maintains the federal government can use its AI services for "any lawful purpose." "If it were any other way, an AI provider might gain influence over how DOW conducts operations and which missions it chooses," the DOJ wrote, adding that throughout negotiations, "Anthropic's behavior more generally caused the Department to question whether Anthropic represented a trusted partner with whom the department was willing to contract in this highly sensitive area." The DOJ suggested Anthropic could try to disable its technology or "preemptively alter" the behavior of its model during warfighting, stating the Pentagon sees that as an "unacceptable risk to national security." Anthropic CEO Dario Amodei said during negotiations that the company understands the DOD, "not private companies, makes military decisions." "We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner," Amodei said late last month. After negotiations fell apart earlier this month, Anthropic filed suit against the Trump administration over the supply chain risk designation, alleging the Pentagon retaliated against the company for its viewpoints on AI safety and the limitations of its AI models.
The DOJ suggested Anthropic's First Amendment claim is unlikely to succeed, arguing the company's refusal to accept the government's contractual term is conduct, not speech. "To conclude otherwise 'would extend First Amendment protection to every commercial transaction on the ground that it communicates to the customer information about a product or service,'" the DOJ filing stated. The federal government also maintained Anthropic's speech was not a "motivating factor" for the actions, as Anthropic argues. "Even assuming a retaliatory motive, the government would have acted the same," the filing stated, adding later, "'The challenged actions have a legitimate ground in national security concerns, quite apart from any retaliatory animus.'" Anthropic is asking a federal court in California to reverse the Pentagon's decision and an appeals court in D.C. to review the designation. The DOJ said Defense Secretary Pete Hegseth's determination is not contrary to the law and is within the covered scope of the secretary's authority. When reached for comment Wednesday, a company spokesperson told The Hill, "We are reviewing the government's filing and look forward to presenting our response to the court." "As we shared last week, seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners," the spokesperson added.
[4]
'Nobody really knows:' Pentagon clash with Anthropic throws agencies into limbo
Federal agencies and their contractors have been left in limbo as the Trump administration moves to cut off Anthropic from government systems without formal orders amid a brewing legal battle with the AI company. As agency leaders grapple with informal directives from President Trump and the Pentagon, the situation is exposing the challenges and costs of removing a major AI vendor from federal supply chains after an aggressive push to embed the technology in the first place. Nearly three weeks have passed since Trump ordered federal agencies to "immediately cease" using Anthropic's technology, but various federal agencies have yet to receive formal guidance other than Trump's social media post on how to proceed, according to conversations with multiple federal technology leaders. In turn, the response has varied across the government, with agencies like the General Services Administration and the Department of Health and Human Services abruptly removing Claude within hours of Trump's directive. Other agencies say they are still reviewing Anthropic's use, but the product may still be available. Trump's directive followed a breakdown in negotiations between the Pentagon and the AI company earlier this month over disagreements on safety guardrails. Defense Secretary Pete Hegseth separately deemed Anthropic a supply chain risk, which will be fought over in federal courts later this month.

Federal employees get few answers

For staffers at agencies that already moved to eliminate Anthropic, the transition has been confusing and abrupt, federal tech leaders told The Hill. Anthropic was first approved for classified use in government agencies through a partnership with Palantir nearly two years ago. Since then, the Trump administration has pushed federal agencies to use AI in workflows, leading to a rapid adoption of technology, including Anthropic's Claude models, across defense and civilian spaces.
At HHS, thousands of employees using Anthropic products had just a few hours to save their chats and coding projects, according to an agency leader. "Staff were really upset with how quickly" the shutdown happened, the leader said, adding "there was no spin-down time." "People lost their chats, people lost any coding that they were doing in any projects. Are there equivalent tools that they can use? Sure, but they had been working in a secure environment," the leader added. "It's a loss of a lot of work...it was a waste of government resources." AI leaders across HHS received notice about the pending elimination less than an hour after Trump posted his directive on Truth Social, according to a screenshot obtained by The Hill. That notice, sent by HHS Deputy Chief AI Officer Arma Sharma, said Claude Enterprise would be disabled "in alignment" with Trump's directive. Days later, another message clarified that enterprise access to Claude was "temporarily disabled" at the agency, but that HHS's office of the chief AI officer was "awaiting more detailed federal guidance regarding the future use of applications and systems that leverage Claude or other Anthropic technologies." Staff were told more direction would come pending "more definitive guidance." HHS confirmed ChatGPT Enterprise and Google Gemini remain available for staff. Reports circulated soon after that the White House was floating an executive order to eliminate Anthropic's AI from the government, though this has not come to fruition. The General Services Administration, the agency responsible for most federal technology procurement, is also proposing adding a clause to existing and new GSA schedule contracts that would confirm the government's right to use an AI system "as necessary for any lawful Government purpose."
The clause would apply to the AI firms, as well as subcontractors or vendors, and is similar to the demands of the Pentagon, which maintains it should be able to use AI technologies for "any lawful purpose" in the military. GSA also removed Anthropic from its governmentwide AI testing tool, USAi, and terminated its OneGov deal with the firm, which had offered agencies the chance to use the company's tech at near-zero cost. At another civilian agency, one AI advisor told The Hill there was "a tremendous lack of information" and "nobody has clear answers," even as agency leaders told workers to stop using Anthropic's technology. The advisor, who spoke on the condition of anonymity to speak freely, compared the confusion to the chaotic takeover of Trump's so-called Department of Government Efficiency, which sparked more questions than answers for federal workers last year. Civilian AI leaders, according to the advisor, are still unsure whether the order applies to all of the federal government, including contractors who may use Anthropic in their own workflows but not directly in their work for agencies. "It's a lot of complicated questions that nobody really knows the answer to," the leader said, adding their agency told them to "stop using" Anthropic products and that they will "get back" to them with more details. One federal technology leader familiar with procurement suggested "some political [appointees] seem to be proactively ordering staff based on social media, but that's up to them."

Some agencies have yet to clarify

Meanwhile, some agencies are holding their breath, at least publicly. It is unclear how critical missions, such as nuclear weapons research, will be impacted by the situation. The Department of Energy's National Nuclear Security Administration and national labs have partnerships with Anthropic to work on nuclear weapon risk research and assist scientists, respectively.
When asked how the agency plans to proceed, a DOE spokesperson said Tuesday the agency is "reviewing all existing contracts and uses of Anthropic technology," and is "committed to ensuring" the technology it uses "serves the public interest" and "protects America's energy and national security." Anthropic, which filed suit against the Trump administration over the supply chain risk designation, argues the determination should impact only Claude customers on contracts with the Department of Defense, not all government customers of Claude. This distinction may be settled in the court case, and Anthropic's lawyers noted in their complaint last week that agencies had already taken action despite the uncertainties. "Throughout, the federal government has never once expressed concerns about Anthropic's security or Claude's competencies," attorneys wrote, pointing to Anthropic's FedRAMP High authorization through Palantir. The Treasury Department and the Secret Service also stopped the use of Claude, FedScoop reported last week. Other agencies, including the Department of Veterans Affairs and the Office of Personnel Management, which listed Anthropic products on their 2025 AI use case inventories, did not respond by publication time on their plans, while NASA referred The Hill to the Justice Department. OPM's updated inventory, posted last week, shows Anthropic was removed from an earlier version.

Concerns over cost

Technology leaders both in and outside of government are also sounding the alarm on the costs of this termination, and what it means for the taxpayer at the end of the day. Franklin Turner, the co-chair of McCarter & English's Government Contracts practice group, predicted there will be a cost impact to the government. Should a subcontractor say they are using Anthropic, agencies "would have to terminate that subcontract" and "go out and find a new one," Turner told The Hill.
"That carries with it a cost and that's a cost that wasn't foreseen at the time you prepared and submitted your bid," he added. Chris Griesedieck, a government contracts attorney at Venable LLC, echoed this sentiment, telling The Hill contractors may also be willing to make the modification to comply, but "[the contractor] reserves the right to an equitable adjustment if this is going to cost me a bunch of extra money." The HHS leader, also speaking on the condition of anonymity, added the agency's abrupt removal of Anthropic "wasted taxpayer dollars." "Phasing out of it would have been annoying, but it wasn't, it was shut down immediately and everybody's work was lost," the leader said, adding, "Agencies who built programmatic systems on it, they're gonna have a ton of work."
[5]
Trump Defends Pentagon Ban On Anthropic, Calls It Legal And Justified
The Trump administration defended the Pentagon's move to blacklist Anthropic, arguing the decision was both legal and justified. The Trump administration argued in a court filing that Anthropic's First Amendment claims are "unlikely to succeed," saying the government's actions were driven by contract issues and national security considerations rather than any form of retaliatory conduct. "It was only when Anthropic refused to release the restrictions on the use of its products -- which refusal is conduct, not protected speech -- that the President directed all federal agencies to terminate their business relationships with Anthropic," the court document stated. "Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners," Anthropic wrote to Al Jazeera. Anthropic is understood to be reviewing the government's filing. Earlier today, it was reported that nearly 150 retired judges have stepped into the high-stakes legal battle, backing Anthropic as it challenges a U.S. defense designation that could damage its broader business. The judges emphasized that Anthropic is not seeking defense contracts. "No one is trying to force the Department to contract with Anthropic," they wrote. They added, "Instead, Anthropic is asking only that it not be punished on its way out the door." Earlier this month, U.S. Central Command reportedly used Anthropic's Claude AI in a Trump-era air operation against Iran, despite a federal ban, supporting intelligence and target planning. The military has previously used Claude in high-profile missions, including the operation that captured Venezuelan President Nicolas Maduro.
[6]
Hegseth wants Pentagon to dump Anthropic's Claude, but military users say it's not so easy
March 19 - Pentagon staffers, former officials and IT contractors who work closely with the U.S. military say they are reluctant to give up Anthropic's AI tools, which they view as superior to alternatives, despite orders to remove them. After a dispute between Anthropic and the Pentagon over guardrails for how the military could use its artificial intelligence tools, Defense Secretary Pete Hegseth designated the company a supply-chain risk on March 3, barring its use by the Pentagon and its contractors following a six-month phase-out. But the move is running into resistance, with some military users dragging their feet and others preparing to revert to Anthropic's platform in anticipation of the dispute being resolved. "Career IT people at DoD hate this move because they had finally gotten operators comfortable using AI," said one IT contractor. "They think it's stupid." The contractor said Anthropic's Claude AI model "is the best," while xAI's Grok often produced inconsistent answers to the same query.

RECERTIFYING SYSTEMS COULD TAKE MONTHS

The complaints suggest uprooting Anthropic from the Pentagon's networks will be neither quick nor painless. One contractor said recertifying systems that run on Anthropic's products for military use could take months. Some Pentagon officials, staff and contractors spoke anonymously because they were not authorized to speak publicly. The Defense Department, Anthropic and xAI did not respond to requests for comment. AI tools have become essential for the U.S. military, which uses them for tasks ranging from targeting weapons and helping plan operations to handling classified material and analyzing information. Anthropic announced a $200 million defense contract in July 2025 and quickly became embedded in the military's workflow. Claude became the first AI model approved to operate on classified military networks, and officials familiar with its use said adoption was strong.
Within the federal government, Anthropic's models were widely viewed as more capable than rival offerings. Reuters has previously reported that the Pentagon used Claude tools to support U.S. military operations during the conflict with Iran, and sources said the technology remains in use despite the blacklisting. One expert described that as "the clearest signal" of how highly the Pentagon values the tool. "It's a substantial cost to replace those models with alternatives," said Joe Saunders, the CEO of government contractor RunSafe Security. Saunders added that alternative systems would have to go through a lengthy process to be recertified for use on classified or military networks. In the case of an existing system being replaced with a new one, certification could take 12 to 18 months, he said. "It's not just costly, it's a loss of productivity," added Saunders, who helped the military incorporate AI chatbots. Orders to stop using Claude are filtering through the Pentagon. One official said staff are complying because "no one wants to end their career over this," but described the shift as wasteful. Tasks previously handled by Claude, such as querying large datasets for information, are in some cases now being done manually with tools such as Microsoft Excel, the official said. Anthropic's Claude Code tool was widely used within the Pentagon to write software code, several of the people said. Losing that tool has left developers frustrated, another senior official said, while adding that they should not rely on a single tool. For example, Palantir's Maven Smart System - a software platform that supplies militaries with intelligence analysis and weapons targeting - uses multiple prompts and workflows that were built using Anthropic's Claude Code, according to two people familiar with the matter. Palantir, which holds Maven-related contracts with the Defense Department and other U.S.
national security agencies that have a potential value of more than $1 billion, will have to replace Claude with another AI model and rebuild parts of its software, one of the sources said. Some staff are "slow-rolling" their replacement of Claude because they are actively using it to create workflows, which are series of automated tasks, a Pentagon technologist said. Developers are frustrated because shifting to new AI agents would mean losing the agents they created to sift through vast amounts of data. The Defense Department has ordered contractors, including major defense firms, to assess and report their reliance on Anthropic products and to begin winding them down. Officials and contractors say they now face a strategic question: whether to pivot quickly to OpenAI, Google or xAI, or to unwind Anthropic in a way that allows for a rapid return if the Pentagon reinstates it. One chief information officer at a federal agency said it plans to slow-roll the phase-out, betting that the government and Anthropic will reach an agreement before the six-month deadline. "What we are seeing play out here is the tension of adoption, both inside the Pentagon as well as the political level," said Roger Zakheim, director of the Ronald Reagan Presidential Foundation and Institute. (Reporting by Mike Stone, Alexandra Alper and Raphael Satter in Washington; Additional reporting by David Jeans in New York; Editing by Chris Sanders, Rod Nickel)
The Trump administration backed the Pentagon's blacklisting of Anthropic in a new court filing, arguing the AI company's refusal to remove safety restrictions is conduct, not protected speech. The dispute centers on Anthropic's guardrails against autonomous weapons and domestic surveillance, with federal agencies left scrambling after abrupt orders to cease using Claude AI despite lacking formal guidance.
The Trump administration filed a robust defense of the Pentagon ban on Anthropic, arguing that Defense Secretary Pete Hegseth's designation of the AI company as a supply chain risk was both lawful and justified [1]. In court documents submitted this week, the Department of Justice maintained that Anthropic posed an "unacceptable risk" to military supply chains after the company refused to remove AI safety guardrails that prevent its Claude AI models from being used for autonomous weapons or domestic surveillance [2]. The legal dispute erupted after negotiations between Anthropic and the Pentagon broke down earlier this month, with the company filing a lawsuit on March 9 alleging the designation violated its free speech and due process rights under the First Amendment [3].

The Trump administration's filing directly challenged Anthropic's assertion that the blacklisting constitutes a free speech violation. "It was only when Anthropic refused to release the restrictions on the use of its products -- which refusal is conduct, not protected speech -- that the President directed all federal agencies to terminate their business relationships with Anthropic," the court document stated [1]. The Department of Justice argued that the company's First Amendment claims are "unlikely to succeed," emphasizing that the government's actions stemmed from national security concerns rather than retaliatory conduct [5]. The Pentagon maintained it must be able to use AI services for "any lawful purpose" and expressed concern that Anthropic "could attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations" [2].

Nearly three weeks after Trump ordered federal agencies to "immediately cease" using Anthropic's technology via a Truth Social post, government departments remain in limbo without formal guidance beyond the social media directive [4]. The General Services Administration and the Department of Health and Human Services abruptly removed Claude within hours of Trump's directive, leaving thousands of employees scrambling. At HHS, staff received less than an hour's notice to save their chats and coding projects before access was terminated [4]. One HHS agency leader told The Hill that "people lost their chats, people lost any coding that they were doing in any projects," calling it "a waste of government resources." Federal AI leaders across civilian agencies reported "a tremendous lack of information," with "nobody really knows" becoming a common refrain among officials trying to navigate the abrupt transition [4].

The supply chain risk designation, typically reserved for organizations from foreign adversary countries such as Chinese tech giant Huawei, theoretically bars all government suppliers from doing business with Anthropic [2]. Major American tech firms have weighed in on Anthropic's side, with Microsoft filing an amicus brief stating, "This is not the time to put at risk the very AI ecosystem that the administration has helped to champion" [2]. Nearly 150 retired judges also backed Anthropic, emphasizing the company "is not seeking defense contracts" and "is asking only that it not be punished on its way out the door" [5]. Anthropic CEO Dario Amodei maintained that the company understands the Department of Defense, "not private companies, makes military decisions," and has "never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner" [3]. Legal experts suggest Anthropic may have a strong case as the battle heads to California federal court, where the company has requested a preliminary injunction to block the Pentagon's decision [1].
Summarized by Navi
04 Mar 2026 • Policy and Regulation
