3 Sources
[1]
Defense secretary Pete Hegseth designates Anthropic a supply chain risk
Nearly two hours after President Donald Trump announced on Truth Social that he was banning Anthropic products from the federal government, Secretary of Defense Pete Hegseth took it one step further and announced that he was now designating the AI company as a "supply-chain risk". After a week of tense negotiations over the company's acceptable use policies, the Pentagon gave Anthropic an ultimatum: agree by Friday, 5:30 PM EST, to let the Pentagon use Claude for "all legal purposes," including for autonomous lethal weapons without human oversight and mass surveillance, or be designated a supply-chain risk. The designation, which is typically used for companies with ties to foreign governments that pose national security risks to the United States, will bar any company that uses Anthropic products from working with the Department of Defense. In a tweet posted just after 5PM ET, Hegseth broadened the designation to encompass companies doing "any commercial activity with Anthropic," reiterating Trump's mandate that companies had six months to divest themselves from Anthropic products. "Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic's models for every LAWFUL purpose in defense of the Republic," he wrote. "Instead, @AnthropicAI and its CEO @DarioAmodei, have chosen duplicity. Cloaked in the sanctimonious rhetoric of 'effective altruism,' they have attempted to strong-arm the United States military into submission - a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives." Hegseth, as Secretary of Defense, has the ability to label a company a "supply-chain risk" at his own discretion. But the decision comes after the Pentagon made several other attempts to compel Anthropic to let them use Claude as they wished, including the threat to invoke the Defense Production Act.
[2]
Hegseth declares Anthropic a supply chain risk, barring military contractors from doing business with AI giant
Joe Walsh is a senior editor for digital politics at CBS News. Joe previously covered breaking news for Forbes and local news in Boston. Defense Secretary Pete Hegseth deemed artificial intelligence firm Anthropic a "supply chain risk to national security" on Friday, following days of increasingly heated public conflict over the company's effort to place guardrails on the Pentagon's use of its technology. Hegseth declared on X that effective immediately, "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." The decision could have a wide-ranging impact, given the sheer number of companies that contract with the Pentagon. "America's warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final," Hegseth wrote. President Trump announced earlier Friday that all federal agencies must "immediately" stop using Anthropic, though the Defense Department and certain other agencies can continue using its AI technology for up to six months while transitioning to other services. CBS News has reached out to Anthropic for comment. The decision to cut off Anthropic came after a dispute with the Pentagon that highlighted sweeping disagreements about the role of AI in national security and the potential risks that the powerful technology could pose. The company -- which is the only AI firm whose model is deployed on the Pentagon's classified networks -- has sought guardrails that prevent its technology from being used to conduct mass surveillance of Americans or carry out military operations without human approval. But the Pentagon insisted any deal should allow the use of Anthropic's Claude model for "all lawful purposes." The Pentagon had given Anthropic a deadline of Friday at 5:01 p.m. to either reach an agreement or lose out on its lucrative contracts with the military.
The military's position is that it's already illegal for the Pentagon to conduct mass surveillance of Americans, and internal policies restrict the military from using fully autonomous weapons. As talks between the two sides broke down this week, Pentagon officials publicly accused the company of seeking to impose its own views onto the military. Hegseth called Anthropic "sanctimonious" and arrogant on Friday, and accused it of trying to "strong-arm the United States military into submission." "Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable," Hegseth alleged. But Anthropic CEO Dario Amodei has argued that guardrails are necessary because Claude is not reliable enough to power fully autonomous weapons, and because a powerful AI model could raise serious privacy concerns. He says the company understands that military decisions are made by the Pentagon and has never tried to limit the use of its technology "in an ad hoc manner." "However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values," Amodei said in a statement Thursday. "Some uses are also simply outside the bounds of what today's technology can safely and reliably do." Amodei has been outspoken for years about the potential risks posed by unchecked AI technology, and has backed calls for safety and transparency regulations. On Thursday, the eve of the military's deadline to reach a deal, the Pentagon's chief technology officer Emil Michael told CBS News that the Pentagon had made concessions, offering written acknowledgements of the federal laws and internal military policies that restrict mass surveillance and autonomous weapons. "At some level, you have to trust your military to do the right thing," said Michael, who also noted, "we'll never say that we're not going to be able to defend ourselves in writing to a company." Anthropic called that offer inadequate.
A company spokesperson said the new language was "paired with legalese that would allow those safeguards to be disregarded at will."
[3]
Hegseth says Pentagon designating Anthropic as supply chain risk after Trump bans AI firm
Defense Secretary Pete Hegseth said the Pentagon is designating Anthropic as a supply chain risk shortly after President Trump directed all federal agencies to cease using the company's technology amid an ongoing feud with the Defense Department (DOD). "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic," Hegseth said in a post on X. The Pentagon chief added that the company will continue to provide its services to the Pentagon for no more than half a year to "allow for a seamless transition to a better and more patriotic service." "Anthropic's stance is fundamentally incompatible with American principles," Hegseth said. "Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered." "America's warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final," he added.
Defense Secretary Pete Hegseth designated Anthropic as a supply chain risk, barring any Pentagon contractor from doing business with the AI company. The move follows Anthropic's refusal to allow unrestricted military use of its Claude AI model for autonomous weapons without human oversight and for mass surveillance, ending a week of tense negotiations.
Defense Secretary Pete Hegseth designated Anthropic as a supply chain risk on Friday, effectively barring any company working with the Department of Defense from conducting commercial activity with the AI firm [1]. The decision came nearly two hours after Donald Trump announced on Truth Social that he was ordering federal agencies to ban Anthropic products, though the Pentagon and certain agencies can continue using the technology for up to six months while transitioning to alternative services [2].
Source: The Verge
The supply chain risk designation, typically reserved for companies with foreign government ties that pose national security threats, represents a dramatic escalation in a week-long dispute over acceptable use policies for Anthropic's Claude AI model [1]. Pete Hegseth declared in a post on X that "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic," a move with potentially wide-ranging impact given the thousands of military contractors [2].

The Pentagon gave Anthropic a Friday 5:30 PM EST deadline during negotiations: agree to let the military use Claude for "all legal purposes," including autonomous lethal weapons without human oversight and mass surveillance, or face designation as a supply chain risk [1]. Anthropic, the only AI firm whose model is deployed on the Pentagon's classified networks, sought guardrails preventing its technology from conducting mass surveillance of Americans or carrying out military operations without human approval [2].

Hegseth accused Anthropic and CEO Dario Amodei of "duplicity," claiming they "attempted to strong-arm the United States military into submission" through what he called "cowardly corporate virtue-signaling that places Silicon Valley ideology above American lives" [1]. The Defense Secretary insisted the military must have "full, unrestricted access to Anthropic's models for every LAWFUL purpose in defense of the Republic" [1].
Source: CBS
Dario Amodei defended the company's position in a Thursday statement, arguing that guardrails are necessary because Claude is not reliable enough to power fully autonomous weapons and could raise serious privacy concerns [2]. "In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values," Amodei said, adding that "some uses are also simply outside the bounds of what today's technology can safely and reliably do" [2].

The Pentagon's position is that existing federal laws already prohibit mass surveillance of Americans, and internal policies restrict the military from using fully autonomous weapons [2]. Pentagon chief technology officer Emil Michael told CBS News on Thursday that the military had offered written acknowledgements of these laws and policies during negotiations. "At some level, you have to trust your military to do the right thing," Michael said, noting "we'll never say that we're not going to be able to defend ourselves in writing to a company" [2].

An Anthropic spokesperson called the Pentagon's offer inadequate, saying the new language was "paired with legalese that would allow those safeguards to be disregarded at will" [2]. The breakdown in negotiations prompted the Pentagon to consider invoking the Defense Production Act before ultimately choosing the supply chain risk designation [1].
The confrontation sets a precedent for how the federal government will engage with Big Tech companies that seek to impose restrictions on military use of AI models. Companies doing business with the Pentagon now face a choice: divest from Anthropic within six months or lose lucrative military contracts [1]. Hegseth emphasized that "Anthropic's stance is fundamentally incompatible with American principles" and that the company's "relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered" [3].

The dispute highlights fundamental disagreements about the role of AI in national security and whether tech companies should have authority to set boundaries on military applications. As warfighters increasingly rely on AI capabilities, questions about lethal weapons systems, human oversight requirements, and the balance between innovation and safety will continue to shape policy debates. The standoff also raises concerns about whether the Pentagon's aggressive stance might discourage other AI firms from partnering with the military, potentially limiting access to cutting-edge technology at a time when competitors like China are racing ahead in AI development.
Summarized by Navi