[1]
Pete Hegseth wants unfettered access to Anthropic's models for the military
US Defense Secretary Pete Hegseth has threatened to cut Anthropic from his department's supply chain unless it agrees to sign off on its technology being used in all lawful military applications by Friday. The threat is the latest escalation in a feud between Anthropic and the department, triggered by the AI group's refusal to give unfettered access to its models for classified military use, including domestic surveillance and deadly missions with no direct human control. Hegseth summoned Anthropic chief executive Dario Amodei to Washington for a meeting on Tuesday. During tense talks, the defense secretary threatened to cut the company out of the department's supply chain or to invoke the Defense Production Act, a cold war-era measure enabling the president to control domestic industry in the interest of national defense, said a person with knowledge of the talks. Anthropic had until 5.01 pm on Friday "to get on board or not" with Hegseth's terms, said a senior Pentagon official. "If they don't get on board, [Hegseth] will ensure the Defense Production Act is invoked on Anthropic, compelling them to be used by the Pentagon regardless of if they want to or not," the official said. The defense department would also label Anthropic "a supply chain risk." "You can't lead tactical ops by exception," the official added, claiming "this has nothing to do with mass surveillance and autonomous weapons being used." Anthropic said it had continued with "good-faith conversations about our usage policy to ensure Anthropic can continue to support the government's national security mission in line with what our models can reliably and responsibly do." The $380 billion start-up could take legal action if Hegseth follows through on his ultimatum, according to people familiar with the matter. The disagreement threatens to widen a fault line between the White House and one of the US's leading AI labs. Anthropic has pushed for tighter regulation of AI and Amodei has repeatedly warned of the risks of the technology. Meanwhile, President Donald Trump and his advisers have promoted a light-touch regulatory framework. Trump's AI tsar David Sacks has derided Anthropic as "woke" and last October accused the $380 billion company of "running a sophisticated regulatory capture strategy based on fear-mongering." Those attacks echo criticisms from Elon Musk, who Sacks last year described as "a good friend." Sacks worked with Musk at PayPal and has invested in xAI and other Musk groups. Sacks divested those positions when he was appointed to his government role. But the Pentagon has relied on Anthropic for AI technology. The San Francisco-based company's Claude tool has until recently been the only model working on classified missions as a result of its partnership with Palantir. Hegseth is negotiating with AI labs, including Google, OpenAI and Elon Musk's xAI, to replace Anthropic and integrate their technology into classified military systems. The senior Pentagon official said Musk's Grok "is on board with being used in a classified setting, while the rest of the companies are close." Cutting Anthropic from the Pentagon supply chain is an extreme measure typically reserved for companies linked to foreign adversaries. But at the same time, deploying the DPA would suggest Anthropic's technology is critical to Pentagon operations. Invoking the DPA would allow the Pentagon to make use of Anthropic's tools without an agreement. 
The act gives the administration the ability to "allocate materials, services and facilities" for national defense. The Trump and Biden administrations used the act to address a shortage of medical supplies during the coronavirus pandemic, and Trump has also used the DPA to order an increase in the US's production of critical minerals. The Pentagon has pushed for open-ended use of AI technology, aiming to expand the set of tools at its disposal to counter threats and to undertake military operations. The department released its AI strategy last month, with Hegseth saying in a memo that "AI-enabled warfare and AI-enabled capability development will redefine the character of military affairs over the next decade." He added the US military "must build on its lead" over foreign adversaries to make soldiers "more lethal and efficient," and that the AI race was "fueled by the accelerating pace" of innovation coming from the private sector. Anthropic has expressed particular concern about its models being used for lethal missions that do not have a human in the loop, arguing that state of the art AI models are not reliable enough to be trusted in those contexts, said people familiar with the negotiations. It had also pushed for new rules to govern the use of AI models for mass domestic surveillance, even where that was legal under current regulations, they added. A decision to cut Anthropic from the defense department's supply chain would have significant ramifications for national security work and the company, which has a $200 million contract with the department. It would also have an impact on partners, including Palantir, that make use of Anthropic's models. Claude was used in the US capture of Venezuelan leader Nicolás Maduro in January. That mission prompted queries from Anthropic about the exact manner in which its model was used, said people familiar with the matter. A person with knowledge of Tuesday's meeting said Amodei had stressed to Hegseth that his company had never objected to legitimate military operations. The defense department declined to comment.
[2]
Employees at Google and OpenAI support Anthropic's Pentagon stand in open letter | TechCrunch
Anthropic has reached a stalemate with the United States Department of War over the military's request for unrestricted access to the AI company's technology. But as the Pentagon's Friday afternoon deadline for Anthropic's compliance approaches, over 300 Google employees and over 60 OpenAI employees have signed an open letter urging the leaders of their companies to support Anthropic and refuse this demand for unrestricted use. Specifically, Anthropic stood in opposition to the use of AI for domestic mass surveillance and autonomous weaponry. The open letter's signatories seek to encourage their employers to "put aside their differences and stand together" to uphold the boundaries Anthropic has asserted. "They're trying to divide each company with fear that the other will give in," the letter says. "That strategy only works if none of us know where the others stand." The letter specifically calls on executives at Google and OpenAI to maintain Anthropic's red lines against mass surveillance and fully automated weaponry. "We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands." Leaders at the companies have not yet formally responded to the letter. TechCrunch has reached out to Google and OpenAI for comment. However, informal statements suggest both companies are sympathetic to Anthropic's side of the case. In an interview with CNBC on Friday morning, OpenAI CEO Sam Altman said that he doesn't "personally think the Pentagon should be threatening DPA against these companies." According to a CNN reporter, an OpenAI spokesperson confirmed that the company shares Anthropic's red lines against autonomous weapons and mass surveillance. Google DeepMind has not formally addressed the conflict, but Chief Scientist Jeff Dean, presumably speaking as an individual, did express opposition to mass surveillance by the government. "Mass surveillance violates the Fourth Amendment and has a chilling effect on freedom of expression," Dean wrote on X. "Surveillance systems are prone to misuse for political or discriminatory purposes." According to an Axios report, the military currently can use X's Grok, Google's Gemini, and OpenAI's ChatGPT for unclassified tasks, and has been negotiating with Google and OpenAI to bring their technology into classified work. While Anthropic has an existing partnership with the Pentagon, the AI company has remained firm in maintaining the boundary that its AI be used for neither mass domestic surveillance nor fully autonomous weaponry. Defense Secretary Pete Hegseth told Anthropic CEO Dario Amodei that if his company doesn't concede, the Pentagon will either declare Anthropic a "supply chain risk" or invoke the Defense Production Act (DPA) to force the company to comply with military demands. In a statement on Thursday, Amodei maintained his company's position. "These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security," the statement reads. "Regardless, these threats do not change our position: we cannot in good conscience accede to their request."
[3]
Anthropic's safety-first AI collides with the Pentagon as Claude expands into autonomous agents
As Anthropic releases its most autonomous agents yet, a mounting clash with the military reveals the impossible choice between global scaling and a "safety first" ethos
On February 5 Anthropic released Claude Opus 4.6, its most powerful artificial intelligence model. Among the model's new features is the ability to coordinate teams of autonomous agents -- multiple AIs that divide up the work and complete it in parallel. Twelve days after Opus 4.6's release, the company dropped Sonnet 4.6, a cheaper model that nearly matches Opus's coding and computer skills. In late 2024, when Anthropic first introduced models that could control computers, they could barely operate a browser. Now Sonnet 4.6 can navigate Web applications and fill out forms with human-level capability, according to Anthropic. And both models have a working memory large enough to hold a small library. Enterprise customers now make up roughly 80 percent of Anthropic's revenue, and the company closed a $30-billion funding round last week at a $380-billion valuation. By every available measure, Anthropic is one of the fastest-scaling technology companies in history. But behind the big product launches and valuation, Anthropic faces a severe threat: the Pentagon has signaled it may designate the company a "supply chain risk" -- a label more often associated with foreign adversaries -- unless it drops its restrictions on military use. Such a designation could effectively force Pentagon contractors to strip Claude from sensitive work. Tensions boiled over after January 3, when U.S. special operations forces raided Venezuela and captured Nicolás Maduro. The Wall Street Journal reported that forces used Claude during the operation via Anthropic's partnership with the defense contractor Palantir -- and Axios reported that the episode escalated an already fraught negotiation over what, exactly, Claude could be used for. When an Anthropic executive reached out to Palantir to ask whether the technology had been used in the raid, the question raised immediate alarms at the Pentagon. (Anthropic has disputed that the outreach was meant to signal disapproval of any specific operation.) Secretary of Defense Pete Hegseth is "close" to severing the relationship, a senior administration official told Axios, adding, "We are going to make sure they pay a price for forcing our hand like this." The collision exposes a question: Can a company founded to prevent AI catastrophe hold its ethical lines once its most powerful tools -- autonomous agents capable of processing vast datasets, identifying patterns and acting on their conclusions -- are running inside classified military networks? Is a "safety first" AI compatible with a client that wants systems that can reason, plan and act on their own at military scale? Anthropic has drawn two red lines: no mass surveillance of Americans and no fully autonomous weapons. CEO Dario Amodei has said Anthropic will support "national defense in all ways except those which would make us more like our autocratic adversaries." Other major labs -- OpenAI, Google and xAI -- have agreed to loosen safeguards for use in the Pentagon's unclassified systems, but their tools aren't yet running inside the military's classified networks.
The Pentagon has demanded that AI be available for "all lawful purposes." The friction tests Anthropic's central thesis. The company was founded in 2021 by former OpenAI executives who believed the industry was not taking safety seriously enough. They positioned Claude as the ethical alternative. In late 2024 Anthropic made Claude available on a Palantir platform with a cloud security level up to "secret" -- making Claude, by public accounts, the first large language model operating inside classified systems. The question the standoff now forces is whether safety-first is a coherent identity once a technology is embedded in classified military operations and whether red lines are actually possible. "These words seem simple: illegal surveillance of Americans," says Emelia Probasco, a senior fellow at Georgetown's Center for Security and Emerging Technology. "But when you get down to it, there are whole armies of lawyers who are trying to sort out how to interpret that phrase." Consider the precedent. After the Edward Snowden revelations, the U.S. government defended the bulk collection of phone metadata -- who called whom, when and for how long -- arguing that these kinds of data didn't carry the same privacy protections as the contents of conversations. The privacy debate then was about human analysts searching those records. Now imagine an AI system querying vast datasets -- mapping networks, spotting patterns, flagging people of interest. The legal framework we have was built for an era of human review, not machine-scale analysis. "In some sense, any kind of mass data collection that you ask an AI to look at is mass surveillance by simple definition," says Peter Asaro, co-founder of the International Committee for Robot Arms Control. Axios reported that the senior official "argued there is considerable gray area around" Anthropic's restrictions "and that it's unworkable for the Pentagon to have to negotiate individual use-cases with" the company. Asaro offers two readings of that complaint. The generous interpretation is that surveillance is genuinely impossible to define in the age of AI. The pessimistic one, Asaro says, is that "they really want to use those for mass surveillance and autonomous weapons and don't want to say that, so they call it a gray area." Regarding Anthropic's other red line, autonomous weapons, the definition is narrow enough to be manageable -- systems that select and engage targets without human supervision. But Asaro sees a more troubling gray zone. He points to the Israeli military's Lavender and Gospel systems, which have been reported as using AI to generate massive target lists that go to a human operator for approval before strikes are carried out. "You've automated, essentially, the targeting element, which is something [that] we're very concerned with and [that is] closely related, even if it falls outside the narrow strict definition," he says. The question is whether Claude, operating inside Palantir's systems on classified networks, could be doing something similar -- processing intelligence, identifying patterns, surfacing persons of interest -- without anyone at Anthropic being able to say precisely where the analytical work ends and the targeting begins. The Maduro operation tests exactly that distinction. "If you're collecting data and intelligence to identify targets, but humans are deciding, 'Okay, this is the list of targets we're actually going to bomb' -- then you have that level of human supervision we're trying to require," Asaro says.
"On the other hand, you're still becoming reliant on these AIs to choose these targets, and how much vetting and how much digging into the validity or lawfulness of those targets is a separate question." Anthropic may be trying to draw the line more narrowly -- between mission planning, where Claude might help identify bombing targets, and the mundane work of processing documentation. "There are all of these kind of boring applications of large language models," Probasco says. But the capabilities of Anthropic's models may make those distinctions hard to sustain. Opus 4.6's agent teams can split a complex task and work in parallel -- an advancement in autonomous data processing that could transform military intelligence. Both Opus and Sonnet can navigate applications, fill out forms and work across platforms with minimal oversight. These features driving Anthropic's commercial dominance are what make Claude so attractive inside a classified network. A model with a huge working memory can also hold an entire intelligence dossier. A system that can coordinate autonomous agents to debug a code base can coordinate them to map an insurgent supply chain. The more capable Claude becomes, the thinner the line between the analytical grunt work Anthropic is willing to support and the surveillance and targeting it has pledged to refuse. As Anthropic pushes the frontier of autonomous AI, the military's demand for those tools will only grow louder. Probasco fears the clash with the Pentagon creates a false binary between safety and national security. "How about we have safety and national security?" she asks.
[4]
Anthropic CEO stands firm as Pentagon deadline looms | TechCrunch
Anthropic CEO Dario Amodei said Thursday that he "cannot in good conscience accede to [the Pentagon's] request" to give the military unrestricted access to its AI systems. "Anthropic understands that the Department of War, not private companies, makes military decisions," Amodei wrote in a statement. "However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today's technology can safely and reliably do." The two cases are: mass surveillance of Americans and fully autonomous weapons with no human in the loop. The Pentagon believes it should be able to use Anthropic's model for all lawful purposes, and that its uses shouldn't be dictated by a private company. Amodei's statement comes less than 24 hours ahead of the Friday 5:01 p.m. deadline Defense Secretary Pete Hegseth has given Anthropic to either acquiesce to his demands or face the consequences. The Department of Defense has attempted to force Amodei's hand by threatening to either label Anthropic a supply chain risk -- a designation reserved for foreign adversaries -- or invoke the Defense Production Act and effectively force the firm to do its bidding. Amodei pointed out the contradiction in those two threats. "One labels us a security risk; the other labels Claude as essential to national security." He added that it's the Department's right to choose contractors most aligned with its vision, "but given the substantial value that Anthropic's technology provides to our armed forces, we hope they reconsider." Anthropic is currently the only frontier AI lab that has classified-ready systems for the military, though the DOD is reportedly getting xAI ready for the job. "Our strong preference is to continue to serve the Department and our warfighters -- with our two requested safeguards in place," Amodei said. "Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions." TLDR, he's saying: 'We can just part ways. There's no need to be nasty about it.'
[5]
OpenAI reached a new agreement with the Pentagon.
CEO Sam Altman wrote on X that the agreement allowed the US military to "deploy our models in their classified network." He said the agreement reflects OpenAI's desire for prohibitions on domestic mass surveillance and "human responsibility for the use of force, including for autonomous weapon systems." Altman also wrote that OpenAI is "asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept." This follows a rollercoaster week of negotiations between Anthropic and the Pentagon.
[6]
Anthropic won't budge as Pentagon escalates AI dispute | TechCrunch
Anthropic has until Friday evening to either give the U.S. military unrestricted access to its AI model or face the consequences, reports Axios. Defense Secretary Pete Hegseth told Anthropic CEO Dario Amodei in a meeting Tuesday morning that the Pentagon will either declare Anthropic a "supply chain risk" -- a designation usually reserved for foreign adversaries -- or invoke the Defense Production Act (DPA) to force the company to tailor a version of the model to the military's needs. The DPA gives the president the authority to force companies to prioritize or expand production for national defense. It was recently invoked during the COVID-19 pandemic to compel companies like General Motors and 3M to produce ventilators and masks, respectively. Anthropic has long stated that it doesn't want its technology used for mass surveillance of Americans or for fully autonomous weapons -- and is refusing to compromise on these points. Pentagon officials have argued the military's use of technology should be governed by U.S. law and constitutional limits, not by the usage policies of private contractors. Using the DPA in a dispute over AI guardrails would mark a significant expansion of the law's modern use. It would also reflect a broader pattern of executive branch instability that has intensified in recent years, according to Dean Ball, senior fellow at the Foundation for American Innovation and former senior policy advisor on AI in Trump's White House. "It would basically be the government saying, 'If you disagree with us politically, we're going to try to put you out of business,'" Ball said. The dispute unfolds against a backdrop of ideological friction, with some in the administration -- including AI czar David Sacks -- publicly criticizing Anthropic's safety policies as "woke." "Any reasonable, responsible investor or corporate manager is going to look at this and think the U.S. is no longer a stable place to do business," Ball said. "This is attacking the very core of what makes America such an important hub of global commerce. We've always had a stable and predictable legal system." It's a serious game of chicken, and Anthropic may not be the one to blink first. According to Reuters, Anthropic doesn't plan on easing its usage restrictions. Anthropic is the only frontier AI lab with classified DOD access, according to several reports. The Department of Defense doesn't have a backup option currently in play -- though the Pentagon has reportedly reached a deal to use xAI's Grok in classified systems. That lack of redundancy may help explain the Pentagon's aggressive posture, Ball argued. "If Anthropic canceled the contract tomorrow, it would be a serious problem for the DOD," he told TechCrunch, noting the agency appears to be falling short of a National Security Memorandum from the late Biden administration that directs federal agencies to avoid dependence on a single classified-ready frontier AI system. "The DOD has no backups. This is a single-vendor situation here," he continued. "They can't fix that overnight." TechCrunch has reached out to Anthropic and the DOD for comment.
[7]
Anthropic Refuses to Remove AI Safeguards Despite Pentagon Pressure
Anthropic says it won't loosen guardrails on its AI systems, despite pressure from the Pentagon. In a blog post, Anthropic CEO Dario Amodei outlined the company's position, saying it wouldn't back down on two of its AI policies around mass domestic surveillance and fully autonomous weapons. The Department of Defense (also known as the Department of War) applied pressure earlier this week on Anthropic to adapt its AI systems to allow the government "any lawful use" of its Claude technologies. Amodei said, "These threats do not change our position: we cannot in good conscience accede to their request." US Secretary of Defense Pete Hegseth reportedly told Anthropic to comply by the end of business on Friday or risk consequences. Hegseth is reportedly exploring how the department could use the Defense Production Act to force Anthropic to allow unrestricted use of its systems, citing national security grounds. Axios reports the Pentagon is also considering designating Anthropic as a "supply chain risk." Amodei's blog post says, "I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries." "However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today's technology can safely and reliably do." Amodei identifies those cases as mass domestic surveillance of American citizens and the current use of fully autonomous weapons. Anthropic believes AI-driven mass surveillance remains legal only "because the law has not yet caught up with the rapidly growing capabilities of AI." He also explains that while Anthropic believes fully autonomous weapons, which make decisions and engage targets without any human input, may be helpful for future national defense, the tech is not yet reliable enough. "We will not knowingly provide a product that puts America's warfighters and civilians at risk," Amodei said. "We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer." According to government officials speaking with The Washington Post, Google, OpenAI, and xAI have all agreed to the Pentagon's changes on unclassified networks. Each is working with the Pentagon on agreements around classified networks. Under Secretary of War Emil Michael responded to Anthropic's blog post, calling Amodei "a liar" with a "God-complex." Michael said, "He wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk." Anthropic promises a smooth transition if the Pentagon chooses to offboard its technologies, saying it will work to avoid "any disruption to ongoing military planning, operations, or other critical missions." Disclosure: Ziff Davis, PCMag's parent company, filed a lawsuit against OpenAI in April 2025, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
[8]
Anthropic v the US military: what this public feud says about the use of AI in warfare
The very public feud between the US Department of Defense (also known these days as the Department of War) and its AI technology supplier Anthropic is unusual for pitting state might against corporate power. In the military space, at least, these are usually cosy bedfellows. The origin of this disagreement dates back months, amid repeated criticisms from Donald Trump's AI and crypto "czar", David Sacks, about the company's supposedly woke policy stances. But tensions ramped up following media reports that Anthropic technology had been used in the violent abduction of former Venezuelan president Nicolás Maduro by the US military in January 2026. It was alleged this caused discontent inside the San Francisco-based company. Anthropic has denied this, with company insiders suggesting it did not find or raise any violations of its policies in the wake of the Maduro operation. Nonetheless, the US secretary of defense, Pete Hegseth, has issued Anthropic with an ultimatum. Unless the company relaxes its ethical limits policy by 5.01pm Washington time on Friday, February 27, the US government has suggested it could invoke the 1950 Defense Production Act. This would allow the Department of Defense (DoD) to appropriate the use of this technology as it wishes. At the same time, Anthropic could be designated a supply chain risk, putting its government contracts in danger. These extraordinary measures may appear contradictory, but they are consistent with the current US administration's approach, which favours big gestures and policy ambiguity. At the heart of the dispute is the question of how Anthropic's large language model (LLM) Claude is used in a military context. Across many sectors of industry, Claude does a range of automated tasks including writing, coding, reasoning and analysis. In July 2024, US data analytics company Palantir announced it was partnering with Anthropic to "bring Claude AI models ... into US Government intelligence and defense operations". Anthropic then signed a US$200 million (£150 million) contract with the DoD in July 2025, stipulating certain terms via its "acceptable use policy". These would, for example, disallow the use of Claude in mass surveillance of US citizens or fully autonomous weapon systems which, once activated, can select and engage targets with no human involvement. According to Anthropic, either would violate its definition of "responsible AI". Hegseth and the DoD have pushed back, characterising such limits as unduly restrictive in a geopolitical environment marked by uncertainty, instability and blurred lines. Responsible AI should, they insist, encompass "any lawful use" of AI models by the US military. A memorandum issued by Hegseth on January 9 2026 stated: "Diversity, Equity and Inclusion and social ideology have no place in the Department of War, so we must not employ AI models which incorporate ideological 'tuning' that interferes with their ability to provide objectively truthful responses to user prompts." The memo instructed that the term "any lawful use" should be incorporated in future DoD contracts for AI services within 180 days.
Anthropic's competitors are lining up
Anthropic's red lines do not rule out the mass surveillance of human communities at large - only American citizens. And while it draws the line at fully autonomous weapons, the multitude of evolving uses of AI to inform, accelerate or scale up violence in ways that severely limit opportunities for moral restraint are not mentioned in its acceptable use policy.
At present, Anthropic has a competitive advantage. Its LLM model is integrated into US government interfaces with sufficient levels of clearance to offer a superior product. But Anthropic's competitors are lining up. Palantir has expanded its business with the Pentagon significantly in recent months, giving rise to more AI models. Meanwhile, Google recently updated its ethical guidelines, dropping its pledge not to use AI for weapons development and surveillance. OpenAI has likewise modified its mission statement, removing "safety" as a core value, and Elon Musk's xAI (creator of the Grok chatbot) has agreed to the Pentagon's "any lawful use" standard.
A testing point for military AI
For C.S. Lewis, courage was the master virtue, since it represents "the form of every virtue at the testing point". Anthropic now faces such a testing point. On February 24, the company announced the latest update to its responsible scaling policy - "the voluntary framework we use to mitigate catastrophic risks from AI systems". According to Time magazine, the changes include "scrapping the promise to not release AI models if Anthropic can't guarantee proper risk mitigations in advance". Anthropic's chief science officer, Jared Kaplan, told Time: "We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments ... if competitors are blazing ahead." Ethical language saturates the press releases of Silicon Valley companies eager to distinguish themselves from "bad actors" in Russia, China and elsewhere. But ethical words and actions are not the same, because the latter often entails a real-world cost. That such a highly public spectacle is happening at this time is perhaps no accident. In early February, representatives of many countries - but not the US - came together for the third time to find ways to agree on "responsible AI" in the military domain. And on March 2-6, the UN will convene its latest conference discussing how best to limit the use of emerging technologies for lethal autonomous weapons systems. Such legal and ethical debates about the role of AI technology in the future of warfare are critical, and overdue. Anthropic deserves credit for apparently resisting the US military's efforts to undercut its ethical guidelines. But AI's role is likely to be tested in many more conflict situations before agreement is reached.
[9]
OpenAI Gives Pentagon AI Model Access After Anthropic Dustup
The Pentagon declared Anthropic a supply-chain risk and gave the company a six-month period to hand over AI services to another provider, following a feud over safeguards on its technology.
OpenAI has agreed to deploy its own artificial intelligence models within the Defense Department's classified network after rival Anthropic PBC saw its relationship with the Pentagon implode over surveillance and autonomous weapons concerns. OpenAI Chief Executive Officer Sam Altman said late Friday that he'd reached an agreement with the department that reflects the firm's principles on prohibiting "domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems." The startup also built safeguards to ensure its models behave as they should as part of the deployment, Altman said in a post on the social media platform X. OpenAI declined to comment on whether the firm's services for the department would replace the work previously done by Anthropic. The Defense Department did not immediately respond to a request for comment late Friday night. Just hours earlier, the Pentagon declared Anthropic a supply-chain risk, a move that could have profound consequences for the company's business and escalated a feud between the artificial intelligence startup and defense officials over safeguards on its technology. In a post on X, Defense Secretary Pete Hegseth outlined a six-month period for Anthropic to hand over AI services to another provider. "America's warfighters will never be held hostage by the ideological whims of Big Tech," Hegseth wrote. "This decision is final." His post appeared shortly after President Donald Trump wrote on social media that he was ordering federal agencies to drop Anthropic. Anthropic, which has stipulated that its products not be used for surveillance of Americans or to carry out strikes without human involvement, said on Friday that "no amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons." The Pentagon offered terms to Anthropic earlier this week that incorporated some language that the company had proposed on surveillance and autonomy, a person familiar with the situation said, asking not to be identified because the talks weren't public. But in Anthropic's opinion, they didn't go far enough in ensuring the department wouldn't be able to set aside any restrictions when it deems it necessary to do so, the person said. OpenAI's deal with the Pentagon threatens to widen the rift between the Trump administration and Anthropic, which has drawn strong support for its stance in Silicon Valley where tech workers rallied to the company's side and urged other major tech companies including Amazon.com Inc. and Microsoft Corp. to follow suit.
Altman addressed the issues of surveillance and autonomous weapons in his post, saying the Defense Department agreed with OpenAI's principles and reflected them in its agreement with the company -- asking the department "to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept." Dario Amodei, CEO of Anthropic, used to work at OpenAI and left in 2020 in part because of his concerns that the startup was prioritizing commercialization and speed over safety. OpenAI began as a nonprofit and converted to a more traditional for-profit enterprise last year. Though the company initially prohibited the use of its technology for military applications, OpenAI updated its policy to allow such uses in 2024. The company has also dropped the word "safely" from its mission statement, which currently states that the company's goal is to "ensure that artificial general intelligence -- AI systems that are generally smarter than humans -- benefits all of humanity." Both Anthropic and OpenAI are now increasingly turning their attention to profits as they push for initial public offerings as soon as this year, tapping frenzied investor interest in AI. Earlier on Friday, OpenAI announced it had raised $110 billion in a deal that values the startup at $730 billion, representing the ChatGPT maker's largest funding round to date and bolstering its costly push to secure more computing power and talent for AI development. Anthropic raised $30 billion in a funding round earlier this month from some of the same investors. Amodei and Altman have publicly clashed over the years. Most recently, during an AI summit in New Delhi this month, the two men ended up standing next to each other with Prime Minister Narendra Modi, and noticeably didn't hold hands while everyone else lined up on stage did.
[10]
Anthropic to Pentagon: Robo-weapons could hurt US troops
AI upstart won't remove Claude's guardrails to stay onside with Dept. of War
Anthropic has fired back at the US Department of War, arguing that it can't agree to Uncle Sam's contract demand to remove guardrails on its AI in part because the tech can't be trusted not to harm American civilians and warfighters. As The Register reported earlier this week, the US Department of War wants to compel Anthropic to allow unrestricted military use of its Claude tech, and has threatened to cancel the AI upstart's Pentagon contracts and penalize the company if it does not comply. On Thursday, Anthropic issued a statement in which CEO Dario Amodei said the company won't change its stance. "Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner," he wrote, before adding "However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values." Amodei said two items in Anthropic's contract with the Department of War are "simply outside the bounds of what today's technology can safely and reliably do." One of those use cases is mass domestic surveillance, which Amodei said can now create "a comprehensive picture of any person's life -- automatically and at massive scale" with the help of AI. The CEO thinks that's only legal "because the law has not yet caught up with the rapidly growing capabilities of AI." The second use case is powering fully autonomous weapons, which Amodei says are too dangerous to deploy in their current form. "Today, frontier AI systems are simply not reliable enough to power fully autonomous weapons," he wrote. "We will not knowingly provide a product that puts America's warfighters and civilians at risk." The CEO said Anthropic has "offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer." He also suggested fully autonomous weapons "cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day. They need to be deployed with proper guardrails, which don't exist today." Amodei also pointed out what he believes are inconsistencies in the Pentagon's approach to this matter, noting that one of its threatened sanctions labels Anthropic a threat to national security for refusing to do as asked, while another seeks to compel the company to remove guardrails on AI in the name of national security. "Regardless, these threats do not change our position: we cannot in good conscience accede to their request," Amodei wrote. The CEO wrapped his post by expressing his desire for Anthropic to continue supplying the Pentagon, without having to remove its guardrails. The statement sets the scene for a showdown with Secretary of War Pete Hegseth, who gave Anthropic a Friday deadline to acquiesce to the Pentagon's terms and conditions. Hegseth has argued that the USA's military must focus on warfighting and become more lethal. ®
[11]
OpenAI confirms it's working with the Pentagon after Trump banned Anthropic from agencies
Summary: Trump ordered Anthropic banned from agencies with a six-month transition to replace its models. OpenAI struck a deal to deploy models on the Department of War's classified network. OpenAI will enforce bans on domestic mass surveillance and autonomous use of force.
It seems the US government is having a big shake-up over which AI models it wants to use. Earlier today, Trump confirmed on his Truth Social account that he was banning Anthropic products from government agencies because the AI company restricted some uses under its Terms of Service. Trump declared that he would allow six months for the changeover from Anthropic to happen. Well, the news is still a few hours old, and Trump already has a new AI ally. Sam Altman of OpenAI has posted on X confirming that the company had come to an agreement with the Department of War.
Sam Altman confirms an agreement with the Department of War
It doesn't seem like the DoW will have free rein over the AI models, though. Sam Altman confirmed the agreement over on X, where he says that OpenAI had "reached an agreement with the Department of War to deploy our models in their classified network." However, Sam claims that, while Anthropic had concerns about the US government using its AI for tasks that went against its terms of service, OpenAI won't hand over the keys to its safeguards to the DoW. Sam states that OpenAI will enforce "prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," which the DoW has agreed to. There will also be "technical safeguards" to ensure the AI models work properly. Sam then says he believes the US government should "offer these same terms to all AI companies," as he believes the terms should be fine for any company to adopt. The ditching of Anthropic and the adoption of OpenAI shows us just how influential AI can be on the government level. The companies have the models, and the DoW wants to use them to protect its citizens. As such, there will always be an uneasy middle ground where the two will have to meet if they ever intend to work together.
[12]
Pentagon-Anthropic feud has sales and AI warfare at stake as Friday deadline looms
NEW YORK, Feb 27 (Reuters) - An explosive feud between the Pentagon and top artificial intelligence lab Anthropic is set to come to a head by 5:01 p.m. (2201 GMT) on Friday over concerns about how the military could use AI at war. The dispute, barreling toward a deadline set by the Pentagon for resolution, is widely seen as a referendum on how powerful AI could be deployed by the military and how its risks are managed. The Pentagon wants any lawful use to be allowed and has threatened Anthropic's business if the startup does not scrap additional guardrails. "It's a shot across the bow about the future of artificial intelligence and its use on the battlefield," Chris Miller, the former acting secretary of defense, told Reuters. He added that the outcome will "be an acid test for those companies that claim to want to use AI humanely." The months-long spat has divided some industry leaders, military officials and lawmakers over whether AI should be wielded without constraints when its creator Anthropic said the technology was not yet reliable for fully autonomous weapons. Democratic Senator Elissa Slotkin weighed in on Thursday: "The average person does not think we should allow weapons systems to get into war and kill people without a human being overseeing that in some way." Speaking at a confirmation hearing for two assistant defense secretary nominees, Slotkin added: "I certainly don't think any American, Democrat or Republican, wants mass surveillance on the American people." The Pentagon, which the Trump administration renamed the Department of War, has pushed back on the dilemma as a false choice "peddled by leftists in the media." "The Department of War has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement," Pentagon chief spokesperson Sean Parnell posted on X on Thursday.
NEGOTIATIONS FALTER
The Pentagon has signed $200-million ceiling agreements with major AI labs in the past year, including Anthropic, OpenAI and Google. It is pushing companies to agree to scrap their usage policies in favor of abiding by an all-lawful use clause. Anthropic, continuing these talks, has maintained red lines over the military's use of its Claude AI models for autonomous weapons and domestic surveillance. Anthropic was first among these AI companies to work with classified information, through a supply deal via cloud provider Amazon. Anthropic CEO Dario Amodei, famous for quitting OpenAI in 2020 over concerns about AI technology's stewardship, has warned that AI has advanced faster than the law. Powerful technology could hoover up disparate material to gather intelligence on unwitting civilians, he said in a Thursday blog post, a prospect that critics view as a legal loophole. "Anthropic understands that the Department of War, not private companies, makes military decisions," but AI in narrow cases "can undermine, rather than defend, democratic values," Amodei said. Amodei met with Defense Secretary Pete Hegseth this week. Afterward, the Pentagon gestured toward compromise and sent the startup revised contract language. But the two parties remained at an apparent impasse. An Anthropic spokesperson said on Thursday, "The contract language we received overnight from the Department of War made virtually no progress" and would allow "safeguards to be disregarded at will."
BUSINESS THREATS
Key business for Anthropic is at stake.
The Pentagon warned it would terminate its work with the startup and declare it a supply-chain risk if Anthropic did not accede to the department's demand for all-lawful use of AI. The designation, reserved typically for suppliers in adversary nations, means that defense contractors could be barred from deploying Anthropic's AI during work for the Pentagon. The setback comes as Anthropic races to win sales to businesses and government, with national security an area of focus. The Pentagon has asked contractors including Lockheed Martin (LMT.N) to give an appraisal of their reliance on Anthropic ahead of the risk designation, Reuters reported on Wednesday. The defense industrial base totaled around 60,000 contractors including major public companies as of 2021. The Pentagon made a second threat, the legality of which some experts have questioned. "If they don't get on board, SecWar will ensure the Defense Production Act is invoked on Anthropic," a senior Pentagon official told Reuters, "compelling them to be used by the Pentagon regardless of if they want to or not." Reporting by David Jeans in New York and Jeffrey Dastin and Deepa Seetharaman in San Francisco; Editing by Kenneth Li
[13]
AI vs. the Pentagon: killer robots, mass surveillance, and red lines
Can AI firms set limits on how and where the military uses their models? Anthropic is in heated negotiations with the Pentagon after refusing to comply with new military contract terms that would require it to loosen the guardrails on its AI models, allowing for "any lawful use," even mass surveillance of Americans and fully autonomous lethal weapons. Pentagon CTO Emil Michael is pushing for Anthropic to be designated a "supply chain risk" if it doesn't comply, a label usually only given to national security threats. Anthropic's rivals OpenAI and xAI have reportedly agreed to the new terms, but even after a White House meeting with Defense Secretary Pete Hegseth, Anthropic CEO Dario Amodei is still refusing to cross his company's red line, stating that "threats do not change our position: we cannot in good conscience accede to their request."
[14]
OpenAI strikes deal with Pentagon, hours after rival Anthropic was blacklisted by Trump
OpenAI CEO Sam Altman said late Friday that his company has agreed to terms with the Department of Defense on use of its artificial intelligence models shortly after President Donald Trump said the government won't work with AI rival Anthropic. "Tonight, we reached an agreement with the Department of War to deploy our models in their classified network," Altman wrote in a post on X. "In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome."
[15]
OpenAI strikes a deal with the Defense Department to deploy its AI models
OpenAI has reached an agreement with the Defense Department to deploy its models in the agency's network, company chief Sam Altman has revealed on X. In his post, he said two of OpenAI's most important safety principles are "prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems." Altman claimed the company put those principles in its agreement with the agency, which he called by the government's preferred name of Department of War (DoW), and that it had agreed to honor them. The agency has closed the deal with OpenAI shortly after President Donald Trump ordered all government agencies to stop using Claude and any other Anthropic services. If you'll recall, US Defense Secretary Pete Hegseth previously threatened to label Anthropic a "supply chain risk" if it continued refusing to remove the guardrails on its AI, which prevent the technology from being used for mass surveillance of Americans and in fully autonomous weapons. It's unclear why the government agreed to team up with OpenAI if its models also have the same guardrails, but Altman said it's asking the government to offer the same terms to all the AI companies it works with. Anthropic, which started working with the US government in 2024, refused to bow down to Hegseth. In its latest statement, published just hours before Altman announced OpenAI's deal, it repeated its stance. "No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons," Anthropic wrote. "We will challenge any supply chain risk designation in court." Altman added in his post on X that OpenAI will build technical safeguards to ensure the company's models behave as they should, claiming that's also what the DoW wanted. It's sending engineers to work with the agency to "ensure [its models'] safety," and it will only deploy on cloud networks. As The New York Times notes, OpenAI is not yet on Amazon cloud, which the government uses. But that could change soon, as the company has also just announced a partnership with Amazon to run its models on Amazon Web Services (AWS) for enterprise customers.
[16]
Anthropic refuses to bend to Pentagon on AI safeguards as dispute nears deadline
A public showdown between the Trump administration and Anthropic is hitting an impasse as military officials demand the artificial intelligence company bend its ethical policies by Friday or risk damaging its business. Anthropic CEO Dario Amodei drew a sharp red line 24 hours before the deadline, declaring his company "cannot in good conscience accede" to the Pentagon's final demand to allow unrestricted use of its technology. Anthropic, maker of the chatbot Claude, can afford to lose a defense contract. But the ultimatum this week from Defense Secretary Pete Hegseth posed broader risks at the peak of the company's meteoric rise from a little-known computer science research lab in San Francisco to one of the world's most valuable startups. If Amodei doesn't budge, military officials have warned they will not just pull Anthropic's contract but also "deem them a supply chain risk," a designation typically stamped on foreign adversaries that could derail the company's critical partnerships with other businesses. And if Amodei were to cave, he could lose trust in the booming AI industry, particularly from top talent drawn to the company for its promises of responsibly building better-than-human AI that, without safeguards, could pose catastrophic risks. Anthropic said it sought narrow assurances from the Pentagon that Claude won't be used for mass surveillance of Americans or in fully autonomous weapons. But after months of private talks exploded into public debate, it said in a Thursday statement that new contract language "framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will." That was after Sean Parnell, the Pentagon's top spokesman, posted on social media that "we will not let ANY company dictate the terms regarding how we make operational decisions" and added the company has "until 5:01 p.m. ET on Friday to decide" if it would meet the demands or face consequences. Emil Michael, the defense undersecretary for research and engineering, later lashed out at Amodei, alleging on X that he "has a God-complex" and "wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk." That message hasn't resonated in much of Silicon Valley, where a growing number of tech workers from Anthropic's top rivals, OpenAI and Google, voiced support for Amodei's stand late Thursday in an open letter. OpenAI and Google, along with Elon Musk's xAI, also have contracts to supply their AI models to the military. "The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused," the open letter says. "They're trying to divide each company with fear that the other will give in." Also raising concerns about the Pentagon's approach were Republican and Democratic lawmakers and a former leader of the Defense Department's AI initiatives. "Painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end," wrote retired Air Force Gen. Jack Shanahan in a social media post. Shanahan faced a different wave of tech worker opposition during the first Trump administration when he led Maven, a project to use AI technology to analyze drone footage and target weapons. So many Google employees protested its participation in Project Maven at the time that the tech giant declined to renew the contract and then pledged not to use AI in weaponry. 
"Since I was square in the middle of Project Maven & Google, it's reasonable to assume I would take the Pentagon's side here," Shanahan wrote Thursday on social media. "Yet I'm sympathetic to Anthropic's position. More so than I was to Google's in 2018." He said Claude is already being widely used across the government, including in classified settings, and Anthropic's red lines are "reasonable." He said the AI large language models that power chatbots like Claude are also "not ready for prime time in national security settings," particularly not for fully autonomous weapons. "They're not trying to play cute here," he wrote. Parnell asserted Thursday that the Pentagon wants to " use Anthropic's model for all lawful purposes" and said opening up use of the technology would prevent the company from "jeopardizing critical military operations," though neither he nor other officials have detailed how they want to use the technology. The military "has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement," Parnell wrote. When Hegseth and Amodei met Tuesday, military officials warned that they could designate Anthropic as a supply chain risk, cancel its contract or invoke a Cold War-era law called the Defense Production Act to give the military more sweeping authority to use its products, even if the company doesn't approve. Amodei said Thursday that "those latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security." He said he hopes the Pentagon will reconsider given Claude's value to the military, but, if not, Anthropic "will work to enable a smooth transition to another provider." -- - AP reporter Konstantin Toropin contributed to this report.
[17]
Defense Secretary summons Anthropic's Amodei over military use of Claude
Defense Secretary Pete Hegseth is calling in Anthropic CEO Dario Amodei to the Pentagon on Tuesday morning to discuss the military use of Claude, according to reporting from Axios. The meeting comes as the Pentagon threatens to declare Anthropic a "supply chain risk" -- a label typically reserved for foreign adversaries -- after the AI firm refused to allow the Department of Defense to use its tech for the mass surveillance of Americans and the development of weapons that fire without human involvement. Anthropic signed a $200 million contract with DOD last summer, and Claude was reportedly used during the January 3 special operations raid that resulted in the capture of Venezuelan president Nicolás Maduro, an episode that brought the two sides' tensions into the open. A source told Axios that Hegseth is giving Amodei an ultimatum: play ball or be banished. It's unclear whether he's bluffing -- replacing Anthropic would be a significant undertaking. But the stakes are real: a supply chain risk designation would void Anthropic's contract and force other Pentagon partners to drop Claude entirely.
[18]
OpenAI Reaches A.I. Agreement With Defense Dept. After Anthropic Clash
OpenAI, the maker of ChatGPT, said on Friday that it had reached an agreement with the Pentagon to provide its artificial intelligence technologies for classified systems, just hours after President Trump ordered federal agencies to stop using A.I. technology made by rival Anthropic. Under the deal, OpenAI agreed to let the Pentagon use its A.I. systems for any lawful purpose. The San Francisco company also said it had found a way to ensure that its technologies would not be applied for domestic surveillance in the United States or with autonomous weapons by installing specific technical guardrails on its systems. "In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome," Sam Altman, OpenAI's chief executive, said in a social media post, using the initials for the Department of War, the administration's preferred name for the Department of Defense. The Department of Defense did not immediately respond to a request for comment. The deal appeared to be a business and political coup for OpenAI, taking advantage of a rival's troubles. Anthropic, which competes with OpenAI, had battled the Pentagon in recent weeks over how its A.I. could be used. In negotiations over a $200 million contract, the Pentagon had demanded that it be able to use Anthropic's A.I. system for all lawful purposes, or it would cut the company off from government business. But Anthropic said it needed terms that would ensure that its A.I. technology would not be used for domestic surveillance of Americans or for autonomous lethal weapons. The Pentagon, in turn, said a private contractor could not decide how its tools would be used for national security. Their disagreement erupted into public view this month and escalated as both dug in their heels. Anthropic and the Pentagon failed to agree on terms by a 5:01 p.m. deadline on Friday. Defense Secretary Pete Hegseth then designated Anthropic a "supply-chain risk to national security," a label that cuts the A.I. company off from business with the U.S. government. Mr. Trump also weighed in, calling the start-up a "radical Left AI company." Amid the maelstrom, OpenAI stepped in. This week, Mr. Altman publicly backed Anthropic's position that A.I. should not be used for domestic surveillance or autonomous weapons. On CNBC on Friday, he said he mostly trusted Anthropic and that "they really do care about safety." At the same time, Mr. Altman engaged in talks with the Pentagon, starting on Wednesday, over a deal for its technology, said two people familiar with the discussions who spoke on the condition of anonymity. Mr. Altman negotiated with the Department of Defense in a different way from Anthropic, agreeing to the use of OpenAI's technology for all lawful purposes. Along the way, he also negotiated the right to put safeguards into OpenAI's technologies that would prevent its systems from being used in ways that it did not want them to be. OpenAI "will build technical safeguards to ensure our models behave as they should, which the DoW also wanted," Mr. Altman said. These moves allowed Mr. Altman to uphold safety principles around A.I. while still landing the Pentagon contract. He added that the Pentagon had agreed to have some OpenAI employees work alongside government personnel on classified projects "to help with our models and to ensure their safety." Anthropic did not respond to a request for comment on OpenAI's deal. 
(The New York Times sued OpenAI and Microsoft in 2023, accusing them of copyright infringement of news content related to A.I. systems. The two companies have denied those claims.) Mr. Altman and Dario Amodei, the chief executive of Anthropic, have long been bitter rivals. Dr. Amodei and several other founders of Anthropic previously worked at OpenAI. But they left in 2021 after disagreements with Mr. Altman and others over how A.I. should be funded, built and released. Last week, during an A.I. summit in India, Mr. Altman and Dr. Amodei were caught on video refusing to join hands during a photo session with Prime Minister Narendra Modi. It may take time for OpenAI's technology to be used by the Pentagon. The company is not yet approved for classified work in part because its technologies are not available from Amazon's cloud computing services, which is how the government often accesses classified systems. That could change after OpenAI signed a partnership with Amazon on Friday. Amazon, a new investor in OpenAI, is pouring $50 billion into the A.I. start-up as part of $110 billion in funding that OpenAI raised to pay for its continued growth and to fuel A.I. development. The Pentagon may also use A.I. services from other Anthropic rivals. Google and Elon Musk's xAI have contracts with the Defense Department, and the Pentagon said earlier this week that it had reached an agreement to use xAI's technology for classified operations. Google has had similar discussions, but it is unclear where those talks stand. In 2018, during the first Trump administration, Google backed away from a military contract after protests from employees. It has since agreed to work with the Pentagon again. This week, as the Pentagon threatened to sever ties with Anthropic, dozens of OpenAI employees signed an open letter urging other A.I. companies to support the stance that the technologies not be used for domestic surveillance or with autonomous weapons. "They're trying to divide each company with fear that the other will give in," the letter read, referring to the Pentagon. "That strategy only works if none of us know where the others stand. This letter serves to create shared understanding and solidarity in the face of this pressure from the Department of War."
[19]
Sam Altman backs Anthropic in AI battlefield row with Pentagon
OpenAI boss Sam Altman has weighed in on the deepening row between the US Department of Defense and rival AI company, Anthropic, throwing his support behind his competitor. Altman said in a note to staff that he had the same "red lines" as Anthropic boss Dario Amodei, who has refused to give the Pentagon unfettered access to the firm's AI tools. In the note seen by the BBC, Altman said any OpenAI contracts for defence would also reject uses that were "unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons". US Secretary of Defense Pete Hegseth has threatened Amodei with retaliation if the tech boss insists on limiting how Anthropic is used. In a meeting with Amodei on Tuesday, Hegseth appeared to make two contradictory threats. He said he would invoke the Defense Production Act, allowing the government to use Anthropic's products as it saw fit. He also said he would deem Anthropic a "supply chain risk," meaning the company would be labelled not secure enough for government use. Amodei said on Thursday he would rather stop working with the Pentagon than acquiesce to such threats. Anthropic has said it objects to the potential for its AI tools including Claude to be used by the government in two ways: "mass domestic surveillance" and "fully autonomous weapons." The Department of Defense (DoD) has said it is not asking to use Anthropic for either of those purposes. However, it wants the company to accept "any lawful use" of its tools. There are few laws in the US that deal with AI tools and capabilities. Emil Michael, a former Uber executive who now serves as Undersecretary of Defence, posted on X a number of times the night following Amodei's rejection of Hegseth's pressure. He made personal attacks against the executive and claimed Anthropic's decision was an attempt to grab government power. "Dario Amodei wants to override Congress and make his own rules to defy democratically decided laws," Michael wrote in one post. However, within the tech community there is mounting support for Anthropic's leader. Altman's internal memo added that the way the government is reacting to Anthropic's safety concerns "risks our national security, and also risks the government resorting to actions which could risk American leadership in AI. We would like to try to help de-escalate things." Amodei was an early employee of OpenAI. He and a handful of other OpenAI employees left the company to found Anthropic after disagreements with Altman. The two startups now compete directly for users and corporate customers with an evolving offer of AI chatbots, agents and other tools. "I do not fully understand how things got here; I do not know why Anthropic did their deal with the Pentagon and Palantir in the way they originally did it," Altman wrote. "But regardless of how we got here, this is no longer just an issue between Anthropic and the DoW; this is an issue for the whole industry and it is important to clarify our stance." Anthropic in 2024 entered into a partnership with Palantir, a major government contractor, allowing Claude to be used within Palantir's government products. The Department of War (DoW) is a secondary name for the Defence Department under an executive order signed by US President Donald Trump in September. Altman said OpenAI was also "going to see if there is a deal with the DoW that allows our models to be deployed in classified environments and that fits with our principles." 
A former official with the DoD, who asked not to be named, told the BBC that Anthropic appeared to have the upper hand in the fight. "This is great PR for them and they simply do not need the money," the former official said. Anthropic's work with the Pentagon is part of a contract worth $200 million. The company's most recent valuation came earlier this month and put its worth at $380 billion, based on its current revenue and future expected earnings. The former official added that the DoD's basis for threatening Anthropic with either invoking the Defense Production Act or labelling it a supply chain risk was "extremely flimsy". Should Hegseth make good on either threat, Anthropic could in theory sue the Defence Department or individuals working within the agency. On Friday morning, groups representing roughly 700,000 tech workers within Amazon, Google, and Microsoft, all companies that have their own contracts with the Defence Department, signed an open letter urging the companies they worked for to also "refuse to comply" with the Pentagon's demands. "Tech workers are united in our stance that our employers should not be in the business of war," the elected Executive Board of the Alphabet Workers Union said in a separate statement. The union also expressed concern that Google would capitulate to the Pentagon's demands if the tech giant found itself in a similar position to Anthropic. The BBC has requested a response to those concerns from Google.
[20]
The Cold War Era Law at the Center of Hegseth's Anthropic Threat
The Pentagon's threats, including designating Anthropic a supply-chain risk, could be challenged in court and may undermine Pentagon efforts to adopt artificial intelligence, with up to $200 million in work at stake. The Pentagon is poised to take unprecedented steps in its standoff with Anthropic PBC that risk touching off a massive legal fight with the $380 billion company and threaten to undermine Pentagon efforts to further adopt artificial intelligence. At the center of the escalating dispute is a 76-year-old law called the Defense Production Act, passed in 1950 to boost production for the Korean War. Defense Secretary Pete Hegseth has threatened to invoke the legislation as a way of forcing Anthropic to give up guardrails it's set around the way its Claude AI system can be used. People familiar with the matter said earlier this week that Hegseth made the threat in a contentious meeting with Anthropic Chief Executive Officer Dario Amodei. That warning -- along with the threat to designate Anthropic what's known as a supply-chain risk -- raises a host of legal questions. Any such moves would almost certainly be challenged in court. The Defense Production Act was meant to make sure the government has critical resources at its disposal in the event of an emergency. President Donald Trump used it during the Covid-19 pandemic to manage distribution of protective equipment. President Joe Biden invoked it to address a shortage in baby formula. Deploying the act to strong-arm a tech startup into accepting Pentagon demands on how its software is used would be new territory. Declaring Anthropic a supply-chain risk could put limits on how other defense contractors use its services. That's normally a designation reserved for state-backed actors in China or Russia. "It is unclear how they intend to use DPA, we don't know which DPA authorities they will use, and we also don't know whether this is a one-off or the first of its kind," said Bloomberg Economics analyst Becca Wasser. "This is akin to the uncertainty that was generated around the administration taking stakes in private companies using DPA authorities," she said. Hours before a Pentagon deadline, the dispute showed little sign of ending, as Anthropic rejected the government's latest offer, saying it failed to fully address the company's concerns. Under Secretary Emil Michael said on Friday that the Pentagon remained willing to negotiate. "So long as they're in good faith, we're always open to talks," Michael told Bloomberg Television. "Up until that deadline, I'm open to more talks and I told them so." Asked whether the DPA and supply-chain risk threats were contradictory, Michael said, "They're two different things, and I think depending on how today goes at 5 o'clock, the Secretary of War Pete Hegseth gets to make the decision on how to reply." Any of those moves could deal a devastating blow to Anthropic's efforts to win government business. At stake is up to $200 million in work that Anthropic had agreed to do for the military. Contracts for other government agencies could also be at risk. Hegseth has given Anthropic until 5:01 p.m. Friday to let his agency do as it pleases with the company's AI tools, within lawful limits. Anthropic has sought certain conditions, including curbs to keep Claude from being used for mass surveillance of Americans or to develop weapons that can be deployed without a human involved. 
The Pentagon "has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement," Pentagon spokesman Sean Parnell said Thursday on X. "Here's what we're asking: Allow the Pentagon to use Anthropic's model for all lawful purposes," he said. His post aired the threat of labeling the company a supply-chain risk but made no mention of the Defense Production Act. Nadia Schadlow, a senior fellow at Hudson Institute who was US deputy national security advisor for strategy during the first Trump administration, said the threat to invoke the DPA should be "taken seriously" as it reflects the increasingly essential role played by software in modern warfare. "What we're seeing here is the dispute play out over the changing character of war," she said in an interview. Software is now so essential to the Pentagon, she said, that it's not out of the realm of possibility to consider invoking the act. She called DPA "a powerful tool to take command of the economy, and that's why it was created." Even though the DPA is a wartime authority, Schadlow argued that attempting to invoke it to enlist software during a time of crisis would be challenging. Biden used the DPA in relation to artificial intelligence -- but to limit it, not expand its use. In 2023 he signed an executive order aimed at making sure it's safe for public use before being released. Trump revoked that order after taking office. Biden used a specific information-gathering authority, known as Title VII, as part of that order. But Hegseth may invoke a different part, called Title I, which is the act's "core compulsion power," Alan Rozenshtein, a law professor at the University of Minnesota Law School, wrote at Lawfare. "That's an enormous escalation," Rozenshtein wrote. Dean Ball, a former White House adviser who helped create the Trump administration's AI Action Plan, said invoking the Defense Production Act would essentially commandeer a top AI platform. In practice, he said, it could go as far as embedding Pentagon personnel with Anthropic who would be involved in technical decisions on AI safeguards and model training. "In the end this would amount to quasi-nationalization," Ball wrote in a post on X. "It's important to be clear-eyed that this is what is now on the table."
[21]
Anthropic, DoD face off over acceptable military AI use
US Secretary of Defense Pete Hegseth has made Anthropic an offer it may not be able to refuse. The Defense Department and the AI firm held a meeting at the Pentagon on Tuesday, where the government tried to compel the house of Claude to lift some restrictions on military use of its tech. However, recent changes to the company's safety policy suggest it may be willing to be more flexible than it's letting on. The Pentagon's unhappiness with Anthropic has been in the news since the end of last month, when Reuters reported that the two were clashing over safeguards that would prevent the DoD from using Anthropic's AI to autonomously target weapons without human intervention and to conduct domestic surveillance within the United States. The Register has confirmed with individuals on both sides of the discussion that a meeting between Anthropic CEO Dario Amodei and Defense Secretary Pete Hegseth on Tuesday has done little to change Anthropic's mind on the matter, with the Pentagon now trotting out threats to get what it wants. A senior Pentagon official told us that, if Anthropic refuses to let the Defense Department do what it wants with its AI by the end of the day on Friday, it may compel the company to do so through the Defense Production Act. The DPA gives the President and any executive branch officials to whom he delegates such authority, like the Defense Secretary, broad authority to require businesses to accept contracts deemed necessary to promote the national defense. That authority, the official told us, would give the Pentagon the right to use Anthropic AI regardless of what the company wants. The DoD is also reserving the right to declare Anthropic a supply chain risk, essentially forcing any company that contracts with the US government to eliminate Anthropic software anywhere it's used in their dealings with the federal government. Such a move could be a major financial blow to the AI provider. Additionally, sources familiar with the meeting told us that the Pentagon was ready and willing to terminate the up to $200 million contract the agency signed with Anthropic (alongside agreements with Google, OpenAI, and xAI) if the company doesn't agree to its terms. We're told that Anthropic has maintained its red lines for use of its AI by the US military, which include autonomous weapons that use AI to make final targeting decisions, and domestic surveillance of American citizens, even if lawful. The Pentagon told us that it has always followed the law, has only issued lawful orders, and its intended use of Anthropic's AI has nothing to do with mass surveillance or autonomous weapon usage. Legal usage of Anthropic's AI, the Pentagon official said, is the department's responsibility as the end user - not Anthropic's. Coincidentally or not, Anthropic also released the third iteration of its Responsible Scaling Policy on Tuesday, the same day Amodei met with Hegseth in Washington, DC. The new version lacks a key safety pledge that Anthropic has been pushing for years. Prior editions of the RSP included a clause that stated Anthropic would cease training AI models that it couldn't guarantee were safe, and wouldn't release any model without proper risk mitigations in place. Those guarantees are gone, with the company citing the need to remain competitive in the AI space as the reason for their removal. 
"We felt that it wouldn't actually help anyone for us to stop training AI models," Anthropic's science chief Jared Kaplan told Time in an interview ahead of the RSP update's release. "We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments ... if competitors are blazing ahead." According to a blog post outlining changes in the new version of the RSP, AI competitiveness and economic growth have become the driving force in the current policy environment, with Anthropic lamenting the fact that safety discussions have been left on the wayside. "We remain convinced that effective government engagement on AI safety is both necessary and achievable," Anthropic explained. "But this is proving to be a long-term project -- not something that is happening organically as AI becomes more capable or crosses certain thresholds." Anthropic's admission that its priorities have shifted from safety first to competitiveness begs the question of whether it may be willing to comply with the Pentagon to avoid losing out on a massive contract, risking being blacklisted across the defense industry, and still pressed into service against its wishes. We reached out to Anthropic to find that out, but didn't hear back before publication. We'll update this story if we do. ®
[22]
OpenAI signs Pentagon AI deal after Trump orders Anthropic ban
The deal follows a dramatic clash between the White House and Anthropic over limits on military AI use. OpenAI has reached an agreement with the Pentagon to deploy its artificial intelligence models in classified military systems, just hours after President Donald Trump ordered federal agencies to stop using rival Anthropic's technology. The announcement came late Friday from OpenAI CEO Sam Altman, who said the company had secured terms with the Department of Defense to use its models within the department's classified network. The deal follows a sharp escalation between the Trump administration and Anthropic over how AI systems can be used in military contexts. Earlier in the day, Trump directed every federal agency to "immediately cease" use of Anthropic's products. Defense Secretary Pete Hegseth also designated the company a "supply chain risk to national security," a classification typically used under federal procurement authorities to restrict certain technologies in defense contracts. Similar supply chain restrictions in recent years have been applied to foreign telecom companies such as Huawei and ZTE under Section 889 of the 2019 National Defense Authorization Act. Those measures were implemented through federal procurement rules requiring contractor certification that prohibited technologies are not used in connection with government contracts. In Anthropic's case, the designation requires the Pentagon to phase out use of the company's systems and obligates military contractors to certify that their Defense Department work does not involve Anthropic's AI tools. The administration has provided a six-month transition window. The confrontation centers on whether AI companies can limit how the military uses their systems. Anthropic had sought contractual assurances that its flagship model, Claude, would not be used for domestic mass surveillance of Americans or to power fully autonomous weapons. The Pentagon has said it does not intend to use AI in those ways but has insisted that models must remain available for all lawful purposes. After weeks of negotiations, talks between Anthropic and the Defense Department collapsed. Officials accused the company of attempting to impose ideological restrictions on military operations. Anthropic maintained that its objections were narrow and focused on safety and constitutional rights. Shortly after the administration's move against Anthropic, Altman announced that OpenAI had finalized its own agreement with the Pentagon. In a post on X, Altman said the deal reflects two of OpenAI's "most important safety principles": prohibitions on domestic mass surveillance and a requirement for human responsibility in the use of force, including autonomous weapon systems. "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement," Altman wrote, referring to the department's recent use of the "Department of War" branding in official posts. It remains unclear how OpenAI's agreement differs in substance from the safeguards Anthropic had sought. Pentagon officials have argued that existing U.S. law and Defense Department policy already prohibit domestic mass surveillance and fully autonomous weapons, and that no new legal standards were necessary. The dispute with Anthropic became increasingly political in recent days. In a Truth Social post, Trump criticized the company in harsh terms and framed its position as an attempt to override constitutional authority. 
Hegseth accused Anthropic of trying to assert control over operational military decisions and said the department must retain unrestricted access to AI models for lawful purposes. Anthropic has pushed back, arguing that the supply chain risk designation exceeds the Defense Department's statutory authority. The company said federal law limits such determinations to specific defense-related contracts and does not grant the executive branch broad power to block all commercial activity with a domestic company. Anthropic has said it intends to challenge the designation in court. Supply chain risk determinations are more commonly associated with foreign-owned firms deemed national security threats. Applying the designation to a U.S.-based frontier AI developer marks a significant shift in how procurement authorities are being used in the context of artificial intelligence. The episode underscores how rapidly AI has become embedded in national security policy. OpenAI, Anthropic, Google, and Elon Musk's xAI have secured Defense Department agreements or approvals for use of their AI models, including in classified environments. At the same time, concerns about surveillance, autonomous weapons, and the reliability of large language models have intensified scrutiny of military AI deployments. Anthropic, which NPR reported is valued at roughly $380 billion and is preparing for a public offering, now faces legal and reputational uncertainty following the administration's actions. The Pentagon contract at the center of the dispute is worth up to $200 million, a relatively small portion of the company's reported revenue but symbolically significant. For OpenAI, the agreement positions the company as a key partner in the Defense Department's AI strategy while maintaining its publicly stated safety principles. Whether the contrasting outcomes reflect substantive differences in contract terms or divergent negotiation strategies remains unclear. What is clear is that the relationship between AI developers and the U.S. military has entered a more visible and politically charged phase.
[23]
Sam Altman Insists He Also Has Principles as Anthropic's Pentagon Stand Off Continues
The Pentagon gave Anthropic, the makers of Claude, an ultimatum: allow the military unfettered access to its AI model, even if the potential uses violate the company's own safeguards, or it will face significant punishment. Anthropic refused, with CEO Dario Amodei saying the company "cannot in good conscience accede" to the Department of Defense's requests. Now, Sam Altman, CEO of OpenAI, would like you to know that he also would have acted in a principled manner if he ever had to. In a memo circulated to OpenAI employees Thursday night -- and definitely not cynically leaked to the press so that everyone knows what a brave boy Sam Altman is -- the founder and CEO told his firm, "We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines." Those are, of course, the exact same red lines that Anthropic has held to and that have reportedly been an issue for the Pentagon despite its insistence that it only wants to use AI for "all lawful purposes." The best read on that position is that there aren't currently laws preventing the use of autonomous weapons, and the Pentagon would like to use Claude to deploy its arsenal in some way. Anthropic has already reportedly created a carveout in its red line policy that would allow the Defense Department to use Claude for "defensive weapons," but that clearly wasn't enough for the agency, or else this standoff would already be over. Altman adopting Anthropic's no-go policies doesn't even put him second in line behind Amodei in drawing a line in the sand with the military. More than 100 employees at Google beat him to the punch, signing and sending a letter to management asking the company to adopt the same red lines as Anthropic if the company is going to continue to do business with the Pentagon. But that must have at least been a big enough gust for Altman to feel where the wind was blowing and plant his flag. To Anthropic's credit, it did actually have to stare down Pentagon pressure, the details of which have continued to trickle out over the course of the week. The latest detail, reported by the Washington Post, included the Department of Defense peppering the company with hypotheticals like whether Claude could be used to shoot down an intercontinental ballistic missile launched at the United States. (Anthropic's CEO reportedly told the DoD to call and ask, which was not a satisfying answer to the Pentagon, though Anthropic denies it. A recent study found that chatbots, including Claude, launch nuclear weapons in 95% of war games, so making a call seems like the least of the potential problematic outcomes.) The company didn't budge, even though the Pentagon threatened to cancel Anthropic's government contracts, declare Anthropic a "supply chain risk," and/or invoke the Defense Production Act to force the company to build a model for the military's desired purposes. It does seem like those threats might have been more bluster than anything, which Anthropic probably anticipated, based on the fact that Bloomberg reported the Pentagon is still open to negotiating. The company also has more leverage than one might imagine, given that the Defense Department favorite Palantir has cloud infrastructure that relies on Anthropic's model to operate. With its biggest guns unable to get Anthropic to cave, the Pentagon has resorted to a new approach: petty insults. 
Undersecretary of Defense Emil Michael spent most of Thursday blasting Anthropic on Twitter, calling Amodei a "liar" with a "God complex" and claiming that Anthropic's CEO wanted to "personally control the US Military." He also alleged that Anthropic's constitution for Claude, a document that dictates how the chatbot should act, was actually the company trying to position its own rules to supersede the US Constitution, which is not how that works at all. It's a bit hard to imagine the Department of Defense winning this stand-off in the court of public opinion, given that Anthropic's position is "let's agree not to spy on Americans or let AI nuke people," and the Pentagon's response was "No." But it sure seems the agency wants to have the fight for all to see, for whatever reason.
[24]
Anthropic just dropped its core AI safety promise, and that should worry you
When Anthropic introduced Claude in March 2023, the key differentiator was trust and a safety-first approach no other AI lab had taken. In the announcement blog post, the company described Claude as "a next-generation AI assistant based on Anthropic's research into training helpful, honest, and harmless AI systems." In fact, the name Anthropic itself, derived from the Greek word for "human," was a statement of intent. Rather than positioning itself as just another AI company racing to ship the most powerful model, Anthropic was meant to be the one that put guardrails first. That promise was formalized later in 2023 with the company's Responsible Scaling Policy, which committed Anthropic to something no competitor would match. This week, Anthropic revised that commitment and dropped the very pledge that set the company apart as the "trustworthy" AI lab.
Anthropic no longer promises to halt AI development
Safety goals remain, though
The Responsible Scaling Policy, or RSP, is a public document that outlines rules Anthropic wrote and published detailing what they will and won't do as their AI models get more powerful. When Anthropic published the first version of it in September 2023, the central rule was straightforward: if Claude's capabilities ever outpaced the company's ability to guarantee safety, Anthropic would completely halt training or deploying new models until it caught up. As the original policy put it: "Anthropic's commitment to follow the ASL scheme thus implies that we commit to pause the scaling and/or delay the deployment of new models whenever our scaling ability outstrips our ability to comply with the safety procedures for the corresponding ASL." On Tuesday, Anthropic published a rewritten version of the RSP that removed this strict, binding commitment to unconditionally halt AI development if safety measures cannot keep up with model capabilities. In the new version of the policy, they've introduced a "Frontier Safety Roadmap" that outlines their plans for risk mitigation. This new framework is far more flexible than the original policy and doesn't include a hard trigger to stop development. Instead, it replaces the promise with public transparency, where Anthropic will tell the world what the risks are and what they're doing about them. However, the decision to keep going is ultimately theirs. In the new policy's words: "These are not hard commitments but rather public goals against which we will openly grade our progress."
Anthropic will only slow down if clearly ahead
Safety delays depend on competition and risk evidence now
Anthropic argues that the overall risk from AI depends on multiple developers, and if one responsible developer pauses while others continue without strong mitigation, it could result in a less safe world where developers with the weakest protections set the pace. Interestingly, while the company has dropped its unconditional pledge to pause, it hasn't abandoned the idea of delaying deployment entirely. Under the new policy, Anthropic claims it will delay AI development to ensure safety if it has a "significant lead" over competitors or if there is strong evidence that all competitors developing highly capable models have strong safety measures. However, if competitors are advancing with weaker safeguards, Anthropic indicates that it will try to meet those performance standards but "will not necessarily delay AI development and deployment in this scenario." In other words, Anthropic will only consider slowing down if it's clearly ahead of the competition and there's strong evidence of danger. 
If it's not in the lead, it keeps going.
The timing of all this is hard to ignore
On the same Tuesday that Anthropic published its rewritten RSP, Defense Secretary Pete Hegseth met with CEO Dario Amodei and delivered an ultimatum: roll back the company's AI safeguards for military use, or face serious consequences. The latter would end Anthropic's $200 million Pentagon contract and require any company with military contracts to stop using Anthropic's tech entirely. Given that Anthropic was the last major AI lab with a hard safety commitment, there is now no major AI company with a binding promise to stop if things get dangerous.
[25]
Hegseth gives Anthropic until Friday to back down on AI safeguards, Axios reports
Feb 24 (Reuters) - U.S. Defense Secretary Pete Hegseth has given artificial intelligence company Anthropic until Friday to back down on safeguards for its products used by the military, Axios reported on Tuesday. Reuters reported exclusively this month that the Pentagon was pressing OpenAI and Anthropic to make their AI tools available on classified networks without many of the standard restrictions that the companies apply to users. Also this month, Axios reported that the Pentagon had been clashing with Anthropic over the latter's insistence on retaining restrictions on how the U.S. military uses its models, which include Claude AI. Reporting by Costas Pitas; Writing by Christian Martinez; Editing by Daphne Psaledakis and Franklin Paul
[26]
OpenAI's Sam Altman proposes framework for US military AI deployment
OpenAI's Sam Altman has come out to defend rival Anthropic in its dispute with the US Department of War (DoW). According to reports, Altman believes that any artificial intelligence (AI) company must ensure it has solid "redlines" when working with the Pentagon. This all started when the Pentagon asked Anthropic to allow its AI model to be used for "all lawful use." Anthropic, for its part, is happy to work with the DoW, but is concerned that such terms could allow its AI models to be used for things like autonomous weapons.
[27]
Anthropic rejects Pentagon terms for lethal use of its chatbot Claude
Anthropic said late Thursday that it will not concede to the Pentagon's terms for full access to its artificial intelligence tool Claude, saying it cannot loosen its restrictions against use in fully autonomous weapons or mass domestic surveillance. The AI firm and the Defense Department have been at odds for weeks, after Anthropic reportedly raised questions over how Claude was used in the raid to capture Venezuelan President Nicolás Maduro, and the relationship soured further as the two sides issued conflicting accounts of the terms of their disagreement. The Pentagon said it has never considered autonomous weapons or mass surveillance in the scope of its use but has not been willing to prohibit them in its contract with Anthropic, saying it will only pursue lawful applications. However, Defense Secretary Pete Hegseth said the Pentagon must be able to use the technology for the full range of warfighting -- a broad remit that left too many questions for Anthropic to be comfortable with. The mutual frustration culminated with the Pentagon giving Anthropic a 5:01 p.m. Friday deadline to comply or risk being forced to provide full access to its AI using the Defense Production Act. On Thursday, Anthropic CEO Dario Amodei said in a lengthy statement that the company was holding firm to its red lines -- and hoped the Pentagon would reconsider. "In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values," Amodei wrote. "Some uses are also simply outside the bounds of what today's technology can safely and reliably do. Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now," he said, citing specifically autonomous weapons use and mass surveillance. "We cannot in good conscience accede to their request," Amodei wrote. The Pentagon did not immediately respond to a request for comment.
[28]
Anthropic refuses Pentagon's new terms, standing firm on lethal autonomous weapons and mass surveillance
Less than 24 hours before the deadline in an ultimatum issued by the Pentagon, Anthropic has refused the Department of Defense's demands for unrestricted access to its AI. It's the culmination of a dramatic exchange of public statements, social media posts, and behind-the-scenes negotiations, coming down to Defense Secretary Pete Hegseth's desire to renegotiate all AI labs' current contracts with the military. But Anthropic, so far, has refused to back down from its two current red lines: no mass surveillance of Americans, and no lethal autonomous weapons (or weapons with license to kill targets with no human oversight whatsoever). OpenAI and xAI had reportedly already agreed to the new terms, while Anthropic's refusal had led to CEO Dario Amodei being summoned to the White House this week for a meeting with Hegseth himself, in which the Secretary reportedly issued an ultimatum to the CEO to back down by the end of business day on Friday or else.
[29]
Sam Altman aims to 'help de-escalate' tensions with Pentagon as OpenAI employees voice support for Anthropic
OpenAI CEO Sam Altman told staffers late Thursday that he would like the company to "try to help de-escalate things" between rival Anthropic and the Department of Defense. "We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions," Altman wrote in a memo that was viewed by CNBC. "These are our main red lines." Anthropic has until 5:01 p.m. ET on Friday to decide whether it will agree to give the Pentagon permission to use its artificial intelligence models in all lawful use cases without limitation. The startup wants assurance that its technology won't be used for fully autonomous weapons or domestic mass surveillance of Americans, but the DoD hasn't budged. Altman's internal letter on Thursday was meant to show that OpenAI shares Anthropic's boundaries. The Wall Street Journal was first to report the memo.
[30]
Google and OpenAI employees sign open letter in 'solidarity' with Anthropic
Hundreds of employees at Google and OpenAI have signed an open letter urging their companies to stand with Anthropic in its standoff with the Pentagon over military applications for AI tools like Claude. The letter, titled "We Will Not Be Divided," calls on the leadership of both companies to "put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight." These are two lines that Anthropic CEO Dario Amodei has said should not be crossed by his or any other AI company. As of publication, the letter has over 450 signatures, almost 400 of which come from Google employees and the rest from OpenAI. Currently, roughly 50 percent of all participants have chosen to attach their names to the cause, with the rest remaining anonymous. All are verified as current employees of these companies. The original organizers of the letter aren't Google or OpenAI employees; they say they are unaffiliated with any AI company, political party or advocacy group. The open letter is the latest development in the saga between Anthropic and US Defense Secretary Pete Hegseth, who threatened to label the company a "supply chain risk" if it did not agree to withdraw certain guardrails for classified work. The Pentagon has also been in talks with Google and OpenAI about using their models for classified work. The letter argues the government is "trying to divide each company with fear that the other will give in." OpenAI CEO Sam Altman told his employees on Friday that the ChatGPT maker will draw the same red lines as Anthropic, according to an internal memo. He said on the same day that he doesn't "personally think the Pentagon should be threatening DPA against these companies."
[31]
US military would only use Anthropic's AI technology in legal ways, Pentagon says
WASHINGTON (AP) -- The Pentagon's top spokesman has reiterated that the military wants to use Anthropic's artificial intelligence technology in legal ways and will not let the company dictate any limits ahead of a Friday deadline to agree to its demands. Sean Parnell said Thursday on social media that the Pentagon "has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement." Anthropic's policies prevent its models from being used for those purposes. It is the last of its peers yet to supply its technology to a new U.S. military internal network. Parnell said the Pentagon wants to "use Anthropic's model for all lawful purposes" but didn't offer details on what that entailed. He said opening up use of the technology would prevent the company from "jeopardizing critical military operations." "We will not let ANY company dictate the terms regarding how we make operational decisions," he said. During a meeting on Tuesday between Defense Secretary Pete Hegseth and Anthropic CEO Dario Amodei, military officials warned that they could designate Anthropic as a supply chain risk, cancel its contract or invoke a Cold War-era law called the Defense Production Act to give the military more sweeping authority to use its products, even if the company doesn't approve. Parnell mentioned only two of those consequences in the Thursday post on X and said Anthropic has "until 5:01 PM ET on Friday to decide." "Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk," he wrote. Anthropic didn't immediately respond to a request for comment Thursday. It said in a statement after Tuesday's meeting that it "continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government's national security mission in line with what our models can reliably and responsibly do."
[32]
Anthropic just wrote itself a safety loophole
The policy revision potentially weakens Anthropic's safety stance despite company claims of maintaining strong protections and transparency commitments. "Safety first" was the mantra that made Anthropic unique among its big AI competitors. The company's pledge originally went like this: If Anthropic, the maker of Claude, couldn't guarantee a new model would meet its stringent safety standards, it would stop training that model, even if its competitors forged ahead. But at the very moment when Anthropic would seem to need its "safety first" pledge the most -- namely, during its standoff with the Pentagon -- the company has revealed a revised policy that adds a critical loophole. As first reported by Time, version 3.0 of Anthropic's Responsible Scaling Policy backtracks on the company's earlier promise, allowing it to continue training potentially hazardous models that its rivals are actively working on. The new policy also includes mandates for greater transparency about AI safety, along with a vow to "delay" development of dangerous models if it considers itself comfortably ahead of its competitors. This is all happening during a crucial moment for Anthropic, which is facing a Friday deadline to acquiesce to the Pentagon's demand for wide-ranging access to Anthropic's models for military use. During a meeting at the Pentagon with Anthropic CEO Dario Amodei on Tuesday, Defense Secretary Pete Hegseth reportedly threatened to use the Defense Production Act to force Anthropic to hand over its models, which the military wants to use for "any lawful purpose," according to The New York Times. Anthropic is said to be holding fast, demanding a promise from Hegseth that the Pentagon not use its models for "autonomous weapons" or to spy on Americans. But some view Anthropic's revised Responsible Scaling Policy as an escape hatch for the current pickle it's in, allowing the company to potentially give in to the Pentagon's demands while keeping square with its own safety policies. It's worth noting that Elon Musk-controlled xAI and OpenAI have already reached agreements with the Pentagon. For its part, Anthropic is pushing back on the idea that its revised RSP has weakened its safety policies, arguing instead that the old "red lines" were outdated given the government's hands-off stance on AI. By giving itself the latitude to continue training unstable models that others are actively developing, Anthropic will be able to act as a steadying force rather than allow more reckless companies to become the AI industry's leaders, Anthropic says. Maybe so, and hopefully they're right. Still, it's unsettling to see the AI company that has always stood for safety rewriting its own rulebook.
[33]
Pentagon Standoff Is a Decisive Moment for How A.I. Will Be Used in War
Adam Satariano reported from London, Julian Barnes from Washington and Sheera Frenkel from San Francisco. The fight between the Department of Defense and the artificial intelligence company Anthropic has ostensibly been about a $200 million contract over the use of A.I. in classified systems. But as the two sides careen toward a 5:01 p.m. Friday deadline over terms of the contract, far more is at stake. Amid the legalese and heated rhetoric are questions being asked globally about how to use A.I., what the technology's risks are and who gets to decide on setting any limits -- the makers of A.I. or national governments. Underlying it all is fear and awe over the dizzying pace of A.I. progress and the technology's uncertain impact on society. "Something like this dispute was inevitable," said Michael C. Horowitz, who worked on A.I. weapons issues in the Defense Department during the Biden administration. "Because the technology is advancing so quickly, we're having these debates now. A.I. has moved from being in a niche conversation to something really at the center of global power." The clash centers on the Pentagon's use of a classified version of Anthropic's A.I. model, Claude. The company wants to embed safeguards in its technology to prevent its use for mass domestic surveillance of Americans or in fully autonomous weapons with no humans in the loop. The Pentagon has said that it has no plans to use the technology for those purposes, but that a private contractor cannot decide how its tools will be lawfully used for national security, just as a weapons manufacturer does not determine where its missiles are dropped. At the Pentagon, the dispute comes at an important moment. Defense Secretary Pete Hegseth, the former Fox News contributor who has lashed out at policies and companies he sees as too liberal, wants to aggressively integrate A.I. in war planning and weapons development. Mr. Hegseth is echoing his boss, President Trump, who has made the expansion of A.I. a cornerstone of his policies. But Anthropic, a five-year-old company worth about $380 billion, has staked its reputation on A.I. safety and raised concerns about the technology's dangers, even as it has collaborated with U.S. defense and intelligence agencies. It is the only A.I. company currently operating on the Pentagon's classified systems. In recent days, the Pentagon and Anthropic have shown little sign of backing down. Sean Parnell, the Pentagon spokesman, posted on social media on Thursday that the Pentagon demanded that Anthropic allow it to use A.I. "for all lawful purposes," saying it was a "common-sense request." In response, Dario Amodei, Anthropic's chief executive, said the Pentagon's "threats do not change our position: we cannot in good conscience accede to their request." Anthropic was prepared to lose its government contract and help the Pentagon transition to another company's technology, he said. Without a compromise, Mr. Hegseth has threatened to invoke the rarely used Defense Production Act to force Anthropic to work with it on its terms, or designate the company a supply chain threat and block it from doing business with the government. The confrontation has created new divisions between Silicon Valley and Washington at a moment when the industry seemed in step with President Trump's tech-forward agenda, especially as Google, xAI and OpenAI are also involved in A.I. work with the Pentagon. 
On Thursday, nearly 50 OpenAI employees and 175 Google employees published a letter calling on their leaders to "refuse the Department of War's current demands." More than 100 employees who work on Google's A.I. technology expressed concern in another letter to company leaders about working with the Pentagon. Prominent technologists including Jeff Dean, a top Google executive, have also said they are concerned about how A.I. can be misused for surveillance. (The New York Times has sued OpenAI and Microsoft, accusing them of copyright infringement of news content related to A.I. systems. The companies have denied those claims.) A little over two years ago, A.I. safety and regulation was a top concern. At a global summit hosted in Britain by then Prime Minister Rishi Sunak, the United States, China and 26 other countries signed a pledge to address some of the technology's potential risks, such as giving hackers new attack methods and accelerating disinformation. But as the A.I. race ramped up, the issue has faded as a priority. Last year, the Trump administration revoked safety policies imposed under President Biden. Mr. Trump signed an executive order in December aimed at undercutting state laws that regulate A.I. He has also lifted restrictions on exports of A.I. semiconductors, despite concerns that the components could help rivals like China. The European Union, which passed far-reaching A.I. regulations in 2024, is now considering rolling some back. At the United Nations, a yearslong effort to ban certain A.I. weapons has been stalled by opposition from the United States, Russia and other countries. On the battlefield, the war in Ukraine has ushered in an era of drone warfare that turned autonomous weapons from a futuristic possibility to a near-term reality. "As A.I. becomes more powerful and more capable, the incentives to use it also become much stronger," said Helen Toner, an A.I. policy expert at Georgetown University and former OpenAI board member. "At the same time, people's appetite to talk about risks and how to solve them has gone down." Ms. Toner said the Anthropic-Pentagon dispute showed a fundamental disconnect. In Washington, officials view A.I. as a new tool that can be harnessed for specific goals. In Silicon Valley, creators of the technology see it becoming more like an "entity" with sophisticated reasoning that may behave in unexpected and dangerous ways without oversight and refinement, she said. The fight between the Pentagon and Anthropic began on Jan. 9, when Mr. Hegseth published a memo calling for A.I. companies to remove restrictions on their technologies. "The time is now to accelerate A.I. integration, and we will put the full weight of the Department's leadership, resources, and expanding corps of private sector partners into accelerating America's Military A.I. Dominance," he wrote. Underpinning Mr. Hegseth's strategy was a fundamental shift in military technology. Hardware is in an age of decline. Military contractors have struggled to deliver ships and fighter planes on time and on budget. Software has become an increasingly powerful tool. Tech executives including Alex Karp, the chief executive of the data analytics company Palantir, which works closely with the federal government, have argued that America's competitive edge over adversaries will be found in its advances with software. 
Anthropic has been a willing partner, providing the government with a special version of Claude that has fewer restrictions. Yet some in the Pentagon viewed the start-up with suspicion. Its openness to talking about safety risks put off some in the department's leadership, who have called the San Francisco company "woke." When talks between the Pentagon and Anthropic began over a $200 million contract for use of A.I. in classified systems, lawyers from both sides quietly traded emails over contract language, said two people involved in the discussions. Anthropic asked for two things. The company said it was willing to loosen its restrictions on the technology, but wanted guardrails to stop its A.I. from being used for mass surveillance of Americans or deployed in autonomous weapons with no humans involved. Without those, Anthropic risks damaging its safety-first reputation. "This is really about the power of the state to determine how A.I. is being deployed in the world versus companies," said Robert Trager, co-director of Oxford University's Martin A.I. Governance Initiative. Cordula Droege, the chief lawyer for the International Committee of the Red Cross, which has called for global limits on A.I. weapons, said the violent risks of introducing swarms of autonomous weapons on battlefields are being lost in the wider debate. "Throughout history, warfare goes in parallel with the development of technology," she said.
Adam Satariano reported from London, Julian Barnes from Washington and Sheera Frenkel from San Francisco.
[34]
Anthropic boss rejects Pentagon demands to drop AI safeguards
Anthropic has said it will not back down in a fight with the US Department of Defense (DoD) over how its artificial intelligence (AI) technology is used. The firm's chief executive Dario Amodei said on Thursday that his company would rather not work with the Pentagon than agree to uses of its tech that may "undermine, rather than defend, democratic values." His comments come two days after a meeting with US Secretary of Defense Pete Hegseth over demands that Anthropic accept "any lawful use" of its tools. It ended with a threat to remove Anthropic from the DoD's supply chain. "These threats do not change our position: we cannot in good conscience accede to their request," Amodei said. At issue for Anthropic is the potential use of its AI tools like Claude for two purposes: "Mass domestic surveillance" and "Fully autonomous weapons." Amodei said "such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now." The Department of War is a secondary name for the Defense Department under an executive order signed by US President Donald Trump in September. "Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider," Amodei said. A representative of the Defense Department could not be reached for comment. A Pentagon official previously told the BBC that should Anthropic not comply, Hegseth would ensure the Defense Production Act was invoked on the company. The act essentially gives a US president the authority to deem a given company or its product so important that the government can require it to meet defence needs. But Hegseth also threatened to label Anthropic a "supply chain risk", meaning the company would be designated as not secure enough for government use. A former DoD official who asked not to be named told the BBC on Thursday that Hegseth's grounds for either measure were "extremely flimsy". While Amodei did not specify exactly how Anthropic could be or had been used by the DoD for mass surveillance or fully autonomous weapons, he wrote in a company blog post that AI can be used to "assemble scattered, individually innocuous data into a comprehensive picture of any person's life - automatically and at massive scale." "We support the use of AI for lawful foreign intelligence and counterintelligence missions," Amodei said. "But using these systems for mass domestic surveillance is incompatible with democratic values." As for AI being used in weapons, Amodei said even today's most advanced and capable AI systems "are simply not reliable enough to power fully autonomous weapons." "We will not knowingly provide a product that puts America's warfighters and civilians at risk," Amodei said. "Without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day. They need to be deployed with proper guardrails, which don't exist today." He added that Anthropic had "offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer." Hegseth had demanded the Tuesday meeting with Amodei, a source previously told the BBC.
[35]
OpenAI says it shares Anthropic's 'red lines' over military AI use
OpenAI CEO Sam Altman says he shares the "red lines" set by rival Anthropic restricting how the military uses AI models, amid Anthropic's escalating feud with the Pentagon. The Department of Defense has given Anthropic a deadline of 5:01 p.m. ET today to drop restrictions barring its AI model, Claude, from being used for domestic mass surveillance or entirely autonomous weapons. The Pentagon has said it doesn't intend to use AI in those ways, but requires AI companies to allow their models to be used "for all lawful purposes." Defense officials say if Anthropic doesn't comply, it could lose its contract worth as much as $200 million with the U.S. military. The government has also threatened to invoke the Korean War-era Defense Production Act (DPA) to compel Anthropic to allow use of its tools and has, at the same time, warned it would label Anthropic a "supply chain risk," potentially blacklisting it from lucrative government contracts. By wading into the standoff between Anthropic and the Pentagon, Altman could complicate the Pentagon's efforts to replace Anthropic if it follows through on its threat to cancel the contract. OpenAI also has a Defense Department contract, along with Google, xAI, and Anthropic, but Anthropic was the first to be cleared for use on classified systems. "I don't personally think the Pentagon should be threatening DPA against these companies," Altman told CNBC in an interview on Friday morning. He said he thinks it's important for companies to work with the military "as long as it is going to comply with legal protections" and "the few red lines" that "we share with Anthropic and that other companies also independently agree with." "For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety, and I've been happy that they've been supporting our warfighters," Altman added. "I'm not sure where this is going to go." In an internal note sent to staff on Thursday evening, Altman said OpenAI was seeking to negotiate a deal with the Pentagon to deploy its models in classified systems with exclusions preventing use for surveillance in the U.S. or to power autonomous weapons without human approval, according to a person familiar with the message who was not authorized to speak publicly. The Wall Street Journal first reported Altman's note to staff. The Defense Department didn't respond to a request for comment on Altman's statements. Whether AI companies can set restrictions on how the government uses their technology has emerged as a major sticking point in recent months between Anthropic and the Trump administration. On Thursday, Anthropic CEO Dario Amodei said the Pentagon's threats over its contract would not make the company budge. "We cannot in good conscience accede to their request," he wrote in a lengthy statement. "Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner," he said, using the Pentagon's rebranded "Department of War" moniker. But, he added, domestic mass surveillance and fully autonomous weapons are uses that are "simply outside the bounds of what today's technology can safely and reliably do." 
Emil Michael, the Pentagon's undersecretary for research and engineering, shot back in a post on X, accusing Amodei of lying and having a "God-complex." "He wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk," Michael wrote. "The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company," he wrote. In an interview with CBS News, Michael said federal law and Pentagon policies already bar the use of AI for domestic mass surveillance and autonomous weapons. "At some level, you have to trust your military to do the right thing," he said. Independent experts say the standoff is highly unusual in the world of Pentagon contracting. "This is different for sure," said Jerry McGinn, director of the Center for the Industrial Base at the Center for Strategic and International Studies, a Washington, D.C., think tank. Pentagon contractors don't usually get to tell the Defense Department how their products and services can be used, he notes, "because otherwise you'd be negotiating use cases for every contract, and that's not reasonable to expect." At the same time, McGinn notes, artificial intelligence is a new and largely untested technology. "This is a very unusual, very public fight," he said. "I think it's reflective of the nature of AI."
[36]
Opinion: Red Lines and Red Flags
The fierce standoff over Claude isn't just a contract fight. It's about who controls the future of military AI. In Washington and Silicon Valley, a conflict once relegated to specialist policy briefings has burst into view as arms-length diplomacy between the U.S. Department of Defense and Anthropic, the San Francisco-based AI lab, approaches a critical deadline. At stake is the future of AI governance and what limits, if any, private developers can place on how governments use powerful models. For years, Anthropic has distinguished itself from peers by embracing a safety-first stance. Its flagship model, Claude, was designed with guardrails that explicitly prohibit use in fully autonomous lethal weapons or domestic surveillance. Those restrictions have been central to the company's identity and its appeal to customers wary of unfettered AI. The Pentagon has responded sharply. Defense Secretary Pete Hegseth has given Anthropic until Friday, 27th of February, to drop those limits for military users, arguing that the Department must have "unrestricted access to AI for all lawful purposes." Officials stress they are not seeking unlawful use, but in military operations, "lawful" is a broad canvas, one the Pentagon says its leaders must be free to paint on. Anthropic's CEO, Dario Amodei, has stood firm. In statements this week he said the company "cannot in good conscience accede to" demands that would strip away safety protections, a stance that, if sustained, could cost Anthropic a contract worth up to $200 million and, more severely, its place in the U.S. military supply chain. The Pentagon has threatened to designate Anthropic a "supply chain risk," a step normally reserved for foreign adversaries whose technologies are seen as security threats. Such a label would effectively ban Anthropic tools from use across a broad swath of defense contractors and could isolate the company economically and strategically. To many observers, this is the first time a leading AI company has openly refused a direct government ultimatum over operational policy. The confrontation exposes a deeper question that goes beyond this single contract: in an era where AI is central to national security, who gets to decide how the technology is used? And under what conditions can governments override corporate safety commitments? Support and criticism have already rippled across the tech world. More than 200 current and former engineers at major AI firms have signed petitions opposing unrestricted military use, highlighting fears that government pressures could undercut broader ethical norms in AI deployment. At the same time, figures like Nvidia's CEO characterize the dispute as serious but "not the end of the world," pointing to the delicate balance between commercial innovation, national security, and economic interests embedded in this fight. If the dispute settles only after Claude is forced to operate without restrictions, it would set a precedent that could shape how all frontier AI systems interface with state power. Governments around the world are watching Washington's next move; China, Russia and others are already advancing their own military AI strategies. In that context, America's posture on governance, autonomy, and ethical constraint will signal what model the next decade of AI policy follows. In the end, this isn't just about one contract, or one model. 
It's about affirming whether the architects of artificial intelligence can simultaneously safeguard human values and meet the demands of national security, or whether the latter will subsume the former by force of law. As the Pentagon's deadline expires and Anthropic publicly refuses to strip away its ethical guardrails, the standoff moves past routine contract negotiations into uncharted constitutional and technological territory.
[37]
Anthropic Tells Pete Hegseth to Take a Hike
The Pentagon approached Anthropic this week with a demand that it remove guardrails in its AI model Claude that prohibit mass domestic surveillance and fully autonomous weapons. But Anthropic is refusing to do that, according to a new statement from CEO Dario Amodei, who writes, "we cannot in good conscience accede to their request." There's a lot of money on the line. And it's anyone's guess what happens next. Earlier this week, Defense Secretary Pete Hegseth gave Anthropic a deadline of 5:01 p.m. ET on Friday to agree to the removal of all safeguards, threatening to boot Claude from U.S. military systems or designate the company as a "supply chain risk," a label used for adversaries of the U.S. that's never been applied to an American company before. Hegseth, who refers to the Defense Department as the Department of War, has even threatened to invoke the Defense Production Act, which would theoretically allow the Pentagon to just demand Anthropic do whatever Hegseth wants. Amodei pointed out Thursday in a letter posted online: "These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security." Experts have called the contradictory messages from Hegseth "incoherent," a label that might also apply to the Trump regime more broadly. Anthropic, which has a $200 million contract with the Department of Defense, told CBS News that the Pentagon's "best and final offer," which was sent Wednesday, seemed to have loopholes that would allow the military to disregard the protections put in place. "New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will. Despite DOW's recent public statements, these narrow safeguards have been the crux of our negotiations for months," Anthropic reportedly said. The new letter released by Anthropic on Thursday made sure to point out that the AI company works with the military and intelligence communities and that they "remain ready to continue our work to support the national security of the United States." But asking to drop all safeguards is just a bridge too far. "Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner," the company wrote. "However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do." The company went on to list the two use cases where it believes safeguards are needed to protect American interests. In the section on mass domestic surveillance, Amodei put the word domestic in italics, as if to warn Americans more broadly about what's happening right under our noses. The letter notes that the government can purchase "detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant," something that obviously infringes on the rights of Americans. The Pentagon has suggested it doesn't have a plan for mass surveillance of Americans, telling CNN the conflict with Anthropic has "nothing to do with mass surveillance and autonomous weapons being used." The second section of Amodei's letter, which covers autonomous weapons, acknowledges that AI-assisted weapons are already being used on battlefields today in places like Ukraine. 
But it warns, "frontier AI systems are simply not reliable enough to power fully autonomous weapons." The letter goes on to say, "We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer." Amodei met with Hegseth on Tuesday in a meeting that was described by CNN as "cordial," but it will obviously be interesting to see where this goes. Hegseth is not known as a particularly smart or level-headed guy, so it's entirely possible that he tries to label Anthropic as both a national security threat and a part of America's warfighting machine so vital that he'll essentially draft the company to do what he wants. It sounds like we all get to find out by end of day Friday.
[38]
Anthropic Sees Support From Other Tech Workers in Feud With Pentagon
Anthropic PBC got a vote of support from Silicon Valley workers for its increasingly contentious public-relations battle with the Pentagon over how the military can use artificial intelligence. Two coalitions of workers - including employees of Amazon.com Inc., Google, Microsoft Corp. and OpenAI - are asking their companies to join Anthropic in refusing to comply with Defense Department demands for unrestricted use of AI products. "We are writing to urge our own companies to also refuse to comply should they or the frontier labs they invest in enter into further contracts with the Pentagon," a coalition of labor unions and other groups representing workers at Alphabet Inc., Amazon and Microsoft said in a letter posted early Friday. The letters, and similar support for Anthropic from tech executives on social media, show how a tussle between one AI company and the Pentagon could mushroom into an industry-wide battle over how best to deploy the powerful technology safely. Anthropic and the US military have been in talks over what exactly the armed forces can do with its tools. The richly valued startup, which has pitched itself as a cautious and responsible AI developer, insists that its products, including the Claude chatbot, not be used for surveillance of US citizens or to carry out lethal strikes without human involvement. Defense officials have demanded the right to use Claude without restriction, threatening to invoke the Defense Production Act to compel Anthropic to make its products available and label the company a supply-chain risk, a move that would preclude Anthropic from doing deals with military suppliers. Anthropic Chief Executive Officer Dario Amodei said in a statement Thursday that the company could not comply with the Defense Department request, though it continues to negotiate with the Pentagon. In response, a senior defense official took to social media to accuse Anthropic of putting US safety at risk. In the open letter posted Friday, workers with groups including Amazon Employees for Climate Justice, the Alphabet Workers Union, No Tech for Apartheid and No Azure for Apartheid sought to connect Anthropic's stand to employee efforts to get their companies to disclose more about the services they sell to state agencies taking part in President Donald Trump's deportation push. "Executive leadership at Google, Microsoft and Amazon must reject the Pentagon's advances and provide workers with transparency about contracts with other repressive state agencies including DHS, CBP and ICE," they said, referring to the Department of Homeland Security, Customs and Border Protection and Immigration and Customs Enforcement. Another letter, published earlier this week and signed by Google and OpenAI employees, urged executives to put aside their differences "and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight."
[39]
Sam Altman says OpenAI shares Anthropic's red lines in Pentagon fight
Why it matters: If other leading firms like Google follow suit, this could massively complicate the Pentagon's efforts to replace Anthropic's Claude, which was the first model integrated into the military's most sensitive work. * It would also be the first time the nation's top AI leaders have taken a collective stand about how the U.S. government can and can't use their technology. The flipside: Altman made clear he still wants to strike a deal with the Pentagon that would allow ChatGPT to be used for sensitive military contexts. * Despite the show of solidarity, such a deal could see OpenAI replace Anthropic if the Pentagon follows through with its plan to declare the latter a "supply chain risk." What he's saying: "[R]egardless of how we got here, this is no longer just an issue between Anthropic and the [Pentagon]; this is an issue for the whole industry and it is important to clarify our stance," Altman wrote Thursday in a memo obtained by Axios. * "We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines." The intrigue: ChatGPT is already available in the military's unclassified systems, and talks to move it into the classified space have accelerated amid the Pentagon-Anthropic fight, sources tell Axios. * But the Pentagon has insisted OpenAI and Google would have to agree the military can use their models for "all lawful purposes," the same standard Anthropic rejected since it didn't incorporate their specific guardrails. * Elon Musk's xAI recently agreed to those terms, but Grok is not seen as a wholesale alternative to Claude. In his memo, Altman wrote that the military will need AI, and he hopes to "help de-escalate things." * "We are going to see if there is a deal with the [Pentagon] that allows our models to be deployed in classified environments and that fits with our principles. We would ask for the contract to cover any use except those which are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons," Altman said. * The Wall Street Journal first reported on the memo. State of play: After Anthropic CEO Dario Amodei stood firm by his company's red lines, employees from OpenAI and Google signed onto a letter in solidarity on Thursday, pushing executives at their respective companies to resist "pressure" from the Pentagon. * While Anthropic said it intended to continue negotiations, a rupture appeared close. Emil Michael, the Pentagon official handling negotiations with Anthropic and the other major AI firms, denounced Amodei as a "liar" with a "God complex" who was "putting our nation's safety at risk." * Many others in D.C. and Silicon Valley praised Anthropic for taking a principled stand at the risk of a major financial hit. * Altman and Amodei are former colleagues at OpenAI who have become fierce rivals since the latter left to start Anthropic. The other side: Defense officials contend they have no intention of conducting mass surveillance or swiftly deploying autonomous weapons. * Their primary objection is having a private company dictate how the U.S. government can deploy AI for national security purposes, particularly during a technological race with China. * Defense officials told Axios their interactions with Anthropic left them concerned the company might raise questions about the deployment of their technology at critical junctures. Anthropic denies that. 
* It's possible the negotiations with OpenAI will be less adversarial. What to watch: "We have had some meetings to discuss this over the past couple of days, and will have more tomorrow with our safety teams before we decide what to do. We will also set up an all hands and office hours as soon as we can," Altman said, referring to those negotiations.
[40]
'We cannot in good conscience accede to their request': Anthropic CEO Dario Amodei draws a line in the sand in standoff with US government
* Anthropic CEO Dario Amodei does not want Claude used by the Pentagon for mass domestic surveillance and autonomous weapons
* A statement has laid bare Anthropic's reasons for retaining Claude's safety rails
* Pete Hegseth gave Anthropic until Friday to provide the DoD with full access
Anthropic CEO Dario Amodei has released a statement concerning the company's ongoing disagreement with the US Department of Defense. Amodei declared Anthropic "cannot in good conscience accede" to the DoD's request to provide full access to its AI models, over fears they could be used for 'mass domestic surveillance' and 'fully autonomous weapons'. US Defense Secretary Pete Hegseth has threatened to label Anthropic as a "supply chain risk" and invoke the Defense Production Act to force the company to comply.
Unprecedented threats against Anthropic
In his statement, Amodei said Anthropic has historically had a very good relationship with the US government, including being the first AI company to deploy its models within US government networks and the National Laboratories, and the first to deploy models for national security. Amodei also noted the company has complied with US regulations on the use and sale of AI models to China, to the extent that it chose to "forgo several hundred million dollars in revenue" by preventing the use of Claude by the Chinese Communist Party. "Anthropic understands that the Department of War, not private companies, makes military decisions," Amodei continued. "However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values." But Anthropic's hesitation to provide the DoD with full access to Claude centers on the potential misuse of the model for two nefarious purposes. Regulations surrounding AI have not caught up with the capabilities of models such as Claude, Amodei says, which would allow the US government to deploy Claude as a tool for mass domestic surveillance. Theoretically, the government could purchase highly detailed records and use AI models to organize them into a highly accurate picture of US citizens' lives at a scale never seen before. As for AI use in weapons systems, Amodei says they "may prove critical for our national defense," but he argues that current AI models are "simply not reliable enough to power fully autonomous weapons." If an AI model in charge of an autonomous weapon system were to suffer a hallucination, the responsibility would likely fall on the model developer. Amodei also addresses the threats made by Hegseth, stating that they "are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security." The statement concludes that Anthropic's "strong preference is to continue to serve the Department and our warfighters -- with our two requested safeguards in place." "Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required."
[41]
Anthropic says it 'cannot in good conscience' allow Pentagon to remove AI checks
Pete Hegseth has threatened to cancel $200m contract unless the Pentagon is given unfettered access to Claude model
Anthropic said Thursday it "cannot in good conscience" comply with a demand from the Pentagon to remove safety precautions from its artificial intelligence model and grant the US military unfettered access to its AI capabilities. The Department of Defense had threatened to cancel a $200m contract and deem Anthropic a "supply chain risk", a designation with serious financial implications, if the company did not comply with the request by Friday. Chief executive Dario Amodei said in a statement that the threats from the defense secretary, Pete Hegseth, would not change the company's position, and that he hoped Hegseth would "reconsider". "Our strong preference is to continue to serve the Department and our warfighters - with our two requested safeguards in place," he said. "We remain ready to continue our work to support the national security of the United States." At the core of the Department of Defense and Anthropic's standoff is a disagreement over how the AI company will permit its product, Claude, to be used. The Pentagon has demanded that Anthropic turn off safety guardrails and allow any lawful use of Claude, while Anthropic has pushed back against allowing Claude to be used for mass domestic surveillance or in autonomous weapons systems that can kill people without human input. After months of dispute and pressure from the government, Hegseth reportedly gave Amodei until Friday evening to agree to the Pentagon's demands or face punitive action. Whether Anthropic would concede was seen as a high-profile test of its claim to be the most safety-conscious of the major AI firms, as well as of whether any part of the AI industry would push back against government desires to use the technology for controversial, potentially lethal purposes. In his statement, Amodei said using AI for autonomous weapons and mass domestic surveillance is "simply outside the bounds of what today's technology can safely and reliably do". The Department of Defense has handed a number of lucrative deals to tech firms in recent years for the companies to build or integrate AI technology into US military systems. In July of last year, Anthropic was one of several big tech companies including Google and OpenAI to receive contracts of up to $200m with the DoD. What set Anthropic apart, and has intensified its conflict with the Pentagon, is that until this week its Claude model was the only one approved for use in the military's classified systems. (Elon Musk's xAI reached an agreement earlier this week to also be used in classified systems). Anthropic's technology has reportedly already been used for military applications, including the US capture of Venezuelan leader Nicolás Maduro last month, highlighting the growing use of AI in conflict. The growth of autonomous weapons technology, such as drones that can carry out operations even after their connection to a human operator has been severed, has also intensified longstanding concerns around how AI will be used in life-and-death situations. Anthropic and Amodei have long been some of the industry's most prominent advocates for regulation and safety precautions in developing AI, even as they have struck deals with the military and this week watered down a core policy to not release new AI models without first guaranteeing their safety. 
Amodei's calls for regulation, and history of political opposition to Donald Trump, have run up against Hegseth's vows to remove "wokeness" from the armed forces and pursue aggressive military policies. If Hegseth follows through with his threat to categorize Anthropic as a supply chain risk, it would be a huge blow to the AI company. The designation, which is more commonly intended to be used for foreign adversaries, would prohibit other vendors that do business with the US military from using Anthropic's products.
[42]
Opinion | Pete Hegseth seeks a Pyrrhic victory against Anthropic
At great risk to its business, Anthropic is taking a principled stand in the face of threats from the administration to commandeer its cutting-edge artificial intelligence technology. Defense Secretary Pete Hegseth gave the company a deadline of 5:01 p.m. on Friday to either allow the military to freely use its Claude model or lose a $200 million government contract and be blacklisted as "a supply chain risk," which would force defense contractors to drop the company too. More troubling, Hegseth is threatening to invoke the Defense Production Act to compel Anthropic to drop its guardrails. "These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security," Anthropic CEO Dario Amodei wrote in a blog post late Thursday. "Regardless, these threats do not change our position: we cannot in good conscience accede to their request." If Hegseth follows through, that will test not just the legal limits of a law intended for wartime emergencies but the practical limits of the state's ability to coerce companies to its will. The blowup follows Anthropic's concerns about the classified use of its product during the successful operation to capture Venezuelan President Nicolás Maduro. Hegseth wants Anthropic to modify its contract to allow "any lawful use" of the technology. Anthropic is willing to rewrite its current terms of use but not to permit mass surveillance of Americans or use in weapons that operate without a person in the loop to make the final decision. The Pentagon denies that it has any plan to surveil Americans or take humans out of the kill chain. The government seems to believe Anthropic shouldn't get more say over how its product is deployed in battle than Lockheed does over the use of its fighter jets. Amodei believes his model poses such a potential risk to humanity if it becomes fully autonomous that he cannot let go of the reins. The clash raises fascinating philosophical questions about the future of war. At its base, however, this is about economic freedom and the right of a private company to decide how and with whom it wants to do business. If Anthropic does not want to comply with government diktats, there are plenty of other competitors who are eager to do so and want the business. The headache for Hegseth is that Claude may be the best product on the market, at least for now. The government has reasonable concerns about its ability to act quickly in a crisis. As an American company, Anthropic has a patriotic responsibility to work in good faith with the Pentagon to ensure its products won't freeze up in the event of an enemy attack that requires a swift and overwhelming response. Amodei says they've done so. At the same time, the bar should be extremely high for when a privately held company is required to do risky work it sees as immoral. That threshold has not been met. Invoking the DPA to try to take control of a model would put the government into legally murky waters. Anthropic could turn this into a drawn-out lawsuit, creating uncertainty. And if the government wins, what then? A court can compel performance, but it cannot compel good performance. Interestingly, Anthropic slightly relaxed its AI safety commitments the same day its CEO met with Hegseth. The company says that is unrelated to its fight with the Pentagon. 
Rather, Anthropic is trying to keep competitors at bay, saying it will no longer halt development for safety concerns if a rival has released an equal or superior model. The government should take heed: Americans benefit from having as many companies as possible vying for government business, not by making Uncle Sam a nightmare customer.
[43]
OpenAI is negotiating with the U.S. government, Sam Altman tells staff | Fortune
Sam Altman told OpenAI employees at an all-hands meeting on Friday afternoon that a potential agreement is emerging with the U.S. Department of War to use the startup's AI models and tools, according to a source present at the meeting and a summary of the meeting seen by Fortune. The contract has not yet been signed. The meeting came at the end of a week in which a conflict between Secretary of War Pete Hegseth and OpenAI rival Anthropic burst into public acrimony, ending with the apparent end of Anthropic's contracts with the Pentagon and with the federal government in general. Altman said the government is willing to let OpenAI build its own "safety stack" -- that is, the layered system of technical, policy, and human controls that sit between a powerful AI model and real-world use -- and that if the model refuses to do a task, then the government would not force OpenAI to make it do that task. OpenAI would retain control over how technical safeguards are implemented, which models are deployed and where, and would limit deployment to cloud environments rather than "edge systems." (In a military context, edge systems are a category that could include aircraft and drones.) In what would be a major concession, Altman told employees that the government said it is willing to include OpenAI's named "red lines" in the contract, including not using AI to power autonomous weapons, no domestic mass surveillance and no critical decision-making. OpenAI and the Department of War did not immediately respond to requests for comment. Sasha Baker, head of national security policy at OpenAI, and Katrina Mulligan, who leads national security for OpenAI for Government, also spoke at the OpenAI all-hands, according to the source. One of those officials said the relationship between Anthropic and the government had broken down because Anthropic CEO and cofounder Dario Amodei had offended Department of War leadership, including publishing blog posts that "the department got upset about." Anthropic, a company founded by people who left OpenAI over safety issues, had been the only large commercial AI maker whose models were approved for use at the Pentagon, in a deployment done through a partnership with Palantir. But Anthropic's management and the Pentagon had been locked for several days in a dispute over limitations that Anthropic wanted to put on the use of its technology. Those limitations are essentially the same ones that Altman said the Pentagon would abide by if it used OpenAI's technology. Anthropic had refused Pentagon demands that it remove safeguards on its Claude model that restrict uses such as domestic mass surveillance or fully autonomous weapons, even as defense officials insisted that AI models must be available for "all lawful purposes." The Pentagon, including Secretary of War Pete Hegseth, had warned Anthropic it could lose a contract worth up to $200 million if it did not comply. Altman has previously said OpenAI shares Anthropic's "red lines" on limiting certain military uses of AI, underscoring that even as OpenAI negotiates with the U.S. government, it faces the same core tension now playing out publicly between Anthropic and the Pentagon. The OpenAI all-hands came just after President Trump announced that the federal government will stop working with Anthropic, in a dramatic escalation of the government's clash with the company over its AI models. "I am directing every federal agency in the United States government to immediately cease all use of Anthropic's technology. 
We don't need it, we don't want it and will not do business with them again!" Trump said in a post on Truth Social. The Department of War and other agencies using Anthropic's Claude models will have a six-month phase-out period, he said. At the OpenAI all-hands, staff were told that the most challenging aspect of the deal for leadership was the question of foreign surveillance, and that there was a major worry about AI-driven surveillance threatening democracy, according to the source. However, company leaders also seemed to acknowledge the reality that governments will spy on adversaries internationally, recognizing claims that national-security officers "can't do their jobs" without international surveillance capabilities. References were made to threat intelligence reports showing that China was already using AI models to target dissidents overseas.
[44]
AI Workers, and Even CEOs, Suddenly Turning Against the Trump Administration
The Trump administration has a new rival in its ongoing feud with AI company Anthropic: Silicon Valley's rank-and-file. Newly reported by Bloomberg, a coalition of labor groups representing over 700,000 workers from Amazon, Google, Microsoft, and OpenAI has made a formal ask of their corporations to join Anthropic in its refusal to comply with recent demands from the Pentagon. "We are speaking out today because the Pentagon is demanding that Anthropic abandon two major safety guardrails for Claude, which is the only frontier AI model currently deployed in classified Department of War operations," reads the letter. "We are writing to urge our own companies to also refuse to comply should they or the frontier labs they invest in enter into further contracts with the Pentagon." This week, the Pentagon issued an ultimatum to Anthropic to drop two key guardrails regarding the use of its AI system, Claude: one barring "mass domestic surveillance," and another prohibiting the Pentagon from using its tech to build AI-powered weapons that can kill without a human operator. The Pentagon had previously agreed to uphold both guardrails when it entered a contract worth up to $200 million to license Claude for classified use in July of 2025. But following a series of back-and-forth meetings, including discussion of using the company's AI in a nuclear strike scenario, the Pentagon ordered Anthropic to allow unfettered access to Claude or face its wrath. "How the Pentagon reacts remains to be seen, but we know they will rapidly seek to onboard other models without these guardrails in place, regardless of whether they try to force Anthropic to comply," the workers' letter urges. "If any tech company caves to the Pentagon's demands, War Secretary Pete Hegseth will have won the ability to surveil our communities -- here and abroad -- en masse, at an unprecedented level," it continues. "He will have the power to build and deploy AI-powered drones that kill people without the approval of any human." In the face of mounting pressure from arguably the most powerful military entity in the world, Anthropic's CEO Dario Amodei has stood defiant. In a statement published on Anthropic's website, Amodei described the Pentagon's increasingly desperate stance: "they have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a 'supply chain risk'... and to invoke the Defense Production Act to force the safeguards' removal." "These latter two threats are inherently contradictory," Amodei continued: "one labels us a security risk; the other labels Claude as essential to national security." Either way, he says Anthropic "cannot in good conscience accede to their request," the deadline for which is 5:01pm on Friday, February 27th. The legion of tech workers and Amodei gained a particularly strange bedfellow: OpenAI CEO Sam Altman, who has become something of a nemesis to Amodei as the rivalry between the two companies has heated up. But swallowing his pride -- or perhaps sensing a PR opportunity -- Altman sent a memo to staff on Thursday essentially siding with Anthropic against Hegseth and the Pentagon. "[R]egardless of how we got here, this is no longer just an issue between Anthropic and the [Pentagon]; this is an issue for the whole industry and it is important to clarify our stance," he wrote. 
"We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines." Zooming out, the rift draws attention to the growing contradiction between Anthropic's dedication to ethics and its contract with the Department of Defense. While the $200 million contract is financially immaterial to the $380 billion Anthropic, allowing the Pentagon unlimited access to Claude could come at substantial reputational and legal risk, especially as the United Nations has begun efforts to ban lethal autonomous weapons on a global scale. In a sense, the Pentagon seems to rely on Anthropic more than the other way around. According to Defense One, it would take the Trump administration three months or more to replace Claude. In other words, Anthropic technically holds all the cards on paper -- but there's no telling what the ever-unpredictable Trump administration might do if the company fails to meet the deadline.
[45]
Anthropic digs in on Claude AI standoff with the Pentagon
Anthropic delivered a blunt message for the Pentagon: Thanks, but no thanks. The AI start-up late Thursday rebuffed the Defense Department's latest offer to resolve a standoff over deploying Anthropic's Claude AI system for military purposes without restrictions. The Pentagon had imposed a 5:01 p.m. Friday deadline for Anthropic to yield to its demands, or face retaliation from the Trump administration. At stake is a $200 million defense contract between Anthropic and the Pentagon around the use of AI in classified military systems. Anthropic has pressed for assurances its AI won't be engaged in mass surveillance of Americans or used in autonomous weapons systems without human oversight. The company has dug in against repeated demands from the Defense Department that its technology must be applied as the Pentagon sees fit militarily while complying with the law. "These threats do not change our position: we cannot in good conscience accede to their request," Anthropic CEO Dario Amodei said in a Thursday statement posted on the company's website. Both sides are barreling toward the deadline with few signs of progress. Still, there appears to be an opening for a last-minute breakthrough with hours to go. Emil Michael, the Pentagon's top technology officer, told Bloomberg News on Friday morning that "up until that deadline, I'm open to more talks." That was a sharp change from the antagonistic tone he had taken only a day earlier. On Thursday, Michael bashed Amodei as "a liar" with a "God complex" in a social media post. He added the Pentagon will comply with the law but "not bend to whims of any one for-profit tech company." The Defense Department has threatened to label Anthropic as a "supply chain" risk, a move usually reserved for foreign rivals that could sever the company from U.S. government contracts. It has also warned it may invoke the Defense Production Act (DPA), an extraordinary step that would allow the U.S. government to commandeer the company's AI technology. Analysts have pointed to a contradiction in the Trump administration's hardline approach to the company. Labeling Anthropic as a supply chain risk would bar the government from using its products. Yet invoking the Defense Production Act would allow it to claim Anthropic's AI model is essential to national security. Amodei echoed that point in his statement on Thursday, calling the threats "inherently contradictory." "One labels us a security risk," he said. "The other labels Claude as essential to national security." OpenAI CEO Sam Altman backed up Anthropic in its standoff with the Pentagon, a sign that the Trump administration might have to deal with the same concerns from other AI companies about the use of cutting-edge AI technology. "I don't personally think that the Pentagon should be threatening DPA against these companies," Altman said in a CNBC interview on Friday. "For all the differences I have with Anthropic, I mostly trust them as a company and I think they really do care about safety. I'm not sure where this is going to go."
[46]
Inside Anthropic's existential negotiations with the Pentagon
A former Uber exec is playing hardball, but for the AI lab, it's more than just a $200 million military contract at stake.
Anthropic's weekslong battle with the Department of Defense has played out over social media posts, admonishing public statements, and direct quotes from unnamed Pentagon officials to the news media. But the future of the $380 billion AI startup comes down to just three words: "any lawful use." The new terms, which OpenAI and xAI have reportedly already agreed to, would give the US military carte blanche to use these services for mass surveillance and lethal autonomous weapons: AI that has full power to track and kill targets with no humans involved in the decision-making process.
[47]
In Defense-Anthropic clash, AI is real-time testing the balance of power in future of warfare
Anthropic trying to put limitations on its AI models 'really has no standing', says Brent Sadler
The Department of Defense's clash with Anthropic over the integration of artificial intelligence into military operations, and who sets the limits on usage, reached a peak this week with Defense Secretary Pete Hegseth giving the AI company until 5:01 p.m. ET Friday to accede to the government's demands. Anthropic has not budged, to date at least, but the battle between military and industry over AI is just getting started. The Pentagon is colliding with the private companies that control AI in a way that has not been tested in the post-World War II era. On Thursday, Anthropic refused Defense Secretary Pete Hegseth's demand to loosen certain safeguards on its models for military use, including mass domestic surveillance or fully autonomous weapons, because doing so would violate company policies. CEO Dario Amodei's decision comes after the Pentagon warned it could terminate the partnership if the company refuses to support "all lawful uses." "It is the Department's prerogative to select contractors most aligned with their vision," Amodei wrote in a statement on Thursday. "But given the substantial value that Anthropic's technology provides to our armed forces, we hope they reconsider." The standoff highlights the emerging reality that private firms developing frontier AI may seek to set their own limits on how the technology is deployed, even in national security contexts. In July the Defense Department awarded contracts worth up to $200 million each to four companies -- Anthropic, OpenAI, Google DeepMind, and Elon Musk's xAI -- to prototype frontier AI capabilities tied to U.S. national security priorities. The awards signal how aggressively the Pentagon is moving to bring cutting-edge commercial AI into defense work. The urgency is reflected in internal Pentagon planning as well. A January 9 memorandum outlining the military's artificial intelligence strategy calls for the U.S. to become an "AI-first" fighting force and to accelerate integration of leading commercial AI models across warfighting, intelligence, and enterprise operations. "There are no winners in this," Lauren Kahn, a senior research analyst at Georgetown's Center for Security and Emerging Technology, told CNBC in a recent interview about the standoff between the Pentagon and Anthropic. "It leaves a sour taste in everyone's mouth." What it does do, though, is mark a shift -- a departure from decades of defense innovation during which governments themselves controlled the technology as it was created. "For most of the post-World War II era, the U.S. government defined the frontier of advanced technology," said Rear Admiral Lorin Selby, former chief of naval research. "It set the requirements, funded the foundational research, and industry executed against government-driven specifications. From nuclear propulsion to stealth to GPS, the state was the primary engine of discovery, and industry was the integrator and manufacturer." AI, Selby said, has inverted that model. "Today the commercial sector is the primary driver of frontier capability. Private capital, global competition, and commercial data scale are advancing AI at a pace that traditional government R&D structures cannot easily replicate. The Department of War is no longer defining the edge of what is technically possible in artificial intelligence -- it is adapting to it," he said.
[48]
Anthropic refuses to bow to Pentagon despite Hegseth's threats
Despite an ultimatum from Defense Secretary Pete Hegseth, Anthropic said that it can't "in good conscience" comply with a Pentagon edict to remove guardrails on its AI, CEO Dario Amodei wrote in a blog post. The Department of Defense had threatened to cancel a $200 million contract and label Anthropic a "supply chain risk" if it didn't agree to remove safeguards over mass surveillance and autonomous weapons. "Our strong preference is to continue to serve the Department and our warfighters -- with our two requested safeguards in place," Amodei said. "We remain ready to continue our work to support the national security of the United States." In response, US Under Secretary of Defense Emil Michael accused Amodei in a post on X of wanting "nothing more than to try to personally control the US military and is OK putting our nation's safety at risk." The standoff began when the Pentagon demanded that Anthropic make its Claude AI product available for "all lawful purposes" -- including mass surveillance and the development of fully autonomous weapons that can kill without human supervision. Anthropic refused to offer its tech for those things, even with a "safety stack" built into that model. Yesterday, Axios reported that Hegseth gave Anthropic a deadline of 5:01 PM on Friday to agree to the Pentagon's terms. At the same time, the DoD requested an assessment of its reliance on Claude, an initial step toward potentially labelling Anthropic as a "supply chain risk" -- a designation usually reserved for firms from adversaries like China and "never before applied to an American company," Anthropic wrote. Amodei declined to change his stance and stated that if the Pentagon chose to offboard Anthropic, "we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations or other critical missions." Grok is one of the other providers the DoD is reportedly considering, along with Google's Gemini and OpenAI. It may not be that simple for the military to disentangle itself from Claude, however. Up until now, Anthropic's model has been the only one allowed for the military's most sensitive tasks in intelligence, weapons development and battlefield operations. Claude was reportedly used in the Venezuelan raid in which the US military exfiltrated the country's president, Nicolás Maduro, and his wife. AI companies have been widely criticized for potential harm to users, but mass surveillance and weapons development would clearly take that to a new level. Anthropic's potential reply to the Pentagon was seen as a test of its claim to be the most safety-forward AI company, particularly after dropping its flagship safety pledge a few days ago. Now that Amodei has responded, the focus will shift to the Pentagon to see if it follows through on its threats, which could seriously harm Anthropic.
[49]
What to know about the Defense Production Act and the Pentagon's Anthropic ultimatum
NEW YORK (AP) -- Defense Secretary Pete Hegseth gave Anthropic an ultimatum this week: Open its artificial intelligence technology for unrestricted military use by Friday, or risk losing its government contract. Defense officials in the Trump administration also warned they could designate Anthropic, which makes the AI chatbot Claude, as a supply chain risk -- or invoke a Cold War-era law called the Defense Production Act to give the military more sweeping authority to use its products, even if the company doesn't approve. Some experts say that using the law this way would be unprecedented, and could bring future legal challenges. The government's efforts to essentially force Anthropic's hand also underscore a wider, contentious debate over AI's role in national security. Here's what we know. What is the Defense Production Act? The Defense Production Act gives the federal government broad authority to direct private companies to meet the needs of national defense. The act was signed by President Harry S. Truman in 1950, amid concerns about supplies and equipment during the Korean War. But over its now decades-long history, the law's powers have been invoked not only in times of war but also for domestic emergency preparedness as well as recovery from terrorist attacks and natural disasters. One of the act's provisions allows the president to require companies to prioritize government contracts and orders deemed necessary for national defense, with the goal of ensuring the private sector is producing enough goods needed to meet a war effort or other national emergency. Other provisions give the president the ability to use loans and additional incentives to increase production of critical goods, and authorize the government to establish voluntary agreements with private industry. The DPA is "one of the government's most powerful and adaptable industrial policy tools," said Joel Dodge, an attorney and the director of industrial policy and economic security at the Vanderbilt Policy Accelerator. Anthropic is the last of its AI peers to not supply its technology to a new U.S. military internal network. Its CEO Dario Amodei repeatedly has made clear his ethical concerns about unchecked government use of AI, including the dangers of fully autonomous armed drones and of AI-assisted mass surveillance that could track dissent. The Defense Department is considering invoking the DPA to give the military more authority to use Anthropic's products, even if the company doesn't approve of how, according to a person familiar with the matter and a senior Pentagon official. That could mean forcing Anthropic to adapt its model to the Pentagon's needs without built-in safety limits, or remove certain ethical restrictions from the company's contract language. Experts like Dodge say both would be "without precedent under the history of the DPA." "It's a powerful law," he said. "(But) it has never been used to compel a company to produce a product that it's deemed unsafe, or to dictate its terms of service." How has this law been used in the past? Trump in his first term and former President Joe Biden invoked the DPA to boost supplies to combat the COVID-19 pandemic. And during 2022's nationwide baby formula shortage, Biden used the law to speed production of formula and authorize flights to import supply from overseas. Biden also invoked the DPA in a 2023 executive order on AI, notably in efforts to require that companies share safety test results and other information with the government. 
Trump repealed the order at the start of his second term. Decades ago, the administrations of both Presidents Bill Clinton and George W. Bush used the DPA to ensure that electricity and natural gas shippers continued supplying California utilities amid an energy crisis. And the law was used after Hurricane Maria struck Puerto Rico in 2017 to prioritize contracts for food, bottled water, manufactured housing units and the restoration of electrical systems. The DPA requires periodic reauthorization to remain in effect, which can expand or refine the scope of the law. According to congressional documents, its next expiration date is slated for Sept. 30 of this year. And depending on how the Defense Department's reported demands unfold, Anthropic could be at the top of lawmakers' minds. Possible next steps for Anthropic If the Defense Department uses the DPA provision aimed at prioritizing government contracts and ordering production of certain goods -- which the Anthropic case suggests it will -- a company can push back if the requested product isn't something it already produces, Dodge and others say, or if it deems the terms to be unreasonable. But the government may try to overrule that, notes Charlie Bullock, senior research fellow at the Institute for Law & AI. "If neither side backs down, it seems realistic that there would be litigation between Anthropic and the government," Bullock said. Some have also noted tension between the Pentagon's warning that it could designate Anthropic as a supply chain risk while also indicating that its products are so important to national defense that it needs to invoke the DPA -- two assertions that seem at odds with each other. "There are a lot of forces that I think the administration's counting on that would lead Anthropic to just give in on Friday and agree with its terms," Dodge said. If there's future litigation over a potential DPA order, Dodge doesn't expect the government to prevail because "it seems very out of bounds under the text of the law." But if the administration is successful, or Anthropic simply agrees to new terms, that could open up "a Pandora's box of what the government could do to assert power and control over private companies," he added. ___ Associated Press Writers Matt O'Brien in Providence, Rhode Island, and Konstantin Toropin and David Klepper in Washington contributed to this report.
[50]
Anthropic Won't Lift AI Safeguards Amid Ongoing Pentagon Dispute: CEO - Decrypt
The standoff follows reports that the U.S. military used Claude to capture former Venezuelan President Nicolás Maduro. Anthropic CEO Dario Amodei said Thursday the company will not remove safeguards from its Claude AI model, escalating a dispute with the U.S. Department of Defense over how the technology can be used in classified military systems. The statement comes as the Defense Department reviews its relationship with Anthropic and weighs potential consequences, including cancellation of the company's $200 million contract and possible invocation of the Defense Production Act. "We cannot in good conscience accede to their request," Amodei wrote, referring to the Pentagon's demand in January that AI contractors permit use of their systems for "any lawful use." While the Pentagon has since required AI vendors to adopt standard "any lawful use" language in future agreements, Anthropic remained the only frontier AI firm that resisted turning over control of its AI to the military. On Wednesday, Axios first reported that the Pentagon had issued an ultimatum requiring unrestricted military use of Claude. The deadline is reportedly Friday of this week. "It is the Department's prerogative to select contractors most aligned with their vision," Amodei continued. "But given the substantial value that Anthropic's technology provides to our armed forces, we hope they reconsider." In his statement, Amodei framed the company's stance as aligned with U.S. national security goals. "I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries," he said. He added that Claude is "extensively deployed across the Department of War and other national security agencies for intelligence analysis, modeling and simulation, operational planning, cyber operations, and more." War on AI The dispute unfolds against broader concerns about how advanced AI systems behave in high-stakes military scenarios. In a recent King's College London study, OpenAI's GPT-5.2, Anthropic's Claude Sonnet 4, and Google's Gemini 3 Flash deployed nuclear weapons in 95% of simulated geopolitical crises. During a speech at SpaceX's Starbase in Texas in January, Defense Secretary Pete Hegseth said the U.S. military plans to deploy the most advanced AI models. That same month, reports surfaced that Claude had been used during a U.S. operation to capture former Venezuelan President Nicolás Maduro. Amodei rejected claims that Anthropic questioned any specific military operations. "Anthropic understands that the Department of War, not private companies, makes military decisions," he said. "We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner." Despite this, Amodei said using these systems for mass domestic surveillance or autonomous weapons is incompatible with democratic values and presents serious risks. "Today, frontier AI systems are simply not reliable enough to power fully autonomous weapons," he said. "We will not knowingly provide a product that puts America's warfighters and civilians at risk." He also addressed the Pentagon's threat to designate Anthropic a "supply chain risk" while also potentially invoking the Defense Production Act. "These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security," he said.
Even as Amodei has said the company will not comply with the Pentagon's request, Anthropic has revised its Responsible Scaling Policy, dropping a pledge to halt training of advanced systems without guaranteed safeguards in place. Robert Weissman, co-president of Public Citizen, said the Pentagon's posture signals broader pressure on the tech industry. "The Pentagon is publicly bullying Anthropic, and the public part is intentional, because they want to pressure this particular company and send a message to all big tech and all corporations that we intend to do and take whatever we want and don't get in our way," Weissman told Decrypt. Weissman described Anthropic's guardrails as "modest" and aimed at preventing "improper surveillance of American people or to facilitate the development and deployment of killer robots, AI-enabled weaponry that could launch lethal strikes without humans' say-so." "Those are the most sensible and modest guardrails you could come up with when it comes to this powerful new technology." Regarding the Pentagon's threat of designating Anthropic a "supply chain risk," Weissman called it a potentially crushing penalty from the government, and argued it would pressure other AI firms to avoid imposing similar limits. "Individuals might use Claude, but none of the AI companies, particularly Anthropic, have business models based on individual use; they're looking for business use," he said. "This is a potentially crushing penalty from the government." While the Pentagon has not yet said whether it plans to go through with its threat to terminate the contract or invoke the Defense Production Act, Weissman said the Pentagon is signaling to AI companies that it expects unrestricted access to their technology once it is deployed in government systems. "The message of the Pentagon is, 'we're not going to tolerate this, and we expect to be able to use the technology as it's invented for any purpose we want,'" Weissman said.
[51]
Dispute Between Pentagon and Anthropic Intensifies as Deadline Looms
Julian E. Barnes reported from Washington, and Sheera Frenkel from San Francisco. A standoff between the Pentagon and the artificial intelligence company Anthropic appeared to be deepening as the two sides hurtled toward a 5:01 p.m. deadline Friday that military officials gave the firm to either allow them unrestricted access to its most advanced model or face consequences. Defense Department officials criticized Anthropic's leader after the company on Thursday rejected their latest offer to settle the dispute. The Pentagon has threatened to either cut the company off from government business by declaring it a supply chain threat or force it to provide its frontier model without restrictions under the Defense Production Act. Emil Michael, a top Pentagon official who oversees artificial intelligence, attacked Dario Amodei, the chief executive of Anthropic, who on Thursday released a statement about why the company would not agree to the Defense Department's latest terms. "It's a shame that @DarioAmodei is a liar and has a God-complex," Mr. Michael wrote late Thursday. "He wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk. The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company." On the surface, the battle between the Pentagon and Anthropic is a contract dispute over technical details of how the artificial intelligence model works, and the military's use of it. But it has also ballooned into a deeply political fight, involving questions of the military's ability to employ cutting-edge technology the way it sees fit and what A.I. can or should be used for. Officials from the State Department took to social media to reinforce the Pentagon's case and chastise Anthropic, while Democratic senators backed the company. Senator Mark Warner of Virginia, the top Democrat on the Senate Intelligence Committee, posted a video on social media on Thursday in which he said companies need to make some concessions to the government, but indicated he thought Anthropic's concerns about surveillance and autonomous drones held merit. Mr. Warner argued that Anthropic was being threatened by Pete Hegseth, the defense secretary, for prioritizing safety. "He is threatening them, literally by tomorrow, that if they don't give up all controls on safety and other things that anyone who does business with them would be banned," Mr. Warner said. The Pentagon wants all its contractors to adhere to a single standard -- that the military can use what it buys however it wants, as long as it complies with the law. But Pentagon officials have also been happy to beat up on tech companies, particularly ones the Trump administration has branded as "woke." For Anthropic, a firm that prioritizes both national security and technological safety, the political stakes are high. Supporters cheered Mr. Amodei's assertion that his company would not bend or allow its model to be used for mass surveillance of Americans or to command pilotless drones. The Pentagon said on Thursday that it had no interest in using Claude for Government, Anthropic's model that works on classified systems, for either activity. Mr. Amodei said the Pentagon's assertion that it would not use Claude for domestic surveillance or autonomous drones was undercut by the legal language in their contract. "In a narrow set of cases, we believe A.I.
can undermine, rather than defend, democratic values," he wrote. "Some uses are also simply outside the bounds of what today's technology can safely and reliably do." It is unclear what exactly will happen after 5:01 p.m. Friday. Any action by the Pentagon to label the company a supply chain risk or to force it to comply with the Defense Production Act would prompt legal action by Anthropic. Labeling the company a supply chain threat would block it from doing business with the government. But that, in turn, could have far-reaching effects for the Pentagon and intelligence agencies, because Anthropic's Claude has been the primary A.I. program used in classified systems. While many of the uses of artificial intelligence to assist military operations on the ground are still in a developmental stage, the models are actively used for intelligence analysis. Forcing Claude off government computers would hurt analysts at the National Security Agency sifting through overseas communications intercepts. It could also hamper C.I.A. analysts searching for patterns in intelligence reports. The Pentagon is ready to move forward with Grok, produced by Elon Musk's xAI, on its classified system. But Grok is considered by current and former government officials to be an inferior product. And switching A.I. software would take time and almost certainly cause disruption.
[52]
US threatens Anthropic with deadline in dispute on AI safeguards
US Secretary of Defense Pete Hegseth vowed to remove Anthropic from his agency's supply chain if the company declined to allow its artificial intelligence (AI) technology to be used across military applications. The threat was issued on Tuesday at a Pentagon meeting that Hegseth had demanded with Anthropic boss Dario Amodei, a source familiar with discussions told the BBC. "We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government's national security mission in line with what our models can reliably and responsibly do," Anthropic said in a statement. A senior Pentagon official said Anthropic had until Friday evening to comply. A source told the BBC the tone of the discussion between Hegseth and Amodei was cordial, but Amodei laid out what Anthropic considers to be its red lines. These include involvement in autonomous kinetic operations in which AI tools make final military targeting decisions without human intervention. The use of Anthropic tools for mass domestic surveillance constitutes another red line, the source said. But the Pentagon official told the BBC the current conflict between the agency and Anthropic is unrelated to the use of autonomous weapons or mass surveillance. The official said that if Anthropic did not comply, Hegseth would ensure the Defense Production Act was invoked on the company. That measure could compel Anthropic executives to allow unrestricted use by the Pentagon on national security grounds. The official added that the Pentagon would simultaneously label Anthropic as a supply chain risk. An Anthropic spokesperson said Amodei "expressed appreciation for the Department's work and thanked the Secretary for his service" during the meeting with Hegseth. Anthropic is the maker of the AI chatbot Claude and was one of four AI companies to be awarded contracts with the Pentagon last summer. Google, ChatGPT-maker OpenAI and Elon Musk's xAI, which makes the AI chatbot Grok, were also awarded contracts of up to $200m (£148m) each. Defence department official Emil Michael has previously said the agency wants OpenAI, Google, xAI, and Anthropic to allow the Pentagon to "be able to use any model for all lawful use cases." Anthropic has consistently aimed to position itself as taking a more safety-orientated approach to AI research than its rivals. It regularly shares safety reports on its own products with the public. One such report from last year acknowledged its AI technology had been "weaponised" by hackers who used it to conduct sophisticated cyber-attacks. The company's image was challenged after reports that the US military used its AI model Claude during the operation that led to the capture of former Venezuelan President Nicolás Maduro in January.
[53]
Deadline looms as Anthropic rejects Pentagon demands it remove AI safeguards
Pages from the Anthropic website and the company's logos are displayed on a computer screen in New York on Thursday, Feb. 26, 2026. (Patrick Sison/Associated Press) The Pentagon's showdown with Anthropic, one of the world's most powerful AI companies, over the military use of its AI model is set to come to a head Friday, after Anthropic's CEO rejected the Defense Department's ultimatum that it loosen safety restrictions or be blacklisted from lucrative military work. At stake are hundreds of millions of dollars in contracts and access to some of the most advanced AI tools on the planet. Here's what to know about the fight and what the consequences could be. For months, Anthropic CEO Dario Amodei has insisted that Anthropic's AI model, Claude, must not be used for mass surveillance in the U.S. or to power entirely autonomous weapons, such as a drone that uses AI to kill targets without human approval. He has described those uses as "entirely illegitimate" and says they are "bright red lines" for the company. The Pentagon says that it does not intend to use Anthropic's tools for surveillance or autonomous weapons. But it says that it's not up to a contractor like Anthropic to make decisions about how its technology is used, and says AI companies including Anthropic need to allow the U.S. government to use their tools "for all lawful purposes." "Legality is the Pentagon's responsibility as the end user," a senior Pentagon official who declined to give their name told NPR this week. On Thursday, Amodei said Anthropic could not accept the Pentagon's latest changes to the terms of its contract. "I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries," the CEO wrote in a lengthy statement about the impasse. "Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner," he said. "However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values," Amodei continued. He described domestic mass surveillance and fully autonomous weapons as uses that are "simply outside the bounds of what today's technology can safely and reliably do." Those uses "have never been included in our contracts with the Department of War, and we believe they should not be included now," he added. Amodei's rejection comes as Anthropic's relationship with the Pentagon has grown increasingly acrimonious. At a meeting on Tuesday between Defense Secretary Pete Hegseth and Amodei, Hegseth threatened to punish the company if it does not bend to the administration's demands, according to two people with direct knowledge of the meeting who were not authorized to speak publicly. One person close to the discussion said Hegseth dangled the possibility of canceling Anthropic's $200 million contract with the Defense Department, while a Pentagon official said repercussions could include forcing Anthropic to allow the federal government to use its AI model against its will and effectively blacklisting Anthropic from working with the U.S. military. "These threats do not change our position: we cannot in good conscience accede to their request," Amodei wrote on Thursday. "But given the substantial value that Anthropic's technology provides to our armed forces, we hope they reconsider."
In a post on X on Thursday, Pentagon spokesman Sean Parnell warned that Anthropic had until Friday afternoon before the Pentagon would take action. "They have until 5:01 PM ET on Friday to decide. Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk for DOW," Parnell wrote, using the Pentagon's rebranded "Department of War" acronym. Anthropic said on Thursday the Pentagon had sent the company new contract language overnight that, in the company's view, "made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons." The statement continued: "New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will. Despite DOW's recent public statements, these narrow safeguards have been the crux of our negotiations for months." Anthropic said it's ready to continue negotiations and is "committed to operational continuity for the Department and America's warfighters." Deeming Anthropic a supply chain risk would be unusual, according to Geoffrey Gertz, a senior fellow at the Center for a New American Security. The designation has "traditionally been used for foreign adversary technology," he said, such as Chinese telecommunications company Huawei. It's unclear exactly how far-reaching the Pentagon designation would be. It could mean that other Pentagon contractors would be prohibited from using Anthropic's tools in their work for the Pentagon, or it could prohibit them from using Anthropic's tools at all. That second case would be particularly damaging to the company, Gertz said. At the same time, the Pentagon has threatened to invoke the Defense Production Act to force Anthropic to remove its guardrails. That too would be an extraordinary step, Gertz said. The Defense Production Act is designed to give the government control over certain commercial sectors in extraordinary circumstances. It is "traditionally evoked very rarely in true emergency crisis situations," he said. The goal in this case, presumably, would be to use the act to compel Anthropic to loosen restrictions on the use of its AI tools. Gertz noted that these two threats against Anthropic appear to be somewhat contradictory: "It's this funny mix where they both are such a risk that they need to be kicked out of all systems, and so essential that they need to be compelled to be part of the system no matter what," he said. The Pentagon's contract with Anthropic is worth as much as $200 million, a relatively small portion of the company's $14 billion in revenue. While the Pentagon has similar contracts with other AI companies including Google, OpenAI and xAI, Anthropic was the first to be cleared for classified use after defense officials deemed it the most advanced and secure model for sensitive military applications. If the contract were simply cancelled, that might be the end of it, Gertz said. But if the Pentagon either tries to compel Anthropic to remove its guardrails or hits it with a wider supply-chain-risk designation, then the company will almost certainly have to fight back, he predicts. "Certainly if the Pentagon seeks to escalate it," Gertz said, "I suspect we'll see more legal fights."
[54]
OpenAI secures Pentagon deal with safety safeguards as Trump drops Anthropic
OpenAI said Friday it struck a deal for the Pentagon to use its models in the US defense agency's classified network, with "safeguards," after President Donald Trump blacklisted AI rival Anthropic. Trump had ordered the government to stop using Anthropic, calling it a threat to national security after it refused to agree to unconditional military use of its Claude models. The firm vowed to sue over the "intimidation" in what has become a rare public dispute between a major tech firm and the US government, insisting its technology should not be used for mass surveillance or fully autonomous weapons systems. Hours later, OpenAI CEO Sam Altman announced a deal with the Pentagon to use its models with similar red lines to Anthropic, using "technical safeguards" that the Department of Defense had agreed to. "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," Altman wrote on X, adding that those principles went "into our agreement." The Department of Defense did not immediately respond to a request for comment. Washington had lashed out at Anthropic over its ethical concerns, saying the Pentagon operates within the law and contracted suppliers cannot set terms on how their products are employed. "I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology. We don't need it, we don't want it, and will not do business with them again!" Trump said in a post on his Truth Social platform. "Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow," Trump added. Court challenge Altman told employees Thursday that he was seeking an agreement with the Pentagon that would include demands similar to Anthropic's, and that he hoped to help broker a resolution. "Humans should remain in the loop for high-stakes automated decisions," he wrote in a memo to employees, according to US media. Anthropic echoed those sentiments in a statement earlier Friday, saying no pushback from Washington would "change our position on mass domestic surveillance or fully autonomous weapons." The company said it remains "ready to continue our work to support the national security of the United States." The Pentagon had said Anthropic must agree to comply with its demand by 5:01 pm (22:01 GMT) Friday or face compulsion under the Defense Production Act. The Cold War-era law, last invoked during the Covid pandemic, grants the federal government sweeping powers to direct private industry toward national security priorities. The Pentagon also threatened to designate Anthropic a supply chain risk -- a label typically reserved for companies from adversary nations. But in response Anthropic said it would seek to overturn the ban. "We will challenge any supply chain risk designation in court," the San Francisco-based AI startup said in a lengthy statement that outlined the dangers of the Pentagon's demands. 'Dangerous precedent' US Defense Secretary Pete Hegseth said earlier he was directing the Pentagon to follow through on the latter threat, and that "effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." 
"Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon," Hegseth wrote on X. Calling Hegseth "the least qualified Secretary of Defense in our nation's history," top House Democrat Hakeem Jeffries praised what he called Anthropic's courage for pushing back "against this shocking invasion of privacy scheme." "Mass surveillance of American citizens is unacceptable," Jeffries added in his statement late Friday. The conflict had earlier drawn a show of solidarity from others in the industry, with hundreds of employees from AI giants Google DeepMind and OpenAI urging their companies to rally behind Anthropic in an open letter titled "We Will Not Be Divided." "We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight," the letter said. "They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand," it added.
[55]
Anthropic is standing up to the US Department of War and refusing to remove AI autonomous weapon and mass surveillance safeguards: 'We cannot in good conscience accede to their request'
'We will not knowingly provide a product that puts America's warfighters and civilians at risk.' Anthropic CEO Dario Amodei has released a statement on the company's website regarding its months-long dispute with the US Department of War over the use of its AI technology. In the statement, Amodei outlines his refusal to remove safeguards that prevent the company's AI products from being used for fully autonomous weapons and domestic mass surveillance purposes. "I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries," the statement begins. "Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community. "However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values," the statement continues. "Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now: "Mass domestic surveillance. We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values. "Fully autonomous weapons. Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons... may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America's warfighters and civilians at risk." "The Department of War has stated they will only contract with AI companies who accede to 'any lawful use' and remove safeguards in the cases mentioned above," Amodei continues. "They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a 'supply chain risk' -- a label reserved for US adversaries, never before applied to an American company -- and to invoke the Defense Production Act to force the safeguards' removal." "Regardless, these threats do not change our position: we cannot in good conscience accede to their request," the statement concludes. "We remain ready to continue our work to support the national security of the United States." In response, US Undersecretary of Defense Emil Michael said: "It's a shame that @DarioAmodei is a liar and has a God-complex. "He wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk. The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company." Anthropic currently has a contract with the US Department of Defense worth up to $200 million. In response to the recent dispute, over 300 Google and OpenAI employees have signed an open letter in support of Anthropic's position. The letter ends: "We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight."
[56]
As Pentagon-Anthropic feud risks boiling over, military says it's made compromises to AI giant
As the U.S. military's partnership with artificial intelligence giant Anthropic teeters on the edge of collapse, the Pentagon's top technology official told CBS News the department has offered compromises in order to reach a deal with the company. The Pentagon has given Anthropic until Friday at 5:01 p.m. to either let the military use the company's AI model for "all lawful purposes" or risk losing a lucrative Pentagon contract. The AI startup has sought guardrails that explicitly bar its powerful Claude model from being used to conduct mass surveillance of Americans or carry out military operations on its own. The Pentagon's chief technology officer Emil Michael told CBS News on Thursday that the military has "made some very good concessions." In particular, the Defense Department offered to "put it in writing" that federal laws already prevent the military from conducting mass surveillance on Americans, and that internal policies restrict how the military can use autonomous weapons, according to Michael. He also said the military invited Anthropic to participate in its AI ethics board. Asked why the military will not specifically put in writing that Anthropic's model can't be used for mass surveillance of Americans or to make final targeting decisions without human involvement, Michael said those uses of AI are already barred by the law and by Pentagon policies. "At some level, you have to trust your military to do the right thing," said Michael. "But we do have to be prepared for the future. We do have to be prepared for what China is doing," Michael said. "So we'll never say that we're not going to be able to defend ourselves in writing to a company." If the military and Anthropic do not reach a deal by Friday's deadline, the military plans to cut off its partnership with the company and designate it a supply chain risk, Pentagon spokesman Sean Parnell said earlier Thursday. Officials are also considering invoking the Defense Production Act to make Anthropic adhere to the military's requests, sources told CBS News. Michael did not confirm that the Defense Production Act could be used, but he said that "no company is going to take out any software that's being used in this department until we have an alternative." Michael added that he's working on partnerships with alternative AI firms. At risk for Anthropic is its status as the only AI company to have its model deployed on the Pentagon's classified networks, through a partnership with data analytics giant Palantir. Anthropic was awarded a $200 million contract with the Defense Department last summer to deploy its AI capabilities to advance national security. The feud has highlighted a broader disagreement among policymakers and tech firms over how best to mitigate the potential risks posed by AI. Anthropic CEO Dario Amodei has long been vocal about the potential dangers of unconstrained AI, and has made a focus on safety and transparency a core part of his company's identity. He's also backed what he calls "sensible AI regulation." In the case of its Pentagon contract, Anthropic wants to ensure that its Claude model is not used for final military targeting decisions, a source familiar with the matter previously told CBS News. Claude is not immune from hallucinations and is not reliable enough to avoid potentially lethal mistakes, like unintended escalation or mission failure, without human judgment.
The Trump administration, meanwhile, has argued that stringent AI regulations could stifle innovation and make it harder for the American AI industry to compete, and has warned against what it calls "woke" AI models. In a speech last month, Defense Secretary Pete Hegseth pledged, "we will not employ AI models that won't allow you to fight wars." Michael told CBS News that the disagreement is partially ideological, "and the way I describe that ideology is: they're afraid of the power of AI." He said that the military is only interested in using AI lawfully, and is looking to "treat it like any other technology" -- which means that if it isn't used for lawful purposes, "that's on us." "You can't put the rules and the policies of the United States military and the government in the hands of one private company," said Michael. CBS News has reached out to Anthropic for comment.
[57]
Why Anthropic and the US are at a standoff over AI military contract
AI company Anthropic and the US Pentagon are at a standoff. Here is all you need to know. The United States government is threatening to end military contracts with the company Anthropic unless it opens its AI technology for unrestricted military use. Anthropic makes the chatbot Claude and is the last of its peers to not supply its technology to a new US military internal network. CEO Dario Amodei repeatedly has made clear his ethical concerns about unchecked government use of AI, including the dangers of fully autonomous armed drones and of AI-assisted mass surveillance that could track dissent. Anthropic won a $200 million (€167 million) contract from the US Department of Defence last July to "prototype frontier AI capabilities that advance US national security," Anthropic said. The company inked a partnership with Palantir Technologies in 2024 to integrate Claude into US intelligence software. Defence Secretary Pete Hegseth reportedly said on Tuesday he would end the $200 million (€167 million) contract and label the company a "supply chain risk" if Anthropic did not comply. If Anthropic is designated a supply chain risk under US procurement law, the government would be able to exclude the company from contract awards, remove the company's products from consideration and direct prime contractors not to use that supplier. Reports about Hegseth's meeting with Dario Amodei, Anthropic's CEO and cofounder, also said that Hegseth threatened to use the Defense Production Act against the company, a law that gives the US President broad authority to direct private companies to prioritise national security needs, which includes access to their technology. Euronews Next reached out to Anthropic and the US government's Department of Defence to confirm the allegations, but did not receive immediate replies. Anthropic has long pitched itself as the more responsible and safety-minded of the leading AI companies, ever since its founders quit OpenAI to form the startup in 2021. On Tuesday, Anthropic said in an interview with Time Magazine that it was dropping its safety pledge that it would not release an AI system unless it could guarantee that the safety measures were adequate. Instead, it launched a new version of its responsible scaling policy, which outlines the company's framework for mitigating catastrophic AI risks. Jared Kaplan, Anthropic's chief science officer, told the publication that keeping the company from training new models while their competitors raced ahead without safeguards would not help them keep up in the AI race. "If one AI developer paused development to implement safety measures while others moved forward with training and deploying AI systems without strong mitigations, that could result in a world that is less safe," Anthropic's new policy reads. "The developers with the weakest protections would set the pace, and responsible developers would lose their ability to do safety research and advance the public benefit." The policy separates Anthropic's hopes for bringing safety standards to the industry from its own goals as a company, where safety is still a priority for them. Anthropic said its new policy means the company will set "ambitious yet achievable" safety roadmaps for its models as well as publish risk reports that will show anticipated risks and whether a model's release is justified.
[58]
Anthropic says U.S. military can use its AI systems for missile defense
In contract negotiations between senior Defense Department officials and leaders from AI giant Anthropic in December, the company agreed to allow the U.S. government to use its AI systems for missile and cyber defense purposes, a person familiar with the matter said, requesting anonymity to speak about private discussions. But that apparently did not satisfy the Pentagon. Following weeks of tension between the Defense Department and Anthropic over the company's restrictions on how its products can be used by the military, Defense Secretary Pete Hegseth issued a stark ultimatum to company CEO Dario Amodei on Tuesday: Allow the AI technology to be used for all legal military purposes by this Friday or be forced to cooperate, a senior Pentagon official told NBC News. The ultimatum, detailed to NBC News by a senior Pentagon official, comes as Anthropic -- a company that has heavily marketed its focus on AI safety -- tries to maintain firm policies preventing its systems from being used for mass domestic surveillance or direct use in lethal autonomous weapons. The December contract changes would allow for its systems to be widely used for cyber and missile defense, according to the person familiar with the matter. An Anthropic spokesperson told NBC News in a statement that "Every iteration of our proposed contract language would enable our models to support missile defense and similar uses." But the company's insistence on guardrails has continued to be a source of contention between Anthropic and the Defense Department. According to the senior Pentagon official, representatives from the department, including Undersecretary of Defense Emil Michael, recently discussed several hypothetical scenarios with Anthropic leadership about how the company's products might be employed by the military. As part of those discussions, the officials discussed how Anthropic's systems might be used if an adversary launched an intercontinental ballistic missile at the U.S. According to the Pentagon source, the officials discussed whether Anthropic's guardrails might somehow block a U.S. response to the launch. Anthropic officials said they could be called on to lift those restrictions, according to the official, but Pentagon leadership was not fully satisfied with Anthropic's adjustments and did not want to be beholden to the private company. According to an Anthropic spokesperson, any suggestion that CEO Amodei said the Pentagon would have to call the company in each missile defense operation is "patently false." In the latest escalation in negotiations, during Tuesday's meeting Pentagon leaders said they could invoke the Defense Production Act to force Anthropic to comply with the Pentagon's rules, according to the senior Pentagon official. The Act allows the president to control domestic companies critical to national security in times of need. In Tuesday's meeting, Pentagon leadership also threatened to instead label Anthropic as a "supply chain risk" and ban all defense business with the company if it does not align its terms of service for certain high-stakes uses with the Pentagon by Friday, the source said. "Anthropic has until 5:01pm Friday to get on board with the Department of War," the senior Pentagon official said of the ultimatum in a statement provided to NBC News, responding to questions about the meeting. "If they don't get on board, the Secretary of War will ensure the Defense Production Act is invoked on Anthropic, compelling them to be used by the Pentagon."
"Additionally, the Secretary of War will also label Anthropic a supply chain risk," the official said. Asked about Tuesday's meeting, an Anthropic spokesperson said in a statement: "Dario expressed appreciation for the Department's work and thanked the Secretary for his service. We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government's national security mission in line with what our models can reliably and responsibly do." Hegseth complimented Anthropic's products and said the Pentagon wanted to work with Anthropic, according to another person familiar with the meeting, who requested anonymity to speak candidly. The person confirmed that the department said it would terminate Anthropic's work with the Pentagon by Friday if it did not agree to its terms. According to reports from The Wall Street Journal and Axios, Anthropic's Claude systems were used during the operation to capture Venezuelan President Nicolás Maduro in January. It is unclear exactly how the systems were used. Hegseth sent a memo to senior Pentagon officers Jan. 9 announcing the Pentagon's drive toward an "AI-first warfighting force." He outlined a push to use AI models, like Anthropic's, for all legitimate military purposes, "free from usage policy constraints" set by individual AI companies. Anthropic is the only AI company whose products are actively used on classified networks, through its contract with Palantir, a data analytics company. A senior Pentagon official confirmed to NBC News that xAI reached a deal with the Pentagon on Monday to use its Grok chatbot system on classified networks, agreeing to allow its systems to be harnessed for "any lawful use" as Hegseth desired. Anthropic was one of four AI companies -- the others were OpenAI, Google DeepMind and xAI -- to get contracts worth up to $200 million in July to "prototype frontier AI capabilities that advance U.S. national security."
[59]
Anthropic Rolls Back Safety Protocols as It Waits to Find Out If It's Being Drafted by the Army
Let's run through a hypothetical situation real quick: Let's say you're an AI company that has made your calling card safety, and you are negotiating the use of your technology with the military, which has threatened to punish your business if you don't abandon your principles. You'd like to maintain your position as the safety-conscious company in the AI space, which has garnered you a significant amount of goodwill with the general public as you resist government pressure. Is now a good time to announce that you're rolling back some of your safety protocols and tell the Pentagon that you're cool with AI launching missiles in certain circumstances? Anthropic seems to think it is. On Tuesday, the company announced that it was updating its Responsible Scaling Policy, a framework it first introduced in 2023 with the goal of mitigating catastrophic risks associated with AI systems. The company has held the policy up as a differentiator between it and its competitors, a promise that it puts safety first, even at the risk of potentially falling behind other frontier models that exercise less caution. Previously, Anthropic's RSP stated, "We will not train or deploy models capable of causing catastrophic harm unless we have implemented safety and security measures that will keep risks below acceptable levels." Now, the company claims it's not so sure that's worth it if that means losing ground. "We felt that it wouldn't actually help anyone for us to stop training AI models," Jared Kaplan, Anthropic's chief science officer, told TIME. "We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments ... if competitors are blazing ahead." Anthropic does credit its original RSP for incentivizing it to develop stronger safeguards for its model, but has basically said that because other companies haven't adopted similar restraints, it needs more flexibility that red lines don't offer. "The Responsible Scaling Policy was always planned to be a living document: a policy that had the flexibility to change as AI models become more capable," the company said in a blog post. Anthropic said it will continue to publish risk reports, but is going to run with "nonbinding but publicly-declared" safety goals rather than firm internal standards. A generous reading of that would be a commitment to public accountability. A less charitable read might be that the company knows there is no way for the public to actually enforce these standards, so why bother restraining itself? Anthropic told the Wall Street Journal that the change to its RSP is unrelated to its ongoing negotiations with the Pentagon, which just yesterday gave the company an ultimatum to loosen its safety guardrails so that the military can use its AI models as it sees fit or face consequences. But it's hard not to read the change in that light. Anthropic has maintained two primary red lines as it relates to the use of its technology for military operations: it will not allow its models to be used for mass domestic surveillance or to develop fully autonomous weapons that would operate without human involvement. Defense Secretary Pete Hegseth seems unwilling to accept that, and threatened to cancel Anthropic's government contracts, declare Anthropic a "supply chain risk," and/or invoke the Defense Production Act to force the company to build a model for the military's desired purposes. But it appears the company has already been negotiating carveouts that don't quite cross the red line.
On Wednesday, Semafor reported the Pentagon asked Anthropic in December if it would allow its model to be used to autonomously launch missiles to shoot down other missiles. Reportedly, Anthropic said the Pentagon should reach out to ask before moving forward with such a use case -- though Semafor reported that Anthropic was and continues to be willing to create a missile defense carveout for its policies. It's possible, maybe even likely, that Anthropic was always going to loosen the restrictions it has placed on itself. It's also possible that change was always going to come this week, regardless of the standoff with the Defense Department over AI safeguards. But given the position Anthropic finds itself in, it does become difficult not to view the situation as the company starting to compromise on its principles. Gizmodo reached out to Anthropic for more information, but the company did not offer comment prior to publication.
[60]
Google, OpenAI employees call for unified front on military use
Why it matters: The signatures could pressure Google and OpenAI to join Anthropic in drawing a line in the sand over what they won't allow governments -- even the U.S. government -- to do with their technology.
Driving the news: The letter comes as Anthropic has reiterated its insistence that its technology not be used to surveil U.S. citizens or for autonomous weapons.
* "The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused," the letter states. "They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand."
* It calls on Google and OpenAI leadership to "put aside their differences and stand together" against the Pentagon's demands.
* The organizers say they are unaffiliated with any AI company or political group. Signatures are verified, and signers can remain anonymous, according to the letter.
By the numbers: The letter had been signed by more than 160 people from Google and more than 40 people from OpenAI as of 5:30 p.m. PT on Thursday, though some signed the letter anonymously.
* Google parent Alphabet employed nearly 200,000 people as of last year, while OpenAI's workforce is estimated at fewer than 10,000 people.
Between the lines: Some in Congress have also called on the government to back down from its fight with Anthropic. Google reversed its internal prohibition on AI for weapons and surveillance in February 2025.
What we're watching: The letter tests whether rank-and-file AI workers who rally behind Anthropic's position will convince their own employers to adopt similar red lines.
[61]
Nvidia CEO Jensen Huang says conflict between Pentagon and Anthropic is 'not the end of the world'
* The Pentagon and Anthropic continue to butt heads over use of Claude
* Pete Hegseth has given Anthropic until Friday to comply
* Anthropic could lose its contract or be forced to give the Pentagon full Claude access
Following the rift that has formed between the Pentagon and Anthropic over use of the Claude AI model for military purposes, Nvidia CEO Jensen Huang has said it is "not the end of the world." Huang told CNBC both sides have "reasonable perspectives", as the Pentagon has the right to decide how to use technology provided in contracts, and Anthropic has the right to decide how its models are used. However, Anthropic stands to lose its $200 million contract with the Department of Defense unless common ground is found. AI use for 'all lawful purposes' The Pentagon previously requested Anthropic, OpenAI, Google, and xAI allow the use of their AI models for "all lawful purposes," to which Anthropic put up the most resistance over fears its AI models could be used for autonomous weapons systems and mass domestic surveillance. If the disagreement is not resolved, the Pentagon could invoke the Defense Production Act (DPA), which would allow the President to force Anthropic to comply with the Pentagon's requests. US Defense Secretary Pete Hegseth has already threatened to invoke the DPA and to label the company a "supply chain risk." Hegseth has given Anthropic until Friday to comply with the Pentagon's request. US intelligence agencies such as the FBI and NSA have previously undertaken illegal mass surveillance campaigns against US citizens, such as the COINTELPRO project during much of the Vietnam war, the illegal use of the Communications Assistance for Law Enforcement Act (CALEA) in the 1990s, and the use of the Patriot Act after 9/11 for covert and illegal mass surveillance. Anthropic and Nvidia hold a strategic partnership: in exchange for Anthropic adopting the Nvidia architecture, Nvidia committed $5 billion in investment. Huang added, "I hope that they can work it out, but if it doesn't get worked out, it's also not the end of the world."
[62]
US military leaders meet with Anthropic to argue against Claude safeguards
Anthropic presents itself as the most safety-forward AI firm, and the Pentagon has threatened penalties if it does not yield. US military leaders including Pete Hegseth, the defense secretary, met with executives from the artificial intelligence firm Anthropic on Tuesday to hash out a dispute over what the government will be able to do with the company's powerful AI model. Hegseth gave Dario Amodei, the Anthropic CEO, until the end of the day Friday to agree to the department's terms or face penalties, Axios reported. Anthropic, which presents itself as the most safety-forward of the leading AI companies, has been mired in weeks of disagreement with the Pentagon over how the military is allowed to use its large language model, Claude. US defense officials have pushed for unfettered access to Claude's capabilities, while Anthropic has reportedly resisted allowing its product to be used for mass surveillance or autonomous weapons systems that can use AI to kill people without human input. The Department of Defense (DoD) has integrated Claude into its operations, but has threatened to sever the relationship over what its top brass perceives as roadblocks erected by Anthropic. At stake in the negotiations is whether the AI industry will push back against government demand for the military use of its products, something that has long been controversial among researchers and ethical AI advocates. Defense officials have already threatened punitive measures against Anthropic if it does not comply, including canceling a massive contract with the company and designating it a "supply chain risk". The DoD struck deals with several major AI firms including Anthropic, Google and OpenAI in July last year, offering them contracts worth up to $200m. Until this week, however, Anthropic's Claude product was the only model permitted for use in the military's classified systems. The DoD signed a deal on Monday that allowed military personnel to use Elon Musk's xAI chatbot in classified systems; the chatbot has faced recent backlash over producing nonconsensual sexualized images of children. Both xAI and OpenAI have agreed to the government's terms on the uses of their AI, according to the Washington Post, with a defense official stating that OpenAI had allowed its model to be used for "all lawful purposes". OpenAI did not immediately respond to a request for comment on its agreement with the government. The meeting between Anthropic and the Pentagon is taking place a month after the US military reportedly used Claude to assist in its capture of Venezuelan leader Nicolás Maduro. There has been a widespread push from the Trump administration to integrate AI into the military, while Donald Trump has repeatedly vowed that the US will win a global AI arms race to dominate the technology. Emil Michael, the Pentagon's chief technology officer and a former Uber executive, has publicly campaigned for Anthropic to "cross the Rubicon" and agree to the government's terms. "I think if someone wants to make money from the government, from the US Department of War, those guardrails ought to be tuned for our use cases - so long as they're lawful," Michael told Defense Scoop last week. Anthropic's Amodei has meanwhile long spoken out in favor of greater regulation on AI, while his company has backed a political action committee advocating for stronger safeguards over artificial intelligence.
Amodei opposed Trump during the 2024 US presidential campaign and Anthropic has hired several former Biden staffers, which the Wall Street Journal reported was a contributing factor in a pro-Trump venture capital firm backing out of investing in Anthropic earlier this year. The Pentagon has poured billions of dollars in recent years into pursuing AI-enabled technologies ranging from unmanned aerial drones to automated targeting systems. The advancement of these technologies has accelerated ethical questions around how much decision making power to cede to AI when it comes to lethal force. These debates are no longer theoretical, with fighting in Ukraine featuring deadly semiautonomous drones that can operate without human control.
[63]
Pentagon Open to AI Talks With Anthropic Before Friday Deadline
The Pentagon is open to more talks with Anthropic PBC ahead of a 5 pm Friday deadline to loosen restrictions on the use of its artificial intelligence technology, a senior Defense Department official said on Friday. Under Secretary of Defense for Research and Engineering Emil Michael said on Friday morning that the Pentagon remains open to dialogue with Anthropic, despite the company's "unpredictable" behavior in an acrimonious standoff over AI safeguards. "So long as they're in good faith, we're always open to talks," Michael said in an interview on Bloomberg Television. "Up until that deadline, I'm open to more talks and I told them so." Michael's comments come on the final day that Anthropic has to accede to the Pentagon's demands to drop certain AI safeguards or face severe consequences. The Defense Department official made the comments on Friday morning, a day after he blasted Anthropic CEO Dario Amodei in a series of X posts, calling him a liar and accusing him of having "a God-complex." "He wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk," Michael wrote on X. At stake is up to $200 million in work that Anthropic had agreed to do for the military, along with contracts for other government agencies that could also be imperiled. Amodei, whose company doesn't want its technology used for mass surveillance of Americans or with weapons that have no human oversight, said he hopes the Defense Department will revisit its current position of only working with contractors who will agree to an all-lawful-use standard. If Anthropic fails to drop its conditions, the Defense Department has vowed to declare the company a supply-chain risk, a move that would preclude it from working with other defense contractors. The Pentagon has also threatened to invoke the Cold War-era Defense Production Act to use Anthropic's software over the company's objections.
[64]
Anthropic is refusing to bend on AI safeguards as dispute with Pentagon nears deadline
CEO Dario Amodei said his company 'cannot in good conscience accede' to the Pentagon. A public showdown between the Trump administration and Anthropic is hitting an impasse as military officials demand the artificial intelligence company bend its ethical policies by Friday or risk damaging its business. Anthropic CEO Dario Amodei drew a sharp red line 24 hours before the deadline, declaring his company "cannot in good conscience accede" to the Pentagon's final demand to allow unrestricted use of its technology. Anthropic, maker of the chatbot Claude, can afford to lose a defense contract. But the ultimatum this week from Defense Secretary Pete Hegseth posed broader risks at the peak of the company's meteoric rise from a little-known computer science research lab in San Francisco to one of the world's most valuable startups. If Amodei doesn't budge, military officials have warned they will not just pull Anthropic's contract but also "deem them a supply chain risk," a designation typically stamped on foreign adversaries that could derail the company's critical partnerships with other businesses. And if Amodei were to cave, he could lose trust in the booming AI industry, particularly from top talent drawn to the company for its promises of responsibly building better-than-human AI that, without safeguards, could pose catastrophic risks. Anthropic said it sought narrow assurances from the Pentagon that Claude won't be used for mass surveillance of Americans or in fully autonomous weapons. But after months of private talks exploded into public debate, it said in a Thursday statement that new contract language "framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will." That was after Sean Parnell, the Pentagon's top spokesman, posted on social media that "we will not let ANY company dictate the terms regarding how we make operational decisions" and added the company has "until 5:01 p.m. ET on Friday to decide" if it would meet the demands or face consequences. Emil Michael, the defense undersecretary for research and engineering, later lashed out at Amodei, alleging on X that he "has a God-complex" and "wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk." That message hasn't resonated in much of Silicon Valley, where a growing number of tech workers from Anthropic's top rivals, OpenAI and Google, voiced support for Amodei's stand late Thursday in an open letter. OpenAI and Google, along with Elon Musk's xAI, also have contracts to supply their AI models to the military. "The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused," the open letter says. "They're trying to divide each company with fear that the other will give in." Also raising concerns about the Pentagon's approach were Republican and Democratic lawmakers and a former leader of the Defense Department's AI initiatives. "Painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end," wrote retired Air Force Gen. Jack Shanahan in a social media post. Shanahan faced a different wave of tech worker opposition during the first Trump administration when he led Maven, a project to use AI technology to analyze drone footage and target weapons. So many Google employees protested its participation in Project Maven at the time that the tech giant declined to renew the contract and then pledged not to use AI in weaponry. 
"Since I was square in the middle of Project Maven & Google, it's reasonable to assume I would take the Pentagon's side here," Shanahan wrote Thursday on social media. "Yet I'm sympathetic to Anthropic's position. More so than I was to Google's in 2018." He said Claude is already being widely used across the government, including in classified settings, and Anthropic's red lines are "reasonable." He said the AI large language models that power chatbots like Claude are also "not ready for prime time in national security settings," particularly not for fully autonomous weapons. "They're not trying to play cute here," he wrote. Parnell asserted Thursday that the Pentagon wants to " use Anthropic's model for all lawful purposes" and said opening up use of the technology would prevent the company from "jeopardizing critical military operations," though neither he nor other officials have detailed how they want to use the technology. The military "has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement," Parnell wrote. When Hegseth and Amodei met Tuesday, military officials warned that they could designate Anthropic as a supply chain risk, cancel its contract or invoke a Cold War-era law called the Defense Production Act to give the military more sweeping authority to use its products, even if the company doesn't approve. Amodei said Thursday that "those latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security." He said he hopes the Pentagon will reconsider given Claude's value to the military, but, if not, Anthropic "will work to enable a smooth transition to another provider." -- - AP reporter Konstantin Toropin contributed to this report.
[65]
News from the front - Anthropic returns fire on the US Department of War and rejects demands not to block AI from launching missiles on its own
He did it, he actually did it! Confronted with an ultimatum from the US Department of War to drop contractual clauses that prevent its tech from being used for mass surveillance of domestic citizens or the autonomous launching of weapons without a human being pressing the actual button, Anthropic CEO Dario Amodei has stuck to his metaphorical guns and refused to comply. With a deadline of 5pm today Washington time looming, Pete Hegseth, Secretary of War, had threatened a number of possible outcomes if Anthropic didn't fold, including being blacklisted as a security risk or having Cold War legislation brought to bear to compel it to do what the Administration wants. But Amodei got his retaliation in first, issuing a public statement on Thursday that made it clear Anthropic would rather risk losing the business than buckle to his demands. He went out of his way to make clear that: Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner. But he added: However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today's technology can safely and reliably do. Fully autonomous weapons may be critical to national defense at some point, he acknowledges, but for the moment: Today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America's warfighters and civilians at risk... without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day. In a shot over the head of the Department of War, he pointed out that the two clauses to which it is objecting now were in the contract that it signed with the supplier, not something that Anthropic has sought to add to the mix: To our knowledge, these two exceptions have not been a barrier to accelerating the adoption and use of our models within our armed forces to date. And he reached out to the Department to suggest: We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer. That leads to only one conclusion: Regardless, these threats do not change our position: we cannot in good conscience accede to their request. At time of writing, several hours before today's 5pm deadline, there has been no official reaction from Hegseth, although it's safe to assume that being told such a firm 'no' in such a public way isn't going to sit well, even if it was his Department that picked the fight in the first place. But political avatars within the Administration have been returning fire. The highest profile one to date was Emil Michael, US Under-Secretary of War and Chief Technology Officer at the Pentagon, who turned to personal attacks on Amodei in a series of social media posts: It's a shame that [he] is a liar and has a God-complex. He wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk... Anthropic is lying... What we are talking about is allowing our warfighters to use AI without having to call [Amodei] for permission to shoot down enemy drone swarms that would kill Americans.
(It should be noted that at no point has that last idea been floated or insisted upon by Anthropic and no evidence has been presented that Amodei has sought to or would wish to insert himself in the chain-of-command in such a way. In fact, he goes out of his way in the public statement to say he understands where military decision-making responsibility lies.) Meanwhile Sean Parnell, former United States Army Captain and currently Assistant to the Secretary of War for Public Affairs, protests: The Department of War has no interest in using AI to conduct mass surveillance of Americans, which is illegal, nor do we want to use AI to develop autonomous weapons that operate without human involvement. This narrative is fake and being peddled by leftists in the media. To which, at the risk of being mis-characterised as a leftist in the tech media, the obvious question surely is, 'What's the problem then? Anthropic's contractual terms map onto your own proclaimed position, don't they?' Parnell's problem is: We will not let ANY company dictate the terms regarding how we make operational decisions. Others in the Administration have also spoken out. Sarah B. Rogers, Under Secretary for Public Diplomacy at the Department of State, says: There are a lot of instances where the Government and its AI provider -- and US law -- concur on what ought to be out-of-bounds. Mass domestic surveillance is one obvious example! But the contractor can't have procedural carte blanche to cut the cord if there's a dispute. And Jeremy Lewin, Under Secretary of State for Foreign Assistance, Humanitarian Affairs & Religious Freedom, for example, insists: This isn't about Anthropic or the specific conditions at issue. It's about the broader premise that technology deeply embedded in our military must be under the exclusive control of our duly elected/appointed leaders. No private company can dictate normative terms of use -- which can change and are subject to interpretation -- for our most sensitive national security systems. The Department of War obviously can't trust a system a private company can switch off at any moment. Whatever happens after 5pm, this one is going to run and run. Political opponents of the Administration have been making their views clear as well, such as Senator Mark Warner from Virginia and Vice-Chairman of the Senate Intel Committee, who asks: Does anybody really want Pete Hegseth to decide what's appropriate and not appropriate use of Artificial Intelligence?...Companies have to make some concessions to work with government. That's a legitimate debate. What kind of data is being collected, what kind of weapons should be used, with or without a human in the loop? - those are policy questions. And I don't know about you, but I don't trust Pete Hegseth to make those decisions. We've got to stand up in this world where Artificial Intelligence could bring a lot of good, but also has an awful lot of challenges, and we sure as hell don't need the so-called Secretary of War to be making these choices! What happens next? Will the Administration really dare to put a US company at the forefront of the global AI sector that Trump 2.0 has declared America must dominate on a security risk blacklist along with the likes of Huawei?
Would it be able to square shifting overnight from having Anthropic's tech approved as the only one good enough for maximum security work at the Pentagon to suddenly being deemed a security risk to the nation, just because it won't alter the terms of a contract that were already there when the Department signed it? As Amodei noted: They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a "supply chain risk" -- a label reserved for US adversaries, never before applied to an American company -- and to invoke the Defense Production Act to force the safeguards' removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security. The answer, of course, is, quite possibly. What will happen to Anthropic if the Administration does declare war? Will it be blacklisted by the commercial enterprise sector as 'UnAmerican' as well as seeing public sector work dry up? Or will it build on the massive PR that it's had this week and see business actually increase from enterprise buyers who do value the importance of 'human in the loop' guardrails at a time when AI is in its infancy and still prone to toddler tantrums? What about the rest of the tech sector? How will it react to all this? What happens to Anthropic could, presumably, happen to anyone else in this space? At Google and OpenAI, employees have been signing up to public petitions in support of their competitor, but, of course, some of the 'usual suspects' have leapt on Anthropic's decision to score points. For example, Alex Karp, CEO of Palantir, the man who's happy to boast of his company, "Sometimes we kill people, hope you're in favor of that!", raged: Do you really think a warfighter is going to trust a software company that pulls the plug because something becomes controversial, with their life? And because it's no show without Punch, Elon Musk turned up right on cue with a concise attempt to open a new ideological/racial front in the war of words: More to come as events unfold after 5pm. For now though, Anthropic appears to be at war with the US Government.
[66]
Even as Anthropic moves deeper into enterprise, it hits a wall at the DOD - SiliconANGLE
U.S. Defense Secretary Pete Hegseth met with Anthropic PBC Chief Executive Dario Amodei in Washington D.C. today, where he delivered a stark warning - either remove restrictions on how the military uses the company's Claude AI chatbot, or face severe consequences. Hegseth reportedly told Amodei that the Pentagon may effectively blacklist his company by designating it as a "supply chain risk," or else force it to comply with his demands through the Defense Production Act, if he doesn't change his mind. He reportedly gave him a deadline of Friday. Anthropic's Claude chatbot is currently the only major chatbot that's approved to work with America's classified military systems, but the Pentagon has issues with certain restrictions on how it should be used. Hegseth is demanding that the company lift those restrictions and allow the military to use Claude for "all lawful use," Axios reported. But Anthropic is refusing to budge over two issues - it doesn't want Claude to be used to control weapons, nor does it want to partake in any mass surveillance of U.S. citizens. One source familiar with the company's stance said Amodei doesn't believe artificial intelligence systems are reliable enough to be trusted with weapons. He's also worried that there are no laws governing how AI can be used for surveillance. On the other hand, Pentagon chiefs believe that the military's use of any technology should be governed by U.S. law, not the private usage policies of the companies that develop them. If Anthropic were to be designated as a supply chain risk, that would prohibit any company with a military contract from doing business with it. It would be a major blow for the AI company, which has secured dozens of enterprise contracts over the last couple of years. Normally, the designation is reserved for companies with connections to hostile governments. As for the DPA, this is a law that gives the U.S. President the authority to force companies to prioritize and expand production for national defense reasons. Originally conceived for use in times of war, the Act was most recently invoked during the coronavirus pandemic, forcing companies such as General Motors Co. to mass produce ventilators and masks. According to Axios' sources, the meeting between Hegseth and Amodei was cordial enough, but in no way was it "warm and fuzzy" as both men doubled down on their stance. Amodei reiterated that he could not support the use of his models to operate weapons without human oversight or engage in mass surveillance, and insisted that these red lines have never compromised any military operations. Hegseth, meanwhile, reportedly praised Anthropic and said he'd like to continue working with the company, but refused to back down. In a statement to CNN later, a spokesperson for the Pentagon said the issue "has nothing to do with mass surveillance and autonomous weapons being used," before adding that the military has "always followed the law." The concern is that "you can't lead tactical ops by exception," and that "legality is the Pentagon's responsibility as the end user." The tension between the two stems from the fact that the Department of Defense doesn't have any alternatives to Anthropic's Claude at this time. While it has reportedly reached a deal to use xAI Corp.'s Grok model with classified systems, it's thought that switching to another provider would be a massive headache and cause severe disruption to the Pentagon's operations.
Dean Ball, senior fellow at the Foundation for American Innovation and former senior policy advisor on AI in Trump's White House, told TechCrunch that the lack of redundancy is the reason for the Pentagon's aggressive stance. "The DOD has no backups. This is a single-vendor situation here," he said. "They can't fix that overnight." Despite the apparent threats looming large, Anthropic accelerated its enterprise-focused strategy today when it announced new updates to Claude Cowork and a host of partnerships with software-as-a-service companies. The update will enable applications including Salesforce Inc.'s Slack, Intuit, DocuSign, LegalZoom, FactSet and Google's Gmail to integrate with Claude Cowork, which is a platform for building AI agents that understand business context. Those agents can connect with third-party software tools via Anthropic's Model Context Protocol. Anthropic positions Claude Cowork as a kind of "central brain" for knowledge workers to engage with AI. When the platform was launched earlier this month, it sent shockwaves through the stock market as investors initially perceived it as a threat to the business models of many SaaS companies. Hundreds of billions of dollars in value were wiped out, affecting firms including Thomson Reuters Corp., LegalZoom.com Inc. and Intuit Inc. The weekslong selloff in SaaS stocks carried over into this week, driven in part by a viral Substack post from Citrini Research that predicted the profound impact AI could have on the economy in the near future. IBM Corp.'s stock fell 13% on Monday, its worst single-day decline since October 2000, after Anthropic said in another update that its tools can now help to modernize applications running on Cobol, a legacy programming language for software that runs on mainframe computers. However, many of the affected stocks finally rebounded following Anthropic's announcement today. Salesforce's stock was up 4%, while IBM, DocuSign and LegalZoom all rose 2%. Shares of Thomson Reuters gained 11%, while FactSet was up 6%. Wedbush Securities' analysts said in a note that Anthropic's update shows that AI's threat to SaaS companies is "overblown". They argued that AI models cannot replace the complex workflows that are "deeply embedded" in modern software infrastructure. "The reality is that these new AI tools will not rip and replace existing software ecosystems and data environments," the analysts said. "These tools are only as useful as the data they can reach."
[67]
Hegseth threatens to force AI firm to share tech, escalating Anthropic standoff
Dario Amodei, CEO of Anthropic, speaks during the Anthropic Builder Summit in Bengaluru, India, on Feb. 16. (Priyanshu Singh/Reuters) Defense Secretary Pete Hegseth has threatened Anthropic that it could invoke powers that would allow the government to force the artificial intelligence firm to share its novel technology in the name of national security if it does not agree by Friday to terms favorable to the military, people familiar with the ongoing discussions said. But Anthropic is prepared to walk away from negotiations -- and its $200 million contract with the Defense Department -- if concerns over the use of its technology for autonomous weapons or mass surveillance are not addressed, according to the people familiar with the discussions. Anthropic is the first firm to integrate its technology into the Pentagon's classified networks, and the firm has aggressively positioned itself to be a key player in national security. In a meeting with Hegseth on Tuesday, Dario Amodei, the company's co-founder and chief executive, held firm that its AI model Claude should not be used to power autonomous weapons or conduct mass surveillance of Americans, said the people familiar with the discussions. Tensions have risen between the firm and the Pentagon in recent weeks over how Anthropic's AI was applied during the raid to capture Venezuelan President Nicolás Maduro. Defense officials responded swiftly, suggesting that if Anthropic did not allow the Pentagon to apply the AI as it wants to, within lawful limits, the company would be considered a supply-chain risk, costing it and any firm subcontracting its AI future business opportunities. At the Tuesday meeting, Hegseth went further, saying Anthropic could be subject to the Defense Production Act -- which enables the government to gain control of firms and their products -- in the name of national security. The DPA was used during the covid pandemic to address medical supply shortfalls. Overall, the meeting was serious but respectful, according to one of the people familiar with the discussions, with Hegseth praising Anthropic's technology. The secretary said he wanted to continue to work with the company, but threatened to cancel its contract by the end of the week, said the person, who spoke on the condition of anonymity to describe a private meeting. Amodei argued that neither of the limits he is seeking would impinge on the department's work, the person said. "During the conversation, Dario expressed appreciation for the Department's work and thanked the Secretary for his service," Anthropic said in a statement. "We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government's national security mission in line with what our models can reliably and responsibly do." The meeting comes after escalating criticism of Anthropic by Pentagon officials. Hegseth and his team have insisted in recent weeks that the military have free rein to use AI tools as it sees fit, limited only by the law rather than guardrails set by the companies that make the systems. Defense officials say other leading companies have agreed, at least for unclassified work, casting Anthropic as a holdout. Anthropic and Amodei are trying to walk a fine line, positioning themselves as more than willing to work with the Pentagon and describing AI as a vital technology to allow democratic countries to defend themselves. 
But shortly after Hegseth set forth his views in an internal directive, Amodei published an essay warning of the dangers of fully autonomous weapons and mass surveillance tools. He wrote that while democratic countries could be expected to have limits on the use of such systems, "some of these safeguards are already gradually eroding in some democracies." The Pentagon has sped up its efforts to integrate AI into its weapons systems, driven by competition with China -- which is racing to acquire AI technology for its military -- and new dangers such as super-fast hypersonic missiles that are difficult for humans to react to. The conflicts in Ukraine and Gaza have provided a preview of the role AI could play in a future war, with the widespread use of cheap semiautonomous drones and tools that analyze vast amounts of information to identify targets to strike. The U.S. Air Force has tested an AI-piloted fighter jet in recent years, finding that it can beat elite pilots by cutting tiny fractions of a second off turns and maneuvers. Fully autonomous weapons are probably still several years away, experts say. The Defense Department's current policy requires any system to undergo levels of review and have safeguards to ensure that humans would retain the decision-making on use of force. The policy will be reviewed as needed, officials have said. Modern military operations are complex, involving thousands of people making life-and-death decisions quickly, said Emelia Probasco, a senior fellow at Georgetown University's Center for Security and Emerging Technology. Not surprisingly, those people make mistakes, Probasco said, and AI tools could manage campaigns in all sorts of ways short of pulling the trigger. "Everyone is still trying to think what is the best way to use these systems to improve our decisions," said Probasco, a former Navy officer. "Nobody's really got the definitive answer yet."
[68]
The Pentagon brands Anthropic's CEO a 'liar' with a 'God complex' as deadline looms | Fortune
Pentagon officials have publicly questioned the character of Anthropic CEO Dario Amodei. Meanwhile, employees at competing AI labs have signed open letters supporting Anthropic's position. OpenAI CEO Sam Altman told his employees in a memo on Thursday, according to reporting from Axios, that OpenAI would push for the same limitations on autonomous weapons and mass surveillance that Anthropic has as it negotiates to extend the use of ChatGPT, currently available to the military for non-sensitive use cases, to more classified domains. The Anthropic-Pentagon fight is now threatening to spiral into an industry-wide rebellion among tech workers at AI companies over how the AI systems they are building are used by the military. On Thursday, more than 100 workers at Google sent a letter to Jeff Dean, the company's chief scientist, also asking for similar limits on how the company's Gemini AI models are used by the U.S. military, according to the New York Times. On Thursday, Amodei published a lengthy statement explaining why the company believes there should be restrictions on the use of his company's AI technology for autonomous weapons and mass surveillance. These are the two areas where Anthropic currently restricts use of its models by the military, both in its contract terms and through safeguards it has built directly into its Claude models. The Pentagon wants these limitations removed and for Anthropic to agree that the U.S. military can use its models "for any lawful purpose." Frontier AI systems are "not reliable enough to power fully autonomous weapons" and without proper oversight, they "cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day," Amodei wrote in his statement. On surveillance, he argued that powerful AI can now stitch together individually innocuous public data, such as location records, browsing history, and social associations, into a comprehensive portrait of any American citizen's life at scale. Emil Michael, the U.S.'s Under Secretary of War, called Amodei "a liar" with a "God-complex" in response, accusing the CEO of wanting "to personally control the U.S Military" in posts on the social media platform X. In a separate post, Michael also characterized Anthropic's Claude Constitution -- an internal document outlining the values and principles the company builds into its AI -- as a corporate plot to "impose on Americans their corporate laws." The Pentagon has demanded Anthropic remove the contract limitations it objects to by 5:01 p.m. Friday or face having its $200 million contract with the U.S. military canceled or, in a more extreme move, be labeled "a supply-chain risk," which would effectively bar any company doing business with the military from using Anthropic's technology. This kind of step is normally reserved for foreign adversaries such as China's Huawei or the Russian cybersecurity firm Kaspersky. "Using it against a domestic company for reasons of them not being willing to bend on some principles of this sort is really quite escalatory and unprecedented," Seán Ó hÉigeartaigh, executive director of Cambridge's Centre for the Study of Existential Risk, told Fortune. The Department of War has also threatened to invoke the Cold War-era Defense Production Act, using the law to compel Anthropic to hand over an unrestricted version of Claude on the grounds that the government deems it essential to national security.
If the Pentagon does go down this route, they will be using powers intended only for emergencies to resolve a contract dispute during peacetime. There is some precedent for this: the Biden Administration also invoked the DPA in 2023 to compel frontier AI labs to hand over information about the safety of their AI models. But compelling a company to produce a product, as opposed to simply provide information, comes closer to nationalization of a leading technology company. "If they are being effectively coerced into allowing their technology to be used in ways that even they themselves say is not reliable in high-stakes life and death situations like on the battlefield," Ó hÉigeartaigh said, "that sets a very dangerous precedent." The Department of War has publicly stated it has no intention of conducting mass surveillance or removing humans from weapons targeting decisions but the dispute could rest on how either side is defining "autonomous" or "surveillance" in practice. Representatives for the Department did not immediately respond to a request for comment from Fortune. An Anthropic spokesperson told Fortune that the company was continuing "to engage in good faith" with the Department of War. However, the spokesperson said that contract language received overnight had made "virtually no progress" on the core issues. New language "framed as compromise" was "paired with legalese that would allow those safeguards to be disregarded at will," they said. Amodei has called the threats from the Department of War "inherently contradictory" as "one labels us a security risk; the other labels Claude as essential to national security." Anthropic has won praise from some corners for its willingness to stand firm. Harvard law professor Lawrence Lessig praised the company's statement as "a beautiful act of integrity and principle" and called it "incredibly rare for our time." Rivals OpenAI and xAI have agreed to Pentagon contracts that allow their models to be used for all lawful purposes, with xAI going further by also agreeing to deploy its systems in some classified settings. But more than 330 current employees at rival labs Google DeepMind and OpenAI have also published an open letter in support of Anthropic which urges their own leadership to follow the company's lead. "They're trying to divide each company with fear that the other will give in," the letter read. "That strategy only works if none of us know where the others stand." The signatories included senior research scientists and both named and anonymous researchers from both companies. Ó hÉigeartaigh said that the outcomes of the dispute could extend well beyond Anthropic itself. "If the Pentagon comes out on top of this," he said, "it will establish precedents that will not be good for the independence of these companies, or their ability to hold to ethical standards."
[69]
Anthropic vs. Pentagon AI standoff nears critical deadline
Anthropic and the Pentagon are facing off over deploying AI for military use, with a day left until a government-imposed deadline. Earlier this week, the Department of Defense delivered an ultimatum to Anthropic compelling the company to yield to its demands by Friday afternoon. Pentagon officials are seeking unrestricted access to Anthropic's Claude AI model, which is currently viewed as a more powerful product compared to other AI products on the market like Grok. Anthropic, though, has pressed for assurances its AI won't be engaged in mass surveillance of Americans or used in autonomous weapons that don't require human oversight. CBS News reported on Wednesday evening that the Pentagon had sent Anthropic its latest offer to resolve the standoff. No details about the offer were immediately available. The Defense Department has reportedly threatened to label Anthropic as a "supply chain" risk that could lead to the loss of its government contracts, a move usually reserved for foreign rivals. It might also invoke the Defense Production Act, an extraordinary step that could pave the way for the U.S. government to commandeer the company's AI technology. The Pentagon and Anthropic did not immediately respond to a request for comment. Analysts have pointed to a contradiction in the Trump administration's hardline approach to the company. Labeling Anthropic as a supply chain risk would bar the government from using its products. Yet invoking the Defense Production Act would allow it to claim Anthropic's AI model is essential to national security. Anthropic has cultivated a reputation as a "safety-first" AI company, and its CEO Dario Amodei has said that the AI products it is developing must be regulated. But the company announced on Tuesday it was dialing back its safety commitments so its AI models can better compete with other AI products. Other AI-minded executives appear to be paying close attention to the showdown. Nvidia CEO Jensen Huang said he wants a negotiated resolution between the Pentagon and Anthropic. "I hope that they can work it out, but if it doesn't get worked out, it's also not the end of the world," Huang told CNBC.
[70]
Anthropic faces lose-lose scenario in Pentagon conflict as deadline for policy change looms
Anthropic faces a 'lose-lose' battle as it faces off with the Pentagon
Anthropic is heading into Friday in a no-win situation. The artificial intelligence startup has until 5:01 p.m. ET to decide whether it will agree to allow the Department of Defense to use its models in all lawful use cases without limitation. If it doesn't, Defense Secretary Pete Hegseth has threatened to label the company a "supply chain risk" or force it to comply by invoking the Defense Production Act. Anthropic signed a $200 million contract with the DoD in July, and was the first AI lab to integrate its models into mission workflows on classified networks. The company has been negotiating the terms of its agreement with the agency, and has asked for assurance that its technology won't be used for fully autonomous weapons or domestic mass surveillance of Americans. "In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values," Anthropic CEO Dario Amodei, who co-founded the company in 2021, wrote in a statement on Thursday. "Some uses are also simply outside the bounds of what today's technology can safely and reliably do." The DoD has refused to budge, and negotiations have devolved into a stalemate that's turned into the most high-profile test to date of Anthropic's stated values. The company has spent years carefully crafting its reputation as the champion of safe and responsible AI deployment, positioning itself in contrast to OpenAI, where Amodei worked before leaving to start Anthropic.
[71]
Anthropic weakens its safety pledge in the wake of the Pentagon's pressure campaign
Two stories about the Claude maker Anthropic broke on Tuesday that, when combined, arguably paint a chilling picture. First, US Defense Secretary Pete Hegseth is reportedly pressuring Anthropic to drop its AI safeguards and give the military unrestrained access to its Claude AI chatbot. The company then chose the same day that the Hegseth news broke to drop its centerpiece safety pledge. On Tuesday, Anthropic said it was modifying its Responsible Scaling Policy (RSP) to lower safety guardrails. Up until now, the company's core pledge has been to stop training new AI models unless specific safety guidelines can be guaranteed in advance. This policy, which set hard tripwires to halt development, was a big part of Anthropic's pitch to businesses and consumers. "Two and a half years later, our honest assessment is that some parts of this theory of change have played out as we hoped, but others have not," Anthropic wrote. Now, its updated policy approaches safety relatively, rather than with strict red lines. Anthropic's quotes in an interview with Time sound reasonable enough in a vacuum. "We felt that it wouldn't actually help anyone for us to stop training AI models," Jared Kaplan, Anthropic's chief science officer, told Time. "We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments... if competitors are blazing ahead." But you could also read those quotes as the latest example of a hot startup's ethics becoming grayer as its valuation rises. (Remember Google's old "Don't be evil" mantra that it later removed from its code of conduct?) The latest versions of Claude have drawn widespread praise, especially in coding. In February, Anthropic raised $30 billion in new investments. It now has a valuation of $380 billion. (Speaking of the competition Kaplan referred to, rival OpenAI is currently valued at over $850 billion.) In place of Anthropic's previous tripwires, it will implement new "Risk Reports" and "Frontier Safety Roadmaps." These disclosure models are designed to provide transparency to the public in place of those hard lines in the sand. Anthropic says the change was motivated by a "collective action problem" stemming from the competitive AI landscape and the US's anti-regulatory approach. "If one AI developer paused development to implement safety measures while others moved forward training and deploying AI systems without strong mitigations, that could result in a world that is less safe," the new RSP reads. "The developers with the weakest protections would set the pace, and responsible developers would lose their ability to do safety research and advance the public benefit." Neither Anthropic's announcement nor the Time exclusive mentions the elephant in the room: the Pentagon's pressure campaign. On Tuesday, Axios reported that Hegseth told Anthropic CEO Dario Amodei that the company has until Friday to give the military unfettered access to its AI model or face penalties. The company has reportedly offered to adapt its usage policies for the Pentagon. However, it wouldn't allow its model to be used for the mass surveillance of Americans or weapons that fire without human involvement. If Anthropic doesn't relent, experts say its best bet would be legal action. But will the Pentagon's proposed penalties be enough to scare a profit-driven startup into compliance?
Hegseth's threats reportedly include invoking the Defense Production Act, which gives the president authority to direct private companies to prioritize certain contracts in the name of national defense. The military could also sever its contract with Anthropic and designate it as a supply chain risk. That would force other companies working with the Pentagon to certify that Claude isn't included in their workflows. Claude is the only AI model currently used for the military's most sensitive work. "The only reason we're still talking to these people is we need them and we need them now," a defense official told Axios. "The problem for these guys is they are that good." Claude was reportedly used in the Maduro raid in Venezuela, a topic Amodei is said to have raised with its partner Palantir. Time's story about the new RSP included reactions from a nonprofit director focused on AI risks. Chris Painter, director of METR, described the changes as both understandable and perhaps an ill omen. "I like the emphasis on transparent risk reporting and publicly verifiable safety roadmaps," he said. However, he also raised concerns that the more flexible RSP could lead to a "frog-boiling" effect. In other words, when safety becomes a gray area, a seemingly never-ending series of rationalizations could take the company down the very dark path it once condemned. Painter said the new RSP shows that Anthropic "believes it needs to shift into triage mode with its safety plans, because methods to assess and mitigate risk are not keeping up with the pace of capabilities. This is more evidence that society is not prepared for the potential catastrophic risks posed by AI."
[72]
Hegseth and Anthropic CEO set to meet as debate intensifies over the military's use of AI
WASHINGTON (AP) -- Defense Secretary Pete Hegseth plans to meet Tuesday with the CEO of Anthropic, with the artificial intelligence company the only one of its peers to not supply its technology to a new U.S. military internal network. Anthropic, maker of the chatbot Claude, declined to comment on the meeting but CEO Dario Amodei has made clear his ethical concerns about unchecked government use of AI, including the dangers of fully autonomous armed drones and of AI-assisted mass surveillance that could track dissent. The meeting between Hegseth and Amodei was confirmed by a defense official who was not authorized to comment publicly and spoke on condition of anonymity. It underscores the debate over AI's role in national security and concerns about how the technology could be used in high-stakes situations involving lethal force, sensitive information or government surveillance. It also comes as Hegseth has vowed to root out what he calls a "woke culture" in the armed forces. "A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow," Amodei wrote in an essay last month. The Pentagon announced last summer that it was awarding defense contracts to four AI companies -- Anthropic, Google, OpenAI and Elon Musk's xAI. Each contract is worth up to $200 million. Anthropic was the first AI company to get approved for classified military networks, where it works with partners like Palantir. The other three companies, for now, are only operating in unclassified environments. By early this year, Hegseth was highlighting only two of them: xAI and Google. The defense secretary said in a January speech at Musk's space flight company, SpaceX, in South Texas that he was shrugging off any AI models "that won't allow you to fight wars." Hegseth said his vision for military AI systems means that they operate "without ideological constraints that limit lawful military applications," before adding that the Pentagon's "AI will not be woke." In January, Hegseth said Musk's artificial intelligence chatbot Grok would join the Pentagon network, called GenAI.mil. The announcement came days after Grok -- which is embedded into X, the social media network owned by Musk -- drew global scrutiny for generating highly sexualized deepfake images of people without their consent. OpenAI announced in early February that it, too, would join the military's secure AI platform, enabling service members to use a custom version of ChatGPT for unclassified tasks. Anthropic has long pitched itself as the more responsible and safety-minded of the leading AI companies, ever since its founders quit OpenAI to form the startup in 2021. The uncertainty with the Pentagon is putting those intentions to the test, according to Owen Daniels, associate director of analysis and fellow at Georgetown University's Center for Security and Emerging Technology. "Anthropic's peers, including Meta, Google and xAI, have been willing to comply with the department's policy on using models for all lawful applications," Daniels said. "So the company's bargaining power here is limited, and it risks losing influence in the department's push to adopt AI." In the AI craze that followed the release of ChatGPT, Anthropic closely aligned with President Joe Biden's administration in volunteering to subject its AI systems to third-party scrutiny to guard against national security risks.
Amodei, the CEO, has warned of AI's potentially catastrophic dangers while rejecting the label that he's an AI "doomer." He argued in the January essay that "we are considerably closer to real danger in 2026 than we were in 2023" but that those risks should be managed in a "realistic, pragmatic manner." This would not be the first time Anthropic's advocacy for stricter AI safeguards has put it at odds with the Trump administration. Anthropic needled chipmaker Nvidia publicly, criticizing Trump's proposals to loosen export controls to enable some AI computer chips to be sold in China. The AI company, however, remains a close partner with Nvidia. The Trump administration and Anthropic also have been on opposite sides of a lobbying push to regulate AI in U.S. states. Trump's top AI adviser, David Sacks, accused Anthropic in October of "running a sophisticated regulatory capture strategy based on fear-mongering." Sacks made the remarks on X in response to an Anthropic co-founder, Jack Clark, writing about his attempt to balance technological optimism with "appropriate fear" about the steady march toward more capable AI systems. Anthropic hired a number of ex-Biden officials soon after Trump's return to the White House, but it's also tried to signal a bipartisan approach. The company recently added Chris Liddell, a former White House official from Trump's first term, to its board of directors. The Pentagon-Anthropic debate is reminiscent of an uproar several years ago when some tech workers objected to their companies' participation in Project Maven, a Pentagon drone surveillance program. While some workers quit over the project and Google itself dropped out, the Pentagon's reliance on drone surveillance has only increased. Similarly, "the use of AI in military contexts is already a reality and it is not going away," Daniels said. "Some contexts are lower stakes, including for back-office work, but battlefield deployments of AI entail different, higher-stakes risks," he said, referring to the use of lethal force or weapons like nuclear arms. "Military users are aware of these risks and have been thinking about mitigation for almost a decade."
[73]
OpenAI Wins Defense Contract After US Halts Anthropic Use
OpenAI has reached an agreement with the United States Department of Defense to deploy its artificial intelligence models on classified military networks, just hours after the White House ordered federal agencies to stop using technology from rival firm Anthropic. In a late Friday post on X, OpenAI CEO Sam Altman announced the deal, saying the company would provide its models inside the Pentagon's "classified network." He wrote that the department showed "deep respect for safety" and a willingness to work within the company's operating limits. The announcement came amid a turbulent week for the AI sector. Earlier the same day, Defense Secretary Pete Hegseth labeled Anthropic a "Supply-Chain Risk to National Security," a designation typically applied to foreign adversaries. The ruling requires defense contractors to certify they are not using the company's models. President Donald Trump simultaneously directed every US federal agency to immediately halt use of Anthropic technology, with a six-month transition period for agencies already relying on its systems. Anthropic was the first AI lab to deploy models across the Pentagon's classified environment under a $200 million contract signed in July. Negotiations collapsed after the company sought guarantees that its software would not be used for autonomous weapons or domestic mass surveillance. The Defense Department insisted the technology be available for all lawful military purposes. In a statement, Anthropic said it was "deeply saddened" by the designation and intends to challenge the decision in court. The company warned the move could set a precedent affecting how American technology firms negotiate with government agencies, as political scrutiny of AI partnerships continues to intensify. Altman said OpenAI maintains similar restrictions and that they were written into the new agreement. According to him, the company prohibits domestic mass surveillance and requires human responsibility in decisions involving the use of force, including automated weapons systems. Meanwhile, some users on X voiced skepticism. "I just canceled ChatGPT and bought Claude Pro Max," Christopher Hale, an American Democratic politician, wrote. "One stands up for the God-given rights of the American people. The other folds to tyrants," he added. "2019 OpenAI: we will never help build weapons or surveillance tools. 2026 OpenAI: department of War, hold my classified cloud instance. Integrity arc go brrrrrrr," one crypto user wrote.
[74]
Anthropic refuses to bend to Pentagon on AI safeguards as dispute nears deadline
A public showdown between the Trump administration and Anthropic is hitting an impasse as military officials demand the artificial intelligence company bend its ethical policies by Friday or risk damaging its business. Anthropic CEO Dario Amodei drew a sharp red line 24 hours before the deadline, declaring his company "cannot in good conscience accede" to the Pentagon's final demand to allow unrestricted use of its technology. Anthropic, maker of the chatbot Claude, can afford to lose a defense contract. But the ultimatum this week from Defense Secretary Pete Hegseth posed broader risks at the peak of the company's meteoric rise from a little-known computer science research lab in San Francisco to one of the world's most valuable startups. If Amodei doesn't budge, military officials have warned they will not just pull Anthropic's contract but also "deem them a supply chain risk," a designation typically stamped on foreign adversaries that could derail the company's critical partnerships with other businesses. And if Amodei were to cave, he could lose trust in the booming AI industry, particularly from top talent drawn to the company for its promises of responsibly building better-than-human AI that, without safeguards, could pose catastrophic risks. Anthropic said it sought narrow assurances from the Pentagon that Claude won't be used for mass surveillance of Americans or in fully autonomous weapons. But after months of private talks exploded into public debate, it said in a Thursday statement that new contract language "framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will." That was after Sean Parnell, the Pentagon's top spokesman, posted on social media that "we will not let ANY company dictate the terms regarding how we make operational decisions" and added the company has "until 5:01 p.m. ET on Friday to decide" if it would meet the demands or face consequences. Emil Michael, the defense undersecretary for research and engineering, later lashed out at Amodei, alleging on X that he "has a God-complex" and "wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk." That message hasn't resonated in much of Silicon Valley, where a growing number of tech workers from Anthropic's top rivals, OpenAI and Google, voiced support for Amodei's stand late Thursday in an open letter. OpenAI and Google, along with Elon Musk's xAI, also have contracts to supply their AI models to the military. "The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused," the open letter says. "They're trying to divide each company with fear that the other will give in." Also raising concerns about the Pentagon's approach were Republican and Democratic lawmakers and a former leader of the Defense Department's AI initiatives. "Painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end," wrote retired Air Force Gen. Jack Shanahan in a social media post. Shanahan faced a different wave of tech worker opposition during the first Trump administration when he led Maven, a project to use AI technology to analyze drone footage and target weapons. So many Google employees protested its participation in Project Maven at the time that the tech giant declined to renew the contract and then pledged not to use AI in weaponry. 
"Since I was square in the middle of Project Maven & Google, it's reasonable to assume I would take the Pentagon's side here," Shanahan wrote Thursday on social media. "Yet I'm sympathetic to Anthropic's position. More so than I was to Google's in 2018." He said Claude is already being widely used across the government, including in classified settings, and Anthropic's red lines are "reasonable." He said the AI large language models that power chatbots like Claude are also "not ready for prime time in national security settings," particularly not for fully autonomous weapons. "They're not trying to play cute here," he wrote. Parnell asserted Thursday that the Pentagon wants to " use Anthropic's model for all lawful purposes" and said opening up use of the technology would prevent the company from "jeopardizing critical military operations," though neither he nor other officials have detailed how they want to use the technology. The military "has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement," Parnell wrote. When Hegseth and Amodei met Tuesday, military officials warned that they could designate Anthropic as a supply chain risk, cancel its contract or invoke a Cold War-era law called the Defense Production Act to give the military more sweeping authority to use its products, even if the company doesn't approve. Amodei said Thursday that "those latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security." He said he hopes the Pentagon will reconsider given Claude's value to the military, but, if not, Anthropic "will work to enable a smooth transition to another provider." -- - AP reporter Konstantin Toropin contributed to this report.
[75]
Anthropic CEO rejects Pentagon ultimatum on unrestricted AI use
Anthropic CEO Dario Amodei stated that he cannot agree to the Pentagon's request for unrestricted access to the company's AI systems. Amodei identified two specific use cases that violate company principles: mass surveillance of Americans and fully autonomous weapons with no human in the loop. Defense Secretary Pete Hegseth set a Friday 5:01 p.m. deadline for Anthropic to acquiesce or face consequences. The Pentagon threatened to label Anthropic a supply chain risk or invoke the Defense Production Act to force compliance. Amodei noted the contradiction in the Pentagon's threats, stating one labels the company a security risk while the other labels its AI essential to national security. Amodei stated that if the Department chooses to offboard Anthropic, the company will work to ensure a smooth transition to another provider. Anthropic is currently the only frontier AI lab with classified-ready systems for the military, though the DOD is reportedly preparing xAI for the role.
[76]
Anthropic rejects 'bully' Pentagon's latest offer in AI stand-off
Washington | Anthropic CEO Dario Amodei said he "cannot in good conscience accede" to the Pentagon's demands to allow unrestricted use of its technology, deepening his company's public clash with the Trump administration that is threatening to pull its contract and take other drastic steps by Friday (Saturday AEDT). The maker of the AI chatbot Claude said in a statement it's not walking away from negotiations, but that new contract language received from the Defence Department "made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons".
[77]
The Pentagon Is Pushing Anthropic to Make the Most Evil A.I. Possible. Will It?
Anthropic, the maker of Claude, wants to be seen as the major A.I. company most focused on safety. The company has spent a lot of time telling reporters about its commitment to developing A.I. to be as ethical and helpful as possible. Scenarios in which Claude destroys things have seemingly been top of mind for Anthropic's researchers. Now, we'll see if the company's leadership is serious about it. The federal government wants Anthropic to hand unrestricted access to its tools to the Department of Defense. Anthropic, the New York Times and several other outlets have reported, has tried to condition its services in two ways: One, it can't be used to build autonomous weapons that could fire without human oversight; and two, it can't be used for mass surveillance of American citizens. The Defense Department has not stated why it wants unfettered access to Anthropic's tools. It has not said why Anthropic's "no mass surveillance of Americans" and "no fully autonomous killing" provisions are unacceptable. But while the company holds out, U.S. Secretary of Defense Pete Hegseth has threatened to categorize Anthropic as a "supply chain risk," a move that could blacklist the company from the government and its contractors. At the same time, the government has reportedly considered invoking the Defense Production Act in an effort to force Anthropic to hand over what Hegseth wants. (Weird for a supposed supply chain risk, but sure.) Anthropic boss Dario Amodei has until Friday evening to make his call, Axios reported. And so here we are: The people who made an A.I. so good that it's the only one the Defense Department uses for its most sensitive tasks will decide whether to blink. If Anthropic is serious about A.I. safety, it has to reject Hegseth's demands. The reasons have only a small bit to do with Hegseth and everything to do with guarding against the most basic fears about this technology. Many of us want different things out of A.I. Some people want bullet-pointed summaries of summaries. Some want to make funny pictures and videos. Some want to build software. Some want to talk to a sex robot. Some don't want to have to pay attention in college lectures. Some spend all day on ChatGPT, while others would have preferred this field had never launched. Amid such controversial and rapid growth of an industry, it's hard to find consensus -- but there's one area where we can: None of us want A.I. to kill us. Hegseth may be an extra bad steward of technology that could do that -- more emboldened than most of his predecessors to turn it loose -- but autonomous killing technology is bad in anyone's hands, and Anthropic's stated problem here is with the development of weapons that might not keep a person in the loop. This would be a five-alarm fire under any president and any defense secretary, even one without an apparent history of alcohol problems and enthusiasm for flouting international law. Such is the nature of an autonomous machine, which could get up to all kinds of murderous shenanigans no matter who was heading the Defense Department. Anthropic has already, just this week, started to dial back its core commitment to A.I. safety. The company's previous policy was to pause development work on its model if it concluded that work had become dangerous.
However, it said it would stop doing that if competitors released similar or superior models. Anthropic did not become a $380 billion company by not throwing the kitchen sink at its competition with ChatGPT, xAI, Gemini, and the like, so it now says it will throw that caution to the wind. Giving in to the Pentagon would be something different, though, no matter what Hegseth actually wants to do with the full range of Anthropic's power. The more Hegseth pursues military actions that go right up to the line of international law and then cross it, as he did with the boat strikes in the Caribbean Sea, the easier it is to understand his desire for a killing tool that could make it extremely difficult to find a human at fault for any law-breaking. If the Pentagon doesn't want to use the tech for a dragnet that could compromise Americans' privacy or liberty, great. Even the world's biggest Pete Hegseth fan might take issue with his successors inheriting that capability. In the meantime, neither Hegseth nor President Donald Trump has come out and said clearly that they don't want to use Anthropic's A.I. for fully automated killing or mass domestic surveillance. There is no congressional bill about to go to Trump's desk telling him that his underlings can't use A.I. for those purposes. Anthropic is keenly aware that anyone using Claude, especially those at the levers of power, might use it to destroy things. The company recently released a "constitution" for Claude, an effort to both guide the machine's behavior and demonstrate that it has not been full of it when it has talked of building an A.I. that would be helpful, not destructive. The document seems explicitly written to not be all that explicit. Anthropic would rather not be painted into a situation where it would be impossible to accommodate any demand from the U.S. government. But it mentions wanting Claude to not "undermine appropriate oversight mechanisms of A.I." If letting Claude kill people on its own is not shirking oversight mechanisms, then we might reasonably question which "oversight mechanisms" Anthropic is talking about. Anthropic says Claude is trained to value the human ability to "adjust, correct, retrain, or shut down A.I. systems." If an A.I. system can kill without a human holding its leash, those words don't mean anything. There is a section of the constitution titled "avoiding problematic concentrations of power." The company says that it is "especially concerned about the use of A.I. to help individual humans or small groups gain unprecedented and illegitimate forms of concentrated power." In order to avoid that pitfall, Claude "should generally try to preserve functioning societal structures, democratic institutions, and human oversight mechanisms, and to avoid taking actions that would concentrate power inappropriately or undermine checks and balances." A fully autonomous death merchant would undermine some checks and balances. Anthropic might defend giving it to the government on the grounds that an elected government isn't "illegitimate," but it would be a wormy way of justifying a business decision that could end countless human lives. There may well be no ethical A.I., and Anthropic's work to create one could be lip service (likely), naive (very possible), or even genuine (could be!), yet strained by its goal to be worth a zillion dollars. Giving the government carte blanche to carry out the worst possible use cases of A.I. 
would clear up the issue quickly and reveal Anthropic's moral value proposition to be a lie. Claude demonstrates truly impressive reasoning. It's why the Pentagon wants it so badly, and why a DOD source admitted to Axios, "The only reason we're still talking to these people is we need them and we need them now. The problem for these guys is they are that good." It's not a bad situation for a tech startup to be in. But it shouldn't even take a market-leading piece of technology to work through one of the simplest equations imaginable: You say you'll give your product to someone as long as he doesn't use it to spy on the whole country or automate death. He balks at your terms. What will he do if you give him what he wants?
[78]
Hegseth threatens to cancel Anthropic's $200 million contract over "woke AI" concerns
Defense Secretary Pete Hegseth will terminate Anthropic's $200 million contract with the military by Friday unless the artificial intelligence lab agrees to loosen its safety standards. The threat came on Tuesday during a meeting between Hegseth and Anthropic CEO Dario Amodei, according to a person with direct knowledge of the meeting who was not authorized to speak publicly. For months, Amodei has insisted that using AI for domestic mass surveillance and AI-controlled weapons are ethical lines the company will not cross, calling such use "illegitimate" and "prone to abuse." According to a source familiar with the Hegseth meeting, Amodei stressed those positions again on Tuesday. Hegseth has said Anthropic needs to allow the U.S. to use its AI for all "lawful" purposes, which could include AI-directed warfare and surveillance. The Defense Department did not immediately respond to a request for comment. According to the source with knowledge of the meeting, Hegseth said officials will use the Defense Production Act, a law from the 1950s usually invoked during national emergencies to force companies to produce certain products considered critical to national security, or label Anthropic a "supply chain risk" if it continues to balk at the Trump administration's demands. Anthropic's hard line on domestic surveillance and AI weapons has been labeled "woke AI" by Hegseth and other Trump administration officials. White House AI czar David Sacks helped draft an executive order last year that targeted tech companies over the claim. AI experts say "woke AI" is a nebulous and ill-defined term that Trump officials seem to be using to describe any and all safety protections on powerful AI tools and the belief that AI chatbots have liberal bias baked into their models. Competing AI firms such as OpenAI and Google have agreed to have their AI tools used in any "lawful" scenarios, as has Elon Musk's xAI, which this week was approved for use in classified settings. But administration officials granted Anthropic a $200 million contract last summer after considering it the most advanced and secure model for sensitive military applications.
[79]
Opinion | What Both Anthropic and the Pentagon Get Wrong
Mr. Kendall was the secretary of the Air Force in the Biden administration. At 5:01 p.m. Friday, the Pentagon may be at war. I'm not referring to Iran, nor to any other shooting war -- but a potentially existential conflict between two parties, nonetheless: The artificial intelligence company Anthropic and the Department of Defense are fighting over the contractual terms for its continued use of Anthropic's A.I. model. Anthropic is insisting that the government agree to specific restrictions that would prevent the use of its model to conduct widespread surveillance of Americans or to control autonomous weapons like drones without a human in what is called the "kill chain." The company reiterated on Thursday that it has no intention to change its position. The government says that the only requirement its contractors can insist on is that their products be used lawfully. There is a lot at stake, and neither side is offering the correct solution. A.I. is poised to be the most transformative technology of our generation, perhaps of any generation, and we need to ensure the government and the private enterprises that develop these technologies have a constructive and mutually beneficial relationship consistent with American values. That can happen only if we use the mechanisms our country's founders put in place to define the rules of the game, level the playing field and balance interests across the government and among individuals and businesses: through regulatory legislation passed by Congress. The tool Anthropic is providing to the government is enormously powerful; like other tools, it can inherently be used for good or evil. Anthropic is rightly concerned that its tool could be used in ways that are unsafe or malicious. The company doesn't want to see its A.I. model used without human control, which could result in the killing of noncombatants or friendly troops by automated weapons, nor deployed to spy broadly on Americans in ways that could violate dearly held values like privacy and freedom from illegal search and seizure or could suppress political dissent. Most Americans would probably agree. On its side, the Department of Defense will not accept constraints on the use of products it has purchased. The government has a point. America's national security team needs to have the freedom to use the products it buys within the law and not be beholden to preferences from the sellers. The government is trying to force Anthropic to capitulate with two threats: invoking the Defense Production Act to force Anthropic to provide its product with no additional restrictions, and designating Anthropic as a "supply chain risk" contractor. The first of these is unusual but consistent with the law. Claude, Anthropic's large language model, is the only A.I. product approved for use on classified Pentagon networks. It is not unreasonable for the government to assert that it must have access to Claude for national security reasons until a comparable product from a competitor becomes available (something that appears to be fairly imminent).
[80]
Anthropic refuses to bend to Pentagon on AI safeguards
A public showdown between the Trump administration and Anthropic is hitting an impasse as military officials demand the artificial intelligence company bend its ethical policies by Friday or risk damaging its business. Anthropic CEO Dario Amodei drew a sharp red line 24 hours before the deadline, declaring his company "cannot in good conscience accede" to the Pentagon's final demand to allow unrestricted use of its technology.
[81]
Pentagon officials sent Anthropic best and final offer for military use of its AI amid dispute, sources say
Pentagon officials on Wednesday night sent Anthropic their best and final offer in negotiations for use of the company's artificial intelligence technology, just ahead of a government-imposed deadline, according to sources familiar with the discussions. It was unclear whether the offer substantially changed what the government has been seeking from the AI startup, or whether the company had agreed. Defense Secretary Pete Hegseth set a deadline of Friday evening for the company to grant all lawful use for its AI technology or face the loss of its business with the U.S. military, sources familiar with the situation told CBS News. Spokespeople for the company didn't immediately respond to a request for comment Thursday morning. A senior Pentagon official said Thursday Anthropic will face not just the loss of business but being labeled a supply chain risk. Pentagon officials are also considering invoking the Defense Production Act to make Anthropic adhere to what the military is seeking, which is full control of its AI technology for use in military operations, sources told CBS News. The company was awarded a $200 million contract by the Pentagon in July to develop AI capabilities that would advance U.S. national security. Anthropic has repeatedly asked defense officials to agree to guardrails that would restrict its AI model, called Claude, from conducting mass surveillance of Americans, sources said. Trump officials noted that this sort of surveillance is illegal and the Pentagon follows the law. The officials also said the military is simply asking for a license to use the AI strictly for lawful activities. Anthropic's CEO, Dario Amodei, also wants to ensure Claude is not used by the Pentagon for final targeting decisions in military operations without any human involvement, one source familiar with the negotiations said. Claude is not immune from hallucinations and not reliable enough to avoid potentially lethal mistakes, like unintended escalation or mission failure without human judgment, the person said. In a meeting at the Pentagon on Tuesday morning, Hegseth gave Amodei until the end of this week to give the military a signed document that would grant full access to its artificial intelligence model, according to sources familiar with the matter.
[82]
Anthropic rejects Pentagon's "final offer" in AI safeguards fight
Why it matters: A deadline of Friday at 5:01pm is fast approaching for Anthropic to let the Pentagon use its model Claude as it sees fit or potentially face severe consequences. What they're saying: "The contract language we received overnight from the Department of War made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons," Anthropic said in a statement. * "New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will. Despite DOW's recent public statements, these narrow safeguards have been the crux of our negotiations for months." * Anthropic is not walking away from the table, even as significant gaps remain with less than 24 hours before the deadline. The company expects further negotiations. Catch up quick: The Pentagon and Anthropic are in a high-stakes feud over the limits Anthropic wants to place on the department's use of its AI model Claude: no mass surveillance or autonomous weapons. * The Pentagon this week started laying the groundwork for one consequence -- blacklisting the company as a supply chain risk -- by asking defense contractors including Boeing and Lockheed Martin to assess their exposure to Anthropic. * Alternatively, Hegseth threatened to invoke the Defense Production Act to compel Anthropic to provide its model without any restrictions. Such an order may be on murky legal ground. The big picture: The Pentagon's requirement that AI models be offered for "all lawful purposes" in classified settings is not unique to Anthropic. * While Anthropic has been the only model used in classified settings to date, xAI recently signed a contract under the all lawful purposes standard for classified work. * Negotiations to bring OpenAI and Google into the classified space are accelerating. What's next: Amodei said the company remains committed to continuing talks. Editor's note: This story has been updated with additional details throughout.
[83]
Hundreds of Google, OpenAI employees back Anthropic in Pentagon fight
Hundreds of employees of Google and OpenAI are backing artificial intelligence technology company Anthropic, which faces a Friday evening deadline to give the Pentagon permission to use its AI system as it wishes or face repercussions from the department. Employees who signed the letter alleged the Pentagon was trying to "get them to agree to what Anthropic has refused," which could imply the Pentagon has inquired with the top AI companies about similar access to their technology. The letter is still accepting signatures. "We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight," reads the letter, signed by more than 430 employees. The Pentagon has remained in a standoff with Anthropic for weeks over military access to its AI model Claude, which the company has designed so that it cannot be used to conduct mass surveillance or develop lethal autonomous weapons. The Pentagon set a 5:01 p.m. Friday deadline for the company to grant it access to Claude and threatened to invoke the Defense Production Act (DPA), rescind a $200 million contract and label the company as a "supply chain risk" if it does not comply. Anthropic CEO Dario Amodei said in a Thursday statement that his company "cannot in good conscience accede to their request." Sean Parnell, chief Pentagon spokesperson, responded Thursday that the department has "no interest" in using AI to conduct mass surveillance of U.S. citizens nor to develop and operate autonomous weapons. OpenAI CEO Sam Altman said Friday that he agrees with Anthropic's red lines for its AI model. The companies are competitors in the AI industry. "I don't personally think the Pentagon should be threatening DPA against these companies," Altman told CNBC's "Squawk Box" Friday morning. "... as long as it is going to comply with legal protections and the few red lines that the field, we have, I think we share with Anthropic and that other companies also independently agree with, I think it is important to do that." More than 100 employees on Google's AI team signed an internal letter they sent to Jeff Dean, the chief scientist of Google DeepMind, according to a New York Times report. The report said employees who signed the letter didn't want Google to allow military access to its Gemini AI to surveil U.S. citizens or to steer autonomous lethal weapons. "Please do everything in your power to stop any deal which crosses these basic red lines," the employees reportedly wrote. "We love working at Google and want to be proud of our work."
[84]
Why Sam Altman Says OpenAI Has the Same 'Red Lines' as Its Rival, Anthropic
A twist came on Friday, when OpenAI cofounder and CEO Sam Altman voiced support for Anthropic, his company's chief competitor, in Anthropic's face-off with the Pentagon, and said he is working on a deal with the government that would adhere to the same safety standards as his rival. In an interview with CNBC's Squawk Box, Altman said he strongly believes that AI companies should work with the United States government, provided that the government "is going to comply with legal protections and the sort of the few red lines" that are commonplace across the industry. There are two major "red lines" at the heart of Anthropic's negotiations with the Pentagon that the company says it will not cross: Allowing AI to power fully-autonomous weapons that can be fired without human input, and using AI for mass domestic surveillance of Americans. In a blog post on Thursday, Anthropic CEO Dario Amodei said that the company "cannot in good conscience accede" to the Pentagon's request for Claude to be allowed to conduct this kind of work.
[85]
Pentagon Anthropic Feud Has Sales and AI Warfare at Stake as Friday Deadline Looms
By David Jeans, Jeffrey Dastin and Deepa Seetharaman NEW YORK, Feb 27 (Reuters) - An explosive feud between the Pentagon and top artificial intelligence lab Anthropic is set to come to a head by 5:01 p.m. (2201 GMT) on Friday over concerns about how the military could use AI at war. The dispute, barreling toward a deadline set by the Pentagon for resolution, is widely seen as a referendum on how powerful AI could be deployed by the military and how its risks are managed. The Pentagon wants any lawful use to be allowed and has threatened Anthropic's business if the startup does not scrap additional guardrails. "It's a shot across the bow about the future of artificial intelligence and its use on the battlefield," Chris Miller, the former acting secretary of defense, told Reuters. He added that the outcome will "be an acid test for those companies that claim to want to use AI humanely." The months-long spat has divided some industry leaders, military officials and lawmakers over whether AI should be wielded without constraints when its creator Anthropic said the technology was not yet reliable for fully autonomous weapons. Democratic Senator Elissa Slotkin weighed in on Thursday: "The average person does not think we should allow weapons systems to get into war and kill people without a human being overseeing that in some way." Speaking at a confirmation hearing for two assistant defense secretary nominees, Slotkin added: "I certainly don't think any American, Democrat or Republican, wants mass surveillance on the American people." The Pentagon, which the Trump administration renamed the Department of War, has pushed back on the dilemma as a false choice "peddled by leftists in the media." "The Department of War has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement," Pentagon chief spokesperson Sean Parnell posted on X Thursday. NEGOTIATIONS FALTER The Pentagon has signed $200-million ceiling agreements with major AI labs in the past year, including Anthropic, OpenAI and Google. It is pushing companies to agree to scrap their usage policies in favor of abiding by an all-lawful use clause. Anthropic, continuing these talks, has maintained red lines over the military's use of its Claude AI models for autonomous weapons and domestic surveillance. Anthropic was first among these AI companies to work with classified information, through a supply deal via cloud provider Amazon. Anthropic CEO Dario Amodei, famous for quitting OpenAI in 2020 over concerns about AI technology's stewardship, has warned that AI has advanced faster than the law. Powerful technology could hoover up disparate material to gather intelligence on unwitting civilians, he said in a Thursday blog post, a prospect that critics view as a legal loophole. "Anthropic understands that the Department of War, not private companies, makes military decisions," but AI in narrow cases "can undermine, rather than defend, democratic values," Amodei said. Amodei met with Defense Secretary Pete Hegseth this week. Afterward, the Pentagon gestured toward compromise and sent the startup revised contract language. But the two parties remained at an apparent impasse. An Anthropic spokesperson said on Thursday, "The contract language we received overnight from the Department of War made virtually no progress" and would allow "safeguards to be disregarded at will." BUSINESS THREATS Key business for Anthropic is at stake. 
The Pentagon warned it would terminate its work with the startup and declare it a supply-chain risk if Anthropic did not accede to the department's demand for all-lawful use of AI. The designation, reserved typically for suppliers in adversary nations, means that defense contractors could be barred from deploying Anthropic's AI during work for the Pentagon. The setback comes as Anthropic races to win sales to businesses and government, with national security an area of focus. The Pentagon has asked contractors including Lockheed Martin to give an appraisal of their reliance on Anthropic ahead of the risk designation, Reuters reported on Wednesday. The defense industrial base totaled around 60,000 contractors including major public companies as of 2021. The Pentagon made a second threat, the legality of which some experts have questioned. "If they don't get on board, SecWar will ensure the Defense Production Act is invoked on Anthropic," a senior Pentagon official told Reuters, "compelling them to be used by the Pentagon regardless of if they want to or not." (Reporting by David Jeans in New York and Jeffrey Dastin and Deepa Seetharaman in San Francisco; Editing by Kenneth Li)
[86]
Anthropic Refuses To Permit Its AI To Autonomously Kill Humans
Thankfully, this finally appears to reveal the line the genAI company is not willing to cross. There are no heroes in the world of GenAI, although it seems Anthropic CEO Dario Amodei has at least some boundaries he won't cross when it comes to the use of his company's AI in the military. Despite Anthropic's AI Claude already being used widely by the Department of War for intelligence analysis, cyber operations and the like, according to Reuters (thanks PC Gamer), Anthropic has been pressured for months by the U.S. government to allow it to also be used in mass domestic surveillance and fully autonomous weapons, which is to say, to spy on U.S. citizens, and to be able to "decide" to kill people without human involvement. Now Amodei has put out a statement saying that his company will not be backing down. Amodei's release makes clear that Anthropic is not against the use of its AI by the U.S. military, explaining that "Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more." He also boasts that he's turned down hundreds of millions of dollars in contracts from the Chinese Communist Party on the grounds that it might be used militarily against the U.S. But there is a line, and while Amodei says they "have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner," what he won't allow is the use of their product to spy on American citizens, nor for it to be entirely in charge of weaponry. As farcical as it might sound to hear a CEO saying that he has no problem with his hallucinating generative-AI being used to spy on foreign citizens, and to be part of partially autonomous weapons, but then take moral objection to tweaks on this, it does remain the case that this is a defiant stance against the U.S. government's pressure, coming from both the Department of War and the Pentagon. That pressure is to remove safeguards from the AI, with officials saying they will remove Claude from military operations if those safeguards are maintained and, thus, contracts will be lost. Amodei says Anthropic has been told it will be designated as "a supply chain risk" if it doesn't back down, which he claims is a term "reserved for U.S. adversaries, never before applied to an American company," and that the government may invoke the Defense Production Act to force the safeguards' removal. "Regardless," says Amodei, "these threats do not change our position: we cannot in good conscience accede to their request." It's perhaps somewhat telling that Amodei is well aware that a delusional AI cannot ever be put in autonomous control of weaponry, as frightening as it might be to realize just how entwined this collection of LLMs already is in the military. In reaction to this, employees at both Google and OpenAI, two of Anthropic's main rivals, have signed an open letter supporting the company in defiance of the Department of War's threats, saying they "stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight."
[87]
Pete Hegseth Demands Anthropic Drop AI Safety Guardrails
Defense Secretary and guy who would really like you to know that he lifts, Pete Hegseth, is apparently trying to posturemogg Anthropic CEO Dario Amodei into submission so the military can return to indiscriminate killmaxxing. According to a report from Axios, the head of the wannabe War Department met with Anthropic's founder on Tuesday and issued an ultimatum to drop the safeguards that prevent Claude from being used for dubious and dangerous purposes, or the AI startup could potentially be labeled as a national security threat. The meeting, which a spokesperson for Anthropic confirmed to Gizmodo occurred Tuesday morning, was a culmination of an ongoing standoff between the company and the Trump administration, which has been something of a multi-front war for Anthropic. Previously, Trump's AI Czar, David Sacks, took specific aim at Anthropic for its public support of regulatory frameworks for AI models. But the showdown with Hegseth has stemmed from the Department of Defense's desire to integrate Anthropic's Claude into all parts of the military's operations despite the company's objections. The core of the issue, according to Axios, seems to be Anthropic's stance that its technology not be used for mass domestic surveillance or to develop fully autonomous weapons that would operate without human involvement. Those are lines that Hegseth and the DoD seem unwilling to accept, though frankly, it seems like any line is unacceptable to them. Axios described the Defense Department's desire as wanting "unfettered access" to Claude and reported that the agency previously raised objections to having to litigate individual use cases. The stalemate has reportedly led to Hegseth offering something of an ultimatum: either comply with the Department of Defense's demands or face the consequences. Those potential penalties included having the agency cancel contracts with the company, declaring Anthropic a "supply chain risk," or invoking the Defense Production Act to force the company to build a model for the military's desired purposes. A source familiar with the meeting confirmed to Gizmodo that a Friday deadline has been set for Anthropic to accept the Department of Defense's terms or have its contract terminated. The source also confirmed the potential penalties put forth by the DoD should Anthropic choose not to comply. "Anthropic CEO Dario Amodei met with Secretary Hegseth at the Pentagon this morning. During the conversation, Dario expressed appreciation for the Department's work and thanked the Secretary for his service," the spokesperson said. "We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government's national security mission in line with what our models can reliably and responsibly do." It's hard to read Hegseth's public strong-arm attempt as anything other than a show of force. A source familiar with the meeting said that Anthropic's red lines haven't actually interfered with any of the Pentagon's operations, and no one in the field has had their work stifled by Anthropic's safeguards. Even after the Pentagon reportedly used Claude in its raid that led to the capture of Venezuelan President Nicolás Maduro, Anthropic reportedly did not object. Given that, Hegseth's position seems a bit muddled. Anthropic is such a national security risk that it might need to be designated in a way that's similar to Chinese tech firms, and so key to military operations that the government may need to just take it over.
Feels like Hegseth might have put a few too many plates on the bar for this lift, but we'll see.
[88]
Anthropic's autonomous weapons stance could prove out of step with modern war
Anthropic's stance on autonomous weapons may not survive the future. Much of the AI world is watching closely as Anthropic tangles with the Pentagon over how the government can use the Claude models. Anthropic has a $200 million contract with the Pentagon, but the contract says the military can't use the AI company's models as the brains for autonomous weapons or for mass surveillance of Americans. Defense Secretary Pete Hegseth insists, after the fact, that the military should be able to use the Anthropic models for "all lawful purposes." Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon for a Tuesday morning meeting, in which he reportedly gave Anthropic until 5:01 p.m. Friday to comply with the Pentagon's demand. If Anthropic fails to do so, Hegseth threatened to invoke the Defense Production Act to compel the AI company to supply its models with no guardrails. Hegseth also said the government would declare Anthropic models to be a "supply chain risk," meaning that all government suppliers would be directed to avoid or discontinue use of Anthropic models. Amodei said in an interview after the Hegseth meeting that his company has no intention of complying with Hegseth's demands. (He's got a strong case: After all, government officials agreed to the terms.) Amodei explained that the military relies on human judgement to avoid violating people's constitutional rights. If AI is making the decisions, there will be no human being to object.
[89]
Anthropic Spurns Latest Pentagon Bid to Defuse Feud Over AI Work
Anthropic PBC rejected the Pentagon's latest offer to defuse a standoff over conditions the company has sought governing the use of its artificial intelligence software by the military, a confrontation that has jeopardized its defense work for the government. In a statement Thursday, an Anthropic spokesperson said that new language proposed by the Pentagon as a compromise failed to satisfy the firm's desire to preserve key safeguards that it has sought for any military use of its AI tools. Those have included company prohibitions on mass surveillance of Americans and on use of its technology in fully autonomous weapons. The Pentagon has rejected those demands and given the company until Friday to accept the government's terms or be declared a supply-chain risk -- a move that would potentially bar it from work with other defense contractors. US officials have said that the military wants to be able to use the company's AI tools in a lawful fashion but without any limits by Anthropic. "These threats do not change our position: we cannot in good conscience accede to their request," Anthropic Chief Executive Officer Dario Amodei said in a statement Thursday. Defense officials have pushed back and demanded the ability to use Claude, one of the only AI tools cleared for classified cloud work, without any restrictions from the company. The Defense Department has also threatened to use the Cold War-era Defense Production Act to use Anthropic's software anyway, over the company's objections. The Pentagon has no interest in mass surveillance or developing "autonomous weapons that operate without human involvement," spokesman Sean Parnell said earlier Thursday. "We will not let ANY company dictate the terms regarding how we make operational decisions," Parnell wrote in a post on X. "They have until 5:01 PM ET on Friday to decide. Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk."
[90]
Former General sees Pentagon painting 'bullseye' on Anthropic but warns, 'they're not trying to play cute here' | Fortune
A public showdown between the Trump administration and Anthropic is hitting an impasse as military officials demand the artificial intelligence company bend its ethical policies by Friday or risk damaging its business. Anthropic CEO Dario Amodei drew a sharp red line 24 hours before the deadline, declaring his company "cannot in good conscience accede" to the Pentagon's final demand to allow unrestricted use of its technology. Anthropic, maker of the chatbot Claude, can afford to lose a defense contract. But the ultimatum this week from Defense Secretary Pete Hegseth posed broader risks at the peak of the company's meteoric rise from a little-known computer science research lab in San Francisco to one of the world's most valuable startups. If Amodei doesn't budge, military officials have warned they will not just pull Anthropic's contract but also "deem them a supply chain risk," a designation typically stamped on foreign adversaries that could derail the company's critical partnerships with other businesses. And if Amodei were to cave, he could lose trust in the booming AI industry, particularly from top talent drawn to the company for its promises of responsibly building better-than-human AI that, without safeguards, could pose catastrophic risks. Anthropic said it sought narrow assurances from the Pentagon that Claude won't be used for mass surveillance of Americans or in fully autonomous weapons. But after months of private talks exploded into public debate, it said in a Thursday statement that new contract language "framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will." That was after Sean Parnell, the Pentagon's top spokesman, posted on social media that "we will not let ANY company dictate the terms regarding how we make operational decisions" and added the company has "until 5:01 p.m. ET on Friday to decide" if it would meet the demands or face consequences. Emil Michael, the defense undersecretary for research and engineering, later lashed out at Amodei, alleging on X that he "has a God-complex" and "wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk." That message hasn't resonated in much of Silicon Valley, where a growing number of tech workers from Anthropic's top rivals, OpenAI and Google, voiced support for Amodei's stand late Thursday in an open letter. OpenAI and Google, along with Elon Musk's xAI, also have contracts to supply their AI models to the military. "The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused," the open letter says. "They're trying to divide each company with fear that the other will give in." Also raising concerns about the Pentagon's approach were Republican and Democratic lawmakers and a former leader of the Defense Department's AI initiatives. "Painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end," wrote retired Air Force Gen. Jack Shanahan in a social media post. Shanahan faced a different wave of tech worker opposition during the first Trump administration when he led Maven, a project to use AI technology to analyze drone footage and target weapons. So many Google employees protested its participation in Project Maven at the time that the tech giant declined to renew the contract and then pledged not to use AI in weaponry. 
"Since I was square in the middle of Project Maven & Google, it's reasonable to assume I would take the Pentagon's side here," Shanahan wrote Thursday on social media. "Yet I'm sympathetic to Anthropic's position. More so than I was to Google's in 2018." He said Claude is already being widely used across the government, including in classified settings, and Anthropic's red lines are "reasonable." He said the AI large language models that power chatbots like Claude are also "not ready for prime time in national security settings," particularly not for fully autonomous weapons. "They're not trying to play cute here," he wrote. Parnell asserted Thursday that the Pentagon wants to " use Anthropic's model for all lawful purposes" and said opening up use of the technology would prevent the company from "jeopardizing critical military operations," though neither he nor other officials have detailed how they want to use the technology. The military "has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement," Parnell wrote. When Hegseth and Amodei met Tuesday, military officials warned that they could designate Anthropic as a supply chain risk, cancel its contract or invoke a Cold War-era law called the Defense Production Act to give the military more sweeping authority to use its products, even if the company doesn't approve. Amodei said Thursday that "those latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security." He said he hopes the Pentagon will reconsider given Claude's value to the military, but, if not, Anthropic "will work to enable a smooth transition to another provider." -- - AP reporter Konstantin Toropin contributed to this report.
[91]
Anthropic eyes Pentagon deal after fallout over Maduro raid
One of the nation's leading artificial intelligence firms is negotiating whether it can continue to work with the military, according to people familiar with the discussions, after Pentagon officials called their once-close relationship into question in the wake of January's raid to capture Venezuelan leader Nicolás Maduro. Anthropic's Claude model is one of a handful of leading AI systems that the Pentagon is using to rapidly build its capabilities in cyberwarfare, improve the performance of its autonomous weapons systems and increase the efficiency of its personnel. Defense Secretary Pete Hegseth's team has insisted in recent weeks that the military must have the freedom to use the powerful tools as it sees fit. Officials say other leading AI firms have gone along with the demand. OpenAI, the maker of ChatGPT, Google and Elon Musk's xAI have agreed to allow the Pentagon to use their systems for "all lawful purposes" on unclassified networks, a Defense official said, and are working on agreements for classified networks. (The Washington Post has a content partnership with OpenAI.) The companies did not respond to requests for comment. But Anthropic -- which has sought to position itself as the most safety-minded of the companies -- has corporate principles that may keep it from giving the Pentagon carte blanche. Unlike many traditional weapons, powerful AI systems can be deployed in many ways not foreseen by their designers, and the dispute has raised questions about who should have the final say over their use by the military. While Anthropic has not said exactly what its qualms are with the Pentagon's demands, its chief executive has recently warned of the dangers of autonomous weapons and AI-powered mass surveillance. In a statement to The Washington Post, Anthropic said it is "committed to using frontier AI in support of U.S. national security." "Claude is used for a wide variety of intelligence-related use cases across the government, including the [Defense Department], in line with our Usage Policy," Anthropic said. "We are having productive conversations, in good faith, with [the Defense Department] on how to continue that work and get these complex issues right." Until recent weeks Anthropic had been in an enviable position, with a $200 million contract and its technology uniquely approved for use within the Pentagon's classified networks. That quickly began to change, Trump administration officials say, following Anthropic's response to its recent use by the Pentagon in the Maduro operation. Technology developed by defense firm Palantir and Anthropic's Claude were used in preparation for the Jan. 3 raid, according to a person familiar with the assault, who spoke on the condition of anonymity to share confidential details about the operation. During the raid, scores of Maduro's security guards and Venezuelan service members were killed. After the attack, a senior defense official said, an executive from Anthropic discussed the raid with an executive at Palantir, asking whether Anthropic's tools had been used. The Palantir executive relayed the question to the Defense Department, saying it implied that Anthropic might have disapproved of how Claude had been used, the official said. That prompted department leaders to call into doubt whether the company could be fully relied on.
"They expressed concern over the Maduro raid, which is a huge problem for the department," one administration official said. However Anthropic said it had not discussed any specific operations with the Defense Department nor "discussed this with, or expressed concerns to, any industry partners outside of routine discussions on strictly technical matters." The dispute appears to run deeper than any questions over the attack on Venezuela. Hegseth sees AI dominance as a must-have capability and his directives have pressed the military to move fast to embrace the technology. In January, he said that "speed wins" in an AI-driven future, and he has ordered the Pentagon to unblock data for AI to train, while pushing the department to move from "campaign planning to kill chain execution." "We must approach risk tradeoffs, 'equities,' and other subjective questions as if we were at war," Hegseth wrote in the January 2026 directive. Just over two weeks after Hegseth's directive came down, Dario Amodei, Anthropic's co-founder and chief executive, published an essay sketching a potential dystopia in which AI empowers a new generation of unstoppable weapons and surveillance tools. "We should worry about them in the hands of autocracies, but also worry that because they are so powerful, with so little accountability, there is a greatly increased risk of democratic governments turning them against their own people to seize power," Amodei wrote about swarms of AI-enabled drones. Such a weaponry is likely still many years away, but failing to reach an agreement could quickly have far-reaching consequences for the company. The Pentagon has suggested that it could be branded a "supply chain risk" something that would not only impact Anthropic, but any firm that uses the company's AI. The designation has typically been aimed at Chinese and Russian companies. "We may require that all our vendors and contractors certify that they don't use any Anthropic model," a defense official told The Post. In the past, firms have been able to have riders in their contracts with the Pentagon indemnifying them from liability if their technology is used in an unlawful way and allowing them to bind the government to only use the technology for lawful purposes. But it may be unreasonable for firms contracting with the Pentagon to try to set limitations on how their rapidly evolving technology can be applied, said Frank Kendall, who served as Air Force secretary during the Biden administration and oversaw its development of a fleet of autonomous warplanes. "The military's function is the application of violence, and if you're going to give anything to the Defense Department, it's likely going to be used to help kill people," Kendall said. The administration has held that its actions -- which also include U.S. strikes on alleged drug boats in the Caribbean, its deployment of active duty troops on U.S. soil and its decision to use lethal force in Minneapolis, killing two U.S. citizens -- have been lawful. But the Trump administration has also fired many of the independent military and Justice Department lawyers who would have had the ability to challenge the legality of those usages. "If you're worried about this administration doing unlawful things, you should just not work with them," Kendall said. The Pentagon has been integrating AI into some of its weapons systems for years but never at the speed at which it is now. 
That's partly driven by its competition with China and evolving threats like hypersonic missiles -- where a human's reaction time can be inadequate. But there's also been an emphasis on making sure AI's unpredictable learning could be fenced in. At Edwards Air Force Base in 2024, the Air Force flew its first AI fighter jet in dogfights -- and the jet, an F-16 that carried the AI in a computer in the back, was already besting elite test pilots by shaving milliseconds off turns and maneuvers. Even then, there was a human in the loop, a test pilot inside the jet who could disengage the AI as needed -- and the AI itself was kept in a system that was not connected to any networks. As the Air Force moved forward with the AI, it said its priority was making sure the data the AI learned on was clean, to avoid security risks. In 2023, the Biden administration instructed the Pentagon that any AI use in systems would require levels of review, anti-tamper mechanisms and safeguards to ensure that humans would retain the decision on use of force. That policy is still in force but will be reviewed as needed, the administration official told The Post.
[92]
Anthropic faces Friday deadline in Defense AI clash with Hegseth
Defense Secretary Pete Hegseth has told Anthropic it has until Friday evening to give the military broad access to its artificial intelligence models, CNBC confirmed on Tuesday. If Anthropic fails to comply, Hegseth threatened to label the company a "supply chain risk" or invoke the Defense Production Act, according to sources familiar with the discussion, who asked not to be named because the matter was private. Anthropic's negotiations with the Department of Defense have stalled because it wants assurance that its models will not be used for autonomous weapons or mass surveillance of Americans. The DoD, meanwhile, wants the company to agree to "all lawful use cases" without limitation. A "supply chain risk" is a designation that's typically reserved for foreign adversaries, but it would require the DoD's vendors and contractors to certify that they do not use Anthropic's models. The Defense Production Act allows the president to control domestic industries under emergency authority when it's in the interest of national security. Hegseth set the deadline during a meeting with Anthropic CEO Dario Amodei on Tuesday morning, the people said. "We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government's national security mission in line with what our models can reliably and responsibly do," an Anthropic spokesperson told CNBC in a statement.
[93]
The Pentagon has reportedly given Anthropic until Friday to let it use Claude as it sees fit
Defense Secretary Pete Hegseth will reportedly give Anthropic until Friday to drop certain guardrails for military use. It was also reported that CEO Dario Amodei met with Hegseth at the Pentagon yesterday as the Pentagon ratcheted up pressure on the AI company to give in to its demands. The makers of Claude have reportedly been offered an ultimatum: Either yield to the government's demands to remove limits for certain military applications, or potentially be forced to tailor their AI model to the government's needs under the Defense Production Act. Anthropic, for its part, has said that while it was willing to adopt certain policies for the Pentagon, it would not allow its model to be used for mass surveillance of Americans or for the development of autonomous weapons. Claude is currently the only AI model employed in some of the government's most sensitive work. "The only reason we're still talking to these people is we need them and we need them now. The problem for these guys is they are that good," a defense official said. The Pentagon is reportedly ramping up conversations with OpenAI and Google about using their models for classified work. ChatGPT and Gemini are already approved for unclassified government use. Elon Musk's xAI has also reached an agreement with the DoD to use Grok in classified systems.
[94]
US Military warns Anthropic: Provide unrestricted AI or face penalties
US Defense Secretary Pete Hegseth is pressuring Anthropic to provide the military with unrestricted access to its Claude artificial intelligence model, issuing a Friday deadline and threatening penalties that could include invoking the Defense Production Act. In a coinciding development on the same day, Anthropic announced significant modifications to its Responsible Scaling Policy (RSP), effectively lowering its internal safety guardrails. This dual revelation highlights a growing tension between national security demands and the safety protocols maintained by leading AI developers, placing Anthropic at the center of a complex dispute involving government contracts, competitive market pressures, and ethical commitments. The Defense Department's ultimatum represents a direct challenge to Anthropic's existing usage policies. According to reports, Secretary Hegseth communicated to Anthropic CEO Dario Amodei that the company must grant the Pentagon unfettered access to Claude by Friday or face severe repercussions. While Anthropic has expressed a willingness to adapt its usage policies to accommodate the Pentagon's operational needs, the company has drawn a line regarding specific applications. Anthropic has explicitly refused to allow its technology to be utilized for the mass surveillance of American citizens or for autonomous weapons systems that operate without direct human intervention. This stance places the company in a precarious position as it navigates the requirements of a powerful potential client. The specific penalties under consideration by the Defense Department carry significant weight. Hegseth's threats reportedly include the invocation of the Defense Production Act, a federal law that grants the president broad authority to compel private companies to prioritize government contracts deemed essential for national defense. Beyond this, the military is considering severing its existing contract with Anthropic entirely. A further punitive measure involves designating Anthropic as a supply chain risk. Such a designation would have cascading effects, forcing other private companies that work with the Pentagon to certify that they do not incorporate Claude into their workflows, effectively isolating Anthropic from the broader defense industrial base. The urgency driving the Pentagon's pressure campaign stems from the unique capabilities of the Claude model. Currently, Claude is the sole AI model utilized by the US military for its most sensitive and high-stakes work. A defense official, citing the necessity of the technology, noted, "The only reason we're still talking to these people is we need them and we need them now." The official further elaborated on the model's reputation, stating, "The problem for these guys is they are that good." The utility of Claude was demonstrated in its reported use during the "Maduro raid" in Venezuela, a specific operational success that Anthropic CEO Dario Amodei is said to have highlighted during discussions with Palantir, the defense contractor partnering with Anthropic. Simultaneously, Anthropic revealed a fundamental shift in its safety philosophy. The company announced it was modifying its Responsible Scaling Policy, moving away from the strict adherence that previously defined its brand. Historically, Anthropic's core pledge involved a commitment to halt the training of new AI models if specific safety benchmarks could not be met in advance. 
This policy relied on "hard tripwires" -- non-negotiable red lines designed to stop development immediately if risk thresholds were breached. This cautious approach was a central marketing pillar, distinguishing the company from competitors by prioritizing safety over speed. The updated policy replaces these hard stops with a more flexible, relative framework. In place of the previous rigid boundaries, Anthropic is introducing "Risk Reports" and "Frontier Safety Roadmaps." These new mechanisms are intended to provide transparency to the public regarding safety assessments rather than enforcing automatic halts in development. The company explained the rationale behind this pivot, writing, "Two and a half years later, our honest assessment is that some parts of this theory of change have played out as we hoped, but others have not." The new approach acknowledges that safety is a dynamic landscape rather than a set of static requirements. In an interview with Time, Anthropic's chief science officer, Jared Kaplan, provided context for the decision, citing the intense competitive environment. "We felt that it wouldn't actually help anyone for us to stop training AI models," Kaplan stated. He elaborated on the geopolitical and commercial realities facing the company, saying, "We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments... if competitors are blazing ahead." This sentiment reflects a strategic pivot toward maintaining market relevance in a sector where technological leadership is fleeting and highly contested. Anthropic's official statement on the policy change pointed to a "collective action problem" as a primary driver. The company argues that in an anti-regulatory environment with fierce competition, a unilateral pause on development would be counterproductive. The updated RSP reads, "If one AI developer paused development to implement safety measures while others moved forward training and deploying AI systems without strong mitigations, that could result in a world that is less safe." The statement continues, "The developers with the weakest protections would set the pace, and responsible developers would lose their ability to do safety research and advance the public benefit." Financially, Anthropic is operating at a scale that makes the decision to compromise on safety standards particularly notable. In February, the company secured $30 billion in new investments, raising its total valuation to $380 billion. This massive influx of capital places immense pressure on performance and growth. The competitive landscape further contextualizes this pressure; rival OpenAI currently holds a valuation exceeding $850 billion. Analysts suggest that the relaxed safety standards may be an attempt to accelerate development timelines to keep pace with industry leaders, prioritizing commercial expansion over the cautious restraints that originally defined the company. External experts have weighed in on the implications of Anthropic's policy reversal, expressing concern over the erosion of safety commitments. Chris Painter, the director of METR, a nonprofit organization focused on AI risks, offered a nuanced critique. "I like the emphasis on transparent risk reporting and publicly verifiable safety roadmaps," he told Time. However, Painter warned of the dangers inherent in this shift. 
He raised concerns that the more flexible RSP could lead to a "frog-boiling" effect, a metaphor for incremental changes that gradually erode safety standards until they become negligible. "When safety becomes a gray area, a seemingly never-ending series of rationalizations could take the company down the very dark path it once condemned," he noted. Painter's analysis extends beyond the specific policy mechanics to the broader state of the industry. He interprets Anthropic's move as a signal of the sector's unpreparedness for the risks it is creating. According to Painter, the new RSP indicates that Anthropic "believes it needs to shift into triage mode with its safety plans, because methods to assess and mitigate risk are not keeping up with the pace of capabilities." He concludes that this development serves as "more evidence that society is not prepared for the potential catastrophic risks posed by AI." This perspective suggests that the policy change is not merely a strategic business decision but a symptom of a systemic inability to manage rapid technological advancement safely. Notably, neither Anthropic's official announcement regarding the RSP modification nor the reporting on the new policy mentioned the ongoing pressure campaign from the Pentagon. The convergence of these two stories on the same day suggests a potentially complex interplay between external government pressure and internal strategic realignment. As the Friday deadline set by Secretary Hegseth approaches, Anthropic faces a convergence of regulatory threats, competitive market forces, and scrutiny regarding its ethical commitments, the outcomes of which will likely influence the trajectory of AI development and deployment within the defense sector.
[95]
Pentagon threatens to take Anthropic's AI tech in defence standoff
Washington | US Defence Secretary Pete Hegseth has threatened to invoke war-time powers against Anthropic, allowing the government to force the AI firm to hand over its novel technology in the name of national security, people familiar with the ongoing discussions said. During tense talks on Tuesday (Wednesday AEDT), Hegseth gave Anthropic's CEO a Friday deadline to open the company's artificial intelligence technology for unrestricted military use or risk losing its lucrative government contract.
[96]
Anthropic says won't give US military unconditional AI use
San Francisco (United States) (AFP) - AI company Anthropic said Thursday it would not give the US Defense Department unrestricted use of its technology despite being pressured to comply by the Pentagon. "These threats do not change our position: we cannot in good conscience accede to their request," Anthropic chief executive Dario Amodei said in a statement. Washington had given the artificial intelligence startup until Friday to agree to unconditional military use of its technology, even if it violates ethical standards at the company, or face being forced to comply under emergency federal powers. Amodei said Anthropic models have been deployed by the Pentagon and intelligence agencies to defend the country but that the company draws an ethical line at their use for mass surveillance of US citizens and fully autonomous weapons. "Using these systems for mass domestic surveillance is incompatible with democratic values," Amodei said. Leading AI systems are also not yet reliable enough to be trusted to power deadly weapons without a human in ultimate control, he added. "We will not knowingly provide a product that puts America's warfighters and civilians at risk." After meeting with Anthropic early this week, the Pentagon delivered a stark ultimatum: agree to unrestricted military use of its technology by 5:01 pm (22:01 GMT) Friday or face being forced to comply under the Defense Production Act. The Cold War-era law, last used during the Covid pandemic, grants the federal government sweeping powers to compel private industry to prioritize national security needs. The Pentagon also threatened to label Anthropic a supply chain risk, a designation usually reserved for firms from adversary countries, which could severely damage the company's ability to work with the US government and harm its reputation. A senior Pentagon official at the time pushed back on the company's concerns, insisting the Defense Department had always operated within the law. "Legality is the Pentagon's responsibility as the end user," the official said, adding that the department "has only given out lawful orders." Officials also confirmed that an exchange regarding intercontinental ballistic missiles had taken place between Anthropic and the Pentagon, underscoring the sensitivity of the applications at the heart of the dispute. The Pentagon confirmed that Elon Musk's Grok system had been cleared for use in a classified setting, while other contracted companies -- OpenAI and Google -- were described as close to similar clearances, piling competitive pressure on Anthropic to fall in line. Anthropic was contracted alongside those companies last year to supply AI models for a range of military applications under a $200 million agreement. Former OpenAI employees founded Anthropic in 2021 on the premise that AI development should prioritize safety -- a philosophy that now puts it on a collision course with the Pentagon and the White House. "Anthropic understands that the Department of War, not private companies, makes military decisions," Amodei said. "However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values."
[97]
Pentagon officials sent Anthropic best and final offer for unrestricted military use of its AI, sources say
Pentagon officials on Wednesday night sent Anthropic their best and final offer in negotiations for use of the company's artificial intelligence technology, just ahead of a government-imposed deadline, according to sources familiar with the discussions. It was unclear whether the offer substantially changed what the government has been seeking from the AI startup, or whether the company had agreed. Defense Secretary Pete Hegseth set a deadline of Friday evening for the company to grant unrestricted military use of its AI technology or face the loss of its business with the U.S. military, sources familiar with the situation told CBS News. Spokespeople for the company and the Defense Department didn't immediately respond to a request for comment Thursday morning. Pentagon officials are considering invoking the Defense Production Act to make Anthropic adhere to what the military is seeking, which is full control of its AI technology for use in military operations, sources told CBS News. The company was awarded a $200 million contract by the Pentagon in July to develop AI capabilities that would advance U.S. national security. Anthropic has repeatedly asked defense officials to agree to guardrails that would restrict the AI model, called Claude, from conducting mass surveillance of Americans, sources said. Trump officials noted that this sort of surveillance is illegal and the Pentagon follows the law. The officials also said the military is simply asking for a license to use the AI strictly for lawful activities. Anthropic's CEO, Dario Amodei, also wants to ensure Claude is not used by the Pentagon for final targeting decisions in military operations without any human involvement, one source familiar with the negotiations said. Claude is not immune from hallucinations and not reliable enough to avoid potentially lethal mistakes, like unintended escalation or mission failure without human judgment, the person said. In a meeting at the Pentagon on Tuesday morning, Hegseth gave Amodei until the end of this week to give the military a signed document that would grant full access to its artificial intelligence model, according to sources familiar with the matter.
[98]
Anthropic Says Little Progress Made in Pentagon Talks Over A.I.
Julian Barnes reported from Washington and Sheera Frenkel from San Francisco. The Pentagon and the artificial intelligence company Anthropic continued their fight over how A.I. can be used in defense, a day before a deadline imposed by the Trump administration for the company to permit its powerful technology to be applied broadly for military operations. The two sides, which were negotiating the use of Anthropic's A.I. in classified systems as part of a $200 million contract, have been hurtling toward a 5:01 p.m. Friday deadline over a Pentagon demand that Anthropic provide unfettered access to its A.I. system without safeguards demanded by the company. On Wednesday, the Pentagon gave Anthropic assurances that it would not use the company's A.I. system, Claude, for mass surveillance of Americans or autonomous drone operations, which were the start-up's key concerns. But Anthropic said late Thursday that a new offer by the Pentagon fell short of what it was asking for. There was "virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons," the company said. "New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will." This is a developing story. Check back for updates.
[99]
Agreement reached with Department of War to deploy OpenAI models in classified network: CEO Sam Altman - The Economic Times
Sam Altman said OpenAI has agreed with the Department of War to run its AI models on the classified network. The company will control safeguards, model use, and cloud deployment, while the government will respect its "red lines." The deal follows tensions with Anthropic. OpenAI CEO Sam Altman said that the company has reached an agreement with the Department of War to deploy its models within their classified network, according to a report by Reuters. "In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome," Altman said in a post on X. According to Fortune, Altman told OpenAI employees that the government will allow the company to build its own "safety stack." This is a system of technical, policy, and human controls that sits between a powerful AI model and its real-world use. If a model refuses a task, the government will not force it to comply. OpenAI will keep control over how safeguards are applied, which models are used, and where they are deployed. The company will limit use to cloud systems rather than "edge systems," which in a military context could include drones or aircraft. In a major concession, the government has agreed to respect OpenAI's "red lines." This includes not using AI for autonomous weapons, domestic mass surveillance, or critical decision-making. The announcement comes after tensions between Secretary of War Pete Hegseth and OpenAI rival Anthropic became public, leading to the apparent cancellation of Anthropic's Pentagon and federal contracts.
[100]
OpenAI Reaches Deal To Deploy AI Models On U.S. Department Of Defense Classified Network
"In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome," Altman said in a post on X. Feb 27 (Reuters) - OpenAI CEO Sam Altman said on Friday it has reached an agreement with the U.S. Department of War to deploy its AI models on classified cloud networks. "In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome," Altman said in a post on X. (Reporting by Devika Nair in Bengaluru; Editing by Sam Holmes) This is a developing story. Please check back for updates.
[101]
Anthropic Defies Pentagon's Demands as Contract Deadline Looms
Earlier this week, the Pentagon told Anthropic that the government would cancel its $200 million contract if it did not agree to give it broad access to its AI system, Claude. As Friday's deadline to accept the terms approaches, CEO Dario Amodei rejected the government's ultimatum and said "we cannot in good conscience accede to their request." In a statement released on Thursday, Amodei said the Pentagon's latest offer to change the contract does not satisfy the company's concerns that its AI could be used for mass surveillance of US citizens or in fully autonomous weapons. Amodei said the Department of Defense has "threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a 'supply chain risk' -- a label reserved for US adversaries, never before applied to an American company -- and to invoke the Defense Production Act to force the safeguards' removal." The executive pointed out: "These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security." Sean Parnell, the Pentagon spokesman, stated on social media that the Pentagon had "no interest" in using AI for the two purposes Anthropic outlined, but he reiterated the demand that the government should be given access to the A.I. model "for all lawful purposes." If Anthropic fails to agree by 5:01 p.m. on Friday, it's not clear how the government plans to label the company a supply chain risk while simultaneously invoking the Defense Production Act to force Anthropic to cooperate with the Pentagon.
[102]
Altman says OpenAI agrees with Anthropic's red lines in Pentagon dispute
OpenAI CEO Sam Altman said Friday that he agrees with Anthropic's red lines in its increasingly contentious negotiations with the Pentagon over the terms of use for the company's AI models. As the feud between Anthropic and the Defense Department (DOD) has reached a boiling point, the AI firm has refused to budge on lifting restrictions on two issues -- mass surveillance and lethal autonomous weapons. The Pentagon has pushed for the company to agree to language that allows for "all lawful uses" of its technology. The DOD has given Anthropic until Friday at 5:01 p.m. ET to agree to its terms. If the company does not, the department is threatening to cancel a $200 million contract, in addition to warning it could label the AI firm a "supply chain risk" or invoke the Defense Production Act (DPA), a federal power typically reserved for wartime or emergencies. "I don't personally think the Pentagon should be threatening DPA against these companies," Altman told CNBC's "Squawk Box" on Friday morning. "But I also think that companies that choose to work with the Pentagon, as long as it is going to comply with legal protections and the few red lines that the field, we have, I think we share with Anthropic and that other companies also independently agree with, I think it is important to do that," he continued. "For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety, and I've been happy that they've been supporting our warfighters," the OpenAI CEO added. OpenAI and Anthropic are major competitors in the commercial AI race, a rivalry highlighted by Anthropic's Super Bowl ad knocking OpenAI's plan to introduce advertising to its free ChatGPT tool. Altman and Amodei notably declined to clasp hands during a group photo shoot at India's AI summit last week. Ahead of the Friday deadline, Anthropic CEO Dario Amodei said in a lengthy statement that the company "cannot in good conscience accede" to the Pentagon's terms. "Anthropic understands that the Department of War, not private companies, makes military decisions," he wrote in the statement released late Thursday. "We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner." "However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values," he added. "Some uses are also simply outside the bounds of what today's technology can safely and reliably do." The Pentagon has argued that it has "no interest" in using AI to conduct mass surveillance or develop autonomous weapons. Following Amodei's statement Thursday, Under Secretary of War Emil Michael accused the Anthropic CEO of having a "God-complex." "He wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk," he wrote in a post on the social platform X. "The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company."
[103]
US Military Would Only Use Anthropic's AI Technology in Legal Ways, Pentagon Says
WASHINGTON (AP) -- The Pentagon's top spokesman has reiterated that the military wants to use Anthropic's artificial intelligence technology in legal ways and will not let the company dictate any limits ahead of a Friday deadline to agree to its demands. Sean Parnell said Thursday on social media that the Pentagon "has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement." Anthropic's policies prevent its models from being used for those purposes. It's the last of its peers to not supply its technology to a new U.S. military internal network. Parnell said the Pentagon wants to "use Anthropic's model for all lawful purposes" but didn't offer details on what that entailed. He said opening up use of the technology would prevent the company from "jeopardizing critical military operations." "We will not let ANY company dictate the terms regarding how we make operational decisions," he said. During a meeting on Tuesday between Defense Secretary Pete Hegseth and Anthropic CEO Dario Amodei, military officials warned that they could designate Anthropic as a supply chain risk, cancel its contract or invoke a Cold War-era law called the Defense Production Act to give the military more sweeping authority to use its products, even if the company doesn't approve. Parnell mentioned only two of those consequences in the Thursday post on X and said Anthropic has "until 5:01 PM ET on Friday to decide." "Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk," he wrote. Anthropic didn't immediately respond to a request for comment Thursday. It said in a statement after Tuesday's meeting that it "continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government's national security mission in line with what our models can reliably and responsibly do."
[104]
Scoop: Pentagon takes first step toward blacklisting Anthropic
Why it matters: That penalty is usually reserved for companies from adversarial countries, such as Chinese tech giant Huawei. * Using it to punish a leading American tech firm, particularly one on which the military itself is currently reliant, would be unprecedented. Driving the news: The Pentagon reached out to Boeing and Lockheed Martin on Wednesday to ask about their exposure to Anthropic, two sources with knowledge of those conversations said. * A Boeing spokesperson did not immediately respond to a request for comment. * A Lockheed spokesperson confirmed: "Lockheed Martin has been contacted by the Department of War regarding an analysis of its exposure and reliance on Anthropic ahead of a potential supply chain risk declaration." * The Pentagon plans to reach out to "all the traditional primes" -- meaning the major contractors that supply things like fighter jets and weapons systems -- about whether and how they use Claude, a source familiar told Axios. The big picture: Claude is currently the only AI model running in the military's classified systems. It was used during the operation to capture Venezuela's Nicolás Maduro, through Anthropic's partnership with Palantir, and could foreseeably be used in a potential military campaign in Iran. * The Pentagon is impressed with Claude's performance, but furious that Anthropic has refused to lift its safeguards and let the military use it for "all lawful purposes." * Anthropic insists in particular on blocking Claude's use for the mass surveillance of Americans or to develop weapons that fire without human involvement. * The Pentagon insists it's unworkable to have to clear individual use cases with Anthropic. Friction point: During a tense meeting on Tuesday, Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a deadline to agree to the Pentagon's terms: 5:01pm on Friday. * After that, Hegseth warned, the administration would either use the Defense Production Act to compel Anthropic to tailor its model to the military's needs, or else declare the company a supply chain risk. * While Anthropic could theoretically challenge it in court, invoking the DPA would let the military maintain access to Claude. * Wednesday's outreach suggests the military is leaning toward a supply chain risk designation. What they're saying: An Anthropic spokesperson said the meeting between Amodei and Hegseth had been a continuation of the "good-faith conversations about our usage policy to ensure Anthropic can continue to support the government's national security mission in line with what our models can reliably and responsibly do." * The spokesperson did not comment on the potential supply chain risk designation. * The Pentagon told Axios it was "preparing to execute on any decision that the Secretary might make on Friday regarding Anthropic." * Referring to the possible supply chain risk designation earlier this week, a senior Defense official told Axios: "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this." Reality check: Asking suppliers to analyze their own reliance on Claude and report back to the Pentagon is a lot different than immediately forcing them to cut ties. It's possible this is more brinksmanship on the Pentagon's side to try to convince Anthropic to fold. * But Anthropic has been insistent up to now that it will not back down on surveillance or autonomous weapons, two areas Amodei has personally raised when discussing the dangers of AI. 
The intrigue: Aside from the Pentagon feud, Anthropic has been on a hot streak: raking in new funding, elbowing out competitors, and burrowing itself deeper into the workflows of major corporations. * The supply chain risk designation could be a significant blow if a number of companies that work with the government remove Claude from their operations. * However, Anthropic could see some benefit in being viewed by potential customers and staffers as the company that stood its ground amid concerns of an AI arms race. What to watch: Elon Musk's xAI recently signed a deal to move into the military's classified systems, under the "all lawful use" standard that Anthropic has rejected. * Google and OpenAI, whose models are already available in unclassified systems, are also in negotiations about moving into the classified space. * One source familiar with those discussions described Claude as the most capable model in a number of military use cases, but described Google's Gemini as a strong alternative. * The Pentagon insists Google and OpenAI would have to lift their safeguards to get those contracts. What's next: The Friday deadline is fast approaching.
[105]
The Pentagon wants fewer AI limits. Anthropic doesn't. Here's why it matters
Dario Amodei, CEO of Anthropic, will head to the Pentagon on Tuesday to meet with Defense Secretary Pete Hegseth about how the military uses the company's artificial intelligence models. And it's likely to be a tense meeting, as sources first told Axios. Contract talks between the AI startup and the Department of Defense have gone off course in recent weeks as Anthropic has insisted on some safeguards for how its technology will be used. While the San Francisco-based company is willing to loosen some of its usage restrictions for the Department of Defense, it doesn't want its models used for at least two specific purposes: spying on Americans or developing autonomous weapons. Heading into Tuesday's meeting, the two factions seem to have differing views on how those contract talks have been proceeding. While a spokesperson for Anthropic said in a statement Monday that the company is having "productive conversations, in good faith" with the Pentagon, a Defense Department spokesman said last week that Anthropic's relationship with the Pentagon is under review. "Anthropic knows this is not a get-to-know-you meeting," a senior Defense official told Axios. "This is not a friendly meeting."
[106]
OpenAI Lands Pentagon Deal Hours After Trump Blacklists Anthropic -- Altman Says Department Of War Agreed To AI Safety Guardrails
OpenAI struck a deal on Friday to deploy its AI tools inside the Pentagon's classified systems, hours after the Trump administration formally blacklisted rival Anthropic. CEO Sam Altman announced the agreement on Friday on X, saying the Pentagon had agreed to two core safety principles: prohibitions on domestic mass surveillance, and a requirement for human oversight over the use of force, including in autonomous weapons systems. Altman said the Department of War (DoW) confirmed it agreed to those terms and that the startup will also embed OpenAI engineers on-site to ensure model safety. "We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted." What Led To This? The backdrop is a weeks-long standoff between the Pentagon and Anthropic, whose Claude AI system became the first model to run on classified military networks under a contract worth up to $200 million. Anthropic had baked the same two restrictions -- no autonomous weapons, no mass surveillance of U.S. citizens -- into that agreement. The Pentagon, which says it has never sought to use AI for those purposes, demanded the clauses be removed so it could deploy Claude for "all lawful purposes." When Anthropic refused, Defense Secretary Pete Hegseth designated the company a "supply chain risk" -- a label typically reserved for firms with ties to foreign adversaries -- and President Donald Trump ordered all federal agencies and military contractors to cut ties with the company. In a statement on Friday, Anthropic said it was "deeply saddened" by this and would challenge any supply chain risk designation against it in court. "We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government," the company said. The Divergence That Matters The core question now is what, exactly, OpenAI agreed to that Anthropic didn't -- because on paper, both companies had similar red lines. Altman said the Pentagon acknowledged the principles already reflected in U.S. law and policy. "We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept," Altman wrote. Anthropic argued the same thing and still got blacklisted. It is not clear what is different about OpenAI's deal with the Pentagon compared with what Anthropic wanted. Both companies have been contacted by Benzinga for clarification.
[107]
Pentagon's fight with Anthropic is anything but intelligent
The U.S. Department of Defense and Anthropic, the artificial intelligence firm, are at war. The AI company wants to limit the ways the U.S. military uses its product. The U.S. government had threatened to either end its contract with Anthropic or to use legislative authority to compel the business to do as the Pentagon wishes. On Thursday, Anthropic rejected the Pentagon's demands that it agree to grant unrestricted use of its technology by the military by Friday, according to reports, though it says it is not walking away from negotiations altogether. The fight is about two basic principles: the role of artificial intelligence in warfighting and the power that the U.S. federal government has to force private businesses to do its bidding, a fight that is taking place elsewhere in the U.S. in a variety of industries and on a host of different issues. Both have profound implications that go well beyond this particular skirmish.
[108]
Anthropic refuses Pentagon demand to lift AI safeguards
AI firm Anthropic has said it will not comply with a Pentagon demand to strip safety guardrails from its Claude model, after Defense Secretary Pete Hegseth threatened to cancel a $200m contract and label the company a "supply chain risk." CEO Dario Amodei said the company remains willing to support US national security, but only with safeguards in place. At the center of the dispute is the military's request for unrestricted, lawful use of Claude, including potential deployment in autonomous weapons and domestic surveillance. Anthropic has refused to allow such applications, arguing the technology is not safe or reliable enough for those roles. The standoff tests Anthropic's reputation as one of the AI sector's most safety-focused players. Losing the Pentagon contract (and being formally designated a supply chain risk) could severely limit the company's ability to work with other US defence partners...
[109]
Dario Amodei says he 'cannot in good conscience' bow to Pentagon's demands over AI use in military | Fortune
The maker of the AI chatbot Claude said in a statement that it's not walking away from negotiations but that new contract language received from the Defense Department "made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons." Sean Parnell, the Pentagon's top spokesman, said earlier on social media that the military "has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement." Anthropic's policies prevent its models from being used for those purposes. It's the last of its peers -- the Pentagon also has contracts with Google, OpenAI and Elon Musk's xAI -- to not supply its technology to a new U.S. military internal network. "It is the Department's prerogative to select contractors most aligned with their vision," Amodei wrote in a statement. "But given the substantial value that Anthropic's technology provides to our armed forces, we hope they reconsider." Defense Secretary Pete Hegseth gave Anthropic an ultimatum on Tuesday after meeting with Amodei: Allow the Pentagon to use the company's AI as it sees fit by Friday or risk losing its government contract. Military officials warned that they could go even further and designate the company as a supply chain risk or invoke a Cold War-era law called the Defense Production Act to give the military more sweeping authority to use its products. Amodei said Thursday that "those latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security." In a post before Amodei's announcement, Parnell reiterated that the Pentagon wants to " use Anthropic's model for all lawful purposes" but didn't offer details on what that entailed. He said opening up use of the technology would prevent the company from "jeopardizing critical military operations." "We will not let ANY company dictate the terms regarding how we make operational decisions," he said. Emil Michael, defense undersecretary for research and engineering, later lashed out at the Anthropic CEO, alleging on X that Amodei "has a God-complex" and "wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk." The talks that escalated this week began months ago. Amodei said that if the Pentagon doesn't reconsider its position, Anthropic "will work to enable a smooth transition to another provider." Sen. Thom Tillis, a North Carolina Republican who is not seeking reelection, said the Pentagon has been handling the matter unprofessionally while Anthropic is "trying to do their best to help us from ourselves." "Why in the hell are we having this discussion in public?" Tillis told reporters. "This is not the way you deal with a strategic vendor that has contracts." He added, "When a company is resisting a market opportunity for fear of negative consequences, you should listen to them and then behind closed doors figure out what they're really trying to solve." Sen. Mark Warner of Virginia, the ranking Democrat on the Senate Intelligence Committee, said he was "deeply disturbed" by reports that the Pentagon is "working to bully a leading U.S. company." "Unfortunately, this is further indication that the Department of Defense seeks to completely ignore AI governance," Warner said in a statement. 
It "further underscores the need for Congress to enact strong, binding AI governance mechanisms for national security contexts." While Pentagon officials say they always will follow the law with their use of AI models, the department has taken steps to change the culture among the military legal ranks. Hegseth told Fox News last February, weeks after becoming defense secretary, that "ultimately, we want lawyers who give sound constitutional advice and don't exist to attempt to be roadblocks to anything." The same month, Hegseth also fired the top lawyers for the Army and the Air Force without explanation. The Navy's top lawyer had resigned shortly after the election in late 2024. ___ O'Brien reported from Providence, Rhode Island. Associated Press writer Ben Finley contributed to this report.
[110]
Pentagon Hardens Its Ultimatum to Anthropic in Feud Over AI Use
The Pentagon escalated its ongoing dispute with Anthropic PBC on Thursday, making public a threat to effectively ban the artificial intelligence startup from the US military's vast supply chain. In a social media post, the Defense Department's main spokesman Sean Parnell warned Anthropic of a deadline of Friday 5:01 pm in Washington to allow the Pentagon unfettered use of Anthropic's Claude Gov AI tools after the company had previously insisted on some safeguards. "This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk," Parnell wrote. A senior Pentagon official confirmed Thursday that the Defense Department had sent its final offer to Anthropic on Wednesday. In its discussions with the Pentagon, Anthropic has asked US officials to refrain from using its products to create weapons that autonomously target enemy combatants or conduct mass surveillance of US citizens, according to people familiar with the matter. The Pentagon has pushed back and demanded the ability to use Claude, one of the only AI tools cleared for classified cloud work, without any restrictions from the company. The Defense Department has also threatened to invoke the Cold War-era Defense Production Act and use Anthropic's software anyway. Parnell's X post on Thursday represented the department's first on-the-record statement spelling out potential consequences. The Pentagon has no interest in mass surveillance or developing "autonomous weapons that operate without human involvement," Parnell wrote. "We will not let ANY company dictate the terms regarding how we make operational decisions," he continued. "They have until 5:01 PM ET on Friday to decide. Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk."
[111]
OpenAI reaches deal to work with Dept. of Defense classified documents, CEO Altman announces
OpenAI will begin working with the Department of Defense to provide AI services for classified documents, CEO Sam Altman announced on Friday night. "Tonight, we reached an agreement with the Department of War [Rebrand made by the Trump administration to the Department of Defense] to deploy our models in their classified network," Altman said in a statement. According to a report by Axios, no new contract has been signed yet between the Pentagon and OpenAI for this agreement, which would allow ChatGPT to be used safely while working in classified settings. According to Altman, the Department of Defense agreed to OpenAI's two main requirements for using their technology: "Prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems." "We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs [specialized technical professionals working directly with customers to deploy AI models into production environments] to help with our models and to ensure their safety, we will deploy on cloud networks only," Altman announced. Trump clashes with Anthropic over AI use by US Gov't The decision follows an announcement by US President Donald Trump to halt all federal agency operations using software developed by Anthropic, the company behind the AI model Claude. "I am directing every federal agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology. We don't need it, we don't want it, and will not do business with them again!" Trump said in a Truth Social post. Anthropic's requirements were reportedly the same as OpenAI's, with a prohibition on using Claude for mass surveillance of Americans or to develop fully autonomous weapons. Altman also asked for "all AI companies to be treated the same way" during his agreement announcement, saying that OpenAI has "expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements." In a statement, Anthropic said it would challenge any risk designation by the Department of Defense in court. "We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government," the company said. "No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons." The main conflict arose after a report by The Wall Street Journal, where Claude was allegedly utilized, through Anthropic's existing partnership with the software firm Palantir, in the operation to capture former Venezuelan president Nicolas Maduro. Reports also mentioned that the Pentagon was working with Google and xAI to establish more permissive contracts and use their AI models (Gemini and Grok, respectively) while working with classified material.
[112]
Anthropic CEO Dario Amodei to meet with Defense Secretary Pete Hegseth on AI DoD model use
Negotiations between Anthropic and the Department of Defense have hit a snag in recent weeks as the two organizations have clashed over the terms of use for Anthropic's technology. Anthropic wants assurance that its models will not be used for autonomous weapons or to spy on Americans. The DoD has made clear it wants to use Anthropic's models "for all lawful use cases," without limitation. As of February, Anthropic is the only AI company that has deployed its models on the DoD's classified networks and provided customized models to national security customers. The company was awarded a $200 million contract with the DoD last year.
[113]
Rivals support Anthropic in AI standoff with Pentagon
Hundreds of employees at AI giants Google DeepMind and OpenAI have urged their companies to set aside their bitter rivalries and rally behind Anthropic in its standoff with the Pentagon. Washington gave the AI startup until Friday afternoon to agree to unconditional military use of its technology - even where that clashes with the company's own ethical standards. At the heart of the dispute is Anthropic's refusal to allow its Claude models to be used for the mass surveillance of US citizens or deployed in fully autonomous weapons systems. The conflict has drawn a show of solidarity from the industry. An open letter titled "We Will Not Be Divided," signed as of Friday by 336 Google DeepMind staffers and 68 from OpenAI, called on tech leaders to hold the line together. "We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight," the letter said. "They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand," it added. OpenAI CEO Sam Altman told employees on Thursday that he too was seeking an agreement with the Pentagon that would include red lines similar to Anthropic's, and that he hoped to help broker a resolution, the Wall Street Journal first reported. "We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions," he wrote. The Department of Defence, for its part, has pushed back on Anthropic's position, insisting it always operates within the law and that contracted suppliers cannot set their own terms on how their products are used by the military. The Pentagon's ultimatum requires Anthropic to agree to unrestricted military use of its technology by 5:01 pm (22:01 GMT) Friday or face compulsion under the Defence Production Act. The Cold War-era law, last invoked during the Covid pandemic, grants the federal government sweeping powers to direct private industry toward national security priorities. The Pentagon has also threatened to designate Anthropic a supply chain risk -- a label typically reserved for companies from adversary nations - which could severely damage its ability to work with the US government and harm its broader reputation. Industry representatives in Washington are pressing hard for a negotiated outcome, warning that the confrontation risks damaging the AI sector as a whole. "Decisions about military AI cannot be settled through ad hoc standoffs between the Pentagon and individual firms," said Daniel Castro, vice president of the Information Technology and Innovation Foundation. "If certain AI capabilities are deemed essential for national defence, those expectations should be debated openly and written into law."
[114]
Anthropic CEO Says AI Company 'Cannot In Good Conscience Accede' To Pentagon's Demands
WASHINGTON (AP) -- Anthropic CEO Dario Amodei said Thursday the artificial intelligence company "cannot in good conscience accede" to the Pentagon's demands to allow wider use of its technology. The company said in a statement that it's not walking away from negotiation but that new contract language received from the Defense Department "made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons." The Pentagon's top spokesman has reiterated that the military wants to use Anthropic's artificial intelligence technology in legal ways and will not let the company dictate any limits ahead of a Friday deadline to agree to its demands. Sean Parnell said Thursday on social media that the Pentagon "has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement." Anthropic's policies prevent its models, such as its chatbot Claude, from being used for those purposes. It's the last of its peers -- the Pentagon also has contracts with Google, OpenAI and Elon Musk's xAI -- to not supply its technology to a new U.S. military internal network. Parnell said the Pentagon wants to "use Anthropic's model for all lawful purposes" but didn't offer details on what that entailed. He said opening up use of the technology would prevent the company from "jeopardizing critical military operations." "We will not let ANY company dictate the terms regarding how we make operational decisions," he said. During a meeting on Tuesday between Defense Secretary Pete Hegseth and Anthropic CEO Dario Amodei, military officials warned that they could designate Anthropic as a supply chain risk, cancel its contract or invoke a Cold War-era law called the Defense Production Act to give the military more sweeping authority to use its products, even if the company doesn't approve. Parnell mentioned only two of those consequences in the Thursday post on X and said Anthropic has "until 5:01 PM ET on Friday to decide." "Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk," he wrote. Anthropic didn't immediately respond to a request for comment Thursday. It said in a statement after Tuesday's meeting that it "continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government's national security mission in line with what our models can reliably and responsibly do." Sen. Thom Tillis, a North Carolina Republican who is not seeking reelection, said Thursday that the Pentagon has been handling the matter unprofessionally while Anthropic is "trying to do their best to help us from ourselves." "Why in the hell are we having this discussion in public?" Tillis told reporters. "This is not the way you deal with a strategic vendor that has contracts." He added, "When a company is resisting a market opportunity for fear of negative consequences, you should listen to them and then behind closed doors figure out what they're really trying to solve." Sen. Mark Warner of Virginia, the ranking Democrat on the Senate Intelligence Committee, said he was "deeply disturbed" by reports that the Pentagon is "working to bully a leading U.S. company." "Unfortunately, this is further indication that the Department of Defense seeks to completely ignore AI governance," Warner said in a statement. 
It "further underscores the need for Congress to enact strong, binding AI governance mechanisms for national security contexts." As Pentagon officials say they always will follow the law with their use of AI models, Hegseth told Fox News last February, weeks after becoming defense secretary, that "ultimately, we want lawyers who give sound constitutional advice and don't exist to attempt to be roadblocks to anything." ___ Associated Press writer Ben Finley contributed to this report.
[115]
OpenAI lands Pentagon deal after Anthropic standoff (OPENAI:Private)
OpenAI (OPENAI) said it has signed an agreement with the U.S. Department of Defense to deploy its models within a classified government network. In a post on X, CEO Sam Altman said, "Tonight, we reached an agreement with the Department of War to deploy our models in their classified network." OpenAI secured a deal with the U.S. Department of Defense to deploy its models on classified networks, indicating increased government adoption and revenue potential for the company. Anthropic's refusal to allow its models to be used for autonomous weapons or mass surveillance led to a supply chain risk designation, which restricts its future government business and could have legal and reputational consequences. Recent actions by the government, such as designating Anthropic a supply chain risk and banning its use in federal agencies, create heightened regulatory and legal risks for AI companies seeking government contracts.
[116]
What's behind the Anthropic-Pentagon feud
Washington -- The Pentagon gave Anthropic an ultimatum this week: Give the U.S. military unrestricted use of its AI technology or face a ban from all government contracts. At the center of the issue is a question of who controls how artificial intelligence models are used, the Pentagon or the company's CEO. The Pentagon awarded Anthropic a $200 million contract in July to develop AI capabilities that would advance U.S. national security. Anthropic's rivals, including OpenAI, Google and xAI were also awarded $200 million contracts by the Pentagon last year. Anthropic is currently the only AI company to have its model deployed on the Pentagon's classified networks, through a partnership with data analytics giant Palantir. A senior Pentagon official told CBS News that Grok, which is owned by Elon Musk's xAI, is on board with being used in a classified setting, and other AI companies are close. The Pentagon announced last month that it's looking to accelerate its uses of AI, saying the technology could help the military "rapidly convert intelligence data" and "make our Warfighters more lethal and efficient." The standoff between the Pentagon and Anthropic was reportedly set off by the U.S. military's use of its technology, known as Claude, during the operation to capture former Venezuela President Nicolás Maduro in January. Anthropic has repeatedly asked the Pentagon to agree to certain guardrails, among them a restriction on using Claude to conduct mass surveillance of Americans, sources told CBS News. And the company also wants to ensure Claude is not used by the Pentagon for final targeting decisions in military operations without any human involvement, one source familiar with the matter said. Claude is not immune from hallucinations and not reliable enough to avoid potentially lethal mistakes, like unintended escalation or mission failure without human judgment, the source said. When asked for comment, a senior Pentagon official said: "This has nothing to do with mass surveillance and autonomous weapons being used. The Pentagon has only given out lawful orders." Pentagon officials have expressed concerns to Anthropic that the company's guardrails could stand in the way of critical actions, such as responding to an intercontinental ballistic missile launched toward the United States. Any company-imposed restrictions "could create a dynamic where we start using them and get used to how those models work, and when it comes that we need to use it in an urgent situation, we're prevented from using it," Emil Michael, the undersecretary of defense for research, said at an event in February. On the question of when AI is used to strike or kill military targets and makes a mistake, who is liable -- the military or the AI company -- a defense official said: Legality is the Pentagon's responsibility as the end user. Anthropic CEO Dario Amodei has been vocal in expressing his concerns about the potential dangers of AI and has centered the company's brand around safety and transparency. In a lengthy essay last month, Amodei warned of the potential for abuse of the technologies, writing that "a powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow." 
"Democracies normally have safeguards that prevent their military and intelligence apparatus from being turned inwards against their own population, but because AI tools require so few people to operate, there is potential for them to circumvent these safeguards and the norms that support them. It is also worth noting that some of these safeguards are already gradually eroding in some democracies," he wrote. Amodei has long backed what he describes as "sensible AI regulation," including rules that would require AI companies to be transparent about the risks posed by their models and any steps taken to mitigate them. The Trump administration, meanwhile, has favored a lighter touch, and has argued that stringent AI regulations could stifle innovation and make it harder for the American AI industry to compete. The administration has sought to block what it calls "excessive" state-level regulations. At one point last year, venture capitalist and White House AI and crypto adviser David Sacks accused Anthropic of "fear-mongering" and suggested its interest in AI regulations is self-serving. In a January speech, Defense Secretary Pete Hegseth derided what he views as "social justice infusions that constrain and confuse our employment of this technology." "We will not employ AI models that won't allow you to fight wars," Hegseth declared. "We will judge AI models on this standard alone; factually accurate, mission relevant, without ideological constraints that limit lawful military applications. Department of War AI will not be woke. It will work for us. We're building war-ready weapons and systems, not chatbots for an Ivy League faculty lounge." Hegseth gave Anthropic until Friday to agree to give the U.S. military unrestricted use of its technology or risk being blacklisted, sources familiar with the situation told CBS News. Pentagon officials are considering invoking the Defense Production Act to compel Anthropic to comply on national security grounds. Or, if an agreement can't be reached, defense officials have discussed declaring the company a "supply chain risk" to push it out of government, according to the sources.
[117]
Pentagon official: Anthropic CEO 'has a God-complex'
A top Pentagon official accused Anthropic CEO Dario Amodei on Thursday of having a "God-complex," as the Defense Department (DOD) and the company face off over the terms of use for its AI models. "It's a shame that @DarioAmodei is a liar and has a God-complex," Under Secretary of War Emil Michael wrote in a post on social platform X, after Anthropic said it could not accept the Pentagon's terms. "He wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk," Michael continued. "The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company." The Pentagon and Anthropic have been locked in a dispute in recent weeks over the company's AI usage policy, which bars its models from being used for mass surveillance or lethal autonomous weapons. The AI firm has made these two issues red lines in its negotiations. The DOD has pushed for language that would allow for "all lawful uses," while arguing that it has "no interest" in using AI to conduct mass surveillance or develop autonomous weapons. Following a Tuesday meeting between Amodei and Defense Secretary Pete Hegseth, the department warned Anthropic that it would cancel its contract if it did not agree to the Trump administration's terms by Friday at 5:01 p.m. EST. The department has also threatened to label the company a "supply chain risk" or invoke the Defense Production Act. Anthropic was one of four major AI companies to sign a $200 million contract with the DOD last summer. However, its AI model Claude was previously the only one approved for use on the classified side. The Pentagon recently reached a new agreement with xAI to use its model on classified systems. On Thursday night, Amodei released a lengthy statement, saying Anthropic "cannot in good conscience accede" to the Pentagon's terms. "Anthropic understands that the Department of War, not private companies, makes military decisions," the CEO said. "We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner." "However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values," Amodei continued. "Some uses are also simply outside the bounds of what today's technology can safely and reliably do." He added, "Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now."
[118]
What to Know About the Defense Production Act and the Pentagon's Anthropic Ultimatum
NEW YORK (AP) -- Defense Secretary Pete Hegseth gave Anthropic an ultimatum this week: Open its artificial intelligence technology for unrestricted military use by Friday, or risk losing its government contract. Defense officials in the Trump administration also warned they could designate Anthropic, which makes the AI chatbot Claude, as a supply chain risk -- or invoke a Cold War-era law called the Defense Production Act to give the military more sweeping authority to use its products, even if the company doesn't approve. Some experts say that using the law this way would be unprecedented, and could bring future legal challenges. The government's efforts to essentially force Anthropic's hand also underscore a wider, contentious debate over AI's role in national security. Here's what we know. What is the Defense Production Act? The Defense Production Act gives the federal government broad authority to direct private companies to meet the needs of national defense. The act was signed by President Harry S. Truman in 1950, amid concerns about supplies and equipment during the Korean War. But over its now decades-long history, the law's powers have been invoked not only in times of war but also for domestic emergency preparedness as well as recovery from terrorist attacks and natural disasters. One of the act's provisions allows the president to require companies to prioritize government contracts and orders deemed necessary for national defense, with the goal of ensuring the private sector is producing enough goods needed to meet a war effort or other national emergency. Other provisions give the president the ability to use loans and additional incentives to increase production of critical goods, and authorize the government to establish voluntary agreements with private industry. The DPA is "one of the government's most powerful and adaptable industrial policy tools," said Joel Dodge, an attorney and the director of industrial policy and economic security at the Vanderbilt Policy Accelerator. Anthropic is the last of its AI peers to not supply its technology to a new U.S. military internal network. Its CEO Dario Amodei repeatedly has made clear his ethical concerns about unchecked government use of AI, including the dangers of fully autonomous armed drones and of AI-assisted mass surveillance that could track dissent. The Defense Department is considering invoking the DPA to give the military more authority to use Anthropic's products, even if the company doesn't approve of how, according to a person familiar with the matter and a senior Pentagon official. That could mean forcing Anthropic to adapt its model to the Pentagon's needs without built-in safety limits, or remove certain ethical restrictions from the company's contract language. Experts like Dodge say both would be "without precedent under the history of the DPA." "It's a powerful law," he said. "(But) it has never been used to compel a company to produce a product that it's deemed unsafe, or to dictate its terms of service." How has this law been used in the past? Trump in his first term and former President Joe Biden invoked the DPA to boost supplies to combat the COVID-19 pandemic. And during 2022's nationwide baby formula shortage, Biden used the law to speed production of formula and authorize flights to import supply from overseas. Biden also invoked the DPA in a 2023 executive order on AI, notably in efforts to require that companies share safety test results and other information with the government. 
Trump repealed the order at the start of his second term. Decades ago, the administrations of both Presidents Bill Clinton and George W. Bush used the DPA to ensure that electricity and natural gas shippers continued supplying California utilities amid an energy crisis. And the law was used after Hurricane Maria struck Puerto Rico in 2017 to prioritize contracts for food, bottled water, manufactured housing units and the restoration of electrical systems. The DPA requires periodic reauthorization to remain in effect, which can expand or refine the scope of the law. According to congressional documents, its next expiration date is slated for Sept. 30 of this year. And depending on how the Defense Department's reported demands unfold, Anthropic could be at the top of lawmakers' minds. Possible next steps for Anthropic If the Defense Department uses the DPA provision aimed at prioritizing government contracts and ordering production of certain goods -- which the Anthropic case suggests it will -- a company can push back if the requested product isn't something it already produces, Dodge and others say, or if it deems the terms to be unreasonable. But the government may try to overrule that, notes Charlie Bullock, senior research fellow at the Institute for Law & AI. "If neither side backs down, it seems realistic that there would be litigation between Anthropic and the government," Bullock said. Some have also noted tension between the Pentagon's warning that it could designate Anthropic as a supply chain risk while also indicating that its products are so important to national defense that it needs to invoke the DPA -- two assertions that seem at odds with each other. "There are a lot of forces that I think the administration's counting on that would lead Anthropic to just give in on Friday and agree with its terms," Dodge said. If there's future litigation over a potential DPA order, Dodge doesn't expect the government to prevail because "it seems very out of bounds under the text of the law." But if the administration is successful, or Anthropic simply agrees to new terms, that could open up "a Pandora's box of what the government could do to assert power and control over private companies," he added. ___ Associated Press Writers Matt O'Brien in Providence, Rhode Island and Konstantin Toropin and David Klepper in Washington contributed to this report.
[119]
Pentagon Gives A.I. Company an Ultimatum
Julian E. Barnes reported from Washington, and Sheera Frenkel from San Francisco. The Pentagon delivered an ultimatum to Anthropic, the only artificial intelligence company currently operating on classified military systems, ordering the firm to bend to its demands by Friday. If the firm fails to agree by 5:01 p.m. on Friday, Defense Secretary Pete Hegseth said the Trump administration would invoke the Defense Production Act, compelling the use of its model by the military, or label the company a supply chain risk, according to a senior Pentagon official. The latter step would put Anthropic's government contracts at risk. The two threats are fundamentally at odds: One would prevent the government from using the company's products, while the other would force the company to let the government use the products. Despite the contradiction, the threats reflect the level of anger in the top ranks of the Pentagon toward Anthropic for resisting its demands and how important the company's model has become to the military. "The Pentagon knows they are issuing an extreme threat. They are using every button or lever they have," said Jessica Tillipman, an associate dean at the George Washington University Law School. "The bigger issue here is that it waters down these designations. They are transforming what is designed to be national security tools into a point of leverage for business." Mr. Hegseth summoned Dario Amodei, the Anthropic chief executive, to the Pentagon on Tuesday for a morning meeting. The tone of the discussion was civil, but when Anthropic did not agree to Mr. Hegseth's demands, he leveled the threats against it, according to people briefed on the meeting. The New York Times spoke to people on both sides of the debate over Anthropic's work with the military, but they spoke on the condition that their names not be used to discuss the sensitive negotiations. Anthropic has argued that it was asking for reasonable assurances that its model would not be used for surveillance of Americans or in autonomous weapons, such as drone operations, that did not involve human oversight. Anthropic's supporters have contended that the company is being punished for being first on the classified system and creating a special model, Claude Gov, that does not have the same guardrails and restrictions that its commercial models have. Pentagon officials have said that using software and weapons lawfully is their responsibility, one they take seriously. But the officials say they cannot effectively allow all their contractors to specify how the equipment they sell to the Pentagon will be used, and that lawful use must be the only constraint. While the Defense Production Act gives the Pentagon wide-ranging powers, it is usually invoked in manufacturing contexts. It would be unusual for the act to be used on a software company, forcing Anthropic to make its product available for free. An Anthropic spokesman said that the company had continued good-faith conversations in the meeting at the Pentagon. The spokesman said the company wanted to support the government but needed to ensure that its models were used in line with what they could "reliably and responsibly do." But the senior Pentagon official rejected those demands and said the debate had nothing to do with those issues. The Pentagon wants all artificial intelligence contracts to stipulate that the military can use the models for any lawful purpose.
The official confirmed that the Pentagon has an agreement with Elon Musk's company xAI to use its artificial intelligence model, Grok, on the classified system. But it will take time to integrate Grok onto classified cloud servers and into software from Palantir, a data analytics company that the military uses. More important, Anthropic's Claude is considered a superior product to Grok, regularly yielding more accurate information. The Pentagon also is close to an agreement with Google to bring its Gemini model onto the classified system, but the senior official said the deal was not complete. A person briefed on the meeting said Anthropic would continue to demand assurances that its models are not used for autonomous weapons programs or mass surveillance. Pentagon officials took issue with Anthropic after Palantir reported a conversation that one of its employees had had with a counterpart at the artificial intelligence company regarding the U.S. military operation last month to capture President Nicolás Maduro of Venezuela. In the meeting on Tuesday, Mr. Amodei said there had been a misunderstanding and that his company had not reached out to Palantir or the Pentagon about the Maduro operation, according to a person briefed on the meeting. Mr. Amodei insisted his company had never objected to or interfered with legitimate military operations.
[120]
Trump Bans Anthropic As Pentagon Reportedly Accepts OpenAI's Military AI Safeguards -- Anthony Scaramucci, Ilya Sutskever, Ross Gerber Weigh In
On Friday, President Donald Trump ordered federal agencies to phase out Anthropic's technology as the Pentagon reportedly agreed to OpenAI's safety terms for deploying artificial intelligence in classified military settings. Pentagon Aligns With OpenAI On Military AI Guardrails The Pentagon has agreed in principle to follow safeguards proposed by OpenAI for using its models in classified environments, Axios reported, citing a source familiar with the talks, though no contract has been finalized. OpenAI's framework reportedly bars the use of its AI for mass surveillance or autonomous weapons. It requires that models remain confined to secure cloud environments rather than be embedded directly into edge systems such as weapons platforms. The company also seeks continuous monitoring capabilities and security-cleared researchers to oversee deployments and advise on risk. CEO Sam Altman reportedly told employees the company's approach centers on strengthening safeguards as it learns from real-world use. The Pentagon and OpenAI did not immediately respond to Benzinga's requests for comment. Trump Orders Phase-Out Of Anthropic This development came hours after Trump directed federal agencies to cease using Anthropic's technology, citing what he described as unacceptable restrictions on lawful military applications. The order includes a six-month transition period for agencies currently using Anthropic's products. Defense Secretary Pete Hegseth said the military must have "full, unrestricted access" to AI systems for lawful defense purposes and announced plans to designate Anthropic as a supply-chain risk to national security. Anthropic has drawn red lines against mass surveillance and the use of autonomous weapons. The company did not immediately respond to Benzinga's request for comment. Tech Titans And Wall Street React The clash sparked immediate reaction across Silicon Valley and Wall Street. Ilya Sutskever, co-founder and former chief scientist of OpenAI, wrote on X that it was "extremely good" that Anthropic had not backed down. He added that it was "significant" that OpenAI had taken a similar stance. SkyBridge Capital founder Anthony Scaramucci described the episode as the government "bullying" a private company. Investor Ross Gerber said he directed employees at his firm to use Anthropic's Claude model "as much as possible." Elizabeth Warren Demands Transparency Sen. Elizabeth Warren (D-Mass.) also took to X and said that the American public "deserves to know" what Pentagon officials are planning and called on Hegseth to testify about the decision.
[121]
Anthropic digs in heels in dispute with Pentagon, source says
Artificial intelligence lab Anthropic has no intention of easing its usage restrictions for military purposes, a person familiar with the matter said on Tuesday, adding talks continue after a meeting to discuss its future with the Pentagon. The meeting between Anthropic CEO Dario Amodei and U.S. Defense Secretary Pete Hegseth was scheduled to hash out a monthslong dispute. The AI startup has refused to remove safeguards that would prevent its technology from being used to target weapons autonomously and conduct U.S. domestic surveillance. Pentagon officials have argued the government should only be required to comply with U.S. law. During the meeting, Hegseth delivered an ultimatum to Anthropic: Get on board or the government would take drastic action, people familiar with the matter said. The options included labeling Anthropic as a supply-chain risk or having the Pentagon invoke a law, the Defense Production Act, that would force Anthropic to change its rules, the people said. The government gave Anthropic until Friday at 5 p.m. to respond, according to a senior Pentagon official with knowledge of the matter. The Pentagon did not immediately respond to a comment request. An Anthropic spokesperson said Tuesday's meeting "continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government's national security mission in line with what our models can reliably and responsibly do."
[122]
Hegseth gives Anthropic CEO until Friday to back down in AI safeguards fight
The big picture: Hegseth told Amodei in a tense meeting on Tuesday that the Pentagon will either cut ties and declare Anthropic a "supply chain risk," or invoke the Defense Production Act to force the company to tailor its model to the military's needs. Why it matters: The Pentagon wants to punish Anthropic as the feud over AI safeguards grows increasingly nasty, but officials are also worried about the consequences of losing access to its industry-leading model, Claude. * "The only reason we're still talking to these people is we need them and we need them now. The problem for these guys is they are that good," a Defense official told Axios ahead of the meeting. * Anthropic has said it is willing to adapt its usage policies for the Pentagon, but not to allow its model to be used for the mass surveillance of Americans or the development of weapons that fire without human involvement. * Anthropic's Claude is the only AI model currently used for the military's most sensitive work. Driving the news: A senior Defense official said the meeting was "not warm and fuzzy at all." Another source told Axios it remained "cordial" with no voices raised on either side, and that Hegseth praised Claude to Amodei. * Hegseth told Amodei he won't let any company dictate the terms under which the Pentagon makes operational decisions, or object to individual use cases. The intrigue: Hegseth specifically mentioned the Pentagon's claim that Anthropic raised concerns to its partner Palantir over the use of Claude during the Maduro raid. * Amodei denied that Anthropic raised any such concerns or even broached the topic with Palantir beyond standard operating conversations. * He reiterated that the company's red lines have never prevented the Pentagon from doing its work or posed an issue for anyone operating in the field. In the room: In a sign of how seriously the Pentagon is taking this dispute, Hegseth was joined in the meeting by Deputy Secretary Steve Feinberg, Under Secretary for Research and Engineering Emil Michael, Under Secretary for Acquisition and Sustainment Michael Duffy, Hegseth's chief spokesperson Sean Parnell and general counsel Earl Matthews, the Pentagon's top lawyer. The other side: Anthropic continued to strike a conciliatory tone after the meeting. * "During the conversation, Dario expressed appreciation for the Department's work and thanked the Secretary for his service," an Anthropic spokesperson said. * "We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government's national security mission in line with what our models can reliably and responsibly do." How it works: The Defense Production Act gives the president the authority to compel private companies to accept and prioritize particular contracts as required for national defense. * It was used during the COVID-19 pandemic to increase production of vaccines and ventilators, for example. * The law is rarely used in such a blatantly adversarial way. The idea, the senior Defense official said, would be to force Anthropic to adapt its model to the Pentagon's needs, without any safeguards. * Anthropic could theoretically take the administration to court, arguing it's not providing the sort of commercially available product for which the DPA can be used to expedite production, but custom-built software already tailored to sensitive government uses, according to one defense consultant. 
* The Pentagon is also considering severing its contract with Anthropic and declaring the company a supply chain risk, which would require a plethora of other companies that work with the Pentagon to certify that Claude isn't used in their workflows. Friction point: Cutting ties would require the Pentagon to have a replacement ready for Claude, which is currently the only model used in classified systems. * The use of Claude in the Venezuela operation came through Anthropic's partnership with Palantir, the AI software company. * It's also used for a wide variety of more bureaucratic functions within the military. What to watch: Elon Musk's xAI recently signed a contract to bring its model, Grok, into classified settings, though it's unclear whether it would be able to fully replace Claude. * The Pentagon has been speeding up conversations with OpenAI and Google about moving their models -- already available for unclassified uses -- into classified systems, sources tell Axios. * One source familiar with the discussions said that right now, it appears Claude is ahead of the others in a number of applications relevant to the military, such as offensive cyber capabilities. * The same source said Gemini is seen as a potential replacement if and when a deal is reached. That would require Google to let the Pentagon use its model for "all lawful purposes," the same terms that Anthropic rejected.
[123]
In its fight with Hegseth, Anthropic confronts perhaps the biggest crisis in its five-year existence | Fortune
AI company Anthropic is facing perhaps the biggest crisis in its five-year existence as it stares down a Friday deadline to remove restrictions on how the U.S. Department of War can use its technology or face the possibility that the Pentagon will take action that could cripple its business. Pete Hegseth, the U.S. secretary of war, has demanded that Anthropic remove restrictions it currently stipulates in its contracts that prohibit its AI models from being used for mass surveillance or from being incorporated into lethal autonomous weapons, which can make decisions to attack without human intervention. Instead, Hegseth wants Anthropic to stipulate that its technology can be used for "any lawful purpose" that the Department of War wishes to pursue. If the company does not comply by Friday, Hegseth has threatened to not only cancel Anthropic's existing $200 million contract with his department, but to have the company labeled a "supply chain risk," meaning that no company doing business with the Department of War would be allowed to use Anthropic's models. That could eviscerate Anthropic's growth -- just as the company, which is currently valued at $380 billion, has been seeing significant commercial traction and is contemplating an initial public offering as soon as next year. A Tuesday meeting between Hegseth and Anthropic CEO Dario Amodei in Washington, D.C., failed to resolve the conflict and ended with Hegseth reiterating his ultimatum. The dispute comes against a backdrop of sometimes overt hostility towards Anthropic from other Trump administration officials. AI czar David Sacks in particular has publicly attacked the company on social media for representing "woke AI" and the "doomer industrial complex." Sacks has accused the company of engaging in a "sophisticated regulatory capture strategy based on fearmongering." His argument is basically that Anthropic executives disingenuously warn of extreme risks from AI systems in order to justify regulations on the technology with which only Anthropic and a few other AI companies can easily comply. Anthropic CEO Dario Amodei has called such views "inaccurate" and insisted that the company shares many policy goals with the Trump administration, including wanting to see the U.S. remain at the forefront of the development of AI technology. Nonetheless, Sacks and others within the administration may be hoping Hegseth makes good on his threats to blacklist Anthropic from the national security supply chain. Other AI companies, such as OpenAI and Google, have apparently not imposed restrictions on how the U.S. military uses their tech. Working with the military has been controversial among some technology workers. In 2018, Google faced a vocal staff rebellion over its decision to help the Pentagon with "Project Maven," an effort to use AI to analyze aerial surveillance imagery. The employee revolt forced Google to pull out of a bid to renew its contract to work on the project. But in the years since, the internet giant has quietly renewed its ties with the defense establishment, and in December, the Department of War announced it would deploy Google's Gemini AI models for a number of use cases. Owen Daniels, associate director of analysis at the Center for Security and Emerging Technology (CSET) at Georgetown University, told the Associated Press that "Anthropic's peers, including Meta, Google and xAI, have been willing to comply with the department's policy on using models for all lawful applications.
So the company's bargaining power here is limited, and it risks losing influence in the department's push to adopt AI." But principles may be an unusually powerful motivator for Anthropic employees. The company was founded by a group of researchers who broke away from OpenAI in part because they were concerned that the lab was allowing commercial pressures to divert it from its original mission of ensuring powerful AI is developed for humanity's benefit. And more recently, Anthropic staked out principled positions on not incorporating advertising into its Claude products and not developing chatbots specifically designed to be romantic or erotic companions. Given the company's culture, some outside commentators have speculated that at least some Anthropic staff will resign if the company gives in to Hegseth's demands and drops the limitations currently built into its government contracts. Hegseth has also said there is another option available to the Pentagon if Anthropic does not comply with its request voluntarily. This would involve using the Defense Production Act of 1950 to compel Anthropic to offer the military a version of its Claude model without any restrictions in place. The DPA, which was originally designed to allow the government to take charge of civilian manufacturing in the event of war, was invoked during the Covid-19 pandemic to compel companies to produce protective equipment and vaccines. Since then, it has been used numerous times, mostly by the Biden administration, even in the absence of a clear national emergency. For instance, in 2023 the Biden White House invoked the DPA to force tech companies to share information about the safety testing of their advanced AI models with the government. Katie Sweeten, who served until September 2025 as the Department of Justice's liaison to the Department of Defense and is now a partner at the law firm Scale, told CNN that Hegseth's position didn't make sense from a policy perspective. "I would assume we don't want to utilize the technology that is the supply chain risk, right? So I don't know how you square that," she said. Dean Ball, who served as an AI policy advisor to the Trump Administration, helping to draft its AI Action Plan, and who is now a senior fellow at the Foundation for American Innovation, also called the Pentagon's position "incoherent" in a post on X. "How can one policy option be 'supply chain risk' (usually used on foreign adversaries) and the other be DPA (emergency commandeering of critical assets)?" he said. Ball told TechCrunch that imposing the supply chain risk label would send a terrible message to any company doing business with the government. "It would basically be the government saying, 'If you disagree with us politically, we're going to try to put you out of business,'" he said. Some legal commentators noted that both sides of the dispute had some legitimate arguments. "We wouldn't want Lockheed Martin selling the military an F-35 and then telling the Pentagon which missions it could fly," Alan Rozenshtein, an associate professor of law at the University of Minnesota and a fellow at Brookings, said in a column posted on the site Lawfare. But Rozenshtein also argued that Congress, not the Pentagon, should set the rules for how the U.S. military deploys AI. "The terms governing how the military uses the most transformative technology of the century are being set through bilateral haggling between a defense secretary and a startup CEO, with no democratic input and no durable constraints," he wrote.
As of midweek, Anthropic showed no signs of backing down from its position. Helen Toner, the interim executive director of Georgetown's CSET and a former OpenAI board member, posted on X that the Pentagon is likely underestimating the extent to which Anthropic may be reluctant to abandon its position because -- as weird as this sounds -- doing so might set a bad example for future versions of Claude. Anthropic researchers have increasingly voiced concerns about what each successive version of Claude learns about its own character based on training data that now includes news articles and social media commentary about Claude itself. But the company has compromised before when its back has been against the wall. In June 2025, Anthropic faced a potentially existential threat when a federal judge ruled that its use of libraries of pirated books to train its Claude AI models was likely a violation of copyright law. This left the company facing tens of billions of dollars in potential liabilities if it took the case to a full trial and lost. Instead of continuing to fight the case, Anthropic announced a $1.5 billion settlement with the copyright holders. And just this past week, Anthropic demonstrated again, in a different context, that it is sometimes willing to put pragmatism and commercial imperatives ahead of high-minded principles. The company updated its Responsible Scaling Policy (RSP), dropping a previous commitment to never train an AI model unless it could guarantee it had adequate safety controls in place. The new RSP instead simply commits Anthropic to matching or surpassing the safety efforts being made by competitors. It also says Anthropic will delay developing models if the company believes it has a clear lead over the competition and it also thinks the model it is training presents a significant catastrophic risk. Jared Kaplan, Anthropic's head of research, told Time that "unilateral commitments" no longer made sense if "competitors are blazing ahead." Whether Anthropic will make a similar concession to commercial pressures in its fight with the Department of War remains to be seen.
[124]
Anthropic CEO says it 'cannot in good conscience accede' to Pentagon's demands for AI use - The Korea Times
WASHINGTON -- Anthropic CEO Dario Amodei said Thursday that the artificial intelligence company "cannot in good conscience accede" to the Pentagon's demands to allow unrestricted use of its technology, deepening a public clash with the Trump administration that is threatening to pull its contract and take other drastic steps by Friday. The maker of the AI chatbot Claude said in a statement that it's not walking away from negotiations but that new contract language received from the Defense Department "made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons." Sean Parnell, the Pentagon's top spokesman, said earlier on social media that the military "has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement." Anthropic's policies prevent its models from being used for those purposes. It's the last of its peers -- the Pentagon also has contracts with Google, OpenAI and Elon Musk's xAI -- to not supply its technology to a new U.S. military internal network. "It is the Department's prerogative to select contractors most aligned with their vision," Amodei wrote in a statement. "But given the substantial value that Anthropic's technology provides to our armed forces, we hope they reconsider." Defense Secretary Pete Hegseth gave Anthropic an ultimatum on Tuesday after meeting with Amodei: Allow the Pentagon to use the company's AI as it sees fit by Friday or risk losing its government contract. Military officials warned that they could go even further and designate the company as a supply chain risk or invoke a Cold War-era law called the Defense Production Act to give the military more sweeping authority to use its products. Amodei said Thursday that "those latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security." In a post before Amodei's announcement, Parnell reiterated that the Pentagon wants to "use Anthropic's model for all lawful purposes" but didn't offer details on what that entailed. He said opening up use of the technology would prevent the company from "jeopardizing critical military operations." "We will not let ANY company dictate the terms regarding how we make operational decisions," he said. Emil Michael, defense undersecretary for research and engineering, later lashed out at the Anthropic CEO, alleging on X that Amodei "has a God-complex" and "wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk." The talks that escalated this week began months ago. Amodei said that if the Pentagon doesn't reconsider its position, Anthropic "will work to enable a smooth transition to another provider." Sen. Thom Tillis, a North Carolina Republican who is not seeking reelection, said the Pentagon has been handling the matter unprofessionally while Anthropic is "trying to do their best to help us from ourselves." "Why in the hell are we having this discussion in public?" Tillis told reporters. "This is not the way you deal with a strategic vendor that has contracts." He added, "When a company is resisting a market opportunity for fear of negative consequences, you should listen to them and then behind closed doors figure out what they're really trying to solve." Sen.
Mark Warner of Virginia, the ranking Democrat on the Senate Intelligence Committee, said he was "deeply disturbed" by reports that the Pentagon is "working to bully a leading U.S. company." "Unfortunately, this is further indication that the Department of Defense seeks to completely ignore AI governance," Warner said in a statement. It "further underscores the need for Congress to enact strong, binding AI governance mechanisms for national security contexts." While Pentagon officials say they always will follow the law with their use of AI models, the department has taken steps to change the culture among the military legal ranks. Hegseth told Fox News last February, weeks after becoming defense secretary, that "ultimately, we want lawyers who give sound constitutional advice and don't exist to attempt to be roadblocks to anything." The same month, Hegseth also fired the top lawyers for the Army and the Air Force without explanation. The Navy's top lawyer had resigned shortly after the election in late 2024.
[125]
Anthropic Vs. the Pentagon Is a Fight for AI's Future
Consider it a preview of strange quandaries to come. In July, Anthropic PBC signed a deal with the Defense Department worth up to $200 million, calling it "a new chapter" in its "commitment to supporting US national security." The Pentagon was equally keen. Eight months later, the partnership has ruptured, the contract is at risk, the White House has labeled Anthropic with its gravest epithet ("woke"), and defense officials have threatened to designate the company a "supply chain risk." What went wrong? Welcome to defense contracting in the artificial intelligence era. Anthropic has been providing the Pentagon with a specialized version of its large language model, Claude, for use on classified systems. The company says it remains committed to national-security work, but it has also established "red lines" governing the use of its products: It doesn't want Claude deployed for "mass surveillance" of Americans or as part of a fully autonomous weapons system. Whether such stipulations were part of its contract isn't clear. (A US official has said the deal was signed "without a lot of specificity.") But the Defense Department is livid: It says the military should be free to wield such tools for "all lawful purposes" and that Anthropic's stated restrictions may be a battlefield liability. It also says that makers of competing LLMs -- including OpenAI Inc., Alphabet Inc. and X.AI Corp. -- have agreed to its terms in principle. At a meeting with Defense Secretary Pete Hegseth on Tuesday, Anthropic chief Dario Amodei was reportedly given till the end of the week to decide whether to comply. To an extent, the government has a point. Differing expectations between the Defense Department and its contractors are a recipe for chaos and dysfunction, especially with a tool like AI. If Anthropic objects to what the Pentagon wants to do with Claude, the company is free to opt out of the defense-contracting business; it can hardly expect to wield an after-the-fact veto over military decisions. (Bloomberg LP, the parent company of Bloomberg News, provides AI-powered solutions for the financial industry.) The Pentagon, for its part, is free to seek other vendors. But labeling Anthropic a supply chain risk makes no sense at all. Such a designation -- usually reserved for firms linked to foreign adversaries -- would effectively blacklist Anthropic, potentially undermine contractors that use its products and discourage other tech vendors from defense work. Not least: It could hobble a leading American AI lab, mostly out of spite. ("We are going to make sure they pay a price for forcing our hand like this," one defense official told Axios.) Immediate disputes aside, Anthropic's stated concerns are worth taking seriously, but not panicking over. Domestic surveillance by US intelligence agencies has been substantially constrained by statute, the courts and executive orders in recent decades. Even so, Congress should be asking if new AI tools could circumvent these existing rules or otherwise infringe on civil rights in unexpected ways. Transparency should be the byword.
Similarly, the use of autonomous weaponry is already governed by Defense Department policy, the laws of war, international norms and other limitations, while AI use is subject to the Pentagon's ethical principles and Responsible AI Strategy. Yet it's fair to ask if these constraints are sufficient. As AI evolves, lawmakers should clarify the definitions used in such policies, bolster their oversight of AI use and consider additional reporting requirements, with a view toward updating law or policy as needed. AI is poised to disrupt many aspects of modern life. Few are more consequential than national defense. What's clear is that Congress, not contractors, must establish the rules of the road for such technology. Lawmakers should start grappling with these dilemmas before it's too late.
[126]
OpenAI working on Pentagon deal amid Anthropic-government impasse
OpenAI's CEO Sam Altman told his staff that the company was working on a deal that could help solve the impasse between Anthropic and the Pentagon over the use of AI on the battlefield, The Wall Street Journal reported. Such a deal could set a precedent for safe, compliant AI model use in classified military environments, possibly influencing the entire AI defense sector's standards and partnership structures. Anthropic objects to Pentagon demands for unrestricted AI model access, citing safety, unreliability for autonomous weapons, and concerns about civilian and warfighter risk. OpenAI seeks to use technical safeguards, restrict uses (e.g., no autonomous weapons), deploy personnel for oversight, and insist on cloud-based deployment, aiming to uphold safety without surrendering full control.
[127]
Anthropic rejects Pentagon's requests in AI safeguards dispute, CEO says - The Economic Times
Anthropic cannot accede to the Pentagon's request in an AI safeguards dispute despite threats to remove the company from the Department of Defense's systems, the AI firm's CEO, Dario Amodei, said on Thursday. The Pentagon's dispute with Anthropic stems from the AI startup's refusal to remove safeguards that would prevent its technology from being used to target weapons autonomously and conduct surveillance in the United States. Anthropic, backed by Google and Amazon, has a contract with the department worth up to $200 million. The department has said it will contract only with AI companies that accede to "any lawful use" and remove safeguards, Amodei said on Thursday. Use cases for its AI such as mass domestic surveillance and fully autonomous weapons have never been included in Anthropic's contracts with the department and "we believe they should not be included now," Amodei said. Amodei added that the department threatened to remove Anthropic from its systems if the company maintained the safeguards and threatened to designate it a "supply chain risk" and to invoke the Defense Production Act to force the safeguards' removal. "Regardless, these threats do not change our position: we cannot in good conscience accede to their request," Amodei said. Earlier in the day, Pentagon spokesperson Sean Parnell said on X that the department has no interest in using AI to conduct mass surveillance of Americans nor does it want to use AI to develop autonomous weapons that operate without human involvement. "Here's what we're asking: Allow the Pentagon to use Anthropic's model for all lawful purposes," Parnell said. The Pentagon did not immediately respond to a request for comment on Anthropic's statement. "It is the Department's prerogative to select contractors most aligned with their vision. But given the substantial value that Anthropic's technology provides to our armed forces, we hope they reconsider," Amodei said. "Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider," he added. An Anthropic spokesperson said the company remains "ready to continue talks and committed to operational continuity for the Department and America's warfighters."
[128]
Report: Hegseth Threatens Leading AI Company In Fight Over Alarming Pentagon Demands
Defense Secretary Pete Hegseth demanded the AI company Anthropic allow its models to be used for the mass surveillance of Americans and the development of weapons that fire without human involvement in a meeting Tuesday with the company's CEO, Axios reported. Anthropic's Claude model is currently the only AI approved for use in the military's classified systems, but Hegseth has suggested the government could either declare the company a supply chain risk, essentially cutting the company off from work with the Pentagon and its numerous contractors, or try to use the Defense Production Act to force the company to produce a model fitting the Pentagon's demands. He gave Anthropic CEO Dario Amodei until Friday to comply. "During the conversation, Dario expressed appreciation for the Department's work and thanked the Secretary for his service," an Anthropic spokesperson told Axios. "We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government's national security mission in line with what our models can reliably and responsibly do." The Pentagon's threats to cut out Anthropic entirely are seen as something of a bluff -- there is no readily available replacement for the product, though Elon Musk's xAI recently signed a contract to try to bring Grok into classified settings, and both OpenAI and Google could also adapt their models for classified use.
[129]
US Pentagon Pressures Anthropic To Lift AI Guardrails
The US Pentagon's ultimatum to Anthropic is the first clear stress test of whether voluntary AI safety commitments can withstand state procurement power. The dispute as of now tests whether Anthropic's commitments hold when a government customer demands unrestricted access. On February 24, 2026, Anthropic released Version 3.0 of its Responsible Scaling Policy (RSP), removing its earlier commitment to pause training if model capabilities outpaced safety controls and replacing it with a more flexible framework of "public goals". The update went live the same day the Pentagon issued its ultimatum. Although a source told CNN the revision was unrelated to the dispute, the change narrows Anthropic's strongest self-imposed safety constraint amid rising competitive and geopolitical pressure. The US government has not introduced new AI legislation in this case. Instead, the Pentagon has relied on procurement authority and national security law, issuing a deadline, referencing the Defense Production Act and initiating contractor outreach that could precede a supply chain risk designation. This approach matters because it demonstrates that governments can reshape AI deployment standards through buyer leverage rather than formal regulatory processes. When the state is the customer, contract terms become the real governance mechanism. India is developing AI governance under the Digital India Act and expanding its compute capacity. Defence and intelligence agencies will eventually procure advanced AI systems under contractual frameworks that define permissible use. If those contracts do not encode enforceable safeguards, vendor-managed commitments may not survive operational pressure. Voluntary safety frameworks can shape public discourse, but in defence contexts, enforceable procurement standards ultimately determine deployment boundaries. The United States (US) Department of Defense has given Anthropic until 5:01 PM Eastern Time on Friday, February 27, to remove safety restrictions on military use of its Claude AI model, according to CNN. US Defense Secretary Pete Hegseth delivered the demand directly to Anthropic CEO Dario Amodei during a February 24 meeting at the Pentagon. The Pentagon wants Claude available for all "lawful" purposes without company-imposed limits. However, Anthropic has refused to drop two restrictions: it will not allow Claude to support mass surveillance of American citizens, and it will not allow Claude to make final lethal targeting decisions sans human oversight. Moreover, the Pentagon has begun contacting major defence contractors, including Lockheed Martin and Boeing, to assess their reliance on Anthropic systems, according to Reuters. Officials are reportedly considering invoking the Defence Production Act and pursuing a supply chain risk designation: two legal mechanisms that are explained in detail below. Anthropic currently operates inside classified Pentagon networks through its partnership with Palantir, announced in November 2024, making it the only commercial AI company inside those systems. Anthropic was the only AI company inside classified systems, which gave it real leverage as the Pentagon needed it specifically. However, that leverage is now eroding. On Monday, xAI signed a deal to move Grok into classified Pentagon networks under "any lawful use" terms, one day before the Hegseth-Amodei meeting. At the same time, the Pentagon is also in negotiations with OpenAI and Google for classified access on similar terms. 
Importantly, Google's Gemini and xAI's Grok agreed to the "all lawful use" clause without conditions. OpenAI agreed to the same clause but offers the standard version of ChatGPT that civilian users access, meaning some guardrails remain in place. Pentagon spokesman Sean Parnell framed the expectation bluntly: "Our nation requires that our partners be willing to help our warfighters win in any fight." Even so, the Pentagon is not entirely indifferent to vendor performance. A senior Pentagon official acknowledged the bind while referring to Claude: "The only reason we're still talking to these people is we need them and we need them now. The problem for these guys is they are that good." Furthermore, Nvidia CEO Jensen Huang, speaking to CNBC on Wednesday, said the dispute is not the end of the world. He remarked: "I hope that they can work it out, but if it doesn't get worked out, it's also not the end of the world." His comments indicate that major AI infrastructure providers view substitution as feasible if negotiations fail, reinforcing the Pentagon's leverage. The conflict intensified after a January 3, 2026, US military operation in Caracas, Venezuela, in which US special operations forces captured Venezuela's then-head of state Nicolás Maduro. Reports subsequently confirmed that military personnel used Claude during the mission through the Palantir partnership. That deployment had deeper roots. In late 2024, Palantir integrated Claude into classified Pentagon systems at Impact Level 6, the security tier for data up to "secret" level, making Anthropic the first commercial AI company to operate inside those networks. At the time, Anthropic's head of sales Kate Earle Jensen said the company was "proud to be at the forefront of bringing responsible AI solutions to US classified environments". Claude was therefore already embedded in sensitive defence workflows when the Caracas operation took place, though neither the Pentagon nor Anthropic has publicly detailed the model's exact role. Following the Venezuela raid, internal discussions strained relations between Anthropic and defence officials. Pentagon officials interpreted questions raised by Anthropic personnel as signalling discomfort with how Claude had been deployed. Anthropic, however, denied that it sought to challenge or block any specific military mission, with Amodei telling Hegseth directly that the company never broached the topic with Palantir beyond standard operating conversations. By mid-February 2026, Undersecretary of Defense Emil Michael publicly stated that negotiations had stalled, with the disagreement crystallising around a central issue: whether Anthropic could continue enforcing usage restrictions once Claude operated inside national security systems. 1. Full operational flexibility: Hegseth conveyed that once the Department of Defense deploys a system inside classified networks, it expects full operational flexibility and assumes full legal responsibility for how that technology is deployed. Officials ground this in procurement authority, arguing that vendors should not retain the ability to restrict lawful military applications. "You can't lead tactical ops by exception," a Pentagon official told CNN. "Legality is the Pentagon's responsibility as the end user," they added.
2. No company involvement in crisis scenarios: Furthermore, reporting indicates that in discussions that took place in late 2025, Pentagon representatives questioned whether Anthropic's guardrails could create delays in crisis scenarios, and objected to any arrangement requiring company involvement to lift restrictions during urgent operations. Officials are reportedly considering two mechanisms. 1. The Defense Production Act: The Defense Production Act, enacted in 1950, allows the US government to require domestic industries to prioritise contracts deemed necessary for national defence. US Presidents have used it to direct industrial production during COVID-19 and manage supply chains during wartime, though the government has never previously used it to compel access to a commercial AI model. A Pentagon official told NBC News that if Anthropic does not comply, "the Secretary of War will ensure the Defense Production Act is invoked on Anthropic, compelling them to be used by the Pentagon." 2. The supply chain risk designation: A supply chain risk designation allows federal authorities to restrict companies from government contracting if they pose national security concerns. Authorities have typically applied this to foreign firms like Huawei and ZTE, and applying it to a domestic AI company would mark a significant expansion of the tool's use. A former DOJ-Pentagon liaison noted the legal contradiction built into the threat: the Pentagon cannot simultaneously declare Anthropic a supply chain risk and compel it to work with the military. 1. Claude is not reliable enough for autonomous lethal decisions: Anthropic argues that Claude does not meet the reliability threshold required for autonomous lethal decision-making. Large language models (LLMs) generate probabilistic outputs and can hallucinate or produce inconsistent reasoning under pressure, which is why the company refuses to allow Claude to make final targeting decisions without human oversight. Amodei has described both uses as "illegitimate" and "prone to abuse", adding in a January essay: "My main fear is having too small a number of fingers on the button, such that one or a handful of people could essentially operate a drone army without needing any other humans to cooperate." 2. Mass surveillance is a hard limit: Anthropic also refuses to enable mass surveillance of American citizens, even if the government classifies such use as lawful. An Anthropic spokesperson said the company's conversations with the government "focused on a specific set of usage policy questions, including hard limits around fully autonomous weapons and mass domestic surveillance, none of which related to current operations." 3. Anthropic has not rejected defence collaboration entirely: Notably, in December 2025, Anthropic agreed to allow Claude to support missile defence and cyber defence use cases. The current dispute centres specifically on whether any company-imposed restrictions can remain once the model operates inside national security systems. Anthropic introduced its Responsible Scaling Policy in 2023 as a framework linking model capability thresholds to escalating safety requirements. Under the earlier version, the company stipulated that it should pause training more powerful models if their capabilities outstripped its ability to control them and ensure their safety. Version 3.0, released on February 24, 2026, removes that commitment.
In its updated policy, Anthropic replaces fixed guardrails with a more flexible framework, including a "Frontier Safety Roadmap", and describes certain safeguards as "public goals" rather than hard commitments. The company acknowledged that the new framework is more adaptable than its prior policy. In its announcement, Anthropic cited broader political and market shifts, writing: "The policy environment has shifted toward prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level." A source familiar with the matter told CNN that the policy revision was separate from and unrelated to the company's dispute with the Pentagon. Even so, the removal of the earlier pause commitment reduces the rigidity of Anthropic's original safety escalation mechanism at a time when competitive and geopolitical pressure is intensifying.
[130]
Hegseth demands full military access to Anthropic's AI model Claude and sets deadline for end of week
Trust is breaking down between the Pentagon and Anthropic over the use of its AI model, sources familiar with the situation told CBS News. In a meeting at the Pentagon Tuesday morning, Defense Secretary Pete Hegseth gave Anthropic's CEO Dario Amodei until the end of this week to give the military a signed document that would grant full access to its artificial intelligence model, the sources said. Officials are considering invoking the Defense Production Act to make Anthropic adhere to what the military is seeking, they said. Axios reported earlier on some of what transpired in the meeting. Defense officials want full control of Anthropic's AI technology for use in its military operations, sources told CBS News. The company was awarded a $200 million contract by the Pentagon in July to develop AI capabilities that would advance U.S. national security. Anthropic has repeatedly asked the Defense Department to agree to guardrails that would restrict the AI model, called Claude, from conducting mass surveillance of Americans, sources said. Defense officials noted that that's illegal and said the military is simply asking for a license to use the AI strictly for lawful activities. Amodei also wants to ensure Claude is not used by the Pentagon for final targeting decisions in military operations without any human involvement, one source familiar with the meeting said. Claude is not immune from hallucinations and not reliable enough to avoid potentially lethal mistakes, like unintended escalation or mission failure without human judgment, the person said. But a senior Pentagon official said: "This has nothing to do with mass surveillance and autonomous weapons being used. The Pentagon has only given out lawful orders." The official said Grok, which is owned by Elon Musk's xAI, is on board with being used in a classified setting, and other AI companies are close. In Tuesday's meeting, Hegseth told Amodei that when the government purchases Boeing planes, the aerospace company has no say in how the Pentagon uses the planes. He argued the same should be true for the military's use of Claude. After Amodei left, officials discussed whether to use the Defense Production Act in this situation, which enables the government to exert control over domestic industries. But because officials say they aren't sure the government can trust Anthropic at this point, the Pentagon may decide to officially designate the company as a "supply chain risk" to push them out of government, two sources said. Anthropic was the first tech company authorized to work on the military's classified networks. An Anthropic spokesperson said in a statement, "We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government's national security mission in line with what our models can reliably and responsibly do."
[131]
Anthropic says 'virtually no progress' on Pentagon AI talks as deadline looms
Anthropic said Thursday that "virtually no progress" had been made in the company's talks with the Pentagon over the terms of use for its AI models ahead of a Friday afternoon deadline. The Defense Department (DOD) delivered its last and final offer to the company on Wednesday night, asking the AI firm to allow the department to access Claude for "all lawful purposes." It is unclear what changes the Pentagon has proposed as part of its latest offer to the company. "The contract language we received overnight from the Department of War made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons," an Anthropic spokesperson told The Hill in a statement on Thursday. "New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will," they added. "Despite DOW's recent public statements, these narrow safeguards have been the crux of our negotiations for months." The Pentagon has threatened to cancel Anthropic's contract if it does not agree to the department's terms by Friday afternoon. The AI firm was one of several companies that received a $200 million contract with the DOD last summer. Anthropic's usage policy bars its AI model from being used for mass surveillance or lethal autonomous weapons. These two issues have been the company's red lines in its weeks-long negotiations with the Pentagon. Anthropic CEO Dario Amodei accused the DOD of "inherently contradictory threats" in negotiations with the AI giant. "Regardless, these threats do not change our position: we cannot in good conscience accede to their request," Amodei said in a lengthy statement. "It is the Department's prerogative to select contractors most aligned with their vision," he added. "But given the substantial value that Anthropic's technology provides to our armed forces, we hope they reconsider. Our strong preference is to continue to serve the Department and our warfighters -- with our two requested safeguards in place." The Pentagon said earlier on Thursday that it has "no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement." "Here's what we're asking: Allow the Pentagon to use Anthropic's model for all lawful purposes," Sean Parnell, chief Pentagon spokesperson, wrote in a post on the social platform X. "This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk. We will not let ANY company dictate the terms regarding how we make operational decisions."
[132]
Anthropic Digs in Heels in Dispute With Pentagon, Source Says
NEW YORK, Feb 24 (Reuters) - Artificial intelligence lab Anthropic has no intention of easing its usage restrictions for military purposes, a person familiar with the matter said on Tuesday, following a meeting to discuss its future with the Pentagon. The meeting between Anthropic CEO Dario Amodei and U.S. Defense Secretary Pete Hegseth was scheduled to hash out a months-long dispute between the two sides. The AI startup has refused to remove safeguards that would prevent its technology from being used to target weapons autonomously and conduct U.S. domestic surveillance. Pentagon officials have argued the government should only be required to comply with U.S. law. During the meeting, Hegseth delivered an ultimatum to Anthropic: be deemed a supply-chain risk or the government would invoke a law that would force Anthropic to change its rules, the person familiar said. The government gave Anthropic until Friday to respond. The Pentagon did not immediately respond to a comment request. (Reporting by David Jeans in New York; writing by Deepa Seetharaman in San Francisco; editing by Kenneth Li and Nick Zieminski)
[133]
Dario Amodei Is A 'Liar' With God Complex Says Trump Official As Anthropic Refuses To Comply With Pentagon's Request For Unrestricted AI
On Thursday, Anthropic CEO Dario Amodei said that the AI startup "cannot in good conscience accede" to new Defense Department contract language that would permit unrestricted military use of its AI system, Claude.

AI Contract Clash With The Pentagon
In a blog post, the San Francisco-based startup said updated terms from the U.S. Department of Defense made virtually no progress in blocking the model's potential use for mass surveillance of Americans or fully autonomous weapons. Anthropic said the Department of War will only work with AI companies that agree to "any lawful use" and remove safeguards on surveillance and autonomous weapons. It said the agency has threatened to cut it off, label it a "supply chain risk" and use the Defense Production Act to force the changes. "Regardless, these threats do not change our position: we cannot in good conscience accede to their request," Amodei stated.

Pentagon Issues Ultimatum
Defense Secretary Pete Hegseth reportedly gave Anthropic until Friday to open its AI models for unrestricted use or risk losing its government contract. Pentagon spokesman Sean Parnell took to X and said the military "has no interest in using AI to conduct mass surveillance of Americans (which is illegal)" nor to develop weapons operating without human involvement. He added that the department wants to use Anthropic's model "for all lawful purposes." Under Secretary of War Emil Michael also took to X and called Amodei a "liar." "The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company," he wrote.
[134]
Anthropic refuses to bend to Pentagon on AI safeguards as dispute nears deadline
A public showdown between the Trump administration and Anthropic is hitting an impasse as military officials demand the artificial intelligence company bend its ethical policies by Friday or risk damaging its business. Anthropic CEO Dario Amodei drew a sharp red line 24 hours before the deadline, declaring his company "cannot in good conscience accede" to the Pentagon's final demand to allow unrestricted use of its technology. Anthropic, maker of the chatbot Claude, can afford to lose a defense contract. But the ultimatum this week from Defense Secretary Pete Hegseth posed broader risks at the peak of the company's meteoric rise from a little-known computer science research lab in San Francisco to one of the world's most valuable startups. If Amodei doesn't budge, military officials have warned they will not just pull Anthropic's contract but also "deem them a supply chain risk," a designation typically stamped on foreign adversaries that could derail the company's critical partnerships with other businesses. And if Amodei were to cave, he could lose trust in the booming AI industry, particularly from top talent drawn to the company for its promises of responsibly building better-than-human AI that, without safeguards, could pose catastrophic risks. Anthropic said it sought narrow assurances from the Pentagon that Claude won't be used for mass surveillance of Americans or in fully autonomous weapons. But after months of private talks exploded into public debate, it said in a Thursday statement that new contract language "framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will." That was after Sean Parnell, the Pentagon's top spokesman, posted on social media that "we will not let ANY company dictate the terms regarding how we make operational decisions" and added the company has "until 5:01 p.m. ET on Friday to decide" if it would meet the demands or face consequences. Emil Michael, the defense undersecretary for research and engineering, later lashed out at Amodei, alleging on X that he "has a God-complex" and "wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk." That message hasn't resonated in much of Silicon Valley, where a growing number of tech workers from Anthropic's top rivals, OpenAI and Google, voiced support for Amodei's stand late Thursday in an open letter. OpenAI and Google, along with Elon Musk's xAI, also have contracts to supply their AI models to the military. "The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused," the open letter says. "They're trying to divide each company with fear that the other will give in." Also raising concerns about the Pentagon's approach were Republican and Democratic lawmakers and a former leader of the Defense Department's AI initiatives. "Painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end," wrote retired Air Force Gen. Jack Shanahan in a social media post. Shanahan faced a different wave of tech worker opposition during the first Trump administration when he led Maven, a project to use AI technology to analyze drone footage and target weapons. So many Google employees protested its participation in Project Maven at the time that the tech giant declined to renew the contract and then pledged not to use AI in weaponry. 
"Since I was square in the middle of Project Maven & Google, it's reasonable to assume I would take the Pentagon's side here," Shanahan wrote Thursday on social media. "Yet I'm sympathetic to Anthropic's position. More so than I was to Google's in 2018." He said Claude is already being widely used across the government, including in classified settings, and Anthropic's red lines are "reasonable." He said the AI large language models that power chatbots like Claude are also "not ready for prime time in national security settings," particularly not for fully autonomous weapons. "They're not trying to play cute here," he wrote. Parnell asserted Thursday that the Pentagon wants to " use Anthropic's model for all lawful purposes" and said opening up use of the technology would prevent the company from "jeopardizing critical military operations," though neither he nor other officials have detailed how they want to use the technology. The military "has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement," Parnell wrote. When Hegseth and Amodei met Tuesday, military officials warned that they could designate Anthropic as a supply chain risk, cancel its contract or invoke a Cold War-era law called the Defense Production Act to give the military more sweeping authority to use its products, even if the company doesn't approve. Amodei said Thursday that "those latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security." He said he hopes the Pentagon will reconsider given Claude's value to the military, but, if not, Anthropic "will work to enable a smooth transition to another provider."
[135]
Pentagon takes key step toward blacklisting Anthropic as Friday...
The Pentagon has reportedly asked Boeing and Lockheed Martin to detail their reliance on Anthropic's Claude chatbot ahead of a Friday deadline for the AI firm to either relax its safeguards or face blacklisting. The request, first reported by Axios, came after the Pentagon threatened last week to declare Anthropic a "supply chain risk" - a rare rebuke generally reserved for foreign firms like China's Huawei. It would cancel existing contracts and force other defense contractors to stop doing business with Anthropic. Boeing said it has no active contracts with Anthropic, while Lockheed confirmed it was contacted by the Pentagon and declined further comment, according to the report. As The Post reported, Defense Secretary Pete Hegseth warned Anthropic boss Dario Amodei at a tense meeting earlier this week that he has until Friday at 5:01 pm ET to remove restrictions on how the US military can use its chatbot. Hegseth said if that doesn't happen, the Pentagon could use the Defense Production Act to effectively force Anthropic to tailor Claude for its use. Some critics have pointed out that the "supply chain risk" designation and a potential use of the DPA could be seen as contradictory. Representatives for the Pentagon and Anthropic did not immediately return requests for comment. Amodei reiterated that Anthropic would not support the use of its technology to enable mass surveillance of Americans or to power weapons that can fire without human oversight. He insisted the company's red lines have never impacted a military operation. The Tuesday meeting between Amodei and Hegseth was described as cordial but tense, with the defense secretary praising Claude's capabilities even as he delivered the ultimatum. Claude is the only chatbot currently used by the US military in classified situations. However, a senior Defense official previously told The Post that Elon Musk's Grok AI model recently received clearance, and chatbots from other major companies are close -- giving the military a plausible alternative to Claude. Earlier this week, an Anthropic spokesperson said the firm had "continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government's national security mission in line with what our models can reliably and responsibly do." In July, the Pentagon awarded Anthropic a $200 million contract that included an agreement to "prototype frontier AI capabilities that advance US national security." Tensions between Anthropic and the Trump administration have been on the rise for months. The Post first reported in November that the company's ties to the cult-like Effective Altruism movement and Democratic megadonors like LinkedIn cofounder Reid Hoffman were on the White House's radar.
[136]
Pentagon Summons Anthropic Chief in Dispute Over A.I. Limits
Julian Barnes reported from Washington, and Sheera Frenkel from San Francisco. Amid intense pressure from the Trump administration, Pentagon officials have summoned the chief executive of the artificial intelligence company Anthropic to Washington for a meeting on Tuesday to discuss how its technology is used on classified systems. The Defense Department and Anthropic agreed to a $200 million pilot contract last year. But a Jan. 9 memo by Defense Secretary Pete Hegseth calling on A.I. companies to remove restrictions on their technology led the two sides to renegotiate their contract. The Pentagon has signed an agreement with one company, Elon Musk's xAI, and is getting close to making a deal with Google, which makes the Gemini model, according to people briefed on the discussions. Defense Department officials hope to use those agreements to pressure Anthropic to allow its model to be used more broadly, they said. Google and xAI did not immediately respond to a request for comment. A Defense Department official declined to comment on any future announcements but confirmed that Mr. Hegseth would meet with Dario Amodei, the Anthropic chief, at the Pentagon. Anthropic, the official said, will be asked to agree to the same guardrails that the Defense Department is negotiating with the other artificial intelligence companies. In those negotiations, the Pentagon has said the contracts must allow the department to use the models as it sees fit, as long as those activities are lawful. But the department is allowing the companies to build safety provisions into their models, which the companies call "the safety stack." Anthropic was the first company authorized to work on the military's classified networks. The company said that it was willing to loosen its restrictions but has demanded that guardrails are put in place that stop its A.I. from being used for mass surveillance of Americans or deployed in autonomous weapons that had no humans in the loop, people involved in the discussions said. People close to Anthropic have argued that the company has taken more care than its rivals to keep its technologies out of the hands of Chinese companies. In November, the start-up said that it had banned a Chinese-state-sponsored group that was using its technologies in a hacking campaign that targeted large tech companies, financial institutions, chemical manufacturing companies and government agencies. But earlier last year, OpenAI, another leading artificial intelligence company whose model is used on unclassified military networks, said it had discovered and worked to stop two different Chinese campaigns to use A.I. for surveillance. (The New York Times has filed a lawsuit against OpenAI and Microsoft, its partner.) Ahead of the meeting with Mr. Hegseth on Monday, Anthropic published a blog post stating that three Chinese A.I. companies siphoned information from Anthropic to try to improve their own A.I. models. Pentagon officials have acknowledged that removing Anthropic from the classified system would cause short-term disruptions. And experts say that military service members regularly use Anthropic along with technology from Palantir, a data analytics company, to analyze classified data, and that cutting the military off from Anthropic's A.I. chatbot, Claude, would be counterproductive. Pentagon officials hope that the pending deals will give them some leverage.
The xAI model is not considered as advanced or as reliable as Anthropic's, while Google's Gemini is considered a rival to Anthropic and OpenAI. People briefed on the talks say Google is eager to strike a deal. The company has spent heavily on data centers to be used exclusively by the government, but that computing capacity so far has been underused. Officials from Google and xAI did not immediately respond to a request for comment. OpenAI is not close to a deal. People briefed on the negotiations say that the company believes it needs to continue to work on its safety technology before its model is used on classified networks. The meeting between Mr. Hegseth and Mr. Amodei was reported earlier by Axios. Cade Metz contributed reporting from San Francisco.
[137]
Scoop: Hegseth to meet Anthropic CEO as Pentagon threatens banishment
Why it matters: Claude is the only AI model available in the military's classified systems, and the most capable model for sensitive defense and intelligence work. The Pentagon doesn't want to lose access to Claude, but is furious with Anthropic for refusing to lift its safeguards entirely. State of play: The two sides are heading into the meeting on two totally different pages. * An Anthropic spokesperson said: "We are having productive conversations, in good faith." * Defense officials say negotiations have shown no progress and are on the verge of breaking down. Anthropic is willing to loosen its existing usage restrictions, but wants to wall off two areas: the mass surveillance of Americans, and the development of weapons that fire without human involvement. * The company "is committed to using frontier AI in support of US national security," the spokesperson said. * The Pentagon says it's unduly restrictive to have to clear individual uses with the company, and has demanded that all AI labs make their models available for "all lawful uses." Friction point: The Pentagon has threatened to declare Anthropic a "supply chain risk" -- not only voiding its contracts, but forcing other companies that work with the Pentagon to certify they aren't using Claude in those workflows. * The Pentagon is discussing other potential tools to force Anthropic's hand. A Defense official said Hegseth would effectively be presenting Amodei with an ultimatum. * It would be a massive task to offboard Anthropic, which is deeply entrenched, and replace it with another AI lab that currently has inferior capabilities. Setting the scene: Amodei has been very vocal about the risks of AI-gone-wrong, and has positioned his company as the safety-first AI leader. * Officials have described a culture clash between Hegseth's brash Pentagon and the Silicon Valley firm. * The senior Pentagon official said: "The problem with Dario is, with him, it's ideological. We know who we're dealing with." Reality check: Beyond the personalities that will sit across from each other on Tuesday, there are deeper questions about the role AI can and should play in national security. * Anthropic isn't alone in worrying that U.S. law hasn't caught up to the way that AI can supercharge surveillance, or in worrying about where entrusting AI to power weapons systems might lead. Flashback: The use of Claude in the Maduro raid in January escalated the feud between the Pentagon and Anthropic. In the room: Leading the meeting from the Pentagon side will be Hegseth, Deputy Secretary Steve Feinberg and Under Secretary for Research and Engineering Emil Michael, who has been leading the negotiations with Anthropic and three other AI model-makers. * Anthropic declined to name its delegation.
[138]
Anthropic Holds Firm on Military AI Restrictions After Pentagon Meeting - Reuters By Investing.com
Investing.com -- Artificial intelligence lab Anthropic will not ease its usage restrictions for military purposes, according to Reuters, citing a person familiar with the matter, following a meeting between the company and the Pentagon. Anthropic CEO Dario Amodei met with U.S. Defense Secretary Pete Hegseth to discuss a months-long dispute between the two sides. The AI startup has refused to remove safeguards that would prevent its technology from being used to target weapons autonomously and conduct U.S. domestic surveillance. Pentagon officials have argued the government should only be required to comply with U.S. law. During the meeting, Hegseth delivered an ultimatum to Anthropic: be deemed a supply-chain risk or the government would invoke a law that would force Anthropic to change its rules, the person familiar said. The government gave Anthropic until Friday to respond.
[139]
Hegseth issues an ultimatum to 'woke AI' Anthropic: Get with military program by Friday or lose $200 million
With just days left before Hegseth's reported deadline for Anthropic to drop its seemingly woke demands for AI safety and guarantees of non-military use, Anthropic told Fortune that it "continued good-faith conversations" with the Pentagon. Anthropic is facing a deadline of 5:01 p.m. Friday to give the Pentagon unrestricted access to its AI technology or be blacklisted from the military supply chain, Axios reported, as confirmed by the Associated Press. The standoff follows months of negotiations between the Defense Department and Anthropic over how the military can use the company's AI. As Axios reported, Hegseth warned Anthropic the Pentagon could label the company a "supply chain risk," a designation reserved for foreign adversarial firms such as China's Huawei, if the company doesn't comply, forcing military contractors to cut ties with Anthropic. He also threatened to invoke the Defense Production Act, a law the Trump administration used during the COVID pandemic to encourage companies to expand production of medical supplies, a threat Hegseth reportedly reiterated Tuesday. An Anthropic spokesperson added that the company would support the government's functions in line with the company's principles for responsible AI, saying it will "continue to support the government's national security mission in line with what our models can reliably and responsibly do." The conflict has developed into a proxy war amid a broader debate over who gets to set the terms on AI use: tech companies or the U.S. government. The Pentagon last year awarded Anthropic, along with Google, OpenAI, and xAI, contracts worth up to $200 million. Until recently, Anthropic was the only AI company cleared for use by the Pentagon, yet it has taken a hard-line stance against certain military applications of its AI, prohibiting its use in fully autonomous weapons and domestic surveillance. But Elon Musk's xAI this week reached a deal to let the Pentagon use its AI for classified systems, adding competition to Anthropic's once-exclusive partnership. The Pentagon reportedly used Anthropic's AI model Claude through Anthropic's partnership with Palantir during the U.S. raid in Venezuela, which culminated in the capture of former Venezuelan President Nicolás Maduro. Anthropic then reached out to Palantir, asking how the company's AI was used during the operation, which Palantir subsequently flagged to the Pentagon, according to The Hill. But Anthropic is slowly unraveling its strict commitment to safety. The AI company on Tuesday released an updated version of its "Responsible Scaling Policy" (RSP), originally published in September 2023, saying the revision is a response to changes in the market environment and a bid to remain competitive. "The policy environment has shifted toward prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level," the Anthropic announcement read. Amodei has suggested a potential loosening of safety commitments, saying in an interview with podcast host Dwarkesh Patel that the company faces "commercial pressure" and that its strict safety measures have limited its ability to compete with rivals operating under less stringent rules. In an exclusive interview with Time, Jared Kaplan, Anthropic's chief science officer, said the changes to the RSP were made out of a concern for safety rather than out of competition fears.
"We felt that it wouldn't actually help anyone for us to stop training AI models," Kaplan said. "We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments ... if competitors are blazing ahead."
[140]
Anthropic CEO rejects Pentagon demand for unrestricted model access, says threats won't sway it
Anthropic on Thursday said that the company "cannot in good conscience" allow the Department of Defense to use its models in all lawful use cases without limitation, adding that the agency's threats do not change its position. The company opposes unrestricted use, citing concerns over reliability and ethical risks, especially regarding fully autonomous weapons. The Department of Defense has pressured Anthropic, threatening to label it a supply chain risk or invoke the Defense Production Act, and delivered a final offer with a deadline to accept its terms. Anthropic has a $200 million contract to deploy its AI models on classified networks, and is in ongoing, tense negotiations with the Pentagon over permissible use cases.
[141]
Anthropic cannot accede to Pentagon's request in AI safeguards dispute, CEO says
AI firm Anthropic refuses the Pentagon's demand to remove safeguards preventing autonomous weapons targeting and domestic surveillance, despite threats of removal from defense systems. CEO Dario Amodei stated the company cannot in good conscience agree to such uses, even with a $200 million contract at stake. Anthropic hopes the Pentagon reconsiders its position. Anthropic cannot accede to the Pentagon's request in an AI safeguards dispute despite threats to remove the company from the Department of Defense's systems, the AI firm's CEO, Dario Amodei, said on Thursday. The Pentagon's dispute with Anthropic stems from the AI startup's refusal to remove safeguards that would prevent its technology from being used to target weapons autonomously and conduct surveillance in the United States. Anthropic, backed by Google and Amazon, has a contract with the department worth up to $200 million. The department has said it will contract only with AI companies that accede to "any lawful use" and remove safeguards, Amodei said on Thursday. Use cases for its AI such as mass domestic surveillance and fully autonomous weapons have never been included in Anthropic's contracts with the department and "we believe they should not be included now," Amodei said. Amodei added that the department threatened to remove Anthropic from its systems if the company maintained the safeguards and threatened to designate it a "supply chain risk" and to invoke the Defense Production Act to force the safeguards' removal. "Regardless, these threats do not change our position: we cannot in good conscience accede to their request," Amodei said. Earlier in the day, Pentagon spokesperson Sean Parnell said on X that the department has no interest in using AI to conduct mass surveillance of Americans nor does it want to use AI to develop autonomous weapons that operate without human involvement. "Here's what we're asking: Allow the Pentagon to use Anthropic's model for all lawful purposes," Parnell said. The Pentagon did not immediately respond to a request for comment on Anthropic's statement. "It is the Department's prerogative to select contractors most aligned with their vision. But given the substantial value that Anthropic's technology provides to our armed forces, we hope they reconsider," Amodei said. "Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider," he added. An Anthropic spokesperson said the company remains "ready to continue talks and committed to operational continuity for the Department and America's warfighters."
[142]
Hegseth To Meet With Anthropic CEO Over AI Safety Restrictions: Report
Defense Secretary Pete Hegseth reportedly plans to meet with Anthropic's CEO this week to renegotiate the military's use of the artificial intelligence company's technology amid ongoing disputes over safety concerns. CEO Dario Amodei will meet with Hegseth at the Pentagon on Tuesday for what is expected to be a tense talk on lifting its AI program's safeguards, Axios reported. "Anthropic knows this is not a get-to-know-you meeting," a senior Defense official told Axios. "This is not a friendly meeting. This is a sh*t-or-get-off-the-pot meeting." The Department of Defense is considering cutting ties with the AI company if it does not lift restrictions on how the U.S. military uses its technology, including the AI chatbot Claude. Anthropic recently told defense officials that it does not want its AI used for mass surveillance of Americans or deployed in autonomous weapons that don't involve human decision-making. This drew fury from Hegseth and others in the Pentagon, The New York Times reported last week.
[143]
Anthropic's Pentagon Showdown Is About More Than AI Guardrails
As the Pentagon was pressing Anthropic PBC to drop the guardrails on its powerful artificial intelligence tools, a senior US defense official posed a hypothetical scenario to the company's safety-conscious chief executive officer, Dario Amodei. What if a nuclear-armed intercontinental ballistic missile were hurtling towards the US with only 90 seconds to spare, and Anthropic's AI were the only way to trigger a missile response to save the country, but the company's safeguards wouldn't allow it, the senior official mused in a December phone call. "Call me," was how Pentagon officials interpreted Amodei's answer, according to another senior defense official briefed on the discussion, who described being astounded by the billionaire's response. The prospect of having to track down both President Donald Trump and his briefcase of nuclear codes -- and a man at the helm of a privately owned $380 billion company that sits at the vanguard of AI -- represents an unthinkable change in nuclear strategy. It also highlights the growing global debate over where to set the boundaries on the use of a technology that's still nascent and remains error-prone. An Anthropic spokesperson rejected the Pentagon official's description of the December call, reported earlier by Semafor, as "patently false. Dario didn't say this, and every iteration of our proposed contract language would enable our models to support missile defense and similar uses," the spokesperson said. During that call, Anthropic conceded the Defense Department could use its AI tools for missile defense and cyber operations, according to a person familiar with the matter. Unsatisfied, the Pentagon continues to pressure Anthropic to further loosen its usage rules in a disagreement that has stretched on for at least two months. At a tense meeting Tuesday between Amodei, who supported Kamala Harris during the 2024 presidential campaign, and Pete Hegseth, Trump's hard-nosed Pentagon chief who has vowed a war on all things woke, the defense secretary threatened to invoke a 1950 law passed to force America's industrial base to supply goods such as metals and machine tools during the Korean War unless the company gives the Pentagon carte blanche to do as it pleases with its AI tools, within lawful limits. Their confrontation has exposed the Defense Department's reliance on Anthropic in a head-to-head military rivalry with US adversaries including China. Yet the battle also amplifies the tension between Silicon Valley and the Pentagon over who controls the future of AI as a tool of war and surveillance, including whether the rapidly evolving technology can be used in a lawful manner. "The constitutional protections in our military structures depend on the idea that there are humans who would -- we hope -- disobey illegal orders. With fully autonomous weapons, we don't necessarily have those protections," Amodei told a New York Times podcast earlier this month, voicing worries that there's insufficient oversight of how AI could be used in autonomous drone swarms. "We need, in some ways, to be protected against AI." Amodei and his team have shown little intention of backing down and insist they're on the right side of history. Only last month, Amodei was sounding dire warnings about the threat posed by fully autonomous weapons and the risks that AI might end up spying on the very people it's meant to protect. 
Hegseth is demanding that Anthropic drop those two tenets to allow the Pentagon to deploy military AI unencumbered by the company's safety requirements. If Anthropic refuses to yield by Friday, Hegseth warned Amodei he will use the Defense Production Act to compel Anthropic to provide its AI tools with no strings attached, according to people familiar with the matter. A failure to comply could lead the Pentagon to declare the company a supply-chain risk. That would require vendors such as Palantir Technologies Inc., which uses Anthropic in its battle management platform called Maven Smart System, to certify that they don't use the company's models. The move could deal a devastating blow to Anthropic's efforts to win government business. Such harsh actions are usually reserved for companies regarded as US adversaries, such as China's Huawei Technologies Co. and Russia's Kaspersky, says Alan Rozenshtein, associate professor of law at the University of Minnesota Law School who writes on AI and other military topics. Threats from the Pentagon to pull Anthropic out of the supply chain are "completely inappropriate," he said in an interview, adding it is also "deeply idiotic" given Anthropic is among the greatest hopes for America staying ahead in the AI race. Even so, Rozenshtein conceded the Pentagon also has a point: defense officials don't want to be dictated to about how to handle potentially life-and-death battle scenarios, or depend on commercial software subject to the whims of idealistic billionaire owners. On Wednesday, Axios reported that the Pentagon had asked contractors Boeing Co. and Lockheed Martin Corp. to provide assessments of their reliance on Anthropic products, a potential first step toward labeling the company a supply-chain risk. The dispute is taking shape as militaries around the world race to find ways to adopt artificial intelligence. It lies at the heart of not only how AI will be sent into war but also whether big companies can fend off the demands of a prickly MAGA administration, and still manage to grow their government and commercial business.

'Utopian Idealism'
Anthropic's most recent troubles escalated when the Defense Department started to prepare its new military AI strategy, released last month. Although the Pentagon has sought to develop and deploy artificial intelligence for 60 years, the Trump administration is now seeking to accelerate its adoption for everything from campaign planning to so-called "kill chains," according to the new effort. The Pentagon's new AI strategy also promises to boot out "Utopian idealism" when it comes to so-called responsible AI. Some defense industry executives read that as a thinly veiled warning to Anthropic. Amodei, who co-founded the company after breaking away from OpenAI over safety concerns, has crafted his business based on principles he has expounded on in long public essays. "If we want AI to favor democracy and individual rights, we are going to have to fight for that outcome," he wrote in October 2024. He has argued that a coalition of democracies should use AI to achieve "robust military superiority," warning that AI also seems likely to enable much better propaganda and surveillance, which he described as "both major tools in the autocrat's toolkit." As recently as January, Amodei decried the dangers of "fully autonomous weapons," conjuring an image of millions of armed drones controlled by AI and asserting that certain uses "should be considered crimes against humanity."
He also warned against AI surveillance and propaganda by authoritarian governments including China, while arguing there's a risk of democracies armed with the technology "turning on us." The Pentagon's new strategy closes the space for naysayers, however, including a requirement that the department must utilize AI models "free from usage policy constraints that may limit lawful military applications." Hegseth also directed the Pentagon to establish benchmarks for model objectivity as a primary procurement criterion within 90 days, and for the under secretary of war for acquisition and sustainment to incorporate standard "any lawful use" language into any Pentagon contract through which AI services are procured within 180 days. These provisions all directly challenge Anthropic, which has the most constrained usage policy of all the AI companies working on US military applications and details a host of limitations intended to prevent broader societal harm. Since September, when the company most recently updated its public usage policy, those restrictions have continued to include core tenets such as "Do Not Develop or Design Weapons" and "Do Not Compromise Computer or Network Systems." There are signs Anthropic's own safety commitments are slipping, however. The company says it may "tailor" its use restrictions for certain government contracts, but hasn't made public what it has already conceded in order to allow the military to use its AI tools. Anthropic's safeguards research team leader left the company earlier this month, saying how hard it is to let the company's values govern its actions and that employees "constantly face pressures to set aside what matters most." And on Tuesday, the company announced it had loosened its own central safety policy, saying that in order to remain competitive it will no longer delay the release of AI development that might be dangerous, unwinding a standard it has held since 2023.

Classified Work
Amodei's warnings about AI's perils and his calls for regulation have required striking a balance with his company's defense business and support for national security. Anthropic was the first generative AI company to reach a deal with the Pentagon, including for classified cloud, and was reportedly used during the US operation to capture Venezuelan strongman Nicolas Maduro. Unlike rival AI companies that don't yet work on classified cloud, that business exposes Anthropic to the very sorts of risks Amodei is most worried about. In the second half of 2024, the company negotiated a deal with the Pentagon for its AI tools to be used by Palantir's Maven Smart System, an AI-enabled battle management platform, according to a former defense official familiar with the matter. At the time, usage was mostly focused on simple chatbot tasks, such as summarizing content, the former official added. Since then, Maven Smart System has expanded its usage of generative AI, and Anthropic has also struck a separate pilot deal in July 2025 with the Pentagon's Chief Digital and AI Office for usage with a base of $2 million and ceiling of $200 million. Data compiled by Bloomberg Government show the Pentagon paid the company only $2 million last year. But while Anthropic relies overwhelmingly on commercial customers, it has also made clear it wants to rapidly expand its government business. In September, the company held what one attendee described as a massive event at Union Station in Washington to drum up support for its public sector work.
This month, Anthropic signed its first deal for the State Department to use Claude, although at $19,000 the value for now remains negligible. The company also struck a broad deal with the General Services Administration for federal government agencies to use Claude for a nominal $1 fee last year. The biggest element of Hegseth's threat is not to the existing $200 million ceiling contract with the Pentagon, but the potential of removing Anthropic from the supply chain. "In a worst-case scenario, that could make Anthropic a nonstarter for a whole huge segment of the American economy and could be almost fatal to their business," Gregory Allen, a senior adviser at the Center for Strategic and International Studies who previously worked in the Pentagon's AI shop, told Bloomberg Television on Tuesday. Allen said Claude's military user community "really likes what they're getting" and has never previously complained about rubbing up against the company's restrictions. Despite months of prolonged friction during negotiations, the Pentagon has continued using Anthropic in operations. One Pentagon worker familiar with Claude says its code base is easier than others to deal with, making up-to-the-minute changes easier, swifter and more reliable. But that doesn't mean rivals won't catch up, or that the Pentagon wouldn't put up with a lesser partner while their AI improves, the Pentagon worker added, describing a new flurry of defense AI contracts as an arms race. On the eve of Amodei's meeting with Hegseth and his top AI lieutenants, the Defense Department struck a deal with Elon Musk's xAI to use its Grok chatbot on classified cloud defense networks. The Pentagon has also approached OpenAI again about putting its AI on classified cloud in the past few weeks, according to two people familiar with the matter. Ripping and replacing Claude from Pentagon systems would set the US government's national security use of AI back by at least six months, said a person familiar with the matter who works in the field. That's because other model providers would be coming from behind, the person added. Anthropic is also confident users understand the company's policies, according to the person familiar with the way the company liaises with military and partner operators. But no matter what its usage policy might say or its ability to audit how its software is used, the company would never be privy to all the details of how its AI was deployed in classified and real-time operations, said the Pentagon worker.

Boat Strikes
One of the reasons companies such as Anthropic are forced to assert their own AI usage policy is that Congress has failed to stipulate how the Pentagon should think about the use of AI in weapons systems, said Rozenshtein. He argued the second Trump administration's political and military apparatus has also shown a reluctance to exercise lawful actions with care and wisdom. Several senators, human rights campaigners and former military officials have questioned whether the Pentagon's lethal strikes against vessels in the Caribbean - including one that killed two shipwrecked survivors - and the Maduro raid, were legal under international and US laws. "I am not sure any company can have confidence its products will be used legally under this Pentagon," said Jon Wolfsthal, arms control expert at the Center for a New American Security, who previously served in the Obama administration.
"Imposing an ultimatum in this way will likely undermine public and corporate confidence in the Department of Defense's leadership." While Anthropic's hesitations are grabbing headlines, all weapons manufacturers provide guidelines for how systems can be reliably used before they are likely to fail, and one of Amodei's main points is that AI is not ready for reliable use of autonomy. His stance also comes at a time when other leading tech CEOs have actively courted the Trump administration and loosened their own policies about how AI can be applied in combat. Some of those rivals are already trying. OpenAI and xAI this year signed up to work on a $100 million Pentagon prize challenge to produce software to fly potentially lethal autonomous drone swarms, providing technology that will translate voice commands into digital instructions in OpenAI's case, Bloomberg has previously reported. This despite OpenAI CEO Sam Altman's assurances last year at a conference dedicated to modern conflict that "I don't think most of the world wants AI making weapons decisions." In the case of xAI, which Bloomberg has reported has partnered with its new owner SpaceX, the two companies will produce drone swarming technology as well. For a company now making $14 billion in annual run rate revenue, the potential loss of a $200 million contract is unlikely to undermine its financial footing. Still, a deteriorating relationship with the US government risks chipping away meaningful federal revenue over the long term, and the tiff may also jeopardize Anthropic's relationship with Palantir, which would be a larger blow. Anthropic has ignited ire elsewhere in the administration with Amodei's public opposition to Trump's decision to let Nvidia Corp. sell its H200 AI chips to China. During an interview with Bloomberg at Davos last month, Amodei characterized the move as a blunder. "It would be a big mistake to ship these chips," he said. "It's a bit like selling nuclear weapons to North Korea." The company has also defied Trump with its calls for AI regulation, clashing with efforts by White House AI Czar David Sacks to pass a nationwide moratorium on state-level rules. That fight is now spilling into the US midterm elections, where Anthropic has pledged $20 million to a political advocacy group called Public First that's backing congressional candidates who favor AI guardrails. Anthropic has also not been shy about flying its political colors. Several of the company's hires for its policy shop are Biden administration veterans, including Ben Buchanan, Tarun Chhabra and Aditi Kumar, and Amodei was a vocal Harris supporter in the 2024 presidential race. And yet the company has sought more recently to bring Trump-aligned officials into its fold, including appointing Chris Liddell, a former White House official from Trump's first term, to its board and seeking investment from 1789 Capital, a pro-Trump venture firm where one of the president's sons is a partner, the Wall Street Journal reported. Kori Schake, director of foreign and defense policy studies at the American Enterprise Institute, said private companies have the right to decline to have their products used for surveillance and targeting. While the Pentagon can use the Defense Production Act or give the business away to other vendors, she said, "a more adept Defense Department would find a mutually agreeable compromise so that they don't scare away top-notch talent from partnering with DOD." 
In the blog post Tuesday announcing its decision to relax one of its hallmark safety pledges, Anthropic made clear that it saw an unyielding atmosphere in Washington. "The policy environment has shifted toward prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level," it wrote. The question now is whether Anthropic's new willingness to relax its safeguards when there's competitive pressure will translate into concessions that would defuse its standoff with the Pentagon. Amodei has until 5 p.m. on Friday to decide whether to call Hegseth's bluff.
[144]
Pentagon draws scrutiny with Anthropic threats, Defense Production Act
The Pentagon is threatening to use the Defense Production Act (DPA) against Anthropic amid a dispute over the company's restrictions on its AI tools, in a move that many experts say is an unusual use of the measure. The Defense Department (DOD) warned Anthropic on Tuesday that it could invoke the DPA, which gives the president broad authority to control domestic industries in the name of national defense, to use the AI firm's tool on its own terms. The threat marks an escalation in the feud between the two parties. Negotiations appear to be at a standstill, with the Pentagon giving Anthropic until Friday to comply with its terms or face a cancellation of a $200 million contract and risk being labeled a "supply chain risk" or confronting the DPA. "It's the wrong purpose of the tool," Mark Dalton, senior policy director for technology and innovation at the R Street Institute, told The Hill. "The DPA exists for a capacity reason, like it's an industrial capacity policy, and to use it as leverage is, I think, irresponsible." Anthropic and the Pentagon have been locked in tense negotiations in recent weeks over the company's AI usage policy, which bars its AI model Claude from being used to conduct mass surveillance or develop lethal autonomous weapons. These two issues have become the company's red lines in the dispute. A source familiar with negotiations told The Hill on Monday that Anthropic's resistance stemmed from concerns that AI systems are not reliable enough to make life-or-death decisions and the technology significantly changes what is possible with domestic surveillance. Meanwhile, the Pentagon has pushed for the company to accept language that allows for "all lawful uses." On Wednesday night, the DOD sent its last and final offer to Anthropic, asking the AI giant to allow the department to access Claude for "all lawful purposes," a senior Pentagon official told The Hill on Thursday. CBS News reported earlier on the offer. The Hill has reached out to Anthropic for comment. Sean Parnell, chief Pentagon spokesperson, noted Thursday that the department has "no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement." "Here's what we're asking: Allow the Pentagon to use Anthropic's model for all lawful purposes," he wrote in a post on social platform X. "This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk. We will not let ANY company dictate the terms regarding how we make operational decisions." Under Secretary of War for Research and Engineering Emil Michael, who was at a Tuesday meeting between Defense Secretary Pete Hegseth and Anthropic CEO Dario Amodei, similarly said Thursday that mass surveillance would be a violation of the Fourth Amendment and the Pentagon would never do it. The dispute is notable given that Anthropic's Claude has so far been the only AI model available to the department on its classified systems. However, the department recently reached a new agreement with Elon Musk's xAI to use its AI model on the classified side, a Pentagon official told The Hill, while Google's Gemini and OpenAI's ChatGPT are "close." Negotiations came to a head Tuesday, when Amodei met with Hegseth at the Pentagon, and the department gave the company an ultimatum -- agree to its terms by 5:01 p.m. EST on Friday or it will cancel their contract. 
The AI company was one of several firms, alongside Google, OpenAI and xAI, that scored a $200 million contract with the Defense Department last summer. The Pentagon also upped the ante by raising the prospect of invoking the DPA and applying the "supply chain risk" label. While the department may only be employing these threats as negotiating tactics, the potential use of the DPA is drawing scrutiny from experts and Democratic lawmakers. "This is unprecedented," said Charlie Bullock, a senior research fellow at the Institute for Law & AI. He underscored that the Pentagon's DPA threat could fall outside the parameters of the statute or raise constitutional concerns, including under the First Amendment. "It's mostly for traditional manufacturing in times of emergency," Dalton added. "I get that everything feels emergent these days, but I don't see what national emergency using DPA for a software company solves." Democratic Sens. Elizabeth Warren (D-Mass.), who is on the Senate Armed Services Committee, and Andy Kim (D-N.J.) argued that Congress passed the DPA to aid the U.S. economy in times of need, not to permit the Trump administration to "extort American companies that refuse to help the Pentagon to surveil Americans or build killer robots." "If Secretary Hegseth weaponizes the DPA against American companies, he will shatter the bipartisan consensus in support of a strong DPA - weakening our hand in competition with China and our ability to ensure American competitiveness," the pair said in a statement Wednesday. Previous administrations have invoked the DPA to boost production of goods during the COVID-19 pandemic. The Biden administration used the measure to up the production of baby formula, clean energy and vaccines, as well as to require AI companies to share safety information with the federal government. During his first term, President Trump used the law to address ventilator shortages. He has used the measure in his second term to increase the nation's production of critical minerals. Title I of the DPA allows the president to require individuals or companies to prioritize or accept contracts as is necessary to promote the national defense, according to the Congressional Research Service. Several experts suggested this is likely the portion of the law that the Pentagon would turn to in this case. "I don't believe he's abusing the law in doing so," said Greg Williams, director of the Center for Defense Information at the Project On Government Oversight (POGO). "However, I think it's unfortunate the Pentagon is not as interested as Anthropic appears to be in ensuring the safe, lawful and ethical use of artificial intelligence," he added. Neil Chilson, the head of AI policy at the Abundance Institute, who is a critic of the expanding uses of DPA Title I, said the measure's use in this case is at least closer to the statute's defense-related scope and intent than in previous instances. "Because we are talking about DOD procuring services for defense purposes, right? So at least at the very high level, this is closer to the intent of the statute," Chilson said in an interview with The Hill. Still, he said the Pentagon's threat of invoking the DPA is "perhaps more aggressive, even if it's more within the sort of intent of the statute," compared to the use of the law by previous administrations. 
Following Tuesday's meeting, the Pentagon continued turning the screws on Anthropic, reaching out to defense contractors about their reliance on Claude, in what appears to be an initial step toward labeling the company a supply chain risk. A Lockheed Martin spokesperson confirmed to The Hill that the Pentagon reached out to the defense contractor about its usage of Claude. The spokesperson declined to say when exactly the DOD reached out. Axios first reported on the outreach. Boeing Defense, Space and Security, a division of Boeing, does not have an active contract with Anthropic, a spokesperson confirmed to The Hill on Thursday. "What seems really sketchy here is this after the fact trying to change the terms of a contract without going through the regular contract negotiation process and what appears to be an abuse of labeling a contract a supply chain risk," Williams of POGO said. "A supply chain risk is somebody who might not deliver on something they've agreed to do," he added. "As far as I understand it, Anthropic is not suggesting they wouldn't deliver on the terms of their existing contract." James E. Baker, former chief judge of the U.S. Court of Appeals for the Armed Forces, also argued there is a discrepancy between invoking the DPA and slapping the supply chain risk designation on a company. "As [former DOD official] Greg Allen has stated, there is an inherent contradiction between threatening to designate a company a supply chain risk, and using the DPA to compel provision of the company's services," Baker, who is now the director of the Syracuse University Institute for Security Policy and Law, told The Hill on Thursday. But Chilson of the Abundance Institute argued the Pentagon could both invoke the DPA and slap the supply chain risk designation on Anthropic. "I think logically it would make sense to say, like, we need a tool that lets us make decisions," he said. "It doesn't make decisions for us and if you're not going to provide that tool, then we're worried about the tools that you are providing to our downstream vendors." Jerry McGinn, director of the Center for the Industrial Base at the Center for Strategic and International Studies, argued that if a company does not want to do work in a certain area and under certain conditions, it can pull out of the contract or the government can cancel it. "The government has the vehicles they need to stop the work if they want and find another provider," McGinn said in an interview with The Hill.
[145]
Hegseth and Anthropic CEO Set to Meet as Debate Intensifies Over the Military's Use of AI
WASHINGTON (AP) -- Defense Secretary Pete Hegseth plans to meet Tuesday with the CEO of Anthropic, with the artificial intelligence company the only one of its peers to not supply its technology to a new U.S. military internal network. Anthropic, maker of the chatbot Claude, declined to comment on the meeting but CEO Dario Amodei has made clear his ethical concerns about unchecked government use of AI, including the dangers of fully autonomous armed drones and of AI-assisted mass surveillance that could track dissent. The meeting between Hegseth and Amodei was confirmed by a defense official who was not authorized to comment publicly and spoke on condition of anonymity. It underscores the debate over AI's role in national security and concerns about how the technology could be used in high-stakes situations involving lethal force, sensitive information or government surveillance. It also comes as Hegseth has vowed to root out what he calls a "woke culture" in the armed forces. "A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow," Amodei wrote in an essay last month. Anthropic is the only AI company approved for classified military networks The Pentagon announced last summer that it was awarding defense contracts to four AI companies -- Anthropic, Google, OpenAI and Elon Musk's xAI. Each contract is worth up to $200 million. Anthropic was the first AI company to get approved for classified military networks, where it works with partners like Palantir. The other three companies, for now, are only operating in unclassified environments. By early this year, Hegseth was highlighting only two of them: xAI and Google. The defense secretary said in a January speech at Musk's space flight company, SpaceX, in South Texas that he was shrugging off any AI models "that won't allow you to fight wars." Hegseth said his vision for military AI systems means that they operate "without ideological constraints that limit lawful military applications," before adding that the Pentagon's "AI will not be woke." In January, Hegseth said Musk's artificial intelligence chatbot Grok would join the Pentagon network, called GenAI.mil. The announcement came days after Grok -- which is embedded into X, the social media network owned by Musk -- drew global scrutiny for generating highly sexualized deepfake images of people without their consent. OpenAI announced in early February that it, too, would join the military's secure AI platform, enabling service members to use a custom version of ChatGPT for unclassified tasks. Anthropic calls itself more safety-minded Anthropic has long pitched itself as the more responsible and safety-minded of the leading AI companies, ever since its founders quit OpenAI to form the startup in 2021. The uncertainty with the Pentagon is putting those intentions to the test, according to Owen Daniels, associate director of analysis and fellow at Georgetown University's Center for Security and Emerging Technology. "Anthropic's peers, including Meta, Google and xAI, have been willing to comply with the department's policy on using models for all lawful applications," Daniels said. "So the company's bargaining power here is limited, and it risks losing influence in the department's push to adopt AI." 
In the AI craze that followed the release of ChatGPT, Anthropic closely aligned with President Joe Biden's administration in volunteering to subject its AI systems to third-party scrutiny to guard against national security risks. Amodei, the CEO, has warned of AI's potentially catastrophic dangers while rejecting the label that he's an AI "doomer." He argued in the January essay that "we are considerably closer to real danger in 2026 than we were in 2023" but that those risks should be managed in a "realistic, pragmatic manner." Anthropic has been at odds with the Trump administration This would not be the first time Anthropic's advocacy for stricter AI safeguards has put it at odds with the Trump administration. Anthropic needled chipmaker Nvidia publicly, criticizing Trump's proposals to loosen export controls to enable some AI computer chips to be sold in China. The AI company, however, remains a close partner with Nvidia. The Trump administration and Anthropic also have been on opposite sides of a lobbying push to regulate AI in U.S. states. Trump's top AI adviser, David Sacks, accused Anthropic in October of "running a sophisticated regulatory capture strategy based on fear-mongering." Sacks made the remarks on X in response to an Anthropic co-founder, Jack Clark, writing about his attempt to balance technological optimism with "appropriate fear" about the steady march toward more capable AI systems. Anthropic hired a number of ex-Biden officials soon after Trump's return to the White House, but it's also tried to signal a bipartisan approach. The company recently added Chris Liddell, a former White House official from Trump's first term, to its board of directors. The Pentagon-Anthropic debate is reminiscent of an uproar several years ago when some tech workers objected to their companies' participation in Project Maven, a Pentagon drone surveillance program. While some workers quit over the project and Google itself dropped out, the Pentagon's reliance on drone surveillance has only increased. Similarly, "the use of AI in military contexts is already a reality and it is not going away," Daniels said. "Some contexts are lower stakes, including for back-office work, but battlefield deployments of AI entail different, higher-stakes risks," he said, referring to the use of lethal force or weapons like nuclear arms. "Military users are aware of these risks and have been thinking about mitigation for almost a decade." ___ O'Brien reported from Providence, Rhode Island.
[146]
What to know about the U.S. Defense Production Act and the Pentagon's Anthropic ultimatum
NEW YORK -- U.S. Defense Secretary Pete Hegseth gave Anthropic an ultimatum this week: Open its artificial intelligence technology for unrestricted military use by Friday, or risk losing its government contract. Defence officials in the Trump administration also warned they could designate Anthropic, which makes the AI chatbot Claude, as a supply chain risk -- or invoke a Cold War-era law called the Defense Production Act to give the military more sweeping authority to use its products, even if the company doesn't approve. Some experts say that using the law this way would be unprecedented, and could bring future legal challenges. The government's efforts to essentially force Anthropic's hand also underscore a wider, contentious debate over AI's role in national security. Here's what we know. The Defense Production Act gives the federal government broad authority to direct private companies to meet the needs of national defence. The act was signed by President Harry S. Truman in 1950 amid supply concerns during the Korean War. But over its now decades-long history, the law's powers have been invoked not only in times of war but also for domestic emergency preparedness, as well as recovery from terrorist attacks and natural disasters. One of the act's provisions allows the president to require companies to prioritize government contracts and orders deemed necessary for national defence, with the goal of ensuring the private sector is producing enough goods needed during war or other emergencies. Other provisions give the president the ability to use loans and additional incentives to increase production of critical goods, and authorize the government to establish voluntary agreements with private industry. The DPA is "one of the government's most powerful and adaptable industrial policy tools," said Joel Dodge, an attorney and the director of industrial policy and economic security at the Vanderbilt Policy Accelerator. Anthropic is the last of its AI peers to not supply its technology to a new U.S. military internal network. CEO Dario Amodei repeatedly has made clear his ethical concerns about unchecked government use of AI, including the dangers of fully autonomous armed drones and of AI-assisted mass surveillance that could track dissent. The Pentagon has maintained that it has no interest in using AI for mass surveillance or to develop autonomous weapons to operate without human involvement. If the U.S. Defense Department does invoke the DPA to give the military more authority to use Anthropic's products without its approval, that could mean forcing the company to adapt its model to the Pentagon's needs without built-in safety limits, or remove certain ethical restrictions from the company's contract language. Experts like Dodge say both would be "without precedent under the history of the DPA." "It's a powerful law," he said. "(But) it has never been used to compel a company to produce a product that it's deemed unsafe, or to dictate its terms of service." Trump in his first term and former President Joe Biden invoked the DPA to boost supplies to combat the COVID-19 pandemic. And during 2022's nationwide baby formula shortage, Biden used the law to speed production of formula and authorize flights to import supply from overseas. Biden also invoked the DPA in a 2023 executive order on AI, notably in efforts to require that companies share safety test results and other information with the government. Trump repealed the order at the start of his second term. 
Decades ago, the administrations of both President Bill Clinton and George W. Bush used the DPA to ensure that electricity and natural gas shippers continued supplying California utilities amid an energy crisis. And the law was used after Hurricane Maria struck Puerto Rico in 2017 to prioritize contracts for food, bottled water, manufactured housing units and the restoration of electrical systems. The DPA requires periodic reauthorization to remain in effect, which can expand or refine the scope of the law. According to congressional documents, its next expiration date is slated for Sept. 30 of this year. Depending on how the U.S. Defense Department's reported demands unfold, Anthropic could be at the top of lawmakers' minds. If the U.S. Defense Department uses the DPA provision aimed at prioritizing government contracts and ordering production of certain goods -- which the Anthropic case suggests it would -- a company can push back if the requested product isn't something it already produces, Dodge and others say, or if it deems the terms to be unreasonable. But the government may try and overrule that, notes Charlie Bullock, senior research fellow at the Institute for Law & AI. "If neither side backs down, it seems realistic that there would be litigation between Anthropic and the government," Bullock said. Some have also noted tension between the Pentagon's warning that it could designate Anthropic as a supply chain risk while also indicating its products are so important to national defence that it needs to invoke the DPA -- two assertions that seem at odds with each other. Defence officials appeared to be backing away from the DPA option on Thursday, when Chief Pentagon spokesperson Sean Parnell wrote on social media that if Anthropic didn't agree to cooperate by 5:01 p.m. ET on Friday, "we will terminate our partnership with Anthropic and deem them a supply chain risk." "We will not let ANY company dictate the terms regarding how we make operational decisions," Parnell added. Dodge thinks the administration is counting on "a lot of forces" as it aims to get Anthropic to bend on Friday. If Anthropic agrees to new terms in the face of such threats, that could open up "a Pandora's box of what the government could do to assert power and control over private companies," Dodge said.
[147]
Defense Sec. Pete Hegseth gives Anthropic Friday deadline to remove...
Defense Secretary Pete Hegseth warned Anthropic boss Dario Amodei that he has until Friday evening to remove restrictions on how the US military can use the company's Claude AI chatbot - or potentially face major penalties. Hegseth, who delivered the ultimatum during a high-stakes meeting in Washington, DC, on Tuesday afternoon, told Amodei that the Pentagon could blacklist Anthropic by declaring it a "supply chain risk," a source familiar with the meeting told The Post. Alternatively, the Pentagon could use the Defense Production Act to effectively mandate that Anthropic allow use of Claude for all military purposes. As of the Tuesday meeting, the Claude chatbot was the only AI model approved for use on classified military systems. A senior Pentagon official said Anthropic has until 5:01 p.m. Eastern Time Friday to comply with the ultimatum. Elon Musk's Grok chatbot has received clearance for use in a classified setting, while chatbots offered by other major companies are close -- giving the military a plausible alternative to Claude, the senior official added. "The only reason we're still talking to these people is we need them and we need them now. The problem for these guys is they are that good," a Defense Department official said of Anthropic to Axios ahead of the meeting. Amodei reiterated Tuesday that Anthropic would not support the use of its technology to enable mass surveillance of Americans or to power weapons that can fire without human oversight, said a source familiar with the meeting. The Anthropic boss also noted that the company's red lines have never impacted a military operation. A senior Defense Department official told Axios that the meeting was "not warm and fuzzy at all." Another source with knowledge of the meeting told The Post that the talks were cordial and respectful, with no raised voices. Hegseth praised the quality of Anthropic's products and said the Pentagon would like to continue working with the firm, the source added. An Anthropic spokesperson said the firm had "continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government's national security mission in line with what our models can reliably and responsibly do." "During the conversation, Dario expressed appreciation for the Department's work and thanked the Secretary for his service," the spokesperson added. The meeting marked the culmination of months of tensions between the Pentagon and Anthropic - which has often irritated the Trump administration with its intense focus on safety in AI usage and development. The feud between the two sides recently escalated in January after Claude was used in the operation to arrest Venezuela's Nicolás Maduro. During the Tuesday meeting, Hegseth referenced the Pentagon's claim, first reported by Axios earlier this month, that Anthropic had complained to fellow contractor Palantir about how its technology was used in the Maduro raid. Amodei denied that he or anyone at Anthropic had any communication with Palantir or the Pentagon beyond normal operational discussions. The Post first reported in November that Anthropic's ties to the cultlike Effective Altruism movement and Democratic megadonors like LinkedIn cofounder Reid Hoffman were on the Trump administration's radar.
[148]
Hegseth Gives Anthropic Friday Ultimatum To Drop AI Safeguards Or Risk Ban As Pentagon Looks For Alternatives: Report
Secretary of War Pete Hegseth has reportedly warned Anthropic that it could be removed from the Pentagon's supply chain if the company does not commit by Friday to allowing its technology to be used in all lawful military applications. Hegseth called Anthropic's CEO, Dario Amodei, to Washington for a meeting on Tuesday, a development the Pentagon previously confirmed to Benzinga. In the course of the tense discussions, the Secretary of War threatened to invoke the Defense Production Act (DPA) if Anthropic does not come around, the Financial Times reported late Tuesday. Invoking the DPA would enable the Pentagon to utilize Anthropic's tools without an agreement. The act, a remnant of the Cold War era, allows the president to control domestic industry for national defense purposes. President Donald Trump and former President Joe Biden have both invoked it, including to address medical supply shortages during the COVID-19 pandemic. The Department of War and Anthropic did not immediately respond to Benzinga's requests for comment. Anthropic, Pentagon Clash Over AI Red Lines Anthropic's refusal to permit its technology to be used for mass surveillance of Americans or the development of autonomous weapons has become a key point of friction with the Pentagon. While the AI start-up is open to easing its terms of service in other respects, it treats these restrictions as red lines. The Department of War, however, views those restrictions as overly limiting. The company has raised concerns about its AI models being used in lethal missions without a human in the loop, arguing that current systems are not reliable enough for such roles, according to FT. The company has also advocated for new safeguards to restrict the use of AI in mass domestic surveillance, even when such activities are legally permitted. Meanwhile, Amodei, in a podcast last week, expressed his discomfort with the rapid concentration of AI power and wealth among a small group of companies. He also warned that AI advancement is akin to an approaching "tsunami," and that many people underestimate its impact.
[149]
Pentagon wants Anthropic to loosen restrictions on classified AI use cases: report
Pentagon leaders, including U.S. Secretary of Defense Pete Hegseth, plan to hold a meeting with Anthropic (ANTHRO) CEO Dario Amodei on Tuesday to open up the use cases of artificial intelligence in classified operations, according to Axios. The Pentagon's threat to end or limit Anthropic's involvement over its AI safeguards jeopardizes the company's $200M contract and future defense business, potentially affecting its value and partnerships. If Anthropic loses Pentagon access, investor trust in its backing from Amazon and Google could decrease, harming strategic alignment and Anthropic's government market growth. The Pentagon views Anthropic's reluctance as a supply chain risk, raising concerns about its reliability and willingness to adapt and endangering its inclusion in critical national security operations.
[150]
Hegseth to meet with Anthropic CEO as safe AI principles collide with military contracting | Fortune
Defense Secretary Pete Hegseth plans to meet Tuesday with the CEO of Anthropic, with the artificial intelligence company the only one of its peers to not supply its technology to a new U.S. military internal network. Anthropic, maker of the chatbot Claude, declined to comment on the meeting but CEO Dario Amodei has made clear his ethical concerns about unchecked government use of AI, including the dangers of fully autonomous armed drones and of AI-assisted mass surveillance that could track dissent. The meeting between Hegseth and Amodei was confirmed by a defense official who was not authorized to comment publicly and spoke on condition of anonymity. It underscores the debate over AI's role in national security and concerns about how the technology could be used in high-stakes situations involving lethal force, sensitive information or government surveillance. It also comes as Hegseth has vowed to root out what he calls a "woke culture" in the armed forces. "A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow," Amodei wrote in an essay last month. Anthropic is the only AI company approved for classified military networks The Pentagon announced last summer that it was awarding defense contracts to four AI companies -- Anthropic, Google, OpenAI and Elon Musk's xAI. Each contract is worth up to $200 million. Anthropic was the first AI company to get approved for classified military networks, where it works with partners like Palantir. The other three companies, for now, are only operating in unclassified environments. By early this year, Hegseth was highlighting only two of them: xAI and Google. The defense secretary said in a January speech at Musk's space flight company, SpaceX, in South Texas that he was shrugging off any AI models "that won't allow you to fight wars." Hegseth said his vision for military AI systems means that they operate "without ideological constraints that limit lawful military applications," before adding that the Pentagon's "AI will not be woke." In January, Hegseth said Musk's artificial intelligence chatbot Grok would join the Pentagon network, called GenAI.mil. The announcement came days after Grok -- which is embedded into X, the social media network owned by Musk -- drew global scrutiny for generating highly sexualized deepfake images of people without their consent. OpenAI announced in early February that it, too, would join the military's secure AI platform, enabling service members to use a custom version of ChatGPT for unclassified tasks. Anthropic calls itself more safety-minded Anthropic has long pitched itself as the more responsible and safety-minded of the leading AI companies, ever since its founders quit OpenAI to form the startup in 2021. The uncertainty with the Pentagon is putting those intentions to the test, according to Owen Daniels, associate director of analysis and fellow at Georgetown University's Center for Security and Emerging Technology. "Anthropic's peers, including Meta, Google and xAI, have been willing to comply with the department's policy on using models for all lawful applications," Daniels said. "So the company's bargaining power here is limited, and it risks losing influence in the department's push to adopt AI." 
In the AI craze that followed the release of ChatGPT, Anthropic closely aligned with President Joe Biden's administration in volunteering to subject its AI systems to third-party scrutiny to guard against national security risks. Amodei, the CEO, has warned of AI's potentially catastrophic dangers while rejecting the label that he's an AI "doomer." He argued in the January essay that "we are considerably closer to real danger in 2026 than we were in 2023" but that those risks should be managed in a "realistic, pragmatic manner." Anthropic has been at odds with the Trump administration This would not be the first time Anthropic's advocacy for stricter AI safeguards has put it at odds with the Trump administration. Anthropic needled chipmaker Nvidia publicly, criticizing Trump's proposals to loosen export controls to enable some AI computer chips to be sold in China. The AI company, however, remains a close partner with Nvidia. The Trump administration and Anthropic also have been on opposite sides of a lobbying push to regulate AI in U.S. states. Trump's top AI adviser, David Sacks, accused Anthropic in October of "running a sophisticated regulatory capture strategy based on fear-mongering." Sacks made the remarks on X in response to an Anthropic co-founder, Jack Clark, writing about his attempt to balance technological optimism with "appropriate fear" about the steady march toward more capable AI systems. Anthropic hired a number of ex-Biden officials soon after Trump's return to the White House, but it's also tried to signal a bipartisan approach. The company recently added Chris Liddell, a former White House official from Trump's first term, to its board of directors. The Pentagon-Anthropic debate is reminiscent of an uproar several years ago when some tech workers objected to their companies' participation in Project Maven, a Pentagon drone surveillance program. While some workers quit over the project and Google itself dropped out, the Pentagon's reliance on drone surveillance has only increased. Similarly, "the use of AI in military contexts is already a reality and it is not going away," Daniels said. "Some contexts are lower stakes, including for back-office work, but battlefield deployments of AI entail different, higher-stakes risks," he said, referring to the use of lethal force or weapons like nuclear arms. "Military users are aware of these risks and have been thinking about mitigation for almost a decade." ___ O'Brien reported from Providence, Rhode Island.
[151]
Anthropic rejects Pentagon "final offer" just 24 hours before deadline set by Pete Hegseth
AI company Anthropic has rejected the US Pentagon's final offer just 24 hours before the deadline. In its official statement, the company said it does not want its AI model Claude used to spy on Americans or deployed in deadly military missions. Anthropic's CEO Dario Amodei said there has been "virtually no progress" in talks with the Pentagon. The Pentagon gave a deadline of Friday at 5:01 PM for Anthropic to allow full use of its AI model, as reported by Axios. Anthropic said the contract language still did not clearly stop the AI from being used for mass surveillance or fully autonomous weapons, according to the company's statement. The company also said the new "compromise" wording contained legal terms that could allow the safeguards to be disregarded at any time. Even after rejecting the offer, Anthropic said it is not walking away from talks and expects more negotiations soon. The main fight between the Pentagon and Anthropic is about limits on AI use, especially bans on surveillance of Americans and autonomous weapons. The Pentagon has already begun preparing possible penalties by asking defense contractors like Boeing and Lockheed Martin to check their links with Anthropic. US Defense Secretary Pete Hegseth also warned he could use the Defense Production Act to force Anthropic to provide the AI without restrictions. Experts say such a forced order may face legal challenges and raise unresolved legal questions. The Pentagon's rule that AI must be available for "all lawful purposes" in classified work is not only for Anthropic, as noted by Axios. So far, Anthropic is the only AI company whose model has been used in classified US settings. Meanwhile, xAI has already signed a contract under the Pentagon's "all lawful purposes" rule. Talks are also speeding up to bring OpenAI and Google into classified government AI work. Overall, the situation is a high-stakes battle between AI safety limits and national security demands. Q1. Why did Anthropic reject the Pentagon's offer? Anthropic rejected it because it does not want its AI to be used for spying on Americans or for fully autonomous weapons. Q2. What could happen next in the Anthropic-Pentagon AI dispute? The Pentagon may take action like restricting the company, forcing supply under law, or continuing negotiations.
[152]
Pentagon Anthropic feud has sales and AI warfare at stake as Friday deadline looms
NEW YORK, Feb 27 (Reuters) - An explosive feud between the Pentagon and top artificial intelligence lab Anthropic is set to come to a head by 5:01 p.m. (2201 GMT) on Friday over concerns about how the military could use AI at war. The dispute, barreling toward a deadline set by the Pentagon for resolution, is widely seen as a referendum on how powerful AI could be deployed by the military and how its risks are managed. The Pentagon wants any lawful use to be allowed and has threatened Anthropic's business if the startup does not scrap additional guardrails. "It's a shot across the bow about the future of artificial intelligence and its use on the battlefield," Chris Miller, the former acting secretary of defense, told Reuters. He added that the outcome will "be an acid test for those companies that claim to want to use AI humanely." The months-long spat has divided some industry leaders, military officials and lawmakers over whether AI should be wielded without constraints when its creator Anthropic said the technology was not yet reliable for fully autonomous weapons. Democratic Senator Elissa Slotkin weighed in on Thursday: "The average person does not think we should allow weapons systems to get into war and kill people without a human being overseeing that in some way." Speaking at a confirmation hearing for two assistant defense secretary nominees, Slotkin added: "I certainly don't think any American, Democrat or Republican, wants mass surveillance on the American people." The Pentagon, which the Trump administration renamed the Department of War, has pushed back on the dilemma as a false choice "peddled by leftists in the media." "The Department of War has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement," Pentagon chief spokesperson Sean Parnell posted on X Thursday. NEGOTIATIONS FALTER The Pentagon has signed $200-million ceiling agreements with major AI labs in the past year, including Anthropic, OpenAI and Google. It is pushing companies to agree to scrap their usage policies in favor of abiding by an all-lawful use clause. Anthropic, continuing these talks, has maintained red lines over the military's use of its Claude AI models for autonomous weapons and domestic surveillance. Anthropic was first among these AI companies to work with classified information, through a supply deal via cloud provider Amazon. Anthropic CEO Dario Amodei, famous for quitting OpenAI in 2020 over concerns about AI technology's stewardship, has warned that AI has advanced faster than the law. Powerful technology could hoover up disparate material to gather intelligence on unwitting civilians, he said in a Thursday blog post, a prospect that critics view as a legal loophole. "Anthropic understands that the Department of War, not private companies, makes military decisions," but AI in narrow cases "can undermine, rather than defend, democratic values," Amodei said. Amodei met with Defense Secretary Pete Hegseth this week. Afterward, the Pentagon gestured toward compromise and sent the startup revised contract language. But the two parties remained at an apparent impasse. An Anthropic spokesperson said on Thursday, "The contract language we received overnight from the Department of War made virtually no progress" and would allow "safeguards to be disregarded at will." BUSINESS THREATS Key business for Anthropic is at stake. 
The Pentagon warned it would terminate its work with the startup and declare it a supply-chain risk if Anthropic did not accede to the department's demand for all-lawful use of AI. The designation, reserved typically for suppliers in adversary nations, means that defense contractors could be barred from deploying Anthropic's AI during work for the Pentagon. The setback comes as Anthropic races to win sales to businesses and government, with national security an area of focus. The Pentagon has asked contractors including Lockheed Martin to give an appraisal of their reliance on Anthropic ahead of the risk designation, Reuters reported on Wednesday. The defense industrial base totaled around 60,000 contractors including major public companies as of 2021. The Pentagon made a second threat, the legality of which some experts have questioned. "If they don't get on board, SecWar will ensure the Defense Production Act is invoked on Anthropic," a senior Pentagon official told Reuters, "compelling them to be used by the Pentagon regardless of if they want to or not." (Reporting by David Jeans in New York and Jeffrey Dastin and Deepa Seetharaman in San Francisco; Editing by Kenneth Li)
[153]
Hegseth and Anthropic CEO set to meet as debate intensifies over the military's use of AI
WASHINGTON -- U.S. Defence Secretary Pete Hegseth plans to meet Tuesday with the CEO of Anthropic, with the artificial intelligence company the only one of its peers to not supply its technology to a new U.S. military internal network. Anthropic, maker of the chatbot Claude, declined to comment on the meeting but CEO Dario Amodei has made clear his ethical concerns about unchecked government use of AI, including the dangers of fully autonomous armed drones and of AI-assisted mass surveillance that could track dissent. The meeting between Hegseth and Amodei was confirmed by a defense official who was not authorized to comment publicly and spoke on condition of anonymity. It underscores the debate over AI's role in national security and concerns about how the technology could be used in high-stakes situations involving lethal force, sensitive information or government surveillance. It also comes as Hegseth has vowed to root out what he calls a "woke culture" in the armed forces. "A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow," Amodei wrote in an essay last month. The Pentagon announced last summer that it was awarding defense contracts to four AI companies -- Anthropic, Google, OpenAI and Elon Musk's xAI. Each contract is worth up to US$200 million. Anthropic was the first AI company to get approved for classified military networks, where it works with partners like Palantir. The other three companies, for now, are only operating in unclassified environments. By early this year, Hegseth was highlighting only two of them: xAI and Google. The defense secretary said in a January speech at Musk's space flight company, SpaceX, in South Texas that he was shrugging off any AI models "that won't allow you to fight wars." Hegseth said his vision for military AI systems means that they operate "without ideological constraints that limit lawful military applications," before adding that the Pentagon's "AI will not be woke." In January, Hegseth said Musk's artificial intelligence chatbot Grok would join the Pentagon network, called GenAI.mil. The announcement came days after Grok -- which is embedded into X, the social media network owned by Musk -- drew global scrutiny for generating highly sexualized deepfake images of people without their consent. OpenAI announced in early February that it, too, would join the military's secure AI platform, enabling service members to use a custom version of ChatGPT for unclassified tasks. Anthropic has long pitched itself as the more responsible and safety-minded of the leading AI companies, ever since its founders quit OpenAI to form the startup in 2021. The uncertainty with the Pentagon is putting those intentions to the test, according to Owen Daniels, associate director of analysis and fellow at Georgetown University's Center for Security and Emerging Technology. "Anthropic's peers, including Meta, Google and xAI, have been willing to comply with the department's policy on using models for all lawful applications," Daniels said. "So the company's bargaining power here is limited, and it risks losing influence in the department's push to adopt AI." In the AI craze that followed the release of ChatGPT, Anthropic closely aligned with President Joe Biden's administration in volunteering to subject its AI systems to third-party scrutiny to guard against national security risks. 
Amodei, the CEO, has warned of AI's potentially catastrophic dangers while rejecting the label that he's an AI "doomer." He argued in the January essay that "we are considerably closer to real danger in 2026 than we were in 2023" but that those risks should be managed in a "realistic, pragmatic manner." This would not be the first time Anthropic's advocacy for stricter AI safeguards has put it at odds with the Trump administration. Anthropic needled chipmaker Nvidia publicly, criticizing Trump's proposals to loosen export controls to enable some AI computer chips to be sold in China. The AI company, however, remains a close partner with Nvidia. The Trump administration and Anthropic also have been on opposite sides of a lobbying push to regulate AI in U.S. states. Trump's top AI adviser, David Sacks, accused Anthropic in October of "running a sophisticated regulatory capture strategy based on fear-mongering." Sacks made the remarks on X in response to an Anthropic co-founder, Jack Clark, writing about his attempt to balance technological optimism with "appropriate fear" about the steady march toward more capable AI systems. Anthropic hired a number of ex-Biden officials soon after Trump's return to the White House, but it's also tried to signal a bipartisan approach. The company recently added Chris Liddell, a former White House official from Trump's first term, to its board of directors. The Pentagon-Anthropic debate is reminiscent of an uproar several years ago when some tech workers objected to their companies' participation in Project Maven, a Pentagon drone surveillance program. While some workers quit over the project and Google itself dropped out, the Pentagon's reliance on drone surveillance has only increased. Similarly, "the use of AI in military contexts is already a reality and it is not going away," Daniels said. "Some contexts are lower stakes, including for back-office work, but battlefield deployments of AI entail different, higher-stakes risks," he said, referring to the use of lethal force or weapons like nuclear arms. "Military users are aware of these risks and have been thinking about mitigation for almost a decade."
[154]
Anthropic's Dario Amodei, Defense Sec. Pete Hegseth to meet as...
Defense Secretary Pete Hegseth was set to hold a high-stakes meeting with Anthropic boss Dario Amodei on Tuesday as they have been trying to navigate rising tensions over military use of the Claude AI chatbot. The meeting was scheduled to come just days after reports surfaced that Hegseth was "close" to designating Anthropic as a supply chain threat. That would effectively blacklist Anthropic, voiding its contracts and forcing other firms that do business with the US military to stop using Claude. "Anthropic knows this is not a get-to-know-you meeting," a senior Defense official told Axios, which first reported on the rendezvous. "This is not a friendly meeting. This is a s-t-or-get-off-the-pot meeting." A Pentagon spokesperson confirmed the meeting was slated to take place but declined further comment. Anthropic did not immediately return a request for comment. Anthropic, which runs the Claude chatbot that is the only AI model currently approved for use on classified military systems, has blocked the Pentagon from using the technology to enable mass surveillance of Americans or to power weapons that can fire without human involvement. Tensions between the two sides have been rising for months, with Pentagon officials growing weary of safety-minded Anthropic's efforts to control how its products are used. The feud reportedly escalated in January after Claude was used in the operation to arrest Venezuela's Nicolás Maduro. Hegseth is set to be joined in the meeting by Deputy Secretary Steve Feinberg and Under Secretary for Research and Engineering Emil Michael, according to Axios. An Anthropic spokesperson said the company is "committed to using frontier AI in support of US national security" and was "having productive conversations, in good faith" with Hegseth's team. Amodei, who cofounded the company after leaving OpenAI, has rankled his AI peers and some Trump administration officials with his frequent warnings about the technology's safety risks. Anthropic's critics include White House AI czar David Sacks, who has accused Amodei and his allies of belonging to a camp of AI "doomers" who were stifling innovation. "The problem with Dario is, with him, it's ideological. We know who we're dealing with," a senior Pentagon official told Axios. The Post first reported in November that Anthropic's ties to the cultlike Effective Altruism movement and Democratic megadonors like LinkedIn cofounder Reid Hoffman were on the Trump administration's radar.
[155]
Anthropic narrows AI safety policy pledge
Anthropic is narrowing its AI safety policy pledge, removing the company's previous commitment to halt the development of its AI models if they outpace its safety procedures. The AI firm unveiled an updated version of its Responsible Scaling Policy on Tuesday, explaining in a blog post that the AI industry has not reached a consensus on risks as it had previously hoped. "If one AI developer paused development to implement safety measures while others moved forward training and deploying AI systems without strong mitigations, that could result in a world that is less safe -- the developers with the weakest protections would set the pace, and responsible developers would lose their ability to do safety research and advance the public benefit," it said in the updated policy document. Anthropic said it has opted to separate its own goals from broader industry recommendations on safety going forward. It also noted that instead of putting forward hard commitments, the company will rely on "nonbinding but publicly-declared" goals that it will grade its progress on. The update to the Responsible Scaling Policy, which was first released in 2023, comes as Anthropic is currently locked in a dispute with the Pentagon. At issue is Anthropic's AI usage policy, which bars the use of its model Claude to conduct mass surveillance or develop weapons that do not require human oversight. Anthropic CEO Dario Amodei met with Defense Secretary Pete Hegseth at the Pentagon on Tuesday, during which the department threatened to cancel the company's $200 million contract if it did not agree to the Pentagon's terms by Friday. The department also warned that it would use the Defense Production Act against Anthropic or would designate it as a supply chain risk.
[156]
US Defense Secretary Hegseth Summons Anthropic CEO for Tough Talks Over Military Use of Claude, Axios Reports
Feb 23 (Reuters) - U.S. Defense Secretary Pete Hegseth has summoned artificial intelligence company Anthropic's CEO Dario Amodei to the Pentagon on Tuesday for what are expected to be potentially tough talks over the military use of Anthropic's Claude artificial intelligence tool, Axios reported on Monday, citing sources. Reuters reported exclusively this month that the Pentagon was pushing big AI companies including OpenAI and Anthropic to make their AI tools available on classified networks without many of the standard restrictions that the companies apply to users. Also this month, Axios reported that the Pentagon had been considering cutting ties with Anthropic over the latter's insistence on retaining restrictions on how the U.S. military uses its models, which include Claude AI. According to its Monday report, Defense officials say the Pentagon's talks with Anthropic are on the verge of collapsing. A senior Defense official told the outlet that Anthropic knows this is not a "get-to-know-you meeting," according to the report. An Anthropic spokesperson said "we are having productive conversations, in good faith," according to Axios. Reuters could not immediately verify the report. The Pentagon, White House and Anthropic did not immediately respond to Reuters' request for comment. (Reporting by Angela Christy in Bengaluru; Editing by Sharon Singleton and Hugh Lawson)
[157]
Pentagon Threatens to End Anthropic Work in Feud Over AI Terms
The Pentagon warned Anthropic PBC that it would terminate the company's military contracts on Friday if the artificial intelligence startup failed to meet government terms for use of its technology, according to people familiar with the matter. During a meeting Tuesday between Chief Executive Officer Dario Amodei and Defense Secretary Pete Hegseth, US officials threatened to declare Anthropic a supply-chain risk or invoke the Defense Production Act to use the AI software even if the company didn't comply, the people said. The ultimatum marks an escalation in a growing dispute between the Defense Department and the AI startup over the company's insistence on guardrails for use of its Claude AI tool. If carried out, the Pentagon's threat would put at risk up to $200 million in work that Anthropic had agreed to do for the military. In the meeting, according to one of the people, Amodei laid out Anthropic's conditions: that the US military refrain from using its products to autonomously target enemy combatants or conduct mass surveillance of US citizens. The person said Amodei emphasized that these scenarios have yet to arise during operations in the field. "We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government's national security mission in line with what our models can reliably and responsibly do," Anthropic said in a statement following the meeting. The people who described the discussions did so on condition of anonymity owing to their confidential nature. Axios reported earlier on the meeting's outcome. Now valued at roughly $380 billion based on its latest funding round, Anthropic was the first AI company granted clearance to handle classified material within the US government, and its Claude Gov tool quickly became a preferred option among officials at the Pentagon who appreciate its ease of use. It faces growing competition in the national security space from rivals OpenAI, Google's DeepMind and Elon Musk's xAI. The Pentagon had grown concerned Anthropic did not support US goals after hearing the company had questions about how its AI was used during the special forces operation in early January that captured Venezuelan President Nicolas Maduro, a US official said. Anthropic offered a different interpretation of the Pentagon's claim the company had questions about the Maduro raid. "Anthropic has not discussed the use of Claude for specific operations with the Department of War," the company said on Monday, via a spokesperson, referring to the Trump administration's preferred name for the Defense Department. "We have also not discussed this with, or expressed concerns to, any industry partners outside of routine discussions on strictly technical matters." Anthropic positions itself as a company focused on the responsible use of AI with a goal of avoiding catastrophic harms from the technology. 
It built Claude Gov specifically for US national security purposes and aims to serve government customers within its own ethical bounds. The feud erupted just weeks after the Pentagon published a new strategy on artificial intelligence that called for making the military an "AI-first" force by increasing experimentation with frontier models and reducing bureaucratic barriers to use. The approach specifically urged the Defense Department to choose models that are "free from usage policy constraints that may limit lawful military applications."
[158]
Anthropic digs in heels in dispute with Pentagon, source says
NEW YORK/WASHINGTON/SAN FRANCISCO, Feb 24 (Reuters) - Artificial intelligence lab Anthropic has no intention of easing its usage restrictions for military purposes, a person familiar with the matter said on Tuesday, adding talks continue after a meeting to discuss its future with the Pentagon. The meeting between Anthropic CEO Dario Amodei and U.S. Defense Secretary Pete Hegseth was scheduled to hash out a months-long dispute. The AI startup has refused to remove safeguards that would prevent its technology from being used to target weapons autonomously and conduct U.S. domestic surveillance. Pentagon officials have argued the government should only be required to comply with U.S. law. During the meeting, Hegseth delivered an ultimatum to Anthropic: get on board or the government would take drastic action, people familiar with the matter said. The options included labeling Anthropic as a supply-chain risk or having the Pentagon invoke a law, the Defense Production Act, that would force Anthropic to change its rules, the people said. The government gave Anthropic until Friday at 5 p.m. to respond, according to a senior Pentagon official with knowledge of the matter. The Pentagon did not immediately respond to a comment request. An Anthropic spokesperson said Tuesday's meeting "continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government's national security mission in line with what our models can reliably and responsibly do." The Pentagon has been negotiating AI contracts with multiple large language model, or LLM, providers, including Alphabet's Google, xAI and OpenAI, that are set to shape the future of military use of artificial intelligence for battlefield applications, spanning autonomous drone swarms, robots and cyber attacks. Until recently, Anthropic was the only LLM provider on classified networks. This week, the Pentagon announced it had reached an agreement with xAI to deploy it across classified networks. Reuters has previously reported that the Pentagon plans to move all AI companies to classified networks. The Pentagon's fight with Anthropic reached a fever pitch earlier this month when it grew concerned that the company had asked questions about how its AI products were used during the Venezuela military raid that captured President Nicolas Maduro. During the meeting with Hegseth, Amodei said Anthropic did not raise concerns to Palantir or the Pentagon about whether the company's AI products were used during the Venezuela raid, the source said. Amodei also said the safeguards currently in place would not pose a problem to the Defense Department's current operations. Hegseth said the Pentagon would either invoke the Defense Production Act to compel Anthropic to comply with its demands, or deem the company a supply chain risk, a determination typically imposed on companies from foreign adversaries. This could upend Anthropic's business with other companies that do business with the U.S. government. "This specific scenario is unprecedented and will almost certainly trigger a raft of downstream litigation if the Administration takes adverse action against Anthropic here," said Franklin Turner, a government contracts lawyer at McCarter & English. (Reporting by David Jeans in New York; Deepa Seetharaman in San Francisco; Mike Stone in Washington D.C.; Editing by Kenneth Li, Nick Zieminski, Daniel Wallis and David Gregorio)
[159]
Pentagon asks defense contractors about reliance on Anthropic's AI services: Report
The Pentagon has asked defense contractors to assess their reliance on Anthropic, a person familiar with the matter said on Wednesday, ahead of its Friday deadline for the AI service provider to respond to a request to eliminate safeguards. The Department of Defense has been engaged in a months-long dispute with Anthropic, which Reuters reported has no intention of easing its usage restrictions for military purposes. Talks are continuing after a meeting between the Defense Secretary Pete Hegseth and Anthropic CEO Dario Amodei. During the meeting, Hegseth said if Anthropic did not comply, the Pentagon would take action against it, with options including labeling it a supply-chain risk or invoking a law that would force Anthropic to change its rules, Reuters reported. The Department of Defense has given Anthropic until Friday 5 p.m. Eastern time (2200 GMT) to respond, Reuters reported. "The Office of the Secretary of War is preparing to execute on any decision that the Secretary might make on Friday regarding Anthropic," a senior Pentagon official said. The Pentagon has asked contractors including Lockheed Martin to provide an assessment of reliance on Anthropic, a step toward a potential designation of the AI firm as a supply-chain risk, the person familiar with the matter told Reuters. Contacted contractors include Boeing, Axios reported on Wednesday. A Lockheed spokesperson told Reuters the Pentagon had contacted the company. Boeing Defense, Space and Security said it does not have any active contracts with Anthropic. Anthropic did not respond to a request for comment. The person familiar with the matter was not authorized to speak with media so declined to be identified. The Pentagon has pushed big AI companies including Anthropic and OpenAI to make their AI tools available on classified networks without many of the standard restrictions that the companies apply to users, Reuters has reported. Its dispute with Anthropic stems from the AI startup's refusal to remove safeguards that would stop its technology being used to target weapons autonomously and conduct surveillance in the U.S. The department used Anthropic's AI products during a military raid that captured Venezuela's President Nicolas Maduro, the Wall Street Journal reported.
[160]
Trump team livid about Dario Amodei's principled stand to keep the Defense Department from using his AI tools for warlike purposes | Fortune
Anthropic's $200 million contract with the Department of Defense is up in the air after Anthropic reportedly raised concerns about the Pentagon's use of its Claude AI model during the Nicolas Maduro raid in January. "The Department of War's relationship with Anthropic is being reviewed," Chief Pentagon Spokesman Sean Parnell said in a statement to Fortune. "Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people." Tensions have escalated in recent weeks after a top Anthropic official reportedly reached out to a senior Palantir executive to question how Claude was used in the raid, per The Hill. The Palantir executive interpreted the outreach as disapproval of the model's use in the raid and forwarded details of the exchange to the Pentagon. (President Trump said the military used a "discombobulator" weapon during the raid that made enemy equipment "not work.") "Anthropic has not discussed the use of Claude for specific operations with the Department of War," an Anthropic spokesperson said in a statement to Fortune. "We have also not discussed this with, or expressed concerns to, any industry partners outside of routine discussions on strictly technical matters." At the center of this dispute are the contractual guardrails dictating how AI models can be used in defense operations. Anthropic CEO Dario Amodei has consistently advocated for strict limits on AI use and for regulation, even admitting it is difficult to balance safety with profits. For months now, the company and DOD have held contentious negotiations over how Claude can be used in military operations. Under the Defense Department contract, Anthropic won't allow the Pentagon to use its AI models for mass surveillance of Americans or in fully autonomous weapons. The company also banned the use of its technology in "lethal" or "kinetic" military applications. Any direct involvement in active gunfire during the Maduro raid would likely violate those terms. Among the AI companies contracting with the government -- including OpenAI, Google and xAI -- Anthropic holds a lucrative position, with Claude as the only large language model authorized on the Pentagon's classified networks. This position was highlighted by Anthropic in a statement to Fortune. "Claude is used for a wide variety of intelligence-related use cases across the government, including the DoW, in line with our Usage Policy." The company "is committed to using frontier AI in support of US national security," the statement read. "We are having productive conversations, in good faith, with DoW on how to continue that work and get these complex issues right." Palantir, OpenAI, Google and xAI didn't immediately respond to a request for comment. Although the DOD has accelerated efforts to integrate AI into its operations, only xAI has granted the DOD the use of its models for "all lawful purposes," while the others maintain usage restrictions. Amodei has been sounding the alarm for months on user protections, offering Anthropic as a safety-first alternative to OpenAI and Google in the absence of governmental regulations. "I'm deeply uncomfortable with these decisions being made by a few companies," he said back in November. Although it was rumored that Anthropic was planning to ease restrictions, the company now faces the possibility of being cut out of the defense industry altogether. 
A senior Pentagon official told Axios that Defense Secretary Pete Hegseth is "close" to removing Anthropic from the military supply chain, forcing anyone who wishes to conduct business with the military to also cut ties with the company. "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this," the senior official told the outlet. Being deemed a military supply-chain risk is a special designation usually reserved only for foreign adversaries. The closest precedent is the government's 2019 ban on Huawei over national security concerns. In Anthropic's case, sources told Axios that defense officials have been looking to pick a fight with the San Francisco-based company for some time. The Pentagon's comments are the latest in a public dispute coming to a boil. The government claims that having companies set ethical limits on their models would be unnecessarily restrictive, and that the sheer number of gray areas would render the technology useless. As the Pentagon continues to negotiate with the AI subcontractors to expand usage, the public spat becomes a proxy skirmish for who will dictate the uses of AI.
[161]
Pentagon threatens to cancel Anthropic contract by Friday if company doesn't lift safeguards
The Pentagon has threatened to cancel Anthropic's contract by Friday if the company does not agree to the department's terms for the use of its AI model, a source familiar with the matter confirmed to The Hill on Tuesday. Defense Secretary Pete Hegseth met with Anthropic CEO Dario Amodei on Tuesday at the Pentagon amid a dispute over the AI firm's usage policy, which bars its model Claude from being used for mass surveillance or to develop weapons that can be used without human oversight. If Anthropic doesn't agree to the Pentagon's terms, the department warned it would use the Defense Production Act against the company or designate it as a supply chain risk, the source familiar with the meeting noted. Axios first reported the Friday deadline. "During the conversation, Dario expressed appreciation for the Department's work and thanked the Secretary for his service," an Anthropic spokesperson told The Hill in a statement on Tuesday. "We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government's national security mission in line with what our models can reliably and responsibly do," the spokesperson added. The Hill has reached out to the Pentagon for comment. Despite the months-long back-and-forth, the meeting was respectful and cordial, and both sides were thoughtful and friendly, with no one raising their voice, the source familiar with Tuesday's meeting said.
[162]
Anthropic Should Stand Its Ground Against the Pentagon
They say your values aren't truly values until they cost you something. For Anthropic co-founder and Chief Executive Officer Dario Amodei, that cost might come as soon as Tuesday when he visits the Pentagon for what's been billed as a showdown meeting with Secretary of Defense Pete Hegseth. "This is not a friendly meeting," a senior defense official told Axios. The AI company, best known for its chatbot Claude, took a more diplomatic tone: "We are having productive conversations, in good faith," a spokesperson said. The topic of discussion could not be more serious. To date, Claude has been the only AI model permitted to be used on the Pentagon's classified networks, a valuable seal of approval that likely wins Anthropic enterprise contracts across many sectors but has also put it in a lonely position when pushing back against what it thinks might be some of the Pentagon's intended use cases. Hegseth is insisting Anthropic sign on to allowing Claude to be used for "all lawful purposes," threatening severe sanctions if it does not. That could mean not only the cancellation of a $200 million contract awarded last July but also being designated a "supply chain risk," a move that would force companies with military contracts to no longer use Anthropic's AI for that work. The restriction would make Anthropic a less appealing partner for regular AI use cases, too. Tensions between Anthropic and the Pentagon increased after the operation to capture Venezuelan President Nicolás Maduro, after which it was reported that Anthropic had objected to how its technology was used. The company has since said those reports were "inaccurate" and that no discussions were held nor complaints made. Even so, the Trump administration has branded Anthropic as the "woke" AI company for months. Amodei is "ideological," Axios' Pentagon source added. It's important to look at the company's concerns and consider whether these labels are justified. The first issue Anthropic has is about mass surveillance. AI can be used to monitor masses of data and match up previously siloed datasets in ways that were not possible before. The company worries that Fourth Amendment protections against unlawful surveillance don't directly address what is possible with AI -- leaving scope for actions it isn't comfortable with from an administration that has demonstrated a willingness to test constitutional limits. The second concern is about a technical practicality rather than an ethical red line. Anthropic has said it does not want its AI models to be used for control of autonomous weapons because it doesn't believe its technology is reliable enough yet to make life-or-death decisions without human supervision. If the Pentagon is unhappy with those apparently "woke" conditions, then, sure, it is well within its rights to cancel the contract. But taking the additional step of declaring Anthropic a "supply chain risk" appears unreasonably punitive while unnecessarily burdening other companies that have adopted Claude because of its superiority to other competing models. 
Indeed, the quality of Anthropic's product gives Amodei a stronger bargaining position than it might first appear. It would take a significant effort from the Pentagon to disentangle Claude from its systems and, even if it did, it would then need to find a willing and equally capable partner to fill the gap. Maybe that would be OpenAI or Google, but both would surely hesitate to lead their companies -- and their employees -- into the ethical quagmire Amodei is trying to avoid. Both companies are said to have been in talks with the Pentagon, but neither has reached a deal yet. The New York Times reported on Monday evening that Elon Musk had agreed to make his Grok chatbot available for use on classified material without the safeguards Anthropic was pushing for -- though it is questionable whether Musk's model will be as capable as Anthropic's. From several angles, pressure is being applied to Anthropic to fall in line. In Tuesday's meeting, Amodei must state it plainly: It is not "woke" to want to avoid accidentally killing innocent people. This isn't a case of an arms maker dictating how the Pentagon must use a weapon it has purchased or against which target. No, this is a responsible company making sure a tool bought for one purpose won't be recklessly used for another.
[163]
Pentagon asks US defense contractors about reliance on Anthropic's services, source says
The Pentagon is reaching out to defense contractors to assess their reliance on artificial intelligence lab Anthropic's services, a source familiar with the matter told Reuters on Wednesday, ahead of a Friday deadline for the AI firm to respond to the government. Reuters reported on Tuesday that Anthropic has no intention of easing its usage restrictions for military purposes, and talks continue after a meeting between the AI firm's CEO and U.S. Defense Secretary Pete Hegseth to discuss its future with the Pentagon. Axios reported earlier on Wednesday the Pentagon has asked defense contractors Boeing and Lockheed Martin to provide an assessment of their reliance on Anthropic, a first step toward a potential designation of the AI firm as a "supply chain risk." "Lockheed Martin has been contacted by the Department of War regarding an analysis of its exposure and reliance on Anthropic ahead of a potential supply chain risk declaration," a Lockheed spokesperson told Axios. Boeing declined to comment to Axios. Boeing and Lockheed did not immediately respond to Reuters' requests for comment. The Pentagon also did not reply to a Reuters request for comment.
[164]
Anthropic CEO to meet Hegseth amid dispute over military use of Claude
Anthropic CEO Dario Amodei is meeting with Defense Secretary Pete Hegseth on Tuesday at the Pentagon as the company continues discussions with the department around the terms of use of its AI model Claude, a Pentagon official confirmed to The Hill on Monday. The AI firm has increasingly found itself at odds with the Pentagon in recent weeks over its usage policy, which bars its AI models from being used for mass surveillance or development of weapons that can be used without human oversight. This became a key issue following the revelation that Anthropic's technology was used in the raid that captured Venezuelan President Nicolas Maduro last month. Amid this dispute, Pentagon officials are considering labeling the company a supply chain risk. Claude is the only AI model that is operating on the military's fully classified systems. The recent spat has left Anthropic's relationship with the Pentagon on shaky terms and Hegseth is considering canceling a $200 million contract with the company altogether. It is one of the several AI giants that have inked contracts with the Pentagon as the Trump administration pushed for more adoption of the technology at the department. If the two sides fail to reach an agreement, it would be a "massive loss," according to Emelia Probasco, a senior fellow at Georgetown's Center for Security and Emerging Technology. "One of the top AI labs in the world is trying to help the government, and there are warfighters who are using this today who are going to be harmed if all of sudden their access is taken away without some very clear technical explanation of what's going on," Probasco previously told The Hill. Anthropic said last week that it was engaged in "productive conversations, in good faith" with the Defense Department. The Hill has reached out to the company for comment on Tuesday's meeting, which was first reported by Axios.
[165]
Anthropic's CEO to Meet Hegseth Amid Feud Over Pentagon Work
Anthropic PBC Chief Executive Officer Dario Amodei will meet with US Defense Secretary Pete Hegseth on Tuesday, according to a senior Pentagon official, as contract talks with the artificial intelligence startup remain deadlocked over the company's insistence on guardrails for use of its technology. There were no further details on the meeting between Amodei and Hegseth, according to the official, who spoke on condition of anonymity. The Pentagon had grown concerned that the company did not support its aims after hearing it had questions about how its AI was used during the US raid last month that captured Venezuelan President Nicolas Maduro, the official said. In a statement Monday, Anthropic said it was committed to using AI to support national security. "We are having productive conversations, in good faith, with" the Pentagon "on how to continue that work and get these complex issues right," the company said via a spokesperson. Anthropic is also seeking additional protections governing use of its Claude AI tool, a person familiar with the matter told Bloomberg News last week. Those conditions would include measures to stop it from being used for mass surveillance of Americans or to develop weapons that can be deployed without a human involved. The company's stance has prompted objections from the Pentagon, which wants to be able to use Claude as long as its deployment doesn't break the law. A Defense Department spokesman said last week the company's relationship with the Pentagon was under review. "Our nation requires that our partners be willing to help our warfighters win in any fight," Pentagon spokesman Sean Parnell said. Axios reported earlier Monday on the meeting between Amodei and Hegseth, citing people familiar with the matter, who described it as decisive moment in the discussions over the contract. Anthropic positions itself as a company focused on the responsible use of AI with a goal of avoiding catastrophic harms from the technology. It built Claude Gov specifically for US national security purposes and aims to serve government customers within its own ethical bounds. "Claude is used for a wide variety of intelligence-related use cases across the government," including by the Defense Department, in line with the company's usage policy, Anthropic said in its statement.
[166]
AI vs military: This showdown can shape the future of war
A major clash is unfolding between AI firm Anthropic and the US Pentagon. Anthropic CEO Dario Amodei is drawing ethical lines on autonomous targeting and domestic surveillance. Defense Secretary Pete Hegseth is pushing for unrestricted military AI use. This dispute will shape how AI is governed in national security and democratic societies globally. The standoff between Anthropic and the Pentagon is more than a contractual dispute. It is a test case for how artificial intelligence will be governed in matters of war, surveillance and state power. At its center are Pete Hegseth, the U.S. defense secretary pressing for unrestricted military AI capabilities, and Dario Amodei, the CEO of Anthropic, who has drawn firm ethical red lines around autonomous targeting and domestic surveillance. As reported by AP and earlier by Axios, Hegseth has given Anthropic a deadline to open its AI systems for full military use or risk losing its defense contract. According to AP, Pentagon officials have floated designating Anthropic a supply chain risk or invoking the Defense Production Act to compel access to its technology. The confrontation is unfolding as the US military accelerates adoption of AI tools through initiatives such as GenAI.mil, an internal network that now includes models from OpenAI and xAI. Why does this clash matter so much? Because it sits at the intersection of military doctrine, democratic accountability, corporate power and the future architecture of AI governance. The battle over autonomous lethality One of Anthropic's non-negotiable positions is its refusal to support fully autonomous military targeting operations. Amodei has publicly warned about the dangers of AI systems that can select and strike targets without meaningful human control. This concern is not theoretical. Militaries worldwide are exploring AI-assisted decision systems that can analyse sensor data, identify threats and compress the timeline from detection to strike. If the Pentagon succeeds in compelling or sidelining Anthropic, it would show that corporate-imposed ethical constraints will not survive in high-stakes national security contexts. The US Department of Defense argues that it issues only lawful orders and that compliance with the law is its responsibility. But legality and prudence are not always the same. The outcome here may shape whether AI firms retain the power to draw lines around autonomous weapons or whether governments will define those boundaries unilaterally. Surveillance, dissent and democratic norms The second red line concerns domestic surveillance. Amodei warned in a January essay that a powerful AI system analysing billions of conversations could detect "pockets of disloyalty" and suppress dissent. The Pentagon rejects built-in model restrictions, arguing that military tools cannot come with ideological constraints. This dispute touches a nerve in democratic societies. If AI systems embedded in military or intelligence networks can analyse large-scale communications data, the technical capacity for predictive surveillance expands dramatically. The Brennan Center's Amos Toh, cited by AP, has argued that Congress must increase oversight, particularly if AI is used to surveil Americans. History shows that surveillance authorities granted in the name of national security often expand over time. The difference now is scale and speed. 
AI systems do not just collect data but interpret it, cluster it and generate actionable insights. If guardrails are weakened in the name of flexibility, it could redefine the relationship between citizens and the state. Corporate ethics versus state power Anthropic has long positioned itself as a safety-focused AI firm. It was founded by former members of OpenAI who sought stronger guardrails and third-party scrutiny. The company aligned itself with voluntary oversight efforts during the Biden administration and has publicly advocated for tighter export controls on advanced chips, even when that put it at odds with the Trump administration's deregulatory instincts. In this light, the Pentagon confrontation tests whether corporate ethics can meaningfully constrain state power. Governments possess tools such as procurement leverage, regulatory authority and laws like the Defense Production Act that companies cannot easily counter. If Anthropic yields, it risks diluting its brand as a safety-first firm. If it resists and loses its contract, it may cede influence to competitors more willing to comply. Owen Daniels of Georgetown University's Center for Security and Emerging Technology told AP that peers including Google and xAI have shown greater willingness to align with Defense Department policies. That reality narrows Anthropic's bargaining power. The Pentagon can shift to alternative providers, potentially marginalising companies that insist on strict usage limits. The militarisation of commercial AI The Pentagon's AI push is part of a broader transformation. As AP reports, defense contracts worth up to $200 million have been awarded to Anthropic, Google, OpenAI, and xAI. Hegseth has publicly declared that military AI systems must operate "without ideological constraints," and he has announced that xAI's Grok will join the Pentagon's secure but unclassified network. The rapid integration of commercial AI into defense infrastructure blurs the line between civilian and military technology ecosystems. Models trained on internet-scale data for general-purpose tasks are being adapted for logistics, intelligence analysis, and potentially battlefield support. This convergence raises strategic questions. If AI becomes foundational to military superiority, nations may feel compelled to relax ethical constraints to avoid falling behind adversaries. The Anthropic dispute thus reflects not only internal US debates but also global competitive pressures. Other powers are unlikely to impose self-restraint if they perceive that it sacrifices strategic advantage. What it means for global AI governance The implications extend beyond the US. American AI firms set norms that often ripple outward. If the US government establishes that it can override corporate safeguards in the name of national security, other governments may follow suit, perhaps with fewer legal checks. Conversely, if Anthropic's stance catalyses congressional scrutiny or leads to clearer statutory limits, it could shape emerging global standards on military AI. International humanitarian law, export control regimes and future treaties on autonomous weapons will all be influenced by how leading democracies navigate these internal conflicts. There has been growing bipartisan concern over AI's role in warfare and surveillance. The current episode compresses those concerns into a concrete showdown, where rhetoric must translate into policy. 
A defining moment for AI's social contract Ultimately, the Anthropic-Pentagon affair is about who defines the moral architecture of transformative technologies. Is it elected officials, acting in the name of national security? Is it private companies, guided by internal principles and reputational incentives? Or is it Congress and the courts, struggling to keep pace with rapid innovation? As per AP, the Pentagon's adoption of AI is proceeding at breakneck speed. The law, critics argue, has not kept up. That gap creates a governance vacuum in which executive authority and corporate discretion collide. What happens next will be closely watched not because of one contract or one chatbot, but because it may establish the template for how AI is deployed in matters of war and civil liberty. If the balance tips decisively toward unconstrained military use, the precedent will echo globally. If ethical guardrails hold or are formalised through legislation, it could mark the beginning of a more deliberate integration of AI into state power. In either case, the stakes are global.
[167]
US Defense Dept gives Anthropic Friday deadline to drop AI curbs
The US Defense Department has given AI company Anthropic until Friday to agree to unrestricted military use of its technology or face being forced to comply under emergency federal powers, a senior official said Tuesday. Anthropic chief executive Dario Amodei met personally with Defense Secretary Pete Hegseth at the Pentagon on Tuesday, with the company saying he "expressed appreciation for the Department's work and thanked the Secretary for his service." At the heart of the conflict is Anthropic's refusal to let its Claude models be used for the mass surveillance of US citizens or in fully autonomous weapons systems. "We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government's national security mission in line with what our models can reliably and responsibly do," the company said in a statement. But after the meeting, the Pentagon delivered a stark ultimatum: agree to unrestricted military use of its technology by 5:01 pm (22:00 GMT) Friday or face being forced to comply under the Defense Production Act. The Cold War-era law, last used during the Covid pandemic, grants the federal government sweeping powers to compel private industry to prioritize national security needs. The Pentagon also threatened to label Anthropic a supply chain risk, a designation usually reserved for firms from adversary countries that could severely damage the company's reputation and its ability to work with the US government. The senior Pentagon official pushed back on the company's concerns, insisting the Defense Department had always operated within the law. "Legality is the Pentagon's responsibility as the end user," the official said, adding that the department "has only given out lawful orders." Officials also confirmed that an exchange regarding intercontinental ballistic missiles had taken place between Anthropic and the Pentagon, underscoring the sensitivity of the applications at the heart of the dispute. The Pentagon confirmed that Elon Musk's Grok system had been cleared for use in a classified setting, while other contracted companies -- OpenAI and Google -- were described as close to similar clearances, piling competitive pressure on Anthropic to fall in line. Anthropic was contracted alongside those companies last year to supply AI models for a range of military applications under a $200 million agreement. Anthropic was founded by former OpenAI employees in 2021 on the premise that AI development should prioritize safety -- a philosophy that now puts it on a collision course with the Pentagon and the White House.
[168]
Anthropic digs in heels in dispute with Pentagon, source says
AI firm Anthropic faces pressure from the Pentagon. The company refuses to remove safeguards for military use. The Pentagon has issued an ultimatum. Anthropic must comply or face drastic actions. This dispute could impact future AI use in defense. Talks continue between Anthropic and the Pentagon. Artificial intelligence lab Anthropic has no intention of easing its usage restrictions for military purposes, a person familiar with the matter said on Tuesday, adding talks continue after a meeting to discuss its future with the Pentagon. The meeting between Anthropic CEO Dario Amodei and U.S. Defense Secretary Pete Hegseth was scheduled to hash out a months-long dispute. The AI startup has refused to remove safeguards that would prevent its technology from being used to target weapons autonomously and conduct U.S. domestic surveillance. Pentagon officials have argued the government should only be required to comply with U.S. law. During the meeting, Hegseth delivered an ultimatum to Anthropic: get on board or the government would take drastic action, people familiar with the matter said. The options included labeling Anthropic as a supply-chain risk or have the Pentagon invoke a law, the Defense Production Act, that would force Anthropic to change its rules, the people said. The government gave Anthropic until Friday at 5 p.m. to respond, according to a senior Pentagon official with knowledge of the matter. The Pentagon did not immediately respond to a comment request. An Anthropic spokesperson said Tuesday's meeting "continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government's national security mission in line with what our models can reliably and responsibly do." The Pentagon has been negotiating AI contracts with multiple large language model, or LLM, providers, including Alphabet's Google, xAI and OpenAI, that are set to shape the future of military use of artificial intelligence for battlefield applications, spanning autonomous drone swarms, robots and cyber attacks. Until recently, Anthropic was the only LLM provider on classified networks. This week, the Pentagon announced it had reached an agreement with xAI to deploy it across classified networks. Reuters has previously reported that it plans to move all AI companies to classified networks. The Pentagon's fight with Anthropic reached a fever pitch earlier this month when it grew concerned that the company had asked questions about how its AI products were used during the Venezuela military raid that captured President Nicolas Maduro. During the meeting with Hegseth, Amodei said Anthropic did not raise concerns to Palantir or the Pentagon about whether the company's AI products were used during the Venezuela raid, the source said. Amodei also said the safeguards currently in place would not pose a problem to the Defense Department's current operations. Hegseth said the Pentagon would either invoke the Defense Production Act to compel Anthropic to comply with its demands, or deem the company a supply chain risk, a determination typically imposed on companies from foreign adversaries. This could upend Anthropic's business with other companies that do business with the U.S. government. "This specific scenario is unprecedented and will almost certainly trigger a raft of downstream litigation if the Administration takes adverse action against Anthropic here," said Franklin Turner, a government contracts lawyer at McCarter & English.
[169]
US Defense Secretary Pete Hegseth warns Anthropic to allow full military use of its ai or risk losing Pentagon contract
US Defense Secretary Pete Hegseth warned AI company Anthropic that it must allow its technology to be used in all lawful military work or it could lose its Pentagon contract. Hegseth gave Anthropic a strict Friday deadline to say yes or no to full military use of its AI systems. If Anthropic refuses, the Pentagon may remove it from its supply chain and stop buying its technology. Officials also warned they could use the Defense Production Act, which can legally force companies to support military needs, as reported by AP News. A meeting between Hegseth and Anthropic CEO Dario Amodei happened Tuesday, and the discussion was described as polite but tense. Amodei refused to change two key rules: no fully autonomous AI weapons targeting, and no AI surveillance of US citizens. Anthropic created the AI chatbot Claude and is the only major AI firm not fully supporting the Pentagon's internal AI network yet. The Pentagon gave AI contracts worth up to $200 million each to four companies: Anthropic, Google, OpenAI, and xAI, as stated by AP News. Anthropic was the first approved for classified military networks, while others mainly work on unclassified systems. Hegseth recently praised only Google and xAI, saying the military does not want AI that refuses to help fight wars. CEO Amodei says powerful AI could be dangerous if used for autonomous weapons, mass surveillance, or tracking public dissent. In a recent essay, Amodei warned AI risks could become very serious by 2026 if not carefully managed. Anthropic has long promoted itself as a "safety-focused" AI company since its founders left OpenAI in 2021. Experts say Anthropic has limited power because its competitors already agreed to military use rules. Anthropic previously worked closely with the Joe Biden administration on AI safety checks. It has clashed with the Donald Trump administration over AI regulations and chip export rules. Trump's AI adviser David Sacks accused Anthropic of using fear to push regulation. Anthropic co-founder Jack Clark said AI development needs "balanced optimism and fear", as stated by AP News. The company has also publicly criticized chip maker Nvidia over policy issues, despite being partners. Experts warn the Pentagon's fast adoption of AI in war and surveillance raises serious legal and ethical questions. Some legal analysts say US laws are not keeping up with fast AI development, especially for monitoring citizens. The US government is pressuring Anthropic to fully support military AI use, but the company is resisting because of safety and ethical concerns -- creating a major clash between national security goals and AI responsibility. Q1. Why is the US government pressuring Anthropic? The Pentagon wants the company to allow its AI to be used in all legal military work or it may lose its contract. Q2. What is Pete Hegseth asking from Anthropic? He wants the company to approve full military use of its AI technology, including defense operations.
[170]
Elon Musk lashes out at Anthropic as Pentagon summons AI company CEO Dario Amodei - The Economic Times
US Defence Secretary Pete Hegseth has reportedly summoned AI firm Anthropic CEO Dario Amodei to the Pentagon for "a high‑stakes", "tense" meeting over the military's use of the company's Claude AI model, according to reports, as Elon Musk slammed the AI company over allegedly stealing training data. The report from Axios said, citing an anonymous senior defence official, that the meeting was "not a friendly" one as Anthropic did not remove restrictions on their technology even as Hegseth urged AI firms to do so. Claude is currently the only AI system deployed inside classified defence networks, under a $200 million pilot contract signed last year but Hegseth in a January 9 memo asked AI companies to renegotiate terms to remove restrictions on their technology. However, Anthropic stayed put on refusal to fully lift safeguards, including restrictions on mass surveillance of Americans and development of fully autonomous weapons, the report said. Defence officials warned Anthropic could be designated a "supply chain risk," voiding contracts and restricting other Pentagon partners from using Claude even as the AI firm's spokesperson called discussions "productive". Replacing Anthropic is deemed complex given its deep integration into defence systems, the report added. Meanwhile, Pentagon has signed agreements with Elon Musk's xAI and is nearing a deal with Google for Gemini model building pressure on Anthropic, The New York Times reported. Ahead of the meeting, Anthropic alleged three Chinese AI firms used chatbots to siphon millions of Claude outputs to train their own models. Musk, in response, lashed out at Anthropic on X saying, "Anthropic is guilty of stealing training data at massive scale and has had to pay multi‑billion-dollar settlements for their theft." The Tesla and SpaceX CEO called the AI company 'MisAnthropic.' Claude is a next generation AI assistant built by Anthropic and trained to be safe, accurate, and secure. The new AI assistant could automate legal document reviews, compliance checks, sales planning, marketing campaign analysis, financial reconciliation, data visualisation, SQL‑based reporting and enterprise‑wide document search.
[171]
Hegseth and Anthropic CEO set to meet as debate intensifies over the military's use of AI
Defense Secretary Pete Hegseth plans to meet Tuesday with the CEO of Anthropic, with the artificial intelligence company the only one of its peers to not supply its technology to a new US military internal network. Anthropic, maker of the chatbot Claude, declined to comment on the meeting but CEO Dario Amodei has made clear his ethical concerns about unchecked government use of AI, including the dangers of fully autonomous armed drones and of AI-assisted mass surveillance that could track dissent. The meeting between Hegseth and Amodei was confirmed by a defense official who was not authorized to comment publicly and spoke on condition of anonymity. It underscores the debate over AI's role in national security and concerns about how the technology could be used in high-stakes situations involving lethal force, sensitive information or government surveillance. It also comes as Hegseth has vowed to root out what he calls a "woke culture" in the armed forces. "A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow," Amodei wrote in an essay last month. Anthropic is the only AI company approved for classified military networks The Pentagon announced last summer that it was awarding defense contracts to four AI companies - Anthropic, Google, OpenAI and Elon Musk's xAI. Each contract is worth up to $200 million. Anthropic was the first AI company to get approved for classified military networks, where it works with partners like Palantir. The other three companies, for now, are only operating in unclassified environments. By early this year, Hegseth was highlighting only two of them: xAI and Google. The defense secretary said in a January speech at Musk's space flight company, SpaceX, in South Texas that he was shrugging off any AI models "that won't allow you to fight wars." Hegseth said his vision for military AI systems means that they operate "without ideological constraints that limit lawful military applications," before adding that the Pentagon's "AI will not be woke." In January, Hegseth said Musk's artificial intelligence chatbot Grok would join the Pentagon network, called GenAI.mil. The announcement came days after Grok - which is embedded into X, the social media network owned by Musk - drew global scrutiny for generating highly sexualized deepfake images of people without their consent. OpenAI announced in early February that it, too, would join the military's secure AI platform, enabling service members to use a custom version of ChatGPT for unclassified tasks. Anthropic calls itself more safety-minded Anthropic has long pitched itself as the more responsible and safety-minded of the leading AI companies, ever since its founders quit OpenAI to form the startup in 2021. The uncertainty with the Pentagon is putting those intentions to the test, according to Owen Daniels, associate director of analysis and fellow at Georgetown University's Center for Security and Emerging Technology. "Anthropic's peers, including Meta, Google and xAI, have been willing to comply with the department's policy on using models for all lawful applications," Owens said. "So the company's bargaining power here is limited, and it risks losing influence in the department's push to adopt AI." 
In the AI craze that followed the release of ChatGPT, Anthropic closely aligned with President Joe Biden's administration in volunteering to subject its AI systems to third-party scrutiny to guard against national security risks. Amodei, the CEO, has warned of AI's potentially catastrophic dangers while rejecting the label that he's an AI "doomer." He argued in the January essay that "we are considerably closer to real danger in 2026 than we were in 2023'' but that those risks should be managed in a "realistic, pragmatic manner." Anthropic has been at odds with the Trump administration This would not be the first time Anthropic's advocacy for stricter AI safeguards has put it at odds with the Trump administration. Anthropic needled chipmaker Nvidia publicly, criticizing Trump's proposals to loosen export controls to enable some AI computer chips to be sold in China. The AI company, however, remains a close partner with Nvidia. The Trump administration and Anthropic also have been on opposite sides of a lobbying push to regulate AI in U.S. states. Trump's top AI adviser, David Sacks, accused Anthropic in October of "running a sophisticated regulatory capture strategy based on fear-mongering." Sacks made the remarks on X in response to an Anthropic cofounder, Jack Clark, writing about his attempt to balance technological optimism with "appropriate fear" about the steady march toward more capable AI systems. Anthropic hired a number of ex-Biden officials soon after Trump's return to the White House, but it's also tried to signal a bipartisan approach. The company recently added Chris Liddell, a former White House official from Trump's first term, to its board of directors. The Pentagon-Anthropic debate is reminiscent of an uproar several years ago when some tech workers objected to their companies' participation in Project Maven, a Pentagon drone surveillance program. While some workers quit over the project and Google itself dropped out, the Pentagon's reliance on drone surveillance has only increased. Similarly, "the use of AI in military contexts is already a reality and it is not going away," Owens said. "Some contexts are lower stakes, including for back-office work, but battlefield deployments of AI entail different, higher-stakes risks," he said, referring to the use of lethal force or weapons like nuclear arms. "Military users are aware of these risks and have been thinking about mitigation for almost a decade."
[172]
Anthropic's Claude AI faces Pentagon ultimatum from US Defense Sec Pete Hegseth
US Defense Secretary Pete Hegseth has called Anthropic CEO Dario Amodei to the Pentagon over a dispute about military use of its AI model, Claude, as tensions rise over safeguards and defense contracts. US Defense Secretary Pete Hegseth has reportedly summoned Anthropic CEO Dario Amodei to the Pentagon for what officials described as a high-stakes meeting over the military's use of the company's AI model, Claude, Axios reported. Citing sources, Axios said the talks are expected to be tense, with one senior Defense official calling it "not a friendly meeting" and a make-or-break moment in negotiations. Claude is currently the only AI model deployed within the military's classified systems and is considered among the most capable tools for sensitive defence and intelligence work. The Defense Department and Anthropic signed a $200 million pilot contract last year, but tensions escalated after a January 9 memo from Hegseth urging AI companies to remove restrictions on their technology, prompting a renegotiation of terms. Reuters reported earlier this month that the Pentagon was pushing major AI companies, including OpenAI and Anthropic, to make their tools available on classified networks with fewer standard restrictions. The Pentagon is frustrated with Anthropic's refusal to fully lift safeguards. While the company has signalled openness to easing some limits, it wants to maintain restrictions on mass surveillance of Americans and the development of fully autonomous weapons. An Anthropic spokesperson told Axios discussions are "productive" and being held "in good faith," adding that the company is committed to supporting US national security. Defense officials, however, said negotiations have stalled and warned Anthropic could be designated a "supply chain risk", potentially voiding its contracts and restricting other Pentagon partners from using Claude. The Pentagon has also signed an agreement with Elon Musk's AI firm xAI and is nearing a deal with Google for its Gemini model, The New York Times reported. Officials hope those agreements will pressure Anthropic to broaden access to Claude. Axios added that Hegseth is expected to present Amodei with an ultimatum. Replacing Anthropic would be complex given its deep integration into defence systems. Ahead of the meeting, Anthropic published a blog post alleging that three Chinese AI firms had siphoned information from the company to improve their own models.
[173]
Anthropic lost the battle, OpenAI won the war?
OpenAI replaced Anthropic hours later, with the same restrictions. By now we are all used to US President Donald Trump's outbursts on Truth Social. But when that outburst came at Anthropic's expense - the first AI company to infuse its Claude models into US Department of Defense workflows last year - with Sam Altman's OpenAI benefiting down the line, that made it all the more interesting. The feud between OpenAI's Sam Altman and Anthropic's Dario Amodei isn't new. From Anthropic's Super Bowl ads chastising OpenAI's ChatGPT ads to Altman hitting back, and more recently Altman and Amodei refusing to shake hands at the India AI Impact Summit, it's safe to say there's no love lost between the two. In the matter related to the US Department of Defense (DoD), it seems like Altman will have the last laugh over Amodei, where ChatGPT benefits at the expense of good old Claude. To fully understand how Anthropic got publicly humiliated by Trump and DoD Secretary Hegseth over its non-compliance in supporting development of autonomous weapons and possible mass surveillance with the help of AI, and how OpenAI swooped in to "save the day", you will have to go back to mid-2025, where it all began. Away from the rivalry between ChatGPT and Gemini, Anthropic has been quietly doing stellar work with Claude Code and Cowork (and more). In fact, to its credit, it became the first AI company to sign a contract with the US defence department. Signed in July 2025, the $200 million contract allowed Anthropic to integrate its models into US defence mission workflows on classified networks, according to reports. At the time, the contract had usage restrictions - essential guardrails that prohibited applications of Anthropic's AI for creating autonomous weapons and mass surveillance. And the US Pentagon had originally agreed to these restrictions, but tensions soon began to rise. In January 2026, as Secretary of the DoD, Pete Hegseth asked all US Defense Department AI contracts to have legal language added that allowed the US Defense Department to deploy AI models without restrictions - of course, for "any lawful use" within 180 days. This new legal edit to the contract was in direct opposition to Anthropic's restrictions. By early February 2026, Pentagon officials told Anthropic about their concerns: that any company's guardrails could stand in the way of critical actions in a time of war - like responding to a missile launched toward the United States. Actions that needed split-second decision making, sped up by the use of AI, of course. Anthropic, however, remained committed to its belief that its AI should not be used for the development of autonomous weapons systems. On February 26, Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a deadline of 5 pm on Friday, February 27 - to relent and allow unrestricted use of the company's AI models "for all lawful purposes." If not, Anthropic would be deemed a supply chain risk and be legally forced to comply under the US Defense Production Act. In response, Anthropic CEO Amodei said his company won't be intimidated. "These threats do not change our position: we cannot in good conscience accede to their request," he wrote in his Thursday statement. This was primarily because Anthropic believed current AI models aren't reliable enough for autonomous weapons deployments, and that mass domestic surveillance would violate fundamental US rights. 
This is what led to President Trump ordering the US government to stop using Anthropic's products with a Truth Social post on February 27. "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War," Trump wrote on Truth Social. "I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology." Defense Secretary Hegseth also said that he was labeling Anthropic a supply chain risk to national security, blacklisting it from working with the US military or contractors going forward. In response, on February 27, Anthropic posted a statement saying it had "not yet received direct communication" from either the Pentagon or Trump. "We will challenge any supply chain risk designation in court," it added, showing no signs of backing down. Of course, Anthropic's non-compliance doesn't help the US Department of Defense - especially at a time when Israel has opened pre-emptive strikes against Iran in the Middle East. It still needs a top AI company to step in and save the day. Thank heavens for OpenAI and Sam Altman, right? OpenAI CEO Sam Altman said late Friday that his company had agreed to terms with the Department of Defense on use of its AI models. "Tonight, we reached an agreement with the Department of War to deploy our models in their classified network," Altman wrote. But here's the irony, if you read Altman's entire post. Altman wrote, "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement." In other words, OpenAI got the Pentagon to agree in writing to pretty much the exact same restrictions that Anthropic had been demanding all these months - and for which Anthropic got blacklisted. Unlike Anthropic, OpenAI and Sam Altman had been savvier in their discussions with top US Department of Defense officials, and the company had already allowed its AI models to be used by the DoD for "all lawful uses," after months of internal deliberations. OpenAI was comfortable with this because so many safeguards were already built into its models, and also because, as Altman wrote, "In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome." And that, in a nutshell, is the story so far. Having built its entire brand on AI safety and ethics, Anthropic lost out on partnering with the US Pentagon. In contrast, OpenAI swooped in to plug the hole left by Anthropic and earn the prestige of being the US Department of Defense's classified AI partner.
[174]
Trump Anthropic ban effect: Pentagon turns to OpenAI to deploy AI, here's what happened
OpenAI CEO Sam Altman has announced that the company has reached a new agreement with the Department of War. US President Donald Trump has ordered all federal agencies in the country to stop using Anthropic AI. The decision comes after months of growing disagreement between the Pentagon and Anthropic over how the military could use the AI systems. A few hours after Trump's announcement, OpenAI CEO Sam Altman revealed that the company has reached a new agreement with the Department of War. In a post on Truth Social, Trump announced, 'I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology. We don't need it, we don't want it, and will not do business with them again!' He added that the agencies would have six months to end any existing contracts with Anthropic. Soon after, Defence Secretary Pete Hegseth called Anthropic a 'Supply-Chain Risk to National Security.' He further ordered that 'effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.' Anthropic has also responded with a blog post, saying it has 'not yet received direct communication' from either the Pentagon or Trump. The company also said it would fight back in court. 'We will challenge any supply chain risk designation in court.' Just hours after the ban was announced, Altman said on X that his AI company has reached an agreement with the Department of War. He said OpenAI would deploy its AI models on the department's classified networks. Altman praised the Pentagon, saying it 'displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.' 'AI safety and wide distribution of benefits are the core of our mission,' he added. 'Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.'
Defense Secretary Pete Hegseth has threatened to invoke the Defense Production Act or label Anthropic a supply chain risk unless the $380 billion AI company grants unfettered access to its Claude models for all military applications by Friday. CEO Dario Amodei refuses to budge on two red lines: mass surveillance of Americans and fully autonomous weapons with no human in the loop.
US Defense Secretary Pete Hegseth has issued a stark ultimatum to Anthropic: grant the Pentagon unrestricted access to its AI models for all lawful military applications by Friday at 5:01 PM, or face severe consequences. The threat marks a dramatic escalation in tensions between the $380 billion AI company and the military over ethical boundaries in military AI deployment [1]. During tense talks in Washington on Tuesday, Hegseth summoned Anthropic CEO Dario Amodei and threatened to either label the company a supply chain risk (a designation typically reserved for foreign adversaries) or invoke the Defense Production Act, a Cold War-era measure that would compel Anthropic to comply regardless of its objections [1]. A senior Pentagon official stated that if Anthropic doesn't "get on board," Hegseth "will ensure the Defense Production Act is invoked on Anthropic, compelling them to be used by the Pentagon regardless of if they want to or not" [1].

Amodei has drawn two firm ethical boundaries that Anthropic refuses to cross: no mass surveillance of Americans and no fully autonomous weapons without a human in the loop. In a statement released Thursday, just hours before the Pentagon deadline, Amodei declared he "cannot in good conscience accede to [the Pentagon's] request" for unrestricted access to AI systems [4].

"In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values," Amodei wrote. "Some uses are also simply outside the bounds of what today's technology can safely and reliably do" [4]. He pointed out the inherent contradiction in Hegseth's dual threats: "One labels us a security risk; the other labels Claude as essential to national security" [4]. Anthropic has expressed particular concern about its models being used for lethal missions that do not have a human in the loop, arguing that state-of-the-art AI models are not reliable enough to be trusted in those contexts [1]. The company had also pushed for new rules to govern the use of AI models for mass domestic surveillance, even where that might be legal under current regulations [1].

As the Friday deadline approaches, over 300 Google employees and over 60 OpenAI employees have signed an open letter urging their company leaders to support Anthropic and refuse unilateral military use of AI for domestic mass surveillance and autonomous weaponry [2]. "They're trying to divide each company with fear that the other will give in," the letter states. "That strategy only works if none of us know where the others stand" [2]. Sam Altman, CEO of OpenAI, told CNBC on Friday morning that he doesn't "personally think the Pentagon should be threatening DPA against these companies" [2]. OpenAI subsequently reached a new agreement with the Pentagon that allows the US military to "deploy our models in their classified network" while maintaining prohibitions on domestic mass surveillance and ensuring "human responsibility for the use of force, including for autonomous weapon systems," according to Altman [5]. Altman wrote that OpenAI is "asking the DoW to offer these same terms to all AI companies" [5]. Google DeepMind Chief Scientist Jeff Dean also expressed opposition to mass surveillance by the government, writing on X that "mass surveillance violates the Fourth Amendment and has a chilling effect on freedom of expression" and that "surveillance systems are prone to misuse for political or discriminatory purposes" [2].

The standoff intensified after January 3, when US special operations forces raided Venezuela and captured Nicolás Maduro. The Wall Street Journal reported that forces used Claude during the operation via Anthropic's partnership with Palantir [3]. When an Anthropic executive reached out to Palantir to ask whether the technology had been used in the raid, the question raised immediate alarms at the Pentagon [3]. Anthropic's Claude tool has until recently been the only model working on classified missions as a result of its partnership with Palantir [1]. The company made Claude available on a Palantir platform with a cloud security level up to "secret" in late 2024, making Claude, by public accounts, the first large language model operating inside classified systems [3].

Hegseth is now negotiating with AI labs, including Google, OpenAI and Elon Musk's xAI, to replace Anthropic and integrate their technology into classified military systems. A senior Pentagon official said Musk's Grok "is on board with being used in a classified setting, while the rest of the companies are close" [1].

The collision exposes fundamental tensions as Anthropic scales rapidly while maintaining its safety-first ethos. On February 5, Anthropic released Claude Opus 4.6, its most powerful AI model, featuring the ability to coordinate teams of autonomous agents that divide up work and complete it in parallel [3]. Twelve days later, the company released Sonnet 4.6, a cheaper model that nearly matches Opus's coding and computer skills. Sonnet 4.6 can navigate web applications and fill out forms with human-level capability, and both models have working memory large enough to hold a small library [3]. Enterprise customers now make up roughly 80 percent of Anthropic's revenue, and the company closed a $30 billion funding round last week at a $380 billion valuation [3]. By every available measure, Anthropic is one of the fastest-scaling technology companies in history [3].

The Pentagon released its AI strategy last month, with Hegseth stating in a memo that "AI-enabled warfare and AI-enabled capability development will redefine the character of military affairs over the next decade" [1]. He added that the US military "must build on its lead" over foreign adversaries to make soldiers "more lethal and efficient," and that the AI race was "fueled by the accelerating pace" of innovation coming from the private sector [1].

Amodei has said Anthropic will support "national defense in all ways except those which would make us more like our autocratic adversaries" [3]. The company was founded in 2021 by former OpenAI executives who believed the industry was not taking AI safety seriously enough, positioning Claude as the ethical alternative [3]. The standoff now tests whether ethical boundaries and AI regulation can hold once autonomous agents capable of processing vast datasets and acting on conclusions are running inside classified networks for national security purposes.