5 Sources
[1]
Anthropic's apparently starting to learn that it can't have its cake and eat it when it comes to working with the military
Running an AI company isn't a cheap thing to do, especially if you're at the forefront of developing new models. You need to buy or rent hundreds of thousands of GPUs, pay huge energy and water bills, and hire lots of highly skilled and qualified staff. So, where does one find a handy source of money for all of this? In the case of Anthropic, it's turned to the military as the pot of bountiful cash, but according to one report, it's now discovering that all that glitters isn't gold.

The agreement between Anthropic and the then-named Department of Defense to "prototype frontier AI capabilities that advance U.S. national security" was announced last July, and while it was only given a capped $200 million budget (a fraction of the money it's received from Microsoft and Nvidia), the makers of Claude said they "look forward to deepening our collaboration across the Department to solve critical mission challenges".

However, Reuters claims that six months on, the relationship between the now Department of War and Anthropic has taken a bit of a turn, and it's apparently all down to the safeguards built into the company's models: those that specifically prevent them from being deployed in scenarios such as autonomous weapon targeting and domestic surveillance. It's being claimed that Anthropic is arguing that the presence of these safeguards is a key part of its usage policies, whereas the DoW takes the view that as long as no laws are being broken, it should be free to use a commercial AI however it sees fit, including the removal of the safeguards. Hence the two are at loggerheads.

Given Anthropic's history, though, you'd be forgiven for thinking that the AI company would surely have expected this right from the start. After all, it has already teamed up with the likes of Lawrence Livermore National Laboratory and Palantir in the past, all for Claude to be used in classified areas of security and intelligence. So if all the above is an accurate description of the current state of affairs between the AI firm and the US military, I find it hard to believe that Anthropic wouldn't expect these demands from the Department of War.

Naturally, we have no idea as to precisely what has been discussed between the two parties. It might be a matter of money, for example, with Anthropic effectively saying, 'Sure, we can switch all that off, but that's a whole pile of extra work, which is another contract entirely.' That said, Anthropic already has a special version of Claude for "US national security customers", so why would the DoW not be using that one?

Whatever the situation may truly be, I suspect that Anthropic is beginning to learn that while it's all well and good to have a set of universal usage standards, the moment you make exceptions to some of them for specific customers, others will want the same. And if that customer just so happens to be the military, with its enormous cake of money, then should you refuse the slice you've been given, other AI companies will surely rush in to pick up the crumbs.
[2]
Exclusive: Pentagon Clashes With Anthropic Over Military AI Use
By Deepa Seetharaman, David Jeans and Jeffrey Dastin WASHINGTON/SAN FRANCISCO, Jan 29 (Reuters) - The Pentagon and artificial-intelligence developer Anthropic are at odds over potentially eliminating safeguards that might allow the government to use its technology to target weapons autonomously and conduct U.S. domestic surveillance, three people familiar with the matter told Reuters. The discussions represent an early test case for whether Silicon Valley - in Washington's good graces after years of tensions - can sway how U.S. military and intelligence personnel deploy increasingly powerful AI on the battlefield. After weeks of contract talks, the U.S. Department of Defense and Anthropic are at a standstill, six people familiar with the matter said, on condition of anonymity. The company's position on how its AI tools can be used has intensified disagreements between it and the Trump administration, details of which have not been previously reported. (Reporting By Deepa Seetharaman and Jeffrey Dastin in San Francisco and David Jeans in Washington, Editing by Franklin Paul)
[3]
Pentagon clashes with Anthropic over military AI use, sources say
The Pentagon is at odds with artificial-intelligence developer Anthropic over safeguards that would prevent the government from deploying its technology to target weapons autonomously and conduct U.S. domestic surveillance, three people familiar with the matter told Reuters.

The discussions represent an early test case for whether Silicon Valley, in Washington's good graces after years of tensions, can sway how U.S. military and intelligence personnel deploy increasingly powerful AI on the battlefield.

After extensive talks under a contract worth up to $200 million, the U.S. Department of Defense and Anthropic are at a standstill, six people familiar with the matter said, on condition of anonymity. The company's position on how its AI tools can be used has intensified disagreements between it and the Trump administration, the details of which have not been previously reported.

A spokesperson for the Defense Department, which the Trump administration renamed the Department of War, did not immediately respond to requests for comment. Anthropic said its AI is "extensively used for national security missions by the U.S. government and we are in productive discussions with the Department of War about ways to continue that work."

The spat, which could threaten Anthropic's Pentagon business, comes at a delicate time for the company. The San Francisco-based startup is preparing for an eventual public offering. It also has spent significant resources courting U.S. national security business and sought an active role in shaping government AI policy. Anthropic is one of a few major AI developers that were awarded contracts by the Pentagon last year. Others were Alphabet's Google, Elon Musk's xAI and OpenAI.

Weapons targeting

In its discussions with government officials, Anthropic representatives raised concerns that its tools could be used to spy on Americans or assist weapons targeting without sufficient human oversight, some of the sources told Reuters. The Pentagon has bristled at the company's guidelines.

In line with a January 9 department memo on AI strategy, Pentagon officials have argued they should be able to deploy commercial AI technology regardless of companies' usage policies, so long as they comply with U.S. law, sources said.

Still, Pentagon officials would likely need Anthropic's cooperation moving forward. Its models are trained to avoid taking steps that might lead to harm, and Anthropic staffers would be the ones to retool its AI for the Pentagon, some of the sources said.

Anthropic's caution has drawn conflict with the Trump administration before, Semafor has reported. In an essay on his personal blog, Anthropic CEO Dario Amodei warned this week that AI should support national defense "in all ways except those which would make us more like our autocratic adversaries."

Amodei was among Anthropic's co-founders critical of fatal shootings of U.S. citizens protesting immigration enforcement actions in Minneapolis, which he described as a "horror" in a post on X. The deaths have compounded concern among some in Silicon Valley about government use of their tools for potential violence.
[4]
Pentagon Clashes With Anthropic Over Military AI Use: Report
WASHINGTON/SAN FRANCISCO, Jan 29 (Reuters) - The Pentagon and artificial-intelligence developer Anthropic are at odds over potentially eliminating safeguards that might allow the government to use its technology to target weapons autonomously and conduct U.S. domestic surveillance, three people familiar with the matter told Reuters. The discussions represent an early test case for whether Silicon Valley - in Washington's good graces after years of tensions - can sway how U.S. military and intelligence personnel deploy increasingly powerful AI on the battlefield. After weeks of talks under a contract worth up to $200 million, the U.S. Department of Defense and Anthropic are at a standstill, six people familiar with the matter said, on condition of anonymity. The company's position on how its AI tools can be used has intensified disagreements between it and the Trump Administration, details of which have not been previously reported.
[5]
Amazon-Backed Anthropic, Pentagon Clash Over AI Safeguards -- Officials Push Back Against Limits On Autonomous Weapons, Surveillance: Report - Amazon.com (NASDAQ:AMZN)
Amazon.com, Inc. (NASDAQ:AMZN)-backed AI developer Anthropic is reportedly in a standoff with the Pentagon over how its artificial intelligence tools can be used.

Pentagon Vs. Anthropic: A High-Stakes AI Dispute

The dispute centers on whether Anthropic's AI models, which include safeguards to prevent harmful actions, can be deployed by U.S. military and intelligence agencies without restrictions, reported Reuters, citing people familiar with the matter. As per the sources, Anthropic raised concerns that its technology could be used to target weapons autonomously or spy on Americans without human oversight.

The U.S. government has "extensively used" Anthropic's AI for national security missions and the company is engaged in productive discussions about ways to continue that work, an Anthropic spokesperson told the publication. Anthropic did not immediately respond to Benzinga's request for comments.

Pentagon officials, citing a Jan. 9 department memo on AI strategy, maintain that commercial AI should be deployable as long as it complies with U.S. law, regardless of corporate usage policies, the report added.

Ethics, National Security And Silicon Valley Influence

The standoff represents an early test of how Silicon Valley companies can shape the ethical deployment of AI in military contexts. Anthropic CEO Dario Amodei this week warned in a blog post that AI should support national defense "in all ways except those which would make us more like our autocratic adversaries." The San Francisco-based startup is preparing for a potential public offering while investing heavily in U.S. national security partnerships.

Anthropic Projects $18 Billion In 2026 Revenue

Anthropic has boosted its 2026 revenue projection by 20%, now expecting sales to reach $18 billion this year and $55 billion in 2027. Its popular AI model, Claude, achieved a $1 billion revenue run rate just six months after its public launch. In addition, Anthropic recently completed a funding round valuing the company at $350 billion.
The Pentagon and AI developer Anthropic have reached a standstill in contract negotiations over safeguards preventing autonomous weapons targeting and domestic surveillance. The dispute, arising under a contract worth up to $200 million, tests whether Silicon Valley can influence how the U.S. military deploys AI on the battlefield, with Anthropic insisting on its ethical AI standards while Pentagon officials argue they should be free to deploy commercial AI as they see fit.
The Pentagon and Anthropic are locked in a dispute over AI safeguards that could reshape how Silicon Valley companies influence military technology deployment. After weeks of contract talks under a deal worth up to $200 million, the U.S. Department of Defense and the San Francisco-based AI developer have reached an impasse, according to Reuters [2]. Six people familiar with the matter confirmed the standstill, revealing that the company's position on usage policies has intensified disagreements with the Trump administration [3].
The conflict centers on whether Anthropic must eliminate safeguards built into its Claude models that specifically prevent deployment in scenarios involving autonomous weapons targeting and domestic surveillance. In discussions with government officials, Anthropic representatives raised concerns that its tools could be used to spy on Americans or assist weapons targeting without sufficient human oversight [3]. The company maintains these safeguards are a key part of its ethical AI standards and usage policies.
Source: PC Gamer
Pentagon officials, now operating under the Trump administration's renamed Department of War, have bristled at Anthropic's guidelines. Citing a January 9 department memo on AI strategy, Pentagon officials argue they should be able to deploy commercial AI technology regardless of companies' usage policies, as long as they comply with U.S. law [3]. This position directly challenges Silicon Valley's influence over how intelligence agencies and military personnel use increasingly powerful AI on the battlefield.

Yet Pentagon officials would likely need Anthropic's cooperation moving forward. The company's models are trained to avoid taking steps that might lead to harm, and Anthropic staffers would be the ones to retool its AI for military applications [3]. An Anthropic spokesperson stated that its AI is "extensively used for national security missions by the U.S. government" and that the company is "in productive discussions with the Department of War about ways to continue that work" [3].

Anthropic CEO Dario Amodei addressed the ethical concerns in a blog post this week, warning that AI should support national defense "in all ways except those which would make us more like our autocratic adversaries" [3]. Amodei, who was among Anthropic's co-founders critical of recent fatal shootings of U.S. citizens protesting immigration enforcement actions in Minneapolis, described the incidents as a "horror" in a post on X [3]. These deaths have compounded concern among some in Silicon Valley about government use of their tools for potential violence.
The spat threatens Anthropic's Pentagon business at a delicate time for the Amazon-backed company [5]. The startup is preparing for an eventual public offering while having spent significant resources courting U.S. national security business and seeking an active role in shaping government AI policy [3]. Anthropic is one of a few major AI developers awarded contracts by the Pentagon last year, alongside Google, OpenAI, and Elon Musk's xAI [3].

The company has boosted its 2026 revenue projection by 20 percent, now expecting sales to reach $18 billion this year and $55 billion in 2027 [5]. Claude achieved a $1 billion revenue run rate just six months after its public launch, and Anthropic recently completed a funding round valuing the company at $350 billion [5].

Given Anthropic's history of partnering with Lawrence Livermore National Laboratory and Palantir for classified security and intelligence work, observers question whether the company anticipated these demands from the Department of War [1]. Anthropic already has a special version of Claude for "US national security customers" [1]. The discussions represent an early test case for whether Silicon Valley can uphold its ethical standards while pursuing lucrative government contracts; if the company doesn't accept the terms, other AI companies may rush in to capture the Pentagon's substantial budget [1].