26 Sources
[1]
Anthropic and the Pentagon are reportedly arguing over Claude usage | TechCrunch
The Pentagon is pushing AI companies to allow the U.S. military to use their technology for "all lawful purposes," but Anthropic is pushing back, according to a new report in Axios. The government is reportedly making the same demand to OpenAI, Google, and xAI. An anonymous Trump administration official told Axios that one of those companies has agreed, while the other two have supposedly shown some flexibility. Anthropic, meanwhile, has reportedly been the most resistant. In response, the Pentagon is apparently threatening to pull the plug on its $200 million contract with the AI company. In January, the Wall Street Journal reported that there was significant disagreement between Anthropic and Defense Department officials over how its Claude models could be used. The WSJ subsequently said that Claude was used in the U.S. military's operation to capture then-Venezuelan President Nicolás Maduro. Anthropic did not immediately respond to TechCrunch's request for comment. A company spokesperson told Axios that the company has "not discussed the use of Claude for specific operations with the Department of War" but is instead "focused on a specific set of Usage Policy questions -- namely, our hard limits around fully autonomous weapons and mass domestic surveillance."
[2]
Anthropic, Pentagon Reportedly at Odds Over How Military Will Use Claude AI
The Pentagon is considering ending its work with Anthropic over the AI company's insistence that some of its Claude safeguards remain in place for military operations. As The Wall Street Journal reports, the Defense Department used Claude (via its Palantir contract) to help it carry out the attack on Venezuela and capture former President Nicolás Maduro. An Anthropic employee then asked Palantir for details on what happened, the WSJ says, though a spokesperson insists the company only has "routine discussions on strictly technical matters...with industry partners, including Palantir." "Anthropic's conversations with the [Defense Department] to date have focused on a specific set of Usage Policy questions -- namely, our hard limits around fully autonomous weapons and mass domestic surveillance -- none of which relate to current operations," a spokesperson tells Axios. In 2024, Anthropic signed a deal with Palantir to have Claude "support government operations," including data processing, identifying trends, and "helping US officials to make more informed decisions in time-sensitive situations." Last year, Anthropic signed on for Palantir's FedStart tool, which allows Anthropic to offer Claude to federal government employees, who can use it to "enhance their efficiency in writing, analyzing data, and solving complex problems." Anthropic also has a $200 million contract with the Pentagon, but its concerns about how the government uses Claude have reportedly put that contract at risk. Claude is accessible on the Pentagon's classified networks, Axios says, and the Defense Department is negotiating to also allow OpenAI's ChatGPT, Google's Gemini, and xAI's Grok. They would have to agree to let the military use their AI for "all lawful purposes," but Anthropic is reportedly insisting that Claude not be used for "the mass surveillance of Americans and fully autonomous weaponry." 
"Our nation requires that our partners be willing to help our warfighters win in any fight," a Pentagon spokesperson told the WSJ. "Everything's on the table," including the cancellation of the Anthropic contract, a senior administration official told Axios. "But there'll have to be an orderly replacement [for] them, if we think that's the right answer."
[3]
Anthropic's Pentagon Talks Hit Surveillance and Weapons Snag
Anthropic PBC's talks about extending a contract with the Pentagon are being held up over additional protections the artificial intelligence company wants to put on its Claude tool, a person familiar with the matter said. Anthropic wants to put guardrails in place to stop Claude from being used for mass surveillance of Americans or to develop weapons that can be deployed without a human involved, the person said, asking not to be identified because the negotiations are private. The Pentagon wants to be able to use Claude as long as its deployment doesn't break the law. Axios reported on the disagreement earlier. Bloomberg's Mandeep Singh reports.
[4]
Pentagon threatens to cut off Anthropic in AI safeguards dispute: Report
The Pentagon is considering ending its relationship with artificial intelligence company Anthropic over its insistence on keeping some restrictions on how the U.S. military uses its models, Axios reported on Saturday, citing an administration official. The Pentagon is pushing four AI companies to let the military use their tools for "all lawful purposes," including in areas of weapons development, intelligence collection and battlefield operations, but Anthropic has not agreed to those terms and the Pentagon is getting fed up after months of negotiations, according to the Axios report. The other companies include OpenAI, Google and xAI. An Anthropic spokesperson said the company had not discussed the use of its AI model Claude for specific operations with the Pentagon. The spokesperson said conversations with the U.S. government so far had focused on a specific set of usage policy questions, including hard limits around fully autonomous weapons and mass domestic surveillance, none of which related to current operations. The Pentagon did not immediately respond to Reuters' request for comment. Anthropic's AI model Claude was used in the U.S. military's operation to capture former Venezuelan President Nicolas Maduro, with Claude deployed via Anthropic's partnership with data firm Palantir, the Wall Street Journal reported on Friday. Reuters reported on Wednesday that the Pentagon was pushing top AI companies including OpenAI and Anthropic to make their artificial intelligence tools available on classified networks without many of the standard restrictions that the companies apply to users.
[5]
Decoding the A.I. Beliefs of Anthropic and Its C.E.O., Dario Amodei
The Defense Department has approved the cutting-edge artificial intelligence technology built by the San Francisco start-up Anthropic for use with classified tasks. But Anthropic, led by its chief executive, Dario Amodei, does not want the Pentagon using the technology in certain situations, such as the use of autonomous weapons and domestic surveillance. Now, the Pentagon and Anthropic are locked in a battle over the future of their contract, which is worth as much as $200 million. The Defense Department may also forbid its contractors from using Anthropic's technology on government projects. With several other A.I. companies hoping to provide similar technology to the Defense Department -- including OpenAI, Google and Elon Musk's xAI -- the tussle between Anthropic and the Pentagon could damage Anthropic's growing business selling A.I. to big corporate customers. Here is a guide to Anthropic, Dr. Amodei and the company's A.I. philosophy. What is Anthropic? Anthropic was founded by Dr. Amodei and his sister, Daniela Amodei, who worked together at OpenAI. They created their company in early 2021 after a series of disagreements with OpenAI executives over how its A.I. should be funded, built and released. They founded Anthropic alongside about 15 other former OpenAI employees who shared their views. The group that left OpenAI to start Anthropic oversaw the creation of OpenAI's large language models, the technology that eventually powered the chatbot ChatGPT. At Anthropic, they built a chatbot called Claude. But they did not release the chatbot until after OpenAI touched off the A.I. boom with the release of ChatGPT in late 2022. What are Dr. Amodei's views on the dangers of A.I.? Dr. Amodei has long expressed concern that A.I. could be used to spread disinformation, power mass surveillance or enable autonomous weapons. In 2019, while they were still at OpenAI, Dr. 
Amodei and other Anthropic founders were part of a team that announced that they were not releasing an early large language model called GPT-2 because it could be used to spread disinformation. But they eventually changed course. In a podcast interview in 2023, Dr. Amodei said there was a 10 percent to 25 percent chance that A.I. could destroy humanity. But he has since tried to distance himself from that, saying he is not "a doomer." In October 2024, Dr. Amodei unveiled a more optimistic view of A.I., publishing a 14,000-word essay on the potential benefits of the technology. What are the roots of Dr. Amodei's philosophy? The Amodeis and many of Anthropic's co-founders have ties to a community that has long aimed to ensure that A.I. is built and deployed in a safe way. Many people in this community call themselves Rationalists or effective altruists. This community believes that A.I. could eventually find a cure for cancer or solve climate change, but they worry that A.I. might do things their creators did not intend and cause people serious harm. Ms. Amodei's husband, Holden Karnofsky, is one of the founders of the effective altruist movement. The Amodeis and Mr. Karnofsky once lived in a Silicon Valley group house with many other members of the community, including some of the other founders of Anthropic. But more recently, the Amodeis have said they are not part of the effective altruist community. What are the politics of Anthropic's founders? The Anthropic founders have said that their political views are separate from their views on A.I. But Dr. Amodei was a vocal supporter of Kamala Harris in 2024 and did not hide his disdain for Donald J. Trump. In a recent social media post, Ms. Amodei said that the deaths of protesters in Minneapolis "is not what America stands for," while praising President Trump for calling for an investigation. Dr. Amodei had close ties to officials in the Biden administration, including Ben Buchanan, Biden's A.I. adviser.
Anthropic has since hired Mr. Buchanan and Tarun Chhabra, a former National Security Council official for technology. After the Trump administration reversed Biden-era policies that sought to restrict China's access to computer chips needed to build A.I., Dr. Amodei criticized the move. He reiterated this stance earlier this year at the World Economic Forum in Davos, Switzerland. This month, Anthropic named Chris Liddell, a deputy chief of staff in the first Trump administration, to its board. What was their disagreement with OpenAI? Dr. Amodei and others in his circle were unhappy that OpenAI had tied itself to Microsoft through an agreement that required the start-up to share its technologies with the tech giant. They worried OpenAI was moving in a commercial direction that would make it hard to control its technologies. They also had personal disagreements with two of OpenAI's founders, its chief executive, Sam Altman, and its president, Greg Brockman. Dr. Amodei and others went to OpenAI's board to try to push out Mr. Altman. After they failed, they left the company. (The New York Times sued OpenAI and Microsoft in 2023, accusing them of copyright infringement of news content related to A.I. systems. The two companies have denied those claims.) Has Anthropic moved in a commercial direction? Anthropic's co-founders wanted to build A.I. with safety guardrails and structured the company as a public benefit corporation, which is aimed at creating public and social good. But they are part of a very commercial race to build A.I. Earlier this month, Anthropic raised a new funding round that values the company at $380 billion. After raising more than $57 billion, Anthropic is considering going public on Wall Street over the next 12 to 18 months. Do other companies hold similar views on A.I.? Google's A.I. work is overseen by Demis Hassabis, who joined the company in 2014 when his start-up DeepMind was acquired for $650 million. The tech giant's primary A.I.
lab is called Google DeepMind. When Dr. Hassabis and his co-founders joined the company, they demanded two conditions: No DeepMind technology could be used for military purposes, and its most powerful technologies must be overseen by an independent board of technologists and ethicists. In 2018, during the first Trump administration, Google backed away from a military contract after protests from employees. But it has since agreed to work with the Pentagon, and this work could include autonomous weapons. The DeepMind ethics board was disbanded. OpenAI has said it would allow the Pentagon to use its technologies for any lawful purposes, but that it will provide its technology to the Defense Department with certain guardrails in place. Mr. Musk did not respond to requests for comment on xAI's approach to military technologies. Google declined to comment.
[6]
Pentagon Is Close to Cutting Ties With Anthropic, Report Says
The Pentagon is close to cutting ties with Anthropic and may label the artificial intelligence company a supply chain risk after becoming frustrated with restrictions on how it can use the technology, Axios reported. The breakdown follows months of contentious negotiations about how the military can use the Claude tool, Axios said, citing a source familiar with the talks who it didn't identify. In particular, Anthropic wants to make sure its AI isn't used to spy on citizens on a large scale or to develop weapons that can be deployed without a human involved, the article said. The government wants to be allowed to use Claude for "all lawful purposes," it said. If the AI company is deemed a supply chain risk, any company that wants to do business with the military will have to cut ties with Anthropic, Axios said, citing a senior Pentagon official. Pentagon spokesman Sean Parnell told Axios that the relationship was being reviewed. A spokesperson for Anthropic told Axios it was having "productive conversations, in good faith" with the Department of War and said the company is committed to using AI for national security. A representative for Anthropic did not immediately respond to a Bloomberg request for comment. Anthropic won a two-year agreement with the US Defense Department last year that involved a prototype of its Claude Gov models and Claude for Enterprise. The Anthropic negotiations may set the tone for talks with OpenAI, Google and xAI, which aren't yet used for classified work, Axios said. Anthropic, founded by former OpenAI researchers, positions itself as a more responsible AI company that aims to avoid any catastrophic harms from the advanced technology.
[7]
The Pentagon Wants to Raw Dog the Latest AI Models on Classified Systems
The Pentagon is looking to expand its use of artificial intelligence across both unclassified and classified networks, but negotiations with major AI companies have hit a sticking point. Defense officials want access to the most advanced models without any usage restrictions or heavy guardrails. According to Reuters, military officials argue they should be allowed to deploy AI however they see fit, as long as it complies with U.S. law. The push comes as OpenAI announced Monday that it has made a customized version of ChatGPT available through the War Department’s AI platform, GenAI.mil. The platform, which launched in December, is used by roughly 3 million civilian and military personnel and already includes tailored versions of tools from xAI and Google’s Gemini. "We are pushing all of our chips in on artificial intelligence as a fighting force. The Department is tapping into America's commercial genius, and we're embedding generative AI into our daily battle rhythm," Secretary of War Pete Hegseth said in a press release about the platform. "AI tools present boundless opportunities to increase efficiency, and we are thrilled to witness AI's future positive impact across the War Department." OpenAI’s version of ChatGPT on the platform is designed to help with day-to-day tasks like summarizing policy documents, drafting reports, and assisting with research. But Reuters reports that Pentagon officials are pushing to roll out AI systems across all classification levels, potentially opening the door to more sensitive applications like mission planning or weapons targeting. An unnamed official told Reuters that the Pentagon is “moving to deploy frontier AI capabilities across all classification levels.” Currently, Anthropic’s models are available in select classified settings through third-party providers, but with significant usage restrictions.
Reuters reports that Anthropic executives have told military officials they do not want their systems used for autonomous weapons targeting or domestic surveillance. Meanwhile, Semafor reports that Anthropic has not agreed to allow its models to be used for “all lawful uses.” As of now, its tools are not available on GenAI.mil. The negotiations leave AI companies walking a delicate tightrope. On one side are employees who oppose military use of their systems and fear it will make it hard to recruit future employees. On the other side is the Pentagon, which represents a massive customer and a powerful political force. Semafor reported that Anthropic’s stance has “drawn ire from the Pentagon and the White House.” At the same time, some OpenAI employees have expressed concerns about giving competitors an advantage by stepping back from defense work, according to Semafor. The Pentagon, OpenAI, Anthropic, Google, and xAI did not immediately respond to requests for comment from Gizmodo.
[8]
Pentagon may sever Anthropic relationship over AI safeguards - Claude maker expresses concerns over 'hard limits around fully autonomous weapons and mass domestic surveillance'
A rift between the Pentagon and several AI companies has emerged over how their models can be used as part of operations. The Pentagon has asked AI providers Anthropic, OpenAI, Google, and xAI to allow the use of their models for "all lawful purposes." Anthropic has voiced fears its Claude models would be used in autonomous weapons systems and mass domestic surveillance, with the Pentagon threatening to terminate its $200 million contract with the AI provider in response. Speaking to Axios, an anonymous Trump administration advisor said one of the companies has agreed to allow the Pentagon full use of its model, with the other two showing flexibility in how their AI models can be used. The Pentagon's relationship with Anthropic has been shaken since January over the use of its Claude models, with the Wall Street Journal reporting that Claude was used in the US military operation to capture Venezuelan then-President Nicolás Maduro. An Anthropic spokesperson told Axios that the company has "not discussed the use of Claude for specific operations with the Department of War". The company did state that its Usage Policy with the Pentagon was under review, with specific reference to "our hard limits around fully autonomous weapons and mass domestic surveillance." Chief Pentagon spokesman Sean Parnell stated that "Our nation requires that our partners be willing to help our warfighters win in any fight." Security experts, policy makers, and Anthropic Chief Executive Dario Amodei have called for greater regulation of AI development and stronger safeguarding requirements, with specific reference to the use of AI in weapons systems and military technology.
[9]
Exclusive: Pentagon warns Anthropic will "pay a price" as feud escalates
Why it matters: That kind of penalty, designating Anthropic a supply chain risk, is usually reserved for foreign adversaries. Chief Pentagon spokesman Sean Parnell told Axios: "The Department of War's relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people." The big picture: Anthropic's Claude is the only AI model currently available in the military's classified systems, and a leader in many business applications. Pentagon officials heartily praise Claude's capabilities. * As a sign of how embedded the software already is within the military, Claude was used during the Maduro raid in January, as Axios reported on Friday. Breaking it down: Anthropic and the Pentagon have held months of contentious negotiations over the terms under which the military can use Claude. * Anthropic CEO Dario Amodei takes these issues very seriously, but is a pragmatist. * Anthropic is prepared to loosen its current terms of use, but wants to ensure its tools aren't used to spy on Americans en masse, or to develop weapons that fire with no human involvement. The Pentagon claims that's unduly restrictive, and that there are all sorts of gray areas that would make it unworkable to operate on such terms. Pentagon officials are insisting in negotiations with Anthropic and three other big AI labs -- OpenAI, Google and xAI -- that the military be able to use their tools for "all lawful purposes." * A source familiar with the dynamics said senior defense officials have been frustrated with Anthropic for some time, and embraced the opportunity to pick a public fight. The other side: Existing mass surveillance law doesn't contemplate AI. The Pentagon can already collect troves of people's information, from social media posts to concealed carry permits, and privacy advocates worry AI could supercharge that authority to target civilians.
* An Anthropic spokesperson said: "We are having productive conversations, in good faith, with DoW on how to continue that work and get these new and complex issues right." * The spokesperson reiterated the company's commitment to using frontier AI for national security, noting Claude was the first to be used on classified networks. Another Anthropic official told us: "There are laws against domestic mass surveillance, but they have not in any way caught up to what AI can do." * For instance, the official said, "AI can be used to analyze any and all publicly available information at scale. DoW is legally permitted to collect publicly available information -- so-called 'open source intelligence' -- including everything posted on social media, public forums and online news. That's always been true, but the scale was limited by human capacity." * "With AI," the official added, "the DoW could continuously monitor and analyze the public posts of every American, cross-referenced against public voter registration rolls and demonstration permit records, to automatically flag civilians who live near military bases, have criticized military policy online, own firearms, and attended rallies." The stakes: Designating Anthropic a supply chain risk would require the plethora of companies that do business with the Pentagon to certify that they don't use Claude in their own workflows. * Some of them almost certainly do, given the wide reach of Anthropic, which recently said eight of the 10 biggest U.S. companies use Claude. * The contract the Pentagon is threatening to cancel is valued at up to $200 million, a small fraction of Anthropic's $14 billion in annual revenue. Friction point: A senior administration official said that competing models "are just behind" when it comes to specialized government applications, complicating an abrupt switch. 
The intrigue: The Pentagon's hardball with Anthropic sets the tone for its negotiations with OpenAI, Google and xAI, all of which have agreed to remove their safeguards for use in the military's unclassified systems, but are not yet used for more sensitive classified work.
[10]
Pentagon Issues Threat to Anthropic
Over the weekend, the Wall Street Journal reported that the US military had used Anthropic's Claude AI chatbot for its invasion of Venezuela and kidnapping of the country's president Nicolás Maduro. The exact details of Claude's use remain hazy, but the incident demonstrated the Pentagon's prioritization of the use of AI, and how tools available to the public may already be involved in military operations. And when Anthropic learned about it, its response was icy. An Anthropic spokesperson remained tight-lipped on whether "Claude, or any other AI model, was used for any specific operation, classified or otherwise" in a statement to the WSJ, but noted that "any use of Claude -- whether in the private sector or across government -- is required to comply with our Usage Policies, which govern how Claude can be deployed." The deployment reportedly occurred through the AI company's partnership with the shadowy military contractor Palantir. Anthropic also signed an up to $200 million contract with the Pentagon last summer as part of the military's broader adoption of the tech, alongside OpenAI's ChatGPT, Google's Gemini, and xAI's Grok. Whether the Pentagon's use of Claude broke any of Anthropic's rules remains unclear. Claude's usage guidelines forbid it from being used to "facilitate or promote any act of violence," "develop or design weapons," or "surveillance." Either way, Trump administration officials are now considering cutting ties with Anthropic over the company's insistence that mass surveillance of Americans and fully autonomous weaponry remain off limits, Axios reports. "Everything's on the table," including a dialing back of the partnership, a senior administration official told Axios. "But there'll have to be an orderly replacement [for] them, if we think that's the right answer." 
Anthropic reportedly reached out to Palantir to figure out whether Claude was used during the attacks on Venezuela, according to Axios' sourcing, a sign of a broader culture clash and growing concern about the tech being implicated in military operations. Late last month, Anthropic also clashed with the Pentagon over the limits of its $200 million contract, including how many law enforcement agencies, such as Immigration and Customs Enforcement and the Federal Bureau of Investigation, could deploy it, per prior reporting by the WSJ. Anthropic CEO Dario Amodei has repeatedly warned of the inherent risks of the tech his company is working on, calling for more government oversight, regulation, and guardrails. He has also raised concerns over AI being used in lethal autonomous operations and domestic surveillance. In a lengthy essay posted earlier this year, Amodei argued that large-scale AI-facilitated surveillance should be considered a crime against humanity. Meanwhile, defense secretary Pete Hegseth doesn't appear to share these hangups. Earlier this year, he promised that the Pentagon wouldn't "employ AI models that won't allow you to fight wars," a comment that sources told the WSJ concerned discussions with Anthropic. Anthropic continues to insist that it's "committed to using frontier AI in support of US national security" in statements to both the WSJ and Axios. However, its pushback against the Pentagon appears to be going over well with its non-government users, who have been horrified by the tech being implicated in military action. "Good job Anthropic, you just became the top closed [AI] company in my books," one top post reads on the Claude subreddit.
[11]
Exclusive: Pentagon threatens to cut off Anthropic in AI safeguards dispute
Why it matters: The Pentagon is pushing four leading AI labs to let the military use their tools for "all lawful purposes," even in the most sensitive areas of weapons development, intelligence collection, and battlefield operations. Anthropic has not agreed to those terms, and the Pentagon is getting fed up after months of difficult negotiations. * Anthropic insists that two areas remain off limits: the mass surveillance of Americans and fully autonomous weaponry. The big picture: A senior administration official argued there is considerable gray area around what would and wouldn't fall into those categories, and that it's unworkable for the Pentagon to have to negotiate individual use-cases with Anthropic -- or have Claude unexpectedly block certain applications. * "Everything's on the table," including dialing back the partnership with Anthropic or severing it entirely, the official said. "But there'll have to be an orderly replacement [for] them, if we think that's the right answer." * An Anthropic spokesperson said the company remained "committed to using frontier AI in support of U.S. national security." Zoom in: The tensions came to a head recently over the military's use of Claude in the operation to capture Venezuela's Nicolás Maduro, through Anthropic's partnership with AI software firm Palantir. * According to the senior official, an executive at Anthropic reached out to an executive at Palantir to ask whether Claude had been used in the raid. * "It was raised in such a way to imply that they might disapprove of their software being used, because obviously there was kinetic fire during that raid, people were shot," the official said. The other side: The Anthropic spokesperson flatly denied that, saying the company had "not discussed the use of Claude for specific operations with the Department of War. We have also not discussed this with any industry partners outside of routine discussions on strictly technical matters."
* "Claude is used for a wide variety of intelligence-related use cases across the government, including the DoW, in line with our Usage Policy." * "Anthropic's conversations with the DoW to date have focused on a specific set of Usage Policy questions -- namely, our hard limits around fully autonomous weapons and mass domestic surveillance -- none of which relate to current operations," the spokesperson said. Friction point: Beyond the Maduro incident, the official described a broader culture clash with what the person claimed was the most "ideological" of the AI labs when it came to the potential dangers of the technology. * But the official conceded that it would be difficult for the military to quickly replace Claude, because "the other model companies are just behind" when it comes to specialized government applications. Breaking it down: Anthropic signed a contract valued up to $200 million with the Pentagon last summer. Claude was also the first model the Pentagon brought into its classified networks. * OpenAI's ChatGPT, Google's Gemini and xAI's Grok are all used in unclassified settings, and all three have agreed to lift the guardrails that apply to ordinary users for their work with the Pentagon. * The Pentagon is negotiating with them about moving into the classified space, and is insisting on the "all lawful purposes" standard for both classified and unclassified uses. * The official claimed one of the three has agreed to those terms, and the other two were showing more flexibility than Anthropic. The intrigue: In addition to CEO Dario Amodei's well-documented concerns about AI-gone-wrong, Anthropic also has to navigate internal disquiet among its engineers about working with the Pentagon, according to a source familiar with that dynamic. The bottom line: The Anthropic spokesperson said the company was still committed to the national security space.
[12]
Palantir is caught in the middle of a brewing fight between Anthropic and the Pentagon
Caught in the middle is Palantir. The defense contractor provides the secure cloud infrastructure that allows the military to use Anthropic's Claude model, but it has stayed quiet as tensions escalate. That's even as the Pentagon, per Axios, threatens to designate Anthropic a "supply chain risk," a move that could force Palantir to cut ties with one of its most important AI partners. The threat may be a negotiating tactic. But if carried out, it would have sweeping consequences, potentially barring not just Anthropic but its customers from government work. "That would just mean that the vast majority of companies that now use [Claude] in order to make themselves more effective would all of a sudden be ineligible for working for the government," says Alex Bores, a former Palantir employee who is now running for Congress in New York's 12th district. "It would be horribly hamstringing our government's ability to get things done." (Palantir did not respond to a request for comment.)
Anthropic and the Pentagon's war of words
Anthropic has, until now, maintained close ties with the military. Claude was the first frontier AI model deployed on classified Pentagon networks. Last summer, the Defense Department awarded Anthropic a $200 million contract, and the company's technology was even used in the recent U.S. operation to capture Nicolas Maduro, the Wall Street Journal reported this week.
[13]
Pentagon officials threaten to blacklist Anthropic over its military chatbot policies - SiliconANGLE
The U.S. Department of War is reportedly considering cutting all business ties with the artificial intelligence startup Anthropic PBC and designating it as a "supply chain risk" amid disagreements over how it intends to use its chatbot tool Claude. If the War Department went ahead with the move, it would be a severe blow to Anthropic, as it would require all U.S. military contractors to stop using the company's technology or risk losing their Pentagon contracts. Defense Secretary Pete Hegseth and senior Pentagon officials are said to be close to making the decision after months of trying to negotiate with Anthropic, Axios reported today. One unnamed source told the outlet: "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this." The "supply chain risk" designation is normally used to label foreign adversaries and other hostile actors, rather than American companies, which makes the threat unusually severe for a firm that is considered among the country's leading technology lights. A spokesperson for the Pentagon told Axios in a statement that the War Department's AI partnerships are currently under review, and stressed that "our nation requires our partners to be willing to help our warfighters win in any fight." Anthropic currently enjoys a privileged status as the only AI model maker to win a contract with the U.S. military. Its Claude Gov chatbot was built specifically for the U.S. national security apparatus; it is widely used by Pentagon officials and has received considerable praise. Notably, it was used extensively in the operation last month that saw U.S. special forces snatch Venezuelan President Nicolás Maduro from his residence in Caracas, according to the Wall Street Journal.
But as the company's contract comes up for renewal, there is disagreement over Anthropic's reluctance to let the Pentagon use Claude "for all lawful purposes." Apparently, the company is worried that officials might use the chatbot to conduct mass surveillance of Americans and build and run fully autonomous weapons systems. According to Anthropic, its existing restrictions are necessary to protect the privacy of U.S. citizens and prevent unchecked AI systems from targeting or harming them. But the Pentagon insists that the current limits are too restrictive and could hamper its effectiveness on the battlefield. Should the Pentagon follow through with its threat of designating Anthropic as a supply chain risk, any company that does business with the War Department would be required to certify that it does not use Claude in its workflows. That would likely cause a lot of headaches, given that Anthropic reportedly has a much stronger presence in the private sector than other AI firms, such as Google LLC and OpenAI Group PBC. According to Axios, eight out of the largest 10 U.S. companies currently use Claude. Besides making threats, the Pentagon has made it clear it has alternatives. Axios said it's currently holding talks with Google, OpenAI and Elon Musk's xAI Corp. over the possibility of the military using their chatbots instead. The report states that all three companies have agreed to remove guardrails preventing military use on unclassified systems. They're also reportedly negotiating to access classified military networks, Axios' sources said. Pentagon officials are confident that all three would be prepared to comply with its insistence on being able to use their tech for "all lawful purposes." Anthropic has tried to argue that U.S. law currently forbids domestic mass surveillance, and worries that the rapidly advancing capabilities of AI would outpace the evolution of existing statutes. 
The company declined to talk about the reported threat, but told Axios that its negotiations with the War Department are ongoing and being conducted in "good faith" to try and resolve complex policy issues. Anthropic's contract with the military is relatively small, worth around $200 million over two years, representing only a fraction of its reported $14 billion in annual revenue. Yet many more of its enterprise deals could be at risk if the Pentagon makes good on its threat.
[14]
Anthropic resists Pentagon push for autonomous weapon AI use
The Pentagon is pressuring major artificial intelligence developers to permit U.S. military applications of their technology for "all lawful purposes." This demand is directed at Anthropic, OpenAI, Google, and xAI. An anonymous Trump administration official reported that one of these firms has agreed to the terms, while two others have demonstrated flexibility. Anthropic remains the most resistant entity in these negotiations, prompting the Department of Defense to threaten the termination of a $200 million contract with the company. Friction between the AI firm and government officials is not a recent development. In January, the Wall Street Journal documented significant disagreement regarding the permissible scope of Anthropic's Claude models within defense contexts. The report further noted that the technology was utilized during the U.S. military operation to apprehend former Venezuelan President Nicolás Maduro. When contacted by TechCrunch, Anthropic did not provide an immediate response to inquiries regarding the contract dispute. Addressing the nature of the dispute, an Anthropic spokesperson told Axios that the company has "not discussed the use of Claude for specific operations with the Department of War." The spokesperson clarified that current discussions are "focused on a specific set of Usage Policy questions -- namely, our hard limits around fully autonomous weapons and mass domestic surveillance." This statement distinguishes the company's policy concerns from the Pentagon's request for broader access to its models.
[15]
Pentagon reviewing Anthropic partnership over terms of use dispute
The Pentagon is reviewing its relationship with artificial intelligence (AI) giant Anthropic over the terms of use of its AI model, which was used by the U.S. military during last month's operation to capture Venezuelan leader Nicolás Maduro. "The Department of War's relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people," chief Pentagon spokesperson Sean Parnell told The Hill in a statement on Monday. For months, the Pentagon and Anthropic have held discussions around the U.S. military's terms of use of Claude, its signature AI product. But now the Pentagon is close to cutting ties with Anthropic and labeling the San Francisco-based company as a supply chain risk, Axios reported Monday. The company wants to make sure that its tools are not used to develop weaponry that fires without human input and that its products are not used for mass surveillance on Americans. Defense Department officials have argued that those terms would confine the U.S. military and make it more difficult to work under such conditions, the outlet noted. The tensions were highlighted by the Pentagon's Venezuela operations, which reportedly prompted Anthropic to inquire about whether its technology was used in the raid. The Wall Street Journal reported last week that Claude was used through Anthropic's partnership with Palantir, which has extensive military contracts. Anthropic's contract with the Pentagon is valued at up to $200 million and was announced last July. Anthropic said it is "committed" to using frontier AI to support U.S. national security. "That's why we were the first frontier AI company to put our models on classified networks and the first to provide customized models for national security customers. 
Claude is used for a wide variety of intelligence-related use cases across the government, including the DoW, in line with our Usage Policy," an Anthropic spokesperson told The Hill in a statement on Monday. "We are having productive conversations, in good faith, with DoW on how to continue that work and get these complex issues right," the spokesperson added.
[16]
Anthropic Blasts Pentagon's Use of Its AI Tool in Venezuela Raid -- May Void $200M Contract
At stake is Anthropic's $200 million contract with the Defense Department, which is now reportedly considering voiding the deal signed last summer. Both parties have already drawn clear lines in the sand. "Any use of Claude -- whether in the private sector or across government -- is required to comply with our Usage Policies, which govern how Claude can be deployed," an Anthropic spokesman told The Wall Street Journal. But Defense Secretary Pete Hegseth bristled at the prospect of AI companies dictating how their products can and can't be used by the military. Speaking at an event promoting the Pentagon's new pact with Elon Musk's xAI last month, Hegseth said the agency will no longer employ AI models, "that won't allow you to fight wars."
[17]
Pentagon threatens to cut off Anthropic in AI safeguards dispute: Report - The Economic Times
The Pentagon is considering ending its relationship with artificial intelligence company Anthropic over its insistence on keeping some restrictions on how the U.S. military uses its models, Axios reported on Saturday, citing an administration official. The Pentagon is pushing four AI companies to let the military use their tools for "all lawful purposes," including in areas of weapons development, intelligence collection and battlefield operations, but Anthropic has not agreed to those terms and the Pentagon is getting fed up after months of negotiations, according to the Axios report. The other companies included OpenAI, Google and xAI. An Anthropic spokesperson said the company had not discussed the use of its AI model Claude for specific operations with the Pentagon. The spokesperson said conversations with the U.S. government so far had focused on a specific set of usage policy questions, including hard limits around fully autonomous weapons and mass domestic surveillance, none of which related to current operations. The Pentagon did not immediately respond to Reuters' request for comment. Anthropic's AI model Claude was used in the U.S. military's operation to capture former Venezuelan President Nicolás Maduro, with Claude deployed via Anthropic's partnership with data firm Palantir, the Wall Street Journal reported on Friday. Reuters reported on Wednesday that the Pentagon was pushing top AI companies including OpenAI and Anthropic to make their artificial intelligence tools available on classified networks without many of the standard restrictions that the companies apply to users.
[18]
Anthropic Could Face 'Supply Chain Risk' Tag As Pentagon Weighs Cutting Ties: Report - NVIDIA (NASDAQ:NVDA), Palantir Technologies (NASDAQ:PLTR)
Defense Secretary Pete Hegseth is reportedly contemplating ending the Department of War's association with AI firm Anthropic.
AI Safeguards Clash With Pentagon
The Department of War's relationship with Anthropic is under scrutiny, Chief Pentagon spokesman Sean Parnell told Axios on Monday. Another high-ranking Pentagon official told the publication that the company could soon be labeled a "supply chain risk." He also stated that it would be "an enormous pain" to disentangle from the arrangement and added that they would ensure the company "pays a price" for forcing their hand. If Anthropic were labeled a supply chain risk, companies working with the Pentagon would have to certify they don't use its Claude AI, which could be challenging. Despite the ongoing discussions, Anthropic is prepared to relax its terms of use but remains adamant about preventing its tools from being used for mass surveillance of Americans or for developing autonomous weapons. The Pentagon, however, deems these conditions excessively restrictive. The Department of War and Anthropic did not immediately respond to Benzinga's requests for comment. Notably, Anthropic's Pentagon contract is valued at up to $200 million out of its $14 billion in annual revenue, as per the publication.
[19]
Pentagon pushing AI companies to expand on classified networks, sources say
The Pentagon is pushing the top AI companies including OpenAI and Anthropic to make their artificial-intelligence tools available on classified networks without many of the standard restrictions that the companies apply to users. During a White House event on Tuesday, Pentagon Chief Technology Officer Emil Michael told tech executives that the military is aiming to make the AI models available on both unclassified and classified domains, according to two people familiar with the matter. The Pentagon is "moving to deploy frontier AI capabilities across all classification levels," an official who requested anonymity said.
[20]
Pentagon and Anthropic's clash escalates over AI use
The Pentagon may ask contractors to certify that they do not use Anthropic's (ANTHRO) Claude amid tensions over how the company's tools are used, The Wall Street Journal reported. Anthropic may face designation as a supply chain risk, exclusion from defense contracts, reputational damage, and loss of lucrative government business from potential vendor bans and Pentagon criticism. While OpenAI, Google, and xAI allow their AI models for any lawful Pentagon use, Anthropic restricts usage for domestic surveillance and autonomous lethal activities, creating conflict with Pentagon requirements. Continued standoffs with the Pentagon could halt current collaborations, prevent future contracts, provoke adverse publicity, and limit Anthropic's involvement in U.S. large-scale defense technology initiatives.
[21]
Anthropic's Deal With US Military Under Threat | PYMNTS.com
The government is also "close" to designating the company a "supply chain risk," which means anyone wishing to do business with the military would also need to end their relationship with Anthropic, Axios reported Monday (Feb. 16), citing a senior Pentagon official. "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this," that official said. The report noted that this sort of penalty is typically reserved for foreign adversaries. And Anthropic's Claude is the only AI model available in the military's classified systems, with the Pentagon championing its abilities, Axios added. But for the last few months, the report said, Anthropic and the Pentagon have been holding contentious talks on the military's permitted usage of Claude. The company is prepared to relax its terms of use, but wants assurances its tech won't be used for mass domestic spying operations or to build weapons that can be deployed with no human involvement. According to Axios, the Pentagon says this is too restrictive, with a host of gray areas that would make such terms impractical. The military is insisting in negotiations with Anthropic and other major AI firms that it be permitted to use their tools for "all lawful purposes," the report added. "The Department of War's relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight," Chief Pentagon spokesman Sean Parnell told Axios. "Ultimately, this is about our troops and the safety of the American people." An Anthropic spokesperson told Axios: "We are having productive conversations, in good faith, with DoW on how to continue that work and get these new and complex issues right."
The Pentagon last year awarded up to $200 million each to four U.S.-based AI companies developing "frontier" models -- Anthropic, Google, OpenAI and xAI -- as the military's Chief Digital and Artificial Intelligence Office (CDAO) looks to employ agentic AI to deal with national security challenges. CDAO Head Doug Matty said the goal is to tap into the best technologies developed by American AI firms to support its troops and uphold a strategic advantage. "The U.S. military is no slouch when it comes to technological innovation," PYMNTS wrote at the time. "For example, the DoD's R&D arm -- Defense Advanced Research Projects Agency or DARPA -- created Arpanet in 1969, a communications network that linked computers far apart. Arpanet later became the internet."
[22]
Pentagon's AI Push Faces Friction With Anthropic Over Usage Restrictions | PYMNTS.com
The dispute centers on a demand from the Pentagon that would allow the U.S. military to use AI tools for "all lawful purposes," including weapons development, intelligence work and battlefield missions, per Reuters. Anthropic has resisted these broader terms, maintaining limits on uses such as fully autonomous weapons and mass domestic surveillance, which Pentagon officials see as restrictive hurdles in defense applications. While other leading AI firms -- including OpenAI, Google, and xAI -- have reportedly moved toward agreements that relax some usage limits in defense contexts, Anthropic's stance has strained its relationship with the Defense Department. The ongoing negotiations reportedly have frustrated Pentagon officials after several months of talks, Axios said, according to Reuters. An Anthropic spokesperson told Reuters that the company has not discussed the deployment of its Claude model for specific military operations with the Pentagon. Instead, conversations with the U.S. government have focused on policy questions related to the company's usage guidelines, particularly restrictions designed to prevent certain applications of AI technology. Those topics, the spokesperson said, did not involve present operations. The Wall Street Journal reported separately that Anthropic's Claude model was used in a U.S. military operation targeting former Venezuelan President Nicolás Maduro earlier this year, with the technology accessed through a collaboration between Anthropic and data firm Palantir.
Reuters later confirmed that the Pentagon is encouraging AI firms to make their systems available on classified networks with fewer of the usual restrictions, a shift that has amplified tensions with developers. The Pentagon did not immediately respond to a Reuters request for comment on potential changes to its relationship with Anthropic. Observers say the standoff highlights the broader challenge facing the U.S. military as it seeks to deepen its reliance on cutting-edge AI while balancing ethical concerns and commercial developers' safety policies.
[23]
Pentagon Weighs Cutting Anthropic Ties Over AI Military Safeguard Dispute
Pentagon Reviews Anthropic Contracts and Weighs Stricter Procurement Action
Defense officials have reviewed whether to continue working with Anthropic under the current terms. The Pentagon wants language that permits Claude's use for "all lawful purposes." The scope covers weapons development, intelligence collection, and battlefield operations. The Pentagon has also discussed labeling Anthropic a "supply chain risk." This step can force defense suppliers to certify that they do not use Claude in their workflows. It can also complicate future contracting across the defense ecosystem. Claude has already been integrated into sensitive US government systems. This footprint raises switching costs if the Pentagon ends the relationship quickly. Officials have also weighed how fast other frontier models can meet specialized government needs. The contract at issue totals up to $200 million. The figure represents a small share of Anthropic's stated revenue, but the policy outcome can shape future AI procurement rules.
[24]
Hegseth 'close' to blacklisting AI firm Anthropic as heated...
Defense Secretary Pete Hegseth is allegedly "close" to cutting off the Pentagon's ties to AI firm Anthropic and placing it on a blacklist that would force several other firms to stop using the Claude chatbot, according to a report Monday. Tensions have reached a boiling point after months of heated negotiations, as Anthropic has sought to ensure its tools are not used for mass surveillance on Americans or to develop weapons that can fire with no human involvement, according to Axios. Hegseth is considering designating Anthropic a "supply chain risk," meaning any firm that wants to do business with the Department of War will need to cut ties to the fast-growing AI company, a senior Pentagon official told the outlet. "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this," the official told Axios. The "supply chain risk" label is a strict penalty usually reserved for foreign adversaries suspected of posing a national security risk - but senior defense officials have long been frustrated with Anthropic and are eager to pick a fight, a source told Axios. The Post reported last September that Anthropic was staring down a possible clash with the White House over concerns the AI firm carries a leftist bias. Anthropic CEO Dario Amodei is a prominent Democratic donor, and Ford Foundation - a notorious left-leaning group - bought $5 million worth of Anthropic shares in March 2024. "The Department of War's relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight," Pentagon spokesman Sean Parnell told The Post. "Ultimately, this is about our troops and the safety of the American people." An Anthropic spokesperson noted that Claude is the first and only AI model currently used in the military's classified systems. 
It was recently used in the US operation to capture Venezuelan dictator Nicolás Maduro, according to several reports. "We are having productive conversations, in good faith, with DoW on how to continue that work and get these complex issues right," the spokesperson told The Post. Critics say they are concerned that unbridled access to AI will strengthen the Pentagon's ability to target civilians. Hegseth's Department of War has argued that the exceptions Anthropic is seeking are too restrictive, insisting that the Pentagon should be allowed to use AI tools for "all lawful purposes." If Hegseth labels Anthropic a supply chain risk, the company could see some partners unwinding their business deals in order to maintain a good relationship with the Pentagon. The Pentagon's contract with Anthropic is valued at roughly $200 million - though this is just a fraction of the company's $14 billion in annual revenue. While OpenAI's ChatGPT has seen more success among consumers, Anthropic has soared ahead in business deals. The company recently said it has partnerships with eight of the 10 largest US companies. It also wouldn't be a simple switch for the Pentagon, since rival AI bots from OpenAI, Google and xAI "are just behind" when it comes to government applications, a senior Trump administration official told Axios. OpenAI, Google and xAI have all agreed to remove their chatbot safeguards for use in the military's unclassified systems, but none of these bots are currently used in classified systems. A senior administration official said the Pentagon is confident these three AI companies will agree to the looser standards, though another source told Axios that discussions are still ongoing.
[25]
Anthropic Talks With Pentagon Stall Over Claude AI Guardrails
Anthropic PBC's talks about extending a contract with the Pentagon are being held up over additional protections the artificial intelligence company wants to put on its Claude tool, a person familiar with the matter said. Anthropic wants to put guardrails in place to stop Claude from being used for mass surveillance of Americans or to develop weapons that can be deployed without a human involved, the person said, asking not to be identified because the negotiations are private. The Pentagon wants to be able to use Claude as long as its deployment doesn't break the law. Bloomberg News Tech and National Security Reporter Katrina Manson joins Bloomberg Businessweek Daily to discuss. She speaks with Carol Massar and Tim Stenovec.
[26]
Pentagon pushing AI companies to expand on classified networks, sources say
Feb 11 (Reuters) - The Pentagon is pushing the top AI companies including OpenAI and Anthropic to make their artificial-intelligence tools available on classified networks without many of the standard restrictions that the companies apply to users. During a White House event on Tuesday, Pentagon Chief Technology Officer Emil Michael told tech executives that the military is aiming to make the AI models available on both unclassified and classified domains, according to two people familiar with the matter. The Pentagon is "moving to deploy frontier AI capabilities across all classification levels," an official who requested anonymity told Reuters. It is the latest development in ongoing negotiations between the Pentagon and the top generative AI companies over how the U.S. will use AI on a future battlefield that is already dominated by autonomous drone swarms, robots and cyber attacks. Michael's comments are also likely to intensify an already contentious debate over the military's desire to use AI without restrictions and tech companies' ability to set boundaries around how their tools are deployed. Many AI companies are building custom tools for the U.S. military, most of which are available only on unclassified networks typically used for military administration. Only one AI company - Anthropic - is available in classified settings through third parties but the government is still bound by the company's usage policies. Classified networks are used to handle a wide range of more sensitive work that can include mission-planning or weapons targeting. Reuters could not determine how or when the Pentagon planned to deploy AI chatbots on classified networks. Military officials are hoping to leverage AI's power to synthesize information to help shape decisions. But while these tools are powerful, they can make mistakes and even make up information that might sound plausible at first glance. 
Such mistakes in classified settings could have deadly consequences, AI researchers say. AI companies have sought to minimize the downside of their products by building safeguards within their models and asking customers to adhere to certain guidelines. But Pentagon officials have bristled at such restrictions, arguing that they should be able to deploy commercial AI tools as long as they comply with American law. This week, OpenAI reached a deal with the Pentagon so that the military could use its tools, including ChatGPT, on an unclassified network called genai.mil, which has been rolled out to more than 3 million Defense Department employees. As part of the deal, OpenAI agreed to remove many of its typical user restrictions although some guardrails remain. Alphabet's Google and xAI have previously struck similar deals. In a statement, OpenAI said this week's agreement is specific to unclassified use through genai.mil. Expanding on that agreement would require a new or modified agreement, a spokesperson said. Similar discussions between OpenAI rival Anthropic and the Pentagon have been significantly more contentious, Reuters previously reported. Anthropic executives have told military officials that they do not want their technology used to target weapons autonomously or to conduct U.S. domestic surveillance. Anthropic's products include a chatbot called Claude. "Anthropic is committed to protecting America's lead in AI and helping the U.S. government counter foreign threats by giving our warfighters access to the most advanced AI capabilities," an Anthropic spokesperson said. "Claude is already extensively used for national security missions by the U.S. government and we are in productive discussions with the Department of War about ways to continue that work." President Donald Trump has ordered the Department of Defense to rename itself the Department of War, a change that will require action by Congress.
(Reporting by David Jeans in New York and Deepa Seetharaman in San Francisco; Editing by Kenneth Li and Matthew Lewis)
The Pentagon is pushing Anthropic to allow military use of Claude AI for all lawful purposes, but the company refuses to budge on restrictions around autonomous weapons and mass domestic surveillance. With a $200 million contract at stake, the Defense Department threatens to end the partnership while other AI companies show more flexibility.
Anthropic faces mounting pressure from the Pentagon to remove restrictions on how the U.S. military uses its Claude AI models, putting a lucrative $200 million contract in jeopardy [1]. The Defense Department is demanding that AI companies allow military use of AI for "all lawful purposes," including weapons development, intelligence collection, and battlefield operations [4]. But Anthropic, led by CEO Dario Amodei, is pushing back harder than its competitors, insisting on maintaining hard limits around fully autonomous weapons and mass domestic surveillance [1].
Source: Market Screener
The dispute has escalated after months of negotiations, with the Pentagon now openly threatening to pull the plug on its partnership with Anthropic. "Everything's on the table," including contract cancellation, a senior administration official told Axios, though they acknowledged there would need to be "an orderly replacement" if that becomes necessary [2].

The Pentagon is making identical demands to OpenAI, Google, and xAI as it seeks to deploy their AI systems on classified networks [1]. According to anonymous Trump administration officials, one of these companies has already agreed to the terms, while the other two have shown some flexibility in negotiations [1]. This makes Anthropic the most resistant among the group, a stance rooted in the company's founding philosophy around AI safety guardrails. Claude is already accessible on the Pentagon's classified networks, and the Defense Department is working to add ChatGPT, Gemini, and Grok to its arsenal [2]. The stakes are high for Anthropic: losing the Pentagon contract could damage its growing business selling AI to corporate customers, especially as competitors position themselves as more cooperative partners [5].

The Wall Street Journal reported that Claude AI models were used in the U.S. military's operation to capture former Venezuelan President Nicolás Maduro, deployed through Anthropic's partnership with data firm Palantir. An Anthropic employee subsequently asked Palantir for details about what happened, though a company spokesperson characterized these as "routine discussions on strictly technical matters" [2].

In 2024, Anthropic signed agreements with Palantir to have Claude "support government operations," including data processing and helping officials make informed decisions in time-sensitive situations [2]. The company also joined Palantir's FedStart tool, allowing federal government employees to use Claude for writing, analyzing data, and solving complex problems [2].
An Anthropic spokesperson clarified that the company has "not discussed the use of Claude for specific operations with the Department of War" but is instead "focused on a specific set of Usage Policy questions" [1]. The company's red lines center on preventing Claude from being used for the mass surveillance of Americans and fully autonomous weaponry [2].

A Pentagon spokesperson pushed back against these restrictions, telling the Wall Street Journal: "Our nation requires that our partners be willing to help our warfighters win in any fight" [2]. This fundamental disagreement reflects broader tensions about who controls AI deployment decisions once systems are sold to government clients.
Source: New York Post
Dario Amodei and his sister Daniela Amodei, who co-founded Anthropic in 2021 after leaving OpenAI, have deep ties to the effective altruism community that prioritizes AI safety [5]. The company was created alongside about 15 other former OpenAI employees following disagreements over how AI should be funded, built, and released [5]. Dario Amodei has long expressed concern that AI could spread disinformation, enable mass surveillance, or power autonomous weapons [5]. This philosophical foundation now puts Anthropic at odds with the Defense Department's demand for unrestricted access to Claude on classified networks [3].
Source: Futurism
Summarized by Navi