Sources
[1]
AI Safety Meets the War Machine
When Anthropic last year became the first major AI company cleared by the US government for classified use -- including military applications -- the news didn't make a major splash. But this week a second development hit like a cannonball: The Pentagon is reconsidering its relationship with the company, including a $200 million contract, ostensibly because the safety-conscious AI firm objects to participating in certain deadly operations. The so-called Department of War might even designate Anthropic as a "supply chain risk," a scarlet letter usually reserved for companies that do business with countries scrutinized by federal agencies, like China, which means the Pentagon would not do business with firms using Anthropic's AI in their defense work.
In a statement to WIRED, chief Pentagon spokesperson Sean Parnell confirmed that Anthropic was in the hot seat. "Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people," he said. This is a message to other companies as well: OpenAI, xAI and Google, which currently have Department of Defense contracts for unclassified work, are jumping through the requisite hoops to get their own high clearances.
There's plenty to unpack here. For one thing, there's a question of whether Anthropic is being punished for complaining about the fact that its AI model Claude was used as part of the raid to remove Venezuela's president Nicolás Maduro (that's what's being reported; the company denies it). There's also the fact that Anthropic publicly supports AI regulation -- an outlier stance in the industry and one that runs counter to the administration's policies. But there's a bigger, more disturbing issue at play. Will government demands for military use make AI itself less safe?
Researchers and executives believe AI is the most powerful technology ever invented. Virtually all of the current AI companies were founded on the premise that it is possible to achieve AGI, or superintelligence, in a way that prevents widespread harm. Elon Musk, the founder of xAI, was once the biggest proponent of reining in AI -- he cofounded OpenAI because he feared that the technology was too dangerous to be left in the hands of profit-seeking companies. Anthropic has carved out a space as the most safety-conscious of all. The company's mission is to have guardrails so deeply integrated into its models that bad actors cannot exploit AI's darkest potential. Isaac Asimov said it first and best in his laws of robotics: A robot may not injure a human being or, through inaction, allow a human being to come to harm. Even when AI becomes smarter than any human on Earth -- an eventuality that AI leaders fervently believe in -- those guardrails must hold.
So it seems contradictory that leading AI labs are scrambling to get their products into cutting-edge military and intelligence operations. As the first major lab with a classified contract, Anthropic provides the government a "custom set of Claude Gov models built exclusively for U.S. national security customers." Still, Anthropic said it did so without violating its own safety standards, including a prohibition on using Claude to produce or design weapons. Anthropic CEO Dario Amodei has specifically said he doesn't want Claude involved in autonomous weapons or AI government surveillance. But that might not work with the current administration.
Department of Defense CTO Emil Michael (formerly the chief business officer of Uber) told reporters this week that the government won't tolerate an AI company limiting how the military uses AI in its weapons. "If there's a drone swarm coming out of a military base, what are your options to take it down? If the human reaction time is not fast enough ... how are you going to?" he asked rhetorically. So much for the first law of robotics.
There's a good argument to be made that effective national security requires the best tech from the most innovative companies. While some tech companies flinched at working with the Pentagon even a few years ago, in 2026 they are generally flag-waving would-be military contractors. I have yet to hear any AI executive speak about their models being associated with lethal force, but Palantir CEO Alex Karp isn't shy about saying, with apparent pride, "Our product is used on occasion to kill people."
[2]
US used Anthropic's Claude during the Venezuela raid, WSJ reports
Feb 13 (Reuters) - Anthropic's artificial-intelligence model Claude was used in the U.S. military's operation to capture former Venezuelan President Nicolas Maduro, the Wall Street Journal reported on Friday, citing people familiar with the matter. Claude's deployment came via Anthropic's partnership with data firm Palantir Technologies (PLTR.O), whose platforms are widely used by the Defense Department and federal law enforcement, the report added. Reuters could not immediately verify the report. The U.S. Defense Department, the White House, Anthropic and Palantir did not immediately respond to Reuters' requests for comment.
The Pentagon is pushing top AI companies, including OpenAI and Anthropic, to make their artificial-intelligence tools available on classified networks without many of the standard restrictions that the firms apply to users, Reuters exclusively reported on Wednesday. Many AI companies are building custom tools for the U.S. military, most of which are available only on unclassified networks typically used for military administration. Anthropic is the only one that is available in classified settings through third parties, but the government is still bound by the company's usage policies. The usage policies of Anthropic, which raised $30 billion in its latest funding round and is now valued at $380 billion, forbid using Claude to support violence, design weapons or carry out surveillance.
The United States captured President Nicolas Maduro in an audacious raid and whisked him to New York to face drug-trafficking charges early in January. (Reporting by Carlos Méndez and Juby Babu in Mexico City; Editing by Chris Reese and Alan Barona)
[3]
Pentagon-Anthropic battle pushes other AI labs into major dilemma
Why it matters: Defense Secretary Pete Hegseth wants to integrate AI into everything the military does more quickly and effectively than adversaries like China. He's insisting AI firms give unrestricted access to their models with no questions asked -- and showing he's willing to play hardball to force their hands.
Driving the news: The Pentagon is threatening to sever its contract with Anthropic and declare the company a "supply chain risk" because it's unwilling to lift certain restrictions on its model, Claude.
* The company is particularly concerned about Claude being used for mass domestic surveillance or to develop fully autonomous weapons.
* The use of Claude in the Nicolás Maduro raid deepened tensions. The Pentagon claims an Anthropic executive raised concerns after the operation, though Anthropic denies that.
* Administration officials say it's unworkable for the military to have to litigate individual use cases with Anthropic before or after the fact. "We're dead serious," a senior Pentagon official told Axios of the threat to cut off Anthropic and force its vendors to follow suit.
State of play: Crucially, Claude is the only model available in the military's classified systems through Anthropic's partnership with Palantir.
* Three other models -- OpenAI's ChatGPT, Google's Gemini and xAI's Grok -- are available in unclassified systems, and have lifted their ordinary safeguards as part of those agreements.
* Negotiations to bring those companies into the classified domain are now more urgent as the Pentagon ponders how to replace Claude if necessary -- a process a senior official conceded would be massively disruptive.
* Anthropic says it remains committed to working with the Pentagon, despite the public feud, and both sides say they might still come to an agreement.
* One official acknowledged that the fight with Anthropic was a useful way to set the tone for negotiations with the other three.
The intrigue: Officials are adamant they won't budge on a standard allowing the Pentagon "all lawful use" of the AI models, and a senior administration official said one of the three labs already told the Pentagon it was "ok with 'all lawful use' at any classification level."
* A source familiar with the matter told Axios that it was xAI, whose founder Elon Musk has ripped rivals like Anthropic and OpenAI as "woke" for their approaches to safety. xAI did not respond to multiple requests for comment.
* Notably, xAI was the only bidder out of the frontier labs in the Pentagon's autonomous drone software contest.
* OpenAI is bidding in a limited way to translate voice commands into digital instructions, but not for drone control, weapon integration, or target selection.
Zoom in: The senior official said the administration was confident the other two labs would agree to "all lawful use" across both domains. But sources familiar with those dynamics tell Axios it's not nearly that clear-cut.
* An OpenAI spokesperson told Axios that moving into classified work "would require us to agree to a new or modified agreement." Google declined to comment.
* The "all lawful use" requirement is hardly relevant for unclassified work. "People are going to use this thing to make their PowerPoint slides a little bit more quickly and easier. They're not going to be developing autonomous weapons," one source said.
* But applying that standard in the classified domain poses thorny ethical dilemmas.
Between the lines: While Anthropic CEO Dario Amodei has been the most vocal about the risks of advanced AI, executives at OpenAI and Google share some concerns about how their models might be used, sources familiar with those dynamics say.
* The companies may also fear revolts among their engineers, like the one Google experienced in 2018 over a previous initiative, Project Maven, that involved using AI to analyze drone footage. Google walked away from that deal after a damaging internal fight.
Then there's the matter of how you ensure the Pentagon is complying with whatever usage terms have been agreed to, or even with the law.
* "The whole game" is building infrastructure that ensures what's being deployed is safe, and having oversight on the back end into how it was used, one source said.
* The source was skeptical of Anthropic's claim that the company has sufficient visibility into Pentagon operations to ensure it's comfortable with every use of its model.
* A separate source said Anthropic does have visibility and is confident its usage policies are followed.
Threat level: One source familiar with the ongoing discussions said one issue is that the companies themselves don't fully understand how their models will respond in certain scenarios, or why.
* "That is more challenging than just figuring out like, 'Hey, will this metal withstand this degree of heat or that degree of heat?'"
* The source added: "If there's a one in a million chance that the model might do something unpredictable, is that one-in-a-million chance so catastrophic that it's not worth taking?"
Consider this: If an AI model enables an autonomous weapon to near-instantly take down dangerous drone swarms, is it ethical to deploy it when there's some small chance it could also fire on a civilian flight?
* Those are the sorts of questions the labs are grappling with.
The bottom line: The Pentagon's position is that such decisions should be made by the military, not by executives in Silicon Valley. Anthropic and its rivals are under pressure to decide whether they can live with that.
[4]
US military used Anthropic's AI model Claude in Venezuela raid, report says
Wall Street Journal says Claude used in operation via Anthropic's partnership with Palantir Technologies
Claude, the AI model developed by Anthropic, was used by the US military during its operation to kidnap Nicolás Maduro from Venezuela, the Wall Street Journal revealed on Saturday, a high-profile example of how the US defence department is using artificial intelligence in its operations. The US raid on Venezuela involved bombing across the capital, Caracas, and the killing of 83 people, according to Venezuela's defence ministry.
Anthropic's terms of use prohibit the use of Claude for violent ends, for the development of weapons or for conducting surveillance. Anthropic was the first AI developer known to be used in a classified operation by the US department of defence. It was unclear how the tool, which has capabilities ranging from processing PDFs to piloting autonomous drones, was deployed. A spokesperson for Anthropic declined to comment on whether Claude was used in the operation, but said any use of the AI tool was required to comply with its usage policies. The US defence department did not comment on the claims. The WSJ cited anonymous sources who said Claude was used through Anthropic's partnership with Palantir Technologies, a contractor with the US defence department and federal law enforcement agencies. Palantir refused to comment on the claims.
The revelation comes as the US and other militaries such as Israel's increasingly deploy AI as part of their arsenals. Israel's military has used drones with autonomous capabilities in Gaza, and has extensively used AI to fill its targeting bank in Gaza. The US military has used AI targeting for strikes in Iraq and Syria in recent years. Critics have warned against the use of AI in weapons technologies and the deployment of autonomous weapons systems, pointing to targeting mistakes created by computers governing who should and should not be killed.
AI companies have grappled with how their technologies should engage with the defence sector, with Anthropic's CEO, Dario Amodei, calling for regulation to prevent harms from the deployment of AI. Amodei has also expressed wariness over the use of AI in autonomous lethal operations and surveillance in the US. This more cautious stance has apparently rankled the US defence department, with the secretary of war, Pete Hegseth, saying in January that the department wouldn't "employ AI models that won't allow you to fight wars". The Pentagon announced in January that it would work with xAI, owned by Elon Musk. The defence department also uses a custom version of Google's Gemini and OpenAI systems to support research.
[5]
Tensions between the Pentagon and AI giant Anthropic reach a boiling point
Over the last week, tensions between the Pentagon and artificial intelligence giant Anthropic have reached a boiling point. Anthropic, the creator of the Claude chatbot system and a frontier AI company with a defense contract worth up to $200 million, has built its brand around the promotion of AI safety, touting red lines the company says it won't cross. Now, the Pentagon appears to be pushing those boundaries.
Hints of a possible rift between Anthropic and the Defense Department, now rebranded the Department of War, began to intensify after The Wall Street Journal and Axios reported the use of Anthropic products in the operation to capture Venezuelan President Nicolás Maduro. It is unclear how Anthropic's Claude was used. Anthropic has not raised or found any violations of its policies in the wake of the Maduro operation, according to two people familiar with the matter, who asked to remain anonymous in order to discuss sensitive topics. They said that the company has high visibility into how its AI tool Claude is used, such as in data analysis operations.
Anthropic was the first AI company allowed to offer services on classified networks, via Palantir, which partnered with it in 2024. Palantir said in an announcement of the partnership that Claude could be used "to support government operations such as processing vast amounts of complex data rapidly" and "helping U.S. officials to make more informed decisions in time-sensitive situations." Palantir is one of the military's favored data and software contractors, for example collecting data from space sensors to provide better strike targeting for soldiers. It has also attracted scrutiny for its work under the Trump administration and law enforcement agencies.
Though Anthropic has maintained that it does not and will not allow its AI systems to be directly used in lethal autonomous weapons or for domestic surveillance, the reported use of its technology related to the Venezuela raid through the contract with Palantir allegedly raised concerns from an Anthropic employee. Semafor reported Tuesday that, during a routine meeting between Anthropic and Palantir, a Palantir executive was worried that an Anthropic employee did not seem to agree with how its systems might have been used in the operation, leading to "a rupture in Anthropic's relationship with the Pentagon." A senior Pentagon official told NBC News that "a senior executive from Anthropic communicated with a senior Palantir executive, inquiring as to whether their software was used for the Maduro raid." According to the Pentagon official, the Palantir executive "was alarmed that the question was raised in such a way to imply that Anthropic might disapprove of their software being used during that raid."
Citing the classified nature of military operations, an Anthropic spokesperson would neither confirm nor deny that its Claude chatbot systems had been used in the Maduro operation: "We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise," the spokesperson told NBC News in a statement. The spokesperson pushed back on the idea that the incident had caused notable fallout, telling NBC News the company had not held out-of-the-ordinary discussions about Claude usage with partners or shared any mission-related disagreements with the military. "Anthropic has not discussed the use of Claude for specific operations with the Department of War," the spokesperson said.
"We have also not discussed this with, or expressed concerns to, any industry partners outside of routine discussions on strictly technical matters." Palantir did not reply to a request for comment. The core tension between Anthropic and the Defense Department appears to be rooted in a broader clash over the military's future use of Anthropic's systems. The Defense Department has recently emphasized its desire to be able to use all available AI systems for any purpose allowed by law, while Anthropic says it wants to maintain its own guardrails. Chief spokesman for the Pentagon Sean Parnell told NBC News that "The Department of War's relationship with Anthropic is being reviewed." "Our nation requires that our partners be willing to help our warfighters win in any fight," he said in a statement. "Ultimately, this is about our troops and the safety of the American people." On Tuesday, Undersecretary of Defense Emil Michael said that the department's negotiations with Anthropic had hit a snag over a disagreement over potential uses of its systems, according to CNBC. In early January, Defense Secretary Pete Hegseth released a new AI strategy document that called for any contracts with AI companies to eliminate company-specific guardrails or constraints on how the military can use companies' AI systems, newly allowing "any lawful use" of AI for Defense Department purposes. The document called for defense officials to incorporate this language into any Defense Department AI contract within 180 days, which would implicate Anthropic's dealings with the military. While Anthropic has broadly supported the use of its services for national security purposes, it has maintained that its systems not be used for domestic surveillance or in fully autonomous weapons. The Defense Department has balked at Anthropic's insistence on these two issues and applied increasing pressure to the company. "Claude is used for a wide variety of intelligence-related use cases across the government, including the Department of War, in line with our Usage Policy," the Anthropic spokesperson said. "We are having productive conversations, in good faith, with the Department of War on how to continue that work and get these complex issues right." Relative to other AI companies, Anthropic has prioritized enterprise and national security applications of its AI systems. In August 2025, Anthropic formed a national security and public sector advisory council composed of former senior defense and intelligence officials and last week added Chris Lidell, President Donald Trump's former deputy chief of staff, to its board of directors. Anthropic has partnered with Palantir since late 2024 to provide U.S. defense and intelligence agencies with access to various Claude systems. At the time, Anthropic's head of sales and partnerships, Kate Earle Jensen, said the company was "proud to be at the forefront of bringing responsible AI solutions to U.S. classified environments, enhancing analytical capabilities and operational efficiencies in vital government operations." Anthropic, along with other leading American AI companies such as OpenAI and Google, signed individual two-year contracts with the Defense Department in July 2025, each worth up to $200 million to help "prototype frontier AI capabilities that advance U.S. national security." "Anthropic is committed to using frontier AI in support of US national security," the Anthropic spokesperson told NBC News in a statement. 
"We were the first frontier AI company to put our models on classified networks and the first to provide customized models for national security customers." Anthropic CEO Dario Amodei has routinely emphasized Anthropic's commitment to using its AI services for national security purposes. In an essay published in late January, Amodei wrote that "democracies have a legitimate interest in some AI-powered military and geopolitical tools," and that "we should arm democracies with AI, but we should do so carefully and within limits." Michael Horowitz, who led AI and emerging technology policy in the Pentagon and is now a professor of political science at the University of Pennsylvania, said that any concerns about use of Anthropic systems for active engagement in lethal autonomous weapons would likely be irrelevant to current negotiations given the type of systems Anthropic is developing. "I would be surprised if Anthropic models were the right ones to use for lethal autonomous weapon systems right now, since the algorithms for that will be more bespoke than Claude's," Horowitz told NBC News. "My sense is that Anthropic wants to increase the depth and scope of their work with the Pentagon. Based on what we know, this sounds like a dispute more over theoretical possibilities than real-world use cases on the table."
[6]
Tech 24 - Anthropic's Claude helped Pentagon raid Caracas and seize Maduro: US media
The Wall Street Journal has reported that the Pentagon used Anthropic's AI model, Claude, as part of its operation to capture Venezuelan President Nicolás Maduro. The newspaper cites people familiar with the matter, saying the Department of War leveraged Claude in last month's raid as part of a technology suite provided by Palantir, another American technology company used extensively by the United States government. Both the Wall Street Journal and Axios report that someone at Anthropic asked someone at Palantir how exactly Claude was used in the operation. Technology correspondent Peter O'Brien breaks down what it means for the company that positions itself as a champion of AI safety.
[7]
Pentagon used Anthropic's Claude during Maduro raid
Why it matters: The previously undisclosed role of Claude in the highly complex and deadly operation highlights the tensions the major AI labs face, as they enter into business with the military while trying to maintain some limitations on how their tools are used.
Breaking it down: AI models can quickly process data in real time, a capability prized by the Pentagon given the chaotic environments in which military operations take place.
* Axios could not confirm the precise role that Claude played in the operation to capture Maduro. The military has used Claude in the past to analyze satellite imagery or intelligence. The sources said Claude was used during the active operation, not just in preparations for it.
* No Americans were killed in the raid. Cuba and Venezuela both said dozens of their soldiers and security personnel were killed.
Friction point: The Pentagon wants the AI giants to allow them to use their models in any scenario so long as they comply with the law.
* Anthropic, which has positioned itself as the safety-first AI leader, is currently negotiating with the Pentagon around its terms of use. The company wants to ensure in particular that its technology is not used for the mass surveillance of Americans or to operate fully autonomous weapons.
* The company is confident the military has complied in all cases with its existing usage policy, which has additional restrictions, a source familiar with those discussions told Axios.
What they're saying: "We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise," an Anthropic spokesperson told Axios.
* "Any use of Claude -- whether in the private sector or across government -- is required to comply with our Usage Policies, which govern how Claude can be deployed. We work closely with our partners to ensure compliance."
* Defense Secretary Pete Hegseth has leaned into AI and said he wants to quickly integrate it into all aspects of the military's work, in part to stay ahead of China. The Pentagon did not respond to a request for comment.
* One source said some senior officials at the Pentagon were "deeply frustrated" by Anthropic's posture on safeguards, citing recent conversations.
The big picture: Anthropic is one of several major model-makers that are working with the Pentagon in various capacities.
* OpenAI, Google and xAI have all reached deals for military users to access their models without many of the safeguards that apply to ordinary users. It's unclear if any other models were used during the Venezuela operation.
* But the military's most sensitive work -- from weapons testing to comms during active operations -- happens on classified systems. For now, only Anthropic's system is available on those classified platforms.
* Anthropic also has a partnership with Palantir, the AI software firm that has extensive Pentagon contracts, that allows it to use Claude within its security products. It's not clear whether the use of Claude in the operation was tied to the Anthropic-Palantir partnership.
What to watch: Discussions are ongoing between the Pentagon and OpenAI, Google and xAI about allowing the use of their tools in classified systems. Anthropic and the Pentagon are also in discussions about potentially loosening the restrictions on Claude.
[8]
Anthropic is fighting with a big client, and it's actually good for its brand
Can a headline-making squabble with a client actually be good for a brand? This week's dispute between the Department of Defense and Anthropic, a high-profile player in the super-competitive field of artificial intelligence, may be just that. The dispute involves whether the Pentagon, which has an agreement to use Anthropic technology, can apply it in a wider range of scenarios: all "lawful use" cases. Anthropic has resisted signing off on some potential scenarios, and the Pentagon has essentially accused it of being overly cautious.
As it happens, that assessment basically aligns with Anthropic's efforts (most recently via Super Bowl ads aimed squarely at prominent rival OpenAI) to burnish a reputation as a thoughtful and considered AI innovator. At a moment when the pros-vs.-cons implications and potential consequences of AI are more hotly debated than ever, Anthropic's public image tries to straddle the divide. Presumably Anthropic (best known to consumers for its AI chat tool Claude) would prefer to push that reputation without alienating a lucrative client.
But the underlying feud concerns how the military can use Anthropic's technology, with the company reportedly seeking limits on applications involving mass surveillance and autonomous weapons. A Pentagon spokesman told Fast Company that the military's "relationship with Anthropic is being reviewed," adding: "Our nation requires that our partners be willing to help our warfighters win in any fight." The department has reportedly threatened to label Anthropic a "supply chain risk," lumping it in with supposedly "woke" tech companies, causing potential problems not just for Anthropic but for partners like Palantir.
So far Anthropic's basic stance amounts to: This is a uniquely potent technology whose eventualities we don't fully comprehend, so there are limits to uses we'll currently permit. Put more bluntly: We are not reckless.
[9]
Anthropic on shaky ground with Pentagon amid feud after Maduro raid
Anthropic has increasingly found itself at odds with the Pentagon over how its AI model Claude can be deployed in military operations following disclosure about its use in the raid that captured Venezuelan President Nicolas Maduro last month. The AI company is one of several firms that secured a massive contract with the Defense Department (DOD) last summer amid a broader push by the Trump administration to boost AI adoption. But the latest dispute has left its relationship with the Pentagon on shaky terms, threatening the future of the contract and sparking a broader ethics battle over the safe application of AI models in warzones increasingly dominated by cyber attacks, robots and drones.
Sarah Kreps, the director of the Tech Policy Institute in the Cornell Brooks School of Public Policy, said that Anthropic has been "going in the direction of enterprise, and that, I think, does open them up to these questions of how enterprises are using these models, and it makes it much more difficult to maintain control once you've handed these models over to that kind of enterprise-level use." "What I've seen in other tech sectors, which is when you grow so quickly, which they have, even if your goal is not to move quickly and break things, things are going to break. And then you have to figure out what to do," Kreps said in an interview with The Hill.
The Pentagon confirmed Monday that it was reviewing its relationship with Anthropic amid a long-brewing dispute over the AI firm's usage policy, which bars the use of Claude to conduct mass surveillance or weapons development. "Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people," chief Pentagon spokesperson Sean Parnell said in a Monday statement.
Anthropic was one of several leading AI companies that scored a contract with DOD for up to $200 million last July, alongside Google, OpenAI and Elon Musk's xAI. As the Pentagon has continued its push to embrace the technology, it has also brought several models onto its new bespoke AI platform, GenAI.mil. Google and xAI joined the platform in December, while OpenAI announced last week that ChatGPT would also be made available. Anthropic notably has not been added to the platform.
The company's relationship with the Defense Department has hit a major snag in recent weeks. A key point of contention between the two sides was the Trump administration's Venezuela raid in early January, which resulted in Maduro's capture and indictment in the U.S. on charges related to drug trafficking. Following the raid, a senior Anthropic executive got in touch with a senior Palantir executive, asking whether Anthropic's software was used in the operation, according to a senior DOD official. The Palantir executive ultimately told the Pentagon about the exchange because he was alarmed the question was raised in a way that would indicate that Anthropic might disapprove of the use of its model, the official noted. Palantir did not respond to a request for comment.
An Anthropic spokesperson said in a statement Wednesday that the company has not discussed the use of Claude for specific operations with the Pentagon or expressed concerns to "any industry partners outside of routine discussions on strictly technical matters."
A senior DOD official told The Hill on Wednesday that due to Anthropic's behavior, many senior DOD officials are starting to view the company as a supply chain risk -- a designation typically reserved for foreign adversaries -- and the Pentagon might require all of its vendors and contractors to certify that they do not use any Anthropic models.
In a statement Monday, an Anthropic spokesperson said it is engaged in "productive conversations, in good faith, with [the Department of War] on how to continue that work and get these complex issues right." "Anthropic is committed to using frontier AI in support of US national security. That's why we were the first frontier AI company to put our models on classified networks and the first to provide customized models for national security customers," the spokesperson added.
Morgan C. Plummer, a senior policy director at Americans for Responsible Innovation, said the tangle over Claude's use might be one of those "weird cases" where both sides have a point. The Pentagon can argue that its needs are different than the rest of the market and that it could use models in "new and unique" ways to protect the country, he said, while Anthropic can contend it has a certain ethos that drives the company and red lines around its use. "Both positions, to me, are perfectly reasonable positions," Plummer said in an interview with The Hill, adding that the clash highlights why the two sides should have come to an agreement over the use of technology before it was procured.
Amid this standoff, Emelia Probasco, a senior fellow at Georgetown's Center for Security and Emerging Technology, suggested it would be a "massive loss" if the two sides failed to reach an agreement and the Pentagon no longer had access to Anthropic's tools. "One of the top AI labs in the world is trying to help the government, and there are warfighters who are using this today who are going to be harmed if all of a sudden their access is taken away without some very clear technical explanation of what's going on," she said.
Claude is currently the only large language model that can operate on fully classified systems, a priority for the Pentagon. The senior DOD official said Wednesday that other AI companies are working in "good faith" with the Pentagon to ensure their models, including ChatGPT, Grok and Gemini, can be utilized for "all lawful purposes" and have all agreed to this in the military's unclassified systems. They added that one company, which they declined to name, agreed to have the model used across all systems and the Pentagon is "optimistic the rest of [the] companies will get there on classified settings in the near future."
Anthropic, through its policy views and links to Democrats, has at times clashed with the Trump administration. Anthropic CEO Dario Amodei criticized President Trump in the lead-up to the 2024 election and recently swiped at the administration over its sales of AI chips to China at the World Economic Forum. The firm has also supported state-level AI regulations, while the administration has argued that state laws could stifle innovation. The AI giant has brought on several former Biden administration officials, including Ben Buchanan, the former Biden AI advisor, and Tarun Chhabra, an ex-National Security Council official. But the company also added Chris Liddell, who was the deputy White House chief of staff during Trump's first term, to its board of directors earlier this month.
Last month, Defense Secretary Pete Hegseth said he is forming an "AI-first, war-fighting force" and appeared to take a swipe at Anthropic. "We will not employ AI models that won't allow you to fight wars," the Pentagon chief said during a speech in Texas.
[11]
US reviews Anthropic ties due to AI use in Maduro capture following WSJ report
The US Defense Department is reviewing its relationship with AI company Anthropic after a Wall Street Journal report on Sunday said the firm's Claude model was used in a US military operation last month targeting former Venezuelan president Nicolas Maduro and his wife. The report said the mission involved strikes on multiple sites in Caracas and culminated in the pair's capture. Anthropic's public usage policies prohibit Claude from being used to facilitate violence, develop weapons, or conduct surveillance, the report noted.
An Anthropic spokesperson said the company could not comment on whether Claude or any other model was used in any specific operation and that any deployment must comply with its usage policies. According to the report, Claude's deployment in the Venezuela operation occurred through Anthropic's partnership with Palantir Technologies, whose platforms are widely used across the Pentagon and federal law enforcement. The Journal reported that after the raid, an Anthropic employee asked a Palantir counterpart how Claude had been used. Anthropic said it has not discussed the use of Claude for specific operations with industry partners outside of routine technical matters. The spokesperson added that Anthropic is committed to supporting US national security, while emphasizing policy compliance.
Chief Pentagon spokesperson Sean Parnell said in a statement that the Defense Department's relationship with Anthropic is under review. "Our nation requires that our partners be willing to help our warfighters win in any fight," Parnell said, according to the Journal's report. The Journal also reported that tensions over Anthropic's constraints on military use have prompted officials to consider canceling a contract worth up to $200 million. The contract was awarded last summer, the report said.
The Journal added that Anthropic was the first AI model developer to be used in classified Defense Department operations, and that other tools could have been used for unclassified tasks. AI models can support activities such as summarizing documents, generating reports, and assisting with research, while the most contentious debates center on autonomous lethal uses and surveillance. Anthropic CEO Dario Amodei has publicly called for stronger regulation and guardrails around advanced AI, the report said, putting the company at odds with officials who want fewer restrictions on wartime applications. The Journal reported that Defense Secretary Pete Hegseth previously said the Pentagon would not employ AI models that "won't allow you to fight wars," referencing discussions involving Anthropic.
How the US captured Nicolas Maduro
The operation to capture Maduro began with overnight US strikes across Venezuela, with multiple explosions reported in Caracas and a loss of power in parts of the capital's southern districts, followed by a rapid assault that ended with Maduro and his wife, Cilia Flores, being taken into custody and flown out of the country. Venezuelan authorities declared a national emergency and, in the immediate aftermath, senior officials said they did not know the couple's whereabouts, while Washington said Maduro would be brought to the United States to face prosecution.
In the days that followed, the humanitarian and political fallout widened. Cuba announced that 32 of its citizens, described as members of its armed forces and intelligence services assigned to security duties, were killed during the raid, and declared two days of national mourning. Venezuelan officials later cited a death toll of roughly 100 and said both Maduro and Flores were injured during the operation, as competing accounts continued to emerge over casualties, damage, and the scope of the strikes.
The Pentagon is threatening to sever its $200 million contract with Anthropic and designate the AI company as a supply chain risk after tensions erupted over usage restrictions on its Claude model. The dispute intensified following reports that the US military used Anthropic's Claude during the Venezuela raid to capture Nicolás Maduro, raising questions about AI safety and military use.
The relationship between the Pentagon and Anthropic has deteriorated to a critical juncture, with the Department of Defense threatening to terminate its contract worth up to $200 million with the AI safety-focused company[1]. The dispute centers on Anthropic's refusal to lift certain restrictions on military use of its Claude AI model, particularly concerning autonomous weapons and mass domestic surveillance[3]. Chief Pentagon spokesperson Sean Parnell confirmed the company was under review, stating that "our nation requires that our partners be willing to help our warfighters win in any fight"[1]. The Pentagon is even considering designating Anthropic as a supply chain risk, a label typically reserved for companies doing business with scrutinized countries like China, which would prevent defense contractors from using Anthropic's technology[1].
Source: Jerusalem Post
Tensions escalated after reports emerged that the US military used Anthropic's Claude during the operation to capture Venezuelan President Nicolás Maduro in January[2]. Claude's deployment came through Anthropic's partnership with Palantir Technologies, whose platforms are widely used by the Department of Defense and federal law enforcement[2]. The Venezuela raid involved bombing across Caracas and resulted in 83 deaths according to Venezuela's defense ministry[4]. While it remains unclear exactly how Claude was used in the operation, the incident reportedly raised concerns from an Anthropic employee during a routine meeting with Palantir[5]. However, Anthropic has maintained it has not found any violations of its policies and denied expressing mission-related disagreements with the military[5].
Source: Axios
The core tension revolves around AI companies' ethical guidelines versus the Pentagon's demand for any lawful use of AI without company-imposed restrictions. Anthropic's usage policies explicitly forbid using Claude to support violence, design weapons, or carry out surveillance[2]. CEO Dario Amodei has been particularly vocal about not wanting Claude involved in autonomous weapons or government surveillance[1]. Defense Secretary Pete Hegseth released a new AI strategy document in January calling for contracts with AI companies to eliminate company-specific guardrails, allowing "any lawful use" of AI for Department of Defense purposes[5]. Department of Defense CTO Emil Michael told reporters the government won't tolerate AI companies limiting how the military uses AI in its weapons, asking rhetorically about responding to drone swarms when "human reaction time is not fast enough"[1].
Source: Wired
Crucially, Claude is the only model currently available in the military's classified systems through Anthropic's partnership with Palantir Technologies[3]. Three other models (OpenAI's ChatGPT, Google's Gemini, and xAI's Grok) are available in unclassified systems and have lifted their ordinary safeguards as part of those agreements[3]. Negotiations to bring these companies into the classified domain are now more urgent as the Pentagon considers how to replace Claude if necessary, a process officials conceded would be massively disruptive[3]. A senior administration official revealed that xAI, founded by Elon Musk, has already agreed to "all lawful use" at any classification level and was the only frontier lab bidder in the Pentagon's autonomous drone software contest[3]. OpenAI is bidding in a limited way to translate voice commands into digital instructions, but not for drone control, weapon integration, or target selection[3].
The standoff raises fundamental questions about AI safety and military use as companies grapple with how advanced AI should engage with national security operations. Anthropic, which raised $30 billion in its latest funding round and is valued at $380 billion, has carved out a space as the most safety-conscious AI company, with guardrails deeply integrated into its models[1][2]. Sources familiar with dynamics at OpenAI and Google indicate executives at these companies share some concerns about how their models might be used in military operations, and may fear employee revolts similar to Google's 2018 experience with Project Maven[3]. One source noted a critical challenge: the companies themselves don't fully understand how their models will respond in certain scenarios, or why, adding, "If there's a one in a million chance that the model might do something unpredictable, is that one-in-a-million chance so catastrophic that it's not worth taking?"[3] Administration officials acknowledged the fight with Anthropic serves as a useful way to set the tone for defense contract negotiations with other labs[3].