4 Sources
[1]
US used Anthropic's Claude during the Venezuela raid, WSJ reports
Feb 13 (Reuters) - Anthropic's artificial-intelligence model Claude was used in the U.S. military's operation to capture former Venezuelan President Nicolas Maduro, the Wall Street Journal reported on Friday, citing people familiar with the matter. Claude's deployment came via Anthropic's partnership with data firm Palantir Technologies (PLTR.O), whose platforms are widely used by the Defense Department and federal law enforcement, the report added. Reuters could not immediately verify the report. The U.S. Defense Department, the White House, Anthropic and Palantir did not immediately respond to Reuters' requests for comment. The Pentagon is pushing top AI companies, including OpenAI and Anthropic, to make their artificial-intelligence tools available on classified networks without many of the standard restrictions that the firms apply to users, Reuters exclusively reported on Wednesday. Many AI companies are building custom tools for the U.S. military, most of which are available only on unclassified networks typically used for military administration. Anthropic is the only one that is available in classified settings through third parties, but the government is still bound by the company's usage policies. The usage policies of Anthropic, which raised $30 billion in its latest funding round and is now valued at $380 billion, forbid using Claude to support violence, design weapons or carry out surveillance. The United States captured President Nicolas Maduro in an audacious raid and whisked him to New York to face drug-trafficking charges early in January. (Reporting by Carlos Méndez and Juby Babu in Mexico City; Editing by Chris Reese and Alan Barona)
[2]
Pentagon used Anthropic's Claude during Maduro raid
Why it matters: The previously undisclosed role of Claude in the highly complex and deadly operation highlights the tensions the major AI labs face as they enter into business with the military while trying to maintain some limitations on how their tools are used.

Breaking it down: AI models can quickly process data in real time, a capability prized by the Pentagon given the chaotic environments in which military operations take place.
* Axios could not confirm the precise role that Claude played in the operation to capture Maduro. The military has used Claude in the past to analyze satellite imagery or intelligence. The sources said Claude was used during the active operation, not just in preparations for it.
* No Americans were killed in the raid. Cuba and Venezuela both said dozens of their soldiers and security personnel were killed.

Friction point: The Pentagon wants the AI giants to allow them to use their models in any scenario so long as they comply with the law.
* Anthropic, which has positioned itself as the safety-first AI leader, is currently negotiating with the Pentagon around its terms of use. The company wants to ensure in particular that its technology is not used for the mass surveillance of Americans or to operate fully autonomous weapons.
* The company is confident the military has complied in all cases with its existing usage policy, which has additional restrictions, a source familiar with those discussions told Axios.

What they're saying: "We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise," an Anthropic spokesperson told Axios.
* "Any use of Claude -- whether in the private sector or across government -- is required to comply with our Usage Policies, which govern how Claude can be deployed. We work closely with our partners to ensure compliance."
* Defense Secretary Pete Hegseth has leaned into AI and said he wants to quickly integrate it into all aspects of the military's work, in part to stay ahead of China. The Pentagon did not respond to a request for comment.
* One source said some senior officials at the Pentagon were "deeply frustrated" by Anthropic's posture on safeguards, citing recent conversations.

The big picture: Anthropic is one of several major model-makers that are working with the Pentagon in various capacities.
* OpenAI, Google and xAI have all reached deals for military users to access their models without many of the safeguards that apply to ordinary users. It's unclear if any other models were used during the Venezuela operation.
* But the military's most sensitive work -- from weapons testing to comms during active operations -- happens on classified systems. For now, only Anthropic's system is available on those classified platforms.
* Anthropic also has a partnership with Palantir, the AI software firm that has extensive Pentagon contracts, that allows it to use Claude within its security products. It's not clear whether the use of Claude in the operation was tied to the Anthropic-Palantir partnership.

What to watch: Discussions are ongoing between the Pentagon and OpenAI, Google and xAI about allowing the use of their tools in classified systems. Anthropic and the Pentagon are also in discussions about potentially loosening the restrictions on Claude.
[3]
US Used Anthropic's Claude During the Venezuela Raid, WSJ Reports
Feb 13 (Reuters) - Anthropic's artificial-intelligence model Claude was used in the U.S. military's operation to capture former Venezuelan President Nicolas Maduro, the Wall Street Journal reported on Friday, citing people familiar with the matter. Claude's deployment came via Anthropic's partnership with data firm Palantir Technologies, whose platforms are widely used by the Defense Department and federal law enforcement, the report added. Reuters could not immediately verify the report. The U.S. Defense Department, the White House, Anthropic and Palantir did not immediately respond to Reuters' requests for comment. The Pentagon is pushing top AI companies, including OpenAI and Anthropic, to make their artificial-intelligence tools available on classified networks without many of the standard restrictions that the firms apply to users, Reuters exclusively reported on Wednesday. Many AI companies are building custom tools for the U.S. military, most of which are available only on unclassified networks typically used for military administration. Anthropic is the only one that is available in classified settings through third parties, but the government is still bound by the company's usage policies. The usage policies of Anthropic, which raised $30 billion in its latest funding round and is now valued at $380 billion, forbid using Claude to support violence, design weapons or carry out surveillance. The United States captured President Nicolas Maduro in an audacious raid and whisked him to New York to face drug-trafficking charges early in January. (Reporting by Carlos Méndez and Juby Babu in Mexico City; Editing by Chris Reese and Alan Barona)
[4]
US used Anthropic's Claude during the Venezuela raid, WSJ reports
Feb 13 (Reuters) - Anthropic's artificial-intelligence model Claude was used in the U.S. military's operation to capture former Venezuelan President Nicolas Maduro, the Wall Street Journal reported on Friday, citing people familiar with the matter. Claude's deployment came via Anthropic's partnership with data firm Palantir Technologies, whose platforms are widely used by the Defense Department and federal law enforcement, the report added. Reuters could not immediately verify the report. The U.S. Defense Department, the White House, Anthropic and Palantir did not immediately respond to Reuters' requests for comment. The Pentagon is pushing top AI companies, including OpenAI and Anthropic, to make their artificial-intelligence tools available on classified networks without many of the standard restrictions that the firms apply to users, Reuters exclusively reported on Wednesday. Many AI companies are building custom tools for the U.S. military, most of which are available only on unclassified networks typically used for military administration. Anthropic is the only one that is available in classified settings through third parties, but the government is still bound by the company's usage policies. The usage policies of Anthropic, which raised $30 billion in its latest funding round and is now valued at $380 billion, forbid using Claude to support violence, design weapons or carry out surveillance. The United States captured President Nicolas Maduro in an audacious raid and whisked him to New York to face drug-trafficking charges early in January. (Reporting by Carlos Méndez and Juby Babu in Mexico City; Editing by Chris Reese and Alan Barona)
Anthropic's Claude artificial intelligence model was used during the US military's operation to capture former Venezuelan President Nicolas Maduro, in what reports describe as a previously undisclosed role for a major AI system in an active military raid. The deployment came through Anthropic's partnership with Palantir Technologies, raising questions about AI usage policies that forbid supporting violence even as the Pentagon pushes for unrestricted access on classified networks.
Anthropic's Claude artificial intelligence model played a previously undisclosed role in the US military's operation to capture former Venezuelan President Nicolas Maduro in early January, according to reports from the Wall Street Journal [1]. The deployment marks a significant moment in military AI integration, as Claude was used during the active operation itself, not merely in preparatory phases [2]. While the precise role remains unclear, the US military has previously used Claude to analyze satellite imagery and intelligence data [2]. The operation resulted in no American casualties, though Cuba and Venezuela reported dozens of their soldiers and security personnel were killed. Maduro was subsequently transported to New York to face drug-trafficking charges [3].
Claude's deployment came through Anthropic's partnership with Palantir Technologies, a data firm whose platforms are extensively used by the Defense Department and federal law enforcement [1]. This arrangement allows Palantir to integrate Claude within its security products for government clients [2]. Critically, Anthropic is currently the only major AI developer with models available on classified military networks through third-party partnerships, though the government remains bound by the company's usage policies [4]. Most AI tools built for the US military operate only on unclassified networks typically used for military administration [3]. The real-time data-processing capabilities of models like Claude are particularly prized by the Pentagon given the chaotic environments in which military operations take place [2].
The Pentagon is actively pushing top AI companies, including OpenAI and Anthropic, to make their tools available on classified networks without many of the standard restrictions that apply to ordinary users [1]. This drive for unrestricted deployment creates tension with Anthropic's usage policies, which explicitly forbid using Claude to support violence, design weapons, or carry out surveillance [4]. Some senior Pentagon officials are reportedly "deeply frustrated" by Anthropic's posture on safeguards [2]. Anthropic, which raised $30 billion in its latest funding round and is now valued at $380 billion, is currently negotiating with the Pentagon over its terms of use [1]. The company wants to ensure its technology is not used for mass surveillance of Americans or to operate fully autonomous weapons [2]. An Anthropic spokesperson said that any use of Claude "is required to comply with our Usage Policies" and that the company "work[s] closely with our partners to ensure compliance" [2].
Defense Secretary Pete Hegseth has actively embraced AI, stating he wants to quickly integrate it into all aspects of the military's work, in part to stay ahead of China [2]. OpenAI, Google, and xAI have all reached deals allowing military users to access their models without many of the safeguards that apply to ordinary users, though it remains unclear whether any other models were used during the Venezuela raid [2]. Discussions are ongoing between the Pentagon and multiple AI developers about allowing use of their tools in classified systems, and Anthropic and the Defense Department are also negotiating a potential loosening of the restrictions on Claude [2]. This development highlights the complex ethical considerations facing AI developers as they balance national security needs with their stated safety commitments. The tension between military demands for unfettered access and AI companies' ethical guardrails will likely shape how artificial intelligence is deployed in military operations, with implications for surveillance, autonomous weapons systems, and the broader integration of AI into classified networks.
Summarized by Navi