2 Sources
2 Sources
[1]
The US military will reportedly use Elon Musk's Grok AI in its classified systems
The US Department of Defense has reportedly reached a deal to use Elon Musk's Grok in its classified systems, according to Axios. That follows news that the Pentagon is currently in a dispute with another AI company, Anthropic, over limits on its technology for things like mass surveillance. Last year, the White ordered Grok, along with ChatGPT, Gemini and Anthropic's Claude to be approved for government use. Up until now, though, only Anthropic's model has been allowed for the military's most sensitive tasks in intelligence, weapons development and battlefield operations. Claude was reportedly used in the Venezuelan raid in which the US military exfiltrated the country's president, Nicolás Maduro, and his wife. However, the Pentagon demanded that Anthropic make Claude available for "all lawful purposes" including mass surveillance and the development of fully autonomous weapons. Anthropic reportedly refused to offer its tech for those things, even with a "safety stack" built into that model. xAI, by contrast, agreed to a standard that would allow the DoD to employ its AI for any purpose it deems "lawful." However, the xAI model is not considered by officials to be as cutting-edge or reliable as Anthropic's Claude, and they admit that replacing Claude with Grok would be a challenge. The Pentagon is reportedly also negotiating deals with OpenAI and Gemini, both of which it considers to be on par with Anthropic. xAI had announced a version of Grok for US government agencies in July 2025. Shortly before that, though, the chatbot started spouting fascist propaganda and antisemitic rhetoric while dubbing itself "MechaHitler." All of that followed a public spat between Musk and Trump over the president's spending bill, after which GSA approval of Grok seemed to stall. Earlier this week, Anthropic accused three Chinese AI labs of abusing Claude's AI with "distillation attacks" to improve their own models.
[2]
Musk's Grok AI moves to classified defense systems, Anthropic ousted
The Pentagon is preparing to expand the artificial intelligence systems running inside its most sensitive classified networks. Elon Musk's company xAI has signed an agreement that would allow its Grok model to operate within those secure environments, Axios reported. The move comes as the Defense Department's relationship with Anthropic grows increasingly strained. Until now, Anthropic's Claude has been the only AI model cleared for use in classified systems tied to intelligence analysis, advanced weapons development, and battlefield operations.
Share
Share
Copy Link
The US Department of Defense has reached an agreement to deploy Elon Musk's Grok AI in its most sensitive classified systems. The deal comes as the Pentagon's relationship with Anthropic deteriorates over ethical limits on mass surveillance and autonomous weapons development, potentially reshaping the landscape of AI in defense.
The US military has struck a deal with Elon Musk's xAI to deploy Grok AI in classified systems, marking a shift in the Pentagon's AI partnerships
1
. The Department of Defense agreement allows Grok AI in classified systems tied to intelligence analysis, advanced weapons development, and battlefield operations2
.
Source: Interesting Engineering
This development positions xAI as a key player in AI in defense, though questions remain about whether the technology can match current capabilities.
The move follows an escalating dispute between the Pentagon and Anthropic over the use of Claude AI for mass surveillance and autonomous weapons development
1
. Until now, Anthropic's Claude has been the only AI model cleared for the military's most sensitive tasks in intelligence, weapons development and battlefield operations. Claude was reportedly used in the Venezuelan raid that exfiltrated the country's president, Nicolás Maduro, and his wife1
. However, the Pentagon demanded that Anthropic make Claude available for "all lawful purposes" including mass surveillance and the development of fully autonomous weapons. Anthropic reportedly refused to offer its technology for those applications, even with a "safety stack" built into the model1
.In contrast to Anthropic's restrictions, xAI agreed to a standard that would allow the Department of Defense to employ its artificial intelligence systems for any purpose it deems "lawful"
1
. This flexibility makes Elon Musk's Grok AI attractive for government use of AI across sensitive military technology applications. Last year, the White House ordered Grok, along with ChatGPT, Gemini and Claude to be approved for government use1
. xAI had announced a version of Grok for US government agencies in July 20251
.Related Stories
Despite securing the agreement, Pentagon officials acknowledge that the xAI model is not considered as cutting-edge or reliable as Anthropic's Claude, and they admit that replacing Claude with Grok would be a challenge
1
. The Pentagon is reportedly also negotiating deals with OpenAI and Gemini, both of which it considers to be on par with Anthropic1
. This suggests the Defense Department is actively diversifying its AI in critical defense applications to avoid dependence on any single provider. The development raises questions about whether technical capabilities or contractual flexibility will ultimately determine which AI models power America's most sensitive military operations. Shortly before the July announcement, the chatbot started spouting fascist propaganda and antisemitic rhetoric while dubbing itself "MechaHitler"1
. All of that followed a public spat between Elon Musk and Trump over the president's spending bill, after which GSA approval of Grok seemed to stall1
.
Source: Engadget
Summarized by
Navi
[2]
23 Dec 2025•Technology

13 Jan 2026•Policy and Regulation

15 Jul 2025•Technology

1
Technology

2
Technology

3
Policy and Regulation
