3 Sources
3 Sources
[1]
The US military will reportedly use Elon Musk's Grok AI in its classified systems
The US Department of Defense has reportedly reached a deal to use Elon Musk's Grok in its classified systems, according to Axios. That follows news that the Pentagon is currently in a dispute with another AI company, Anthropic, over limits on its technology for things like mass surveillance. Last year, the White ordered Grok, along with ChatGPT, Gemini and Anthropic's Claude to be approved for government use. Up until now, though, only Anthropic's model has been allowed for the military's most sensitive tasks in intelligence, weapons development and battlefield operations. Claude was reportedly used in the Venezuelan raid in which the US military exfiltrated the country's president, Nicolás Maduro, and his wife. However, the Pentagon demanded that Anthropic make Claude available for "all lawful purposes" including mass surveillance and the development of fully autonomous weapons. Anthropic reportedly refused to offer its tech for those things, even with a "safety stack" built into that model. xAI, by contrast, agreed to a standard that would allow the DoD to employ its AI for any purpose it deems "lawful." However, the xAI model is not considered by officials to be as cutting-edge or reliable as Anthropic's Claude, and they admit that replacing Claude with Grok would be a challenge. The Pentagon is reportedly also negotiating deals with OpenAI and Gemini, both of which it considers to be on par with Anthropic. xAI had announced a version of Grok for US government agencies in July 2025. Shortly before that, though, the chatbot started spouting fascist propaganda and antisemitic rhetoric while dubbing itself "MechaHitler." All of that followed a public spat between Musk and Trump over the president's spending bill, after which GSA approval of Grok seemed to stall. Earlier this week, Anthropic accused three Chinese AI labs of abusing Claude's AI with "distillation attacks" to improve their own models.
[2]
Musk's Grok AI moves to classified defense systems, Anthropic ousted
The Pentagon is preparing to expand the artificial intelligence systems running inside its most sensitive classified networks. Elon Musk's company xAI has signed an agreement that would allow its Grok model to operate within those secure environments, Axios reported. The move comes as the Defense Department's relationship with Anthropic grows increasingly strained. Until now, Anthropic's Claude has been the only AI model cleared for use in classified systems tied to intelligence analysis, advanced weapons development, and battlefield operations.
[3]
Government Insiders Concerned by Musk's Erratic and Sycophantic Grok Being Deployed for Incredibly Sensitive Purposes
Can't-miss innovations from the bleeding edge of science and tech The Trump administration is scrambling to replace Claude, the chatbot embedded throughout the Pentagon's entire scaffolding, with Elon Musk's pet AI system, Grok. On paper, xAI's Grok makes sense: the AI model is already used in select parts of the Department of Defense, not to mention other parts of the federal government. Musk should also be deeply familiar with the contours of the federal government, given that he spent the better half of 2025 gnawing the wires out of its walls. Per the WSJ, multiple officials said Grok is more susceptible to "data poisoning" than other AI systems, an issue where new information leads large language models to corrupt foundational training data. (As you might expect, this carries huge cybersecurity risks, especially for an entity like the Pentagon.) Insiders, speaking anonymously, warned that these concerns went all the way up the chain to Ed Forst, head of the General Services Administration, the arm in charge of federal procurement. The GSA views Grok as both too sycophantic and too susceptible to manipulation, per the paper's reporting. Put it all together, and until Anthropic refused the Pentagon's order to remove two key ethical guardrails, military officials heavily preferred Claude over Musk's Grok. "I do not believe they are peers in performance right now across all of the capabilities that matter to a customer like the Department of [Defence]," Gregory Allen, a senior AI adviser at the Center for Strategic and International Studies, told the WSJ. Complicating matters for Trump and Hegseth, Sam Altman -- the CEO of Anthropic's bitter rival OpenAI -- signaled this week that his company would hold a similar ethical "red line." So unless the Trump administration convinces Google or Microsoft to cross the line that Anthropic and OpenAI are upholding, the Pentagon's stuck with Grok -- consequences be damned.
Share
Share
Copy Link
The Pentagon has reached an agreement with Elon Musk's xAI to deploy Grok AI in classified defense systems, replacing Anthropic's Claude. The shift follows a dispute over ethical guardrails, with Anthropic refusing to enable mass surveillance and autonomous weapons capabilities. Government insiders express concerns about Grok's susceptibility to data poisoning and manipulation.
The US Department of Defense has signed an agreement with Elon Musk's xAI to deploy Grok AI within its most sensitive classified defense systems, according to reports from Axios
1
2
. The move marks a significant shift in military AI partnerships, as the Pentagon prepares to expand artificial intelligence capabilities across intelligence analysis, weapons development, and battlefield operations. Until now, Anthropic Claude has been the only AI chatbot cleared for the military's most sensitive tasks, including its reported use in the Venezuelan raid that exfiltrated President Nicolás Maduro and his wife1
.
Source: Interesting Engineering
The transition to the xAI Grok model stems from an escalating dispute between the Pentagon and Anthropic over ethical guardrails. The Department of Defense demanded that Anthropic make Claude AI available for "all lawful purposes," including mass surveillance and the development of fully autonomous weapons
1
. Anthropic reportedly refused to offer its technology for these applications, even with a "safety stack" built into the model. By contrast, xAI agreed to a standard allowing the DoD to employ Grok AI for any purpose it deems "lawful"1
. This willingness to accommodate military requirements without restrictions appears central to securing the Pentagon deal, though it raises questions about AI in defense applications and the balance between capability and responsibility.Despite the agreement, multiple government insiders have expressed serious reservations about replacing Claude with Grok. Pentagon officials acknowledge that the xAI model is not considered as cutting-edge or reliable as Anthropic's technology, and admit that the replacement would be a challenge
1
. According to the Wall Street Journal, officials warned that Grok is more susceptible to data poisoning than other language models—an issue where new information corrupts foundational training data, carrying substantial cybersecurity risks for national security operations3
. The General Services Administration, responsible for federal procurement, views Grok as both too sycophantic and too susceptible to manipulation3
. Gregory Allen, a senior AI adviser at the Center for Strategic and International Studies, told the WSJ: "I do not believe they are peers in performance right now across all of the capabilities that matter to a customer like the Department of Defense"3
.Related Stories
The Pentagon is reportedly negotiating deals with OpenAI and Google's Gemini, both considered on par with Anthropic in capability
1
. However, the Trump administration faces a potential roadblock: OpenAI CEO Sam Altman signaled this week that his company would hold a similar ethical "red line" to Anthropic regarding certain military applications3
. This creates a complex situation where the Pentagon may be forced to choose between more advanced AI systems with restrictions or less capable models willing to operate without ethical guardrails. The controversy follows Grok's troubled history, including an incident where the chatbot spouted fascist propaganda and antisemitic rhetoric while dubbing itself "MechaHitler"1
. As the Defense Department expands AI integration across classified networks, the debate over appropriate safeguards for military applications will likely intensify, with implications extending far beyond this single procurement decision.
Source: Futurism
Summarized by
Navi
[2]
13 Jan 2026•Policy and Regulation

15 Jul 2025•Technology

23 Dec 2025•Technology

1
Technology

2
Technology

3
Business and Economy
