Elon Musk admits xAI used OpenAI models to train Grok, calling it standard industry practice

Reviewed by Nidhi Govil


During testimony in a California federal court, Elon Musk confirmed that xAI used model distillation techniques on OpenAI's models to develop Grok. The admission highlights a controversial practice in which AI companies train new models by querying competitors' systems, eroding those competitors' advantage while raising questions about intellectual property and terms-of-service violations.


Elon Musk Testimony Reveals xAI Used OpenAI's Models

Elon Musk confirmed on Thursday that xAI used OpenAI's models to help train its Grok chatbot, marking a rare public acknowledgment of a controversial AI industry practice. During cross-examination in a California federal court, where Musk is pursuing his lawsuit against OpenAI, CEO Sam Altman, and co-founder Greg Brockman, the tech leader was directly asked whether xAI had employed distillation techniques on OpenAI's technology [1]. When pressed by OpenAI attorney William Savitt, Musk initially deflected, stating that "generally all the AI companies" engage in such practices, before conceding "Partly" when asked for a direct answer [2]. He later added, "It is standard practice to use other AIs to validate your AI" [3].

Model Distillation Threatens Competitive Advantage

Model distillation involves training a smaller AI model to mimic the behavior of a larger, more capable model by systematically querying it through publicly accessible chatbots and APIs [3]. This technique lets companies create models that approach the capability of their competitors' offerings at a fraction of the cost and time required for independent development, undermining the competitive advantage that AI giants have built through heavy investment in compute infrastructure [1]. For xAI, which launched in July 2023, years after OpenAI established its market position, distillation offered a way to close the gap with more established players such as Google, Microsoft, and OpenAI [4].
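The query-and-train loop described above can be sketched in a few lines. This is a toy illustration only: `query_teacher` stands in for an API call to a competitor's chatbot (no real vendor's interface is assumed), and the lookup-table "student" is a stand-in for what would, in practice, be a smaller neural network fine-tuned on the teacher's responses.

```python
def query_teacher(prompt: str) -> str:
    """Stub for an API call to a larger, more capable model.
    In real distillation this would hit a public chatbot or API."""
    return f"teacher answer to: {prompt}"

def build_distillation_dataset(prompts):
    # Systematic querying: each prompt yields an (input, target) pair
    # that the student will be trained to reproduce.
    return [(p, query_teacher(p)) for p in prompts]

class LookupStudent:
    """Toy 'student' that simply memorizes the teacher's outputs.
    A real student would be a smaller model trained with a loss
    that pushes its outputs toward the teacher's responses."""
    def __init__(self):
        self.table = {}

    def train(self, dataset):
        for prompt, target in dataset:
            self.table[prompt] = target

    def generate(self, prompt):
        return self.table.get(prompt, "")

prompts = ["What is distillation?", "Summarize this contract."]
student = LookupStudent()
student.train(build_distillation_dataset(prompts))
print(student.generate(prompts[0]))  # echoes the teacher's behavior
```

The economics follow from the asymmetry this sketch shows: the expensive step (the teacher's training) happened elsewhere, while the distilling party pays only for queries and a much cheaper training run.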

Legal Gray Area and Terms of Service Violations

While model distillation is not explicitly illegal, it often raises questions about whether it violates the platform rules and terms of service governing API use [1]. Anthropic has stated that "distillation is a widely used and legitimate training method" when frontier labs distill their own models to create smaller versions for customers, but that it "can also be used for illicit purposes" when competitors use it to acquire capabilities from other labs [3]. The legal boundaries remain unclear, creating a gray area that companies navigate as they balance innovation with intellectual property concerns [4].

Growing Concerns About Chinese AI Labs

The conversation around distillation has intensified as Chinese firms use the method to create open-source models that rival U.S. offerings. OpenAI and Anthropic have publicly accused Chinese AI developers, including DeepSeek, Moonshot, and MiniMax, of using fraudulent accounts to extract large volumes of responses from their systems [2]. In a February 2026 memo to a House committee, OpenAI stated it has "taken steps to protect and harden our models against distillation," focusing on ensuring China can't advance by "appropriating and repackaging American innovation" [2]. Google has similarly moved to prevent what it calls "distillation attacks," describing them as "a method of intellectual property theft that violates Google's terms of service" [3].

Frontier Labs Take Defensive Measures

OpenAI, Anthropic, and Google have reportedly launched an initiative through the Frontier Model Forum to share information about combating distillation attempts, particularly suspicious mass queries designed to probe models' inner workings [1]. The Trump administration has also intervened: Michael Kratsios, director of the White House Office of Science and Technology Policy, issued an April 2026 memo promising to share information with U.S. AI companies about foreign distillation [2]. Meanwhile, AI companies have begun cutting off rivals' access to their systems. In August 2025, Anthropic blocked OpenAI's access to its Claude coding models after alleging terms-of-service violations, and more recently cut off xAI's use of its models for coding as well [2].

Implications for AI Governance

Musk's admission that xAI used OpenAI's models shows that distillation is not just a foreign threat but also a common practice among U.S. companies [4]. During his testimony, Musk ranked the leading AI providers, placing Anthropic in the top spot, followed by OpenAI, Google, and Chinese open-source models, while characterizing xAI as a much smaller company with just a few hundred employees [1]. The disclosure carries some irony, given that frontier labs have themselves faced criticism for allegedly breaking copyright rules in their search for sufficient training data. As AI governance frameworks evolve, the industry faces questions about how to balance competitive innovation with protecting intellectual property while maintaining a level playing field against international competitors.

© 2026 TheOutpost.AI All rights reserved