4 Sources
[1]
Elon Musk testifies that xAI trained Grok on OpenAI models | TechCrunch
OpenAI and Anthropic have been on the warpath lately against third-party efforts to train new AI models by prompting their publicly accessible chatbots and APIs, a process known as "distillation." That conversation has focused on Chinese firms using distillation to create open-weight models that are nearly as capable as U.S. offerings, but available at a much lower cost. However, tech workers have widely assumed that American labs use these techniques on each other to avoid falling behind competitors. Now we know it's true in at least one case: on the stand in a California federal court on Thursday, Elon Musk was asked if xAI had used distillation techniques on OpenAI models to train Grok, and he asserted it was a general practice among AI companies. Asked if that meant "yes," he said, "Partly."

Musk is in the process of suing OpenAI, CEO Sam Altman, and Greg Brockman, alleging they breached OpenAI's original nonprofit mission by shifting the entity to a for-profit structure. That trial began this week, featuring testimony from the tech leader. Musk's admission is notable because distillation threatens AI giants by undermining the advantage they've built through investment in compute infrastructure, allowing other software makers to create nearly-as-capable models on the cheap. There's no small amount of irony here, given the bending and alleged breaking of copyright rules by frontier labs in their search for sufficient data to train their models. It's no surprise that Musk's xAI, which started in 2023, years after OpenAI, would try to learn from the then-leader in the field. It's not clear that distillation is explicitly illegal; rather, it may violate the terms of service companies set for users of their products. OpenAI, Anthropic, and Google have reportedly launched an initiative through the Frontier Model Forum to share information about how to combat distillation attempts from China.
These typically involve systematic querying of models to understand their inner workings. To stop the efforts, frontier labs are working to prevent users from making suspicious mass queries. OpenAI did not respond to a request for comment on Musk's admission at press time. Later in his testimony, Musk was asked about a claim he made last summer that xAI would soon be far beyond any company besides Google. In response, he ranked the world's leading AI providers, saying Anthropic held the top spot, followed by OpenAI, Google, and Chinese open source models. He characterized xAI as a much smaller company with just a few hundred employees.
[2]
Elon Musk Seemingly Admits xAI Has Used OpenAI's Models to Train Its Own
While testifying on Thursday in federal court, Elon Musk seemed to indicate that his AI lab may have used OpenAI's models to train xAI's own. He touched upon the topic while sitting on the witness stand, answering cross-examination questions from an OpenAI attorney amid his ongoing legal battle against the ChatGPT-maker. This is the exchange, as best as WIRED could capture it: Distillation is a technique in which a smaller AI model is trained to mimic the behavior of a larger, more capable model, making it cheaper and faster to run while preserving much of its performance. OpenAI's lawyer, William Savitt, then asked whether OpenAI's technology had been used in any way to develop xAI. OpenAI and xAI did not immediately respond to WIRED's request for comment. OpenAI has been trying to prevent its competitors from distilling its AI models, in particular the Chinese AI lab DeepSeek. In a February 2026 memo to a House committee, OpenAI wrote that it has "taken steps to protect and harden our models against distillation." In that memo, OpenAI said it was focused on ensuring a playing field in which "China can't advance autocratic AI by appropriating and repackaging American innovation." The Trump administration has also taken steps to prevent Chinese companies from distilling American AI models. Michael Kratsios, director of the White House Office of Science and Technology Policy, said in an April 2026 memo that the administration would share information with US AI companies about foreign distillation. Kratsios said in a post on X that the "U.S. government is committed to the free and fair development of AI technologies across a competitive ecosystem." American AI labs have used each other's AI models in other ways, to test progress and assess safety. But in today's competitive landscape, some AI companies have completely cut off rival labs.
In August 2025, Anthropic blocked OpenAI's access to its Claude coding models after the company alleged that its terms of service had been violated. More recently, Anthropic cut off xAI from using its AI models for coding as well. In his multi-day cross-examination of Musk, Savitt has questioned Musk about his attempts to assume control of OpenAI, and subsequently, his quest to beat the ChatGPT-maker. On Wednesday, Savitt presented emails and texts from 2017 to support a line of questions as to whether Musk squeezed OpenAI by withholding funding and hiring away key researchers.
[3]
Elon Musk confirms xAI used OpenAI's models to train Grok
In a federal courtroom in California on Thursday, Elon Musk testified that his own AI startup, xAI, has used OpenAI's models to improve its own. The matter in question is model distillation, a common industry practice in which one larger AI model acts as a "teacher" of sorts, passing on knowledge to a smaller "student" model. Although it's often used legitimately within companies, with one of their own AI models training another, the practice is also sometimes used by smaller AI labs to make their models mimic the performance of a larger competitor's model. Asked on the stand whether he knew what model distillation was, Musk said it means using one AI model to train another. When asked whether xAI has distilled OpenAI's technology, Musk seemed to avoid the question, saying that "generally all the AI companies" do such a thing. And when asked if that was a yes, he said, "Partly." When pressed, Musk said, "It is standard practice to use other AIs to validate your AI." Model distillation has been on the rise in recent years and has incited growing controversy among AI labs, since the line between what's legal and what violates a company's terms or policies often falls within a gray area. Companies like OpenAI and Anthropic have accused Chinese firms of distilling their models, with OpenAI publicly stating its concerns about DeepSeek, and Anthropic specifically naming DeepSeek, Moonshot, and MiniMax. Google has also taken steps to prevent what it calls "distillation attacks," or "a method of intellectual property theft that violates Google's terms of service." In Anthropic's own blog post on the matter, the company wrote, "Distillation is a widely used and legitimate training method. For example, frontier AI labs routinely distill their own models to create smaller, cheaper versions for their customers. 
But distillation can also be used for illicit purposes: competitors can use it to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently."
[4]
Elon Musk Says xAI Used OpenAI Models to Train Grok - Decrypt
The disclosure came during Musk's lawsuit against OpenAI over its shift to a for-profit model. Elon Musk said his artificial intelligence company xAI partly used OpenAI models while training its Grok chatbot, according to a report by TechCrunch. The admission is a rare public acknowledgment by a major AI developer of a practice under growing scrutiny. It comes as Musk's case against OpenAI moves forward in federal court, where the trial began this week and will examine the company's governance and the broader AI landscape. Musk made the statement Thursday while testifying in a California federal court, where he is suing OpenAI, CEO Sam Altman, and co-founder Greg Brockman. The lawsuit centers on Musk's claim that OpenAI moved away from its original nonprofit mission. During questioning, Musk was asked whether xAI used distillation techniques on OpenAI models. He reportedly said the answer was "partly," and described the approach as a broader industry practice. Musk co-founded OpenAI in 2015 with Sam Altman, Greg Brockman, Ilya Sutskever, John Schulman, and Wojciech Zaremba as a nonprofit focused on developing artificial intelligence for the benefit of humanity. Musk left the company in 2018. Distillation refers to training a new AI system by querying an existing model through its public interface or API and using those outputs as learning signals. In February, Anthropic accused several Chinese AI developers of using fraudulent accounts to extract large volumes of responses from its Claude chatbot to train competing systems. Earlier this month, the White House warned of "industrial-scale" campaigns using proxy accounts and jailbreaks to replicate U.S. AI capabilities. Musk's testimony indicates that the method is being used by U.S.-based AI companies, not only foreign competitors. The legal boundaries remain unclear. Distillation is not explicitly illegal, but it can raise questions about whether it violates platform rules or terms governing API use. 
Launched in July 2023, xAI entered a market that included Google, Microsoft, and OpenAI, companies with larger teams and more established infrastructure. Earlier that year, Musk and other tech figures had signed an open letter calling for a six-month pause on developing more advanced AI systems, citing potential risks. Musk's remarks suggest xAI may have used his former company's technology to close the gap.
During testimony in a California federal court, Elon Musk confirmed that xAI used model distillation techniques on OpenAI's models to develop Grok. The admission exposes a controversial practice where AI companies train new models by querying competitors' systems, undermining their competitive advantage while raising questions about intellectual property and terms of service violations.

Elon Musk confirmed on Thursday that xAI used OpenAI's models to help train its Grok chatbot, marking a rare public acknowledgment of a controversial AI industry practice. During cross-examination in a California federal court, where Musk is pursuing his lawsuit against OpenAI, CEO Sam Altman, and co-founder Greg Brockman, the tech leader was asked directly whether xAI had employed distillation techniques on OpenAI's technology [1]. When pressed by OpenAI attorney William Savitt, Musk initially deflected, stating that "generally all the AI companies" engage in such practices, before conceding "Partly" when asked for a direct answer [2]. He later added, "It is standard practice to use other AIs to validate your AI" [3].
Model distillation involves training a smaller AI model to mimic the behavior of a larger, more capable model by systematically querying it through publicly accessible chatbots and APIs [3]. The technique lets companies build models nearly as capable as their competitors' offerings at a fraction of the cost and time required for independent development, undermining the competitive advantage that AI giants have built by investing heavily in compute infrastructure [1]. For xAI, which launched in July 2023, years after OpenAI established its market position, distillation offered a way to close the gap with more established players like Google, Microsoft, and OpenAI [4].
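For illustration only: the core of the teacher–student setup described above is a loss that pushes the student's output distribution toward the teacher's. The sketch below is a minimal, hypothetical Python example of that objective (soft-label distillation using a temperature-scaled softmax and KL divergence); the function names are our own invention and are not drawn from any lab's actual codebase.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: a higher temperature softens the
    # distribution, exposing more of the teacher's "dark knowledge."
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence between the teacher's softened output distribution
    # and the student's. Minimizing this over many queries trains the
    # student to mimic the teacher's behavior.
    p = softmax(teacher_logits, temperature)   # teacher "soft labels"
    q = softmax(student_logits, temperature)   # student predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))
```

The loss is zero when the student's outputs match the teacher's exactly and grows as they diverge, which is why distillation at scale requires the systematic mass querying the labs are now trying to detect.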
While model distillation is not explicitly illegal, it often raises questions about whether it violates platform rules and terms of service governing API use [1]. Anthropic has stated that "distillation is a widely used and legitimate training method" when frontier labs distill their own models to create smaller versions for customers, but that it "can also be used for illicit purposes" when competitors use it to acquire capabilities from other labs [3]. The legal boundaries remain unclear, creating a gray area that companies navigate as they balance innovation with intellectual property concerns [4].
The conversation around distillation has intensified as Chinese firms use the method to create open-source models that rival U.S. offerings. OpenAI and Anthropic have publicly accused Chinese AI developers, including DeepSeek, Moonshot, and MiniMax, of using fraudulent accounts to extract large volumes of responses from their systems [2]. In a February 2026 memo to a House committee, OpenAI stated that it has "taken steps to protect and harden our models against distillation," with a focus on ensuring that China can't advance by "appropriating and repackaging American innovation" [2]. Google has similarly moved to prevent what it calls "distillation attacks," describing them as "a method of intellectual property theft that violates Google's terms of service" [3].
OpenAI, Anthropic, and Google have reportedly launched an initiative through the Frontier Model Forum to share information on combating distillation attempts, particularly those involving suspicious mass queries designed to probe a model's inner workings [1]. The Trump administration has also intervened: Michael Kratsios, director of the White House Office of Science and Technology Policy, issued an April 2026 memo promising to share information with U.S. AI companies about foreign distillation [2]. Meanwhile, AI companies have begun cutting off rivals' access to their systems. In August 2025, Anthropic blocked OpenAI's access to its Claude coding models after alleging terms-of-service violations, and more recently cut off xAI's coding access as well [2].
Musk's admission that xAI used OpenAI's models shows that distillation is not just a foreign threat but a practice among U.S. companies themselves [4]. During his testimony, Musk also ranked the leading AI providers, placing Anthropic in the top spot, followed by OpenAI, Google, and Chinese open-source models, while characterizing xAI as a much smaller company with just a few hundred employees [1]. The disclosure carries no small irony, given that frontier labs have faced criticism for allegedly breaking copyright rules in their own search for training data. As AI governance frameworks evolve, the industry faces questions about how to balance competitive innovation with protecting intellectual property while maintaining a level playing field against international competitors.