4 Sources
[1]
OpenAI, Anthropic, Google Unite to Combat Model Copying in China
Rivals OpenAI, Anthropic PBC, and Alphabet Inc.'s Google have begun working together to try to clamp down on Chinese competitors extracting results from cutting-edge US artificial intelligence models to gain an edge in the global AI race. The firms are sharing information through the Frontier Model Forum, an industry nonprofit that the three tech companies founded with Microsoft Corp. in 2023, to detect so-called adversarial distillation attempts that violate their terms of service, according to people familiar with the matter.

The rare collaboration underscores the severity of a concern raised by US AI companies that some users, especially in China, are creating imitation versions of their products that could undercut them on price and siphon away customers while posing a national security risk. US officials have estimated that unauthorized distillation costs Silicon Valley labs billions of dollars in annual profit, according to a person familiar with the findings who described them on condition of anonymity.

OpenAI confirmed it's part of the information sharing effort on adversarial distillation through the Frontier Model Forum and pointed to a recent memo it sent to Congress on the practice, where it accused Chinese firm DeepSeek of trying to "free-ride on the capabilities developed by OpenAI and other US frontier labs." Google, Anthropic, and the Frontier Model Forum declined to comment.

Distillation is a technique where an older "teacher" AI model is used to train a newer "student" model that replicates the capabilities of the earlier system -- often at a much lower cost than producing an original model from scratch. Some forms of distillation are widely accepted and even encouraged by AI labs, such as when companies create smaller, more efficient versions of their own models, or allow outside developers to use distillation to build non-competitive technologies.
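The teacher/student training described above can be sketched in a few lines. The following is a purely illustrative toy (plain NumPy, made-up logits; real distillation pipelines train full networks over large datasets): the student's logits are pulled toward the teacher's temperature-softened distribution by gradient descent on the KL divergence, which is the standard distillation loss.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax: higher T yields softer probabilities.
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distill_step(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on temperature-softened distributions.
    # Returns the loss and its gradient w.r.t. the student's logits.
    p = softmax(teacher_logits, T)   # soft targets from the teacher
    q = softmax(student_logits, T)   # student's current predictions
    loss = float(np.sum(p * (np.log(p) - np.log(q))))
    grad = (q - p) / T               # d(loss)/d(student_logits)
    return loss, grad

# Toy example: pull an uninformed student toward a fixed teacher.
teacher = np.array([4.0, 1.0, 0.5])
student = np.zeros(3)
for _ in range(200):
    loss, grad = distill_step(student, teacher, T=2.0)
    student -= 1.0 * grad            # plain gradient descent
```

After training, the student reproduces the teacher's output distribution (up to a constant logit shift) without ever seeing the teacher's weights, which is why API access alone is enough for the extraction the article describes.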
Yet distillation has been controversial when used by third parties -- particularly in adversary nations like China or Russia -- to replicate proprietary work without authorization. Leading US AI labs have warned that foreign adversaries could use the technique to develop AI models stripped of safety guardrails, such as limits that would prevent users from creating a deadly pathogen.

Most models made by Chinese labs are open weight, meaning that parts of the underlying AI system are publicly available for users to freely download and run on their own platforms, and therefore cheaper to use. That poses an economic challenge for US AI companies that have kept their models proprietary, betting that customers will pay for access to their products and help offset the hundreds of billions of dollars they've spent on data centers and other infrastructure.

Distillation first drew significant scrutiny in January 2025 in the weeks after DeepSeek's surprise release of the R1 reasoning model that took the AI world by storm. Soon after, Microsoft and OpenAI investigated whether the Chinese startup had improperly exfiltrated large amounts of data from the US firm's models to create R1, Bloomberg previously reported. In February, OpenAI warned US lawmakers that DeepSeek had continued to use increasingly sophisticated tactics to extract results from US models, despite heightened efforts to prevent misuse of its products. OpenAI claimed in its memo to the House Select Committee on China that DeepSeek was relying on distillation to develop a new version of its breakthrough chatbot.
Information-sharing by US AI companies about adversarial distillation echoes a standard practice in the cybersecurity industry, where firms regularly swap data on attacks and adversaries' tactics as a way to strengthen network defenses. By working together, the AI firms are similarly seeking to more effectively detect the practice, identify who's responsible and try to prevent unauthorized users from succeeding.

Trump administration officials have signaled their openness to fostering information sharing among AI companies to rein in adversarial distillation. The AI Action Plan unveiled by President Donald Trump last year called for the creation of an information sharing and analysis center, in part for this purpose. For now, information sharing on distillation remains limited due to AI companies' uncertainty about what can be shared under existing antitrust guidance to counter the competitive threat from China, according to people familiar with the matter. The firms would benefit from greater clarity from the US government, the people said.

Distillation has ranked as a top concern among American AI developers since DeepSeek rattled global markets in early 2025 with its R1 release. Highly capable open-source models continue to proliferate in China, and many in the industry are watching closely for a major upgrade to DeepSeek's model. Last year, Anthropic blocked Chinese-controlled companies from using its Claude chatbot model, and in February it identified three Chinese AI labs -- DeepSeek, Moonshot, and MiniMax -- as illicitly extracting the model's capability via distillation. This year, Anthropic said the threat "extends beyond any single company or region" and poses a national security risk, since distilled models often lack safety guardrails designed to prevent bad actors from using AI tools for malicious activities.
Google has published a blog post saying it identified an increase in model extraction attempts. The three US AI labs have not yet provided evidence showing how much of China's model innovation relies on distillation, but they note that the prevalence of attacks can be measured by the volume of large-scale data requests.
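The articles do not say how the labs actually measure extraction attempts beyond "volumes of large-scale data requests." As a purely illustrative sketch (the function name, log format, and threshold here are assumptions, not anything the companies have disclosed), a crude volume-based heuristic could flag accounts whose request counts are statistical outliers:

```python
from collections import Counter

def flag_high_volume_accounts(request_log, threshold=3.5):
    # request_log: iterable of (account_id, ...) API request records.
    # Flags accounts whose request count is an outlier under the
    # modified z-score (median absolute deviation), a common robust
    # anomaly heuristic; 3.5 is a conventional cutoff, not a value
    # any lab has published.
    counts = Counter(acct for acct, *_ in request_log)
    vols = sorted(counts.values())
    n = len(vols)
    median = vols[n // 2]
    mad = sorted(abs(v - median) for v in vols)[n // 2] or 1
    return {acct for acct, v in counts.items()
            if 0.6745 * (v - median) / mad > threshold}

# Toy log: one account issuing bulk queries dwarfs ordinary users.
log = ([("u1", "q")] * 40 + [("u2", "q")] * 55
       + [("u3", "q")] * 48 + [("scraper", "q")] * 5000)
```

Real detection would be far richer (query diversity, prompt patterns, account linkage across shared intel), but request volume is the signal the labs themselves cite.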
[2]
Google, OpenAI to Join Forces to Fight AI Model Copying in China
The three firms are reportedly sharing intel via the Frontier Model Forum

The three giants of the artificial intelligence (AI) space -- Anthropic, Google, and OpenAI -- have reportedly joined forces to combat model distillation attempts by Chinese rivals. According to the report, the Silicon Valley firms have agreed to share information to ensure that their frontier AI models are not copied and brought to market at a cheaper price point. Notably, in 2025, OpenAI first accused Chinese AI firm DeepSeek of distilling its model to build DeepSeek-R1.

Anthropic, Google, and OpenAI Unite to Fight Model Distillation Attempts

According to a Bloomberg report, the three AI firms are now working together to crack down on attempts by Chinese AI developers that allegedly use model distillation techniques to copy Anthropic, Google, and OpenAI's proprietary large language models (LLMs). The AI companies are reportedly sharing information via the Frontier Model Forum, a nonprofit founded by the three companies alongside Microsoft in 2023. The body was formed to promote the safe and responsible development of frontier AI systems and knowledge sharing, and the same forum is said to be used to detect adversarial distillation attempts.

In AI parlance, model distillation is a technique where a smaller, more efficient student model is trained to mimic the behaviour of a larger teacher model. In authorised scenarios, the method helps companies save costs on developing smaller models by using the frontier AI model as the reference point: the student learns structured outputs and retrieval methods from the larger model, although it operates at a smaller scale.

The allegation from the three AI giants is that Chinese companies are using their AI services in a way that breaches their terms of use, essentially generating large volumes of outputs and using that data to train their own models.
Once the trained models become efficient enough, they are released at highly competitive prices that undercut the models from the Silicon Valley developers, the report claimed. The imitation models are costing US-based AI firms billions of dollars in annual profits, the report added, citing a person familiar with the findings. However, Gadgets 360 staff members were not able to independently verify these claims. Anthropic, Google, and OpenAI have reportedly also called the distillation efforts a national security risk.
[3]
OpenAI, Anthropic and Google cooperate to fend off Chinese bids to clone models
Rivals OpenAI, Anthropic, and Alphabet's Google have begun working together to try to clamp down on Chinese competitors extracting results from cutting-edge U.S. artificial intelligence models to gain an edge in the global AI race. The firms are sharing information through the Frontier Model Forum, an industry nonprofit that the three technology companies founded with Microsoft in 2023, to detect so-called adversarial distillation attempts that violate their terms of service, according to people familiar with the matter. The rare collaboration underscores the severity of a concern raised by U.S. AI companies that some users, especially in China, are creating imitation versions of their products that could undercut them on price and siphon away customers while posing a national security risk.
[4]
OpenAI, Google, Anthropic Unite to Counter AI Model Replication & Reshape Global Tech Race
Silicon Valley Rivals Unite to Confront Growing Risks of AI Model Replication

OpenAI, Google, and Anthropic, companies that compete fiercely for talent, users, and headlines, have decided to join forces. The tech giants will work together to address issues related to AI model replication. Smaller models can learn by repeatedly querying more powerful systems, collecting answers, and recreating similar behaviour. Experts call this distillation. Companies now view the process as a shortcut that erodes years of research and investment.
Three Silicon Valley rivals—OpenAI, Google, and Anthropic—are joining forces through the Frontier Model Forum to fight unauthorized copying of AI models by Chinese competitors. US officials estimate adversarial distillation costs American AI labs billions in annual profits, while raising national security concerns about imitation AI models stripped of safety guardrails.
In a striking shift from fierce competition to collaboration, OpenAI, Google, and Anthropic have begun working together to combat AI model copying by Chinese AI competitors. The three companies are sharing information through the Frontier Model Forum, an industry nonprofit they founded with Microsoft in 2023, to detect adversarial distillation attempts that violate their terms of service [1]. This OpenAI, Google, Anthropic collaboration marks a rare instance where Silicon Valley rivals unite to address what they describe as both an economic threat and a national security risk [3].
Source: Analytics Insight
Model distillation is a technique where a smaller, more efficient "student" model is trained to mimic the behavior of a larger "teacher" model, often at a fraction of the original development cost. While some forms of distillation are widely accepted—such as when companies create smaller versions of their own models—the practice becomes controversial when used by third parties to replicate proprietary work without authorization [1]. The unauthorized copying of AI models involves repeatedly querying powerful systems, collecting answers, and recreating similar behavior—a shortcut that experts say erodes years of research and development [4]. US officials estimate that this practice costs Silicon Valley labs billions of dollars in annual profit, creating significant economic pressure on American AI innovation [1].

The issue of adversarial distillation drew significant attention in January 2025 following DeepSeek's surprise release of the R1 reasoning model. Microsoft and OpenAI subsequently investigated whether the Chinese startup had improperly extracted large amounts of data from US models to create R1 [1]. In February, OpenAI warned US lawmakers that DeepSeek had continued using increasingly sophisticated tactics to extract results from American models, claiming the Chinese firm was relying on distillation to develop new versions of its chatbot. OpenAI accused DeepSeek of attempting to "free-ride on the capabilities developed by OpenAI and other US frontier labs" in a memo to the House Select Committee on China [1].
Source: Bloomberg
Beyond economic losses, US AI companies warn that imitation AI models pose serious national security risks. Leading labs express concern that foreign adversaries could use distillation techniques to develop AI models stripped of safety guardrails, such as limits preventing users from creating deadly pathogens [1]. Most models made by Chinese AI labs are open weight, meaning parts of the underlying AI system are publicly available for free download and use, enabling significant price undercutting [1]. This poses an economic challenge for US companies that have kept their large language models (LLMs) proprietary, betting customers will pay for access to help offset the hundreds of billions spent on data centers and infrastructure [1].
The information sharing approach by US AI companies echoes standard practices in the cybersecurity industry, where firms regularly exchange data on attacks and adversaries' tactics to strengthen network defenses. By working together, the AI firms seek to more effectively detect the teacher-student model exploitation, identify who's responsible, and prevent unauthorized users from succeeding [1]. The Trump administration has signaled openness to fostering such cooperation, with President Donald Trump's AI Action Plan calling for creation of an information sharing and analysis center partly for this purpose [1].

Despite the collaborative effort, information sharing remains limited due to AI companies' uncertainty about what can be shared under existing antitrust guidance while countering the competitive threat from China [1]. This tension between intellectual property protection and regulatory constraints will likely shape the global tech race moving forward. The allegations that Chinese companies breach terms of service by generating massive outputs to train their models highlight growing geopolitical tension in AI development [2]. As the AI race intensifies, observers should watch for potential policy changes that could enable more robust cooperation among American firms while navigating competitive concerns in Silicon Valley.