OpenAI, Anthropic, Google Unite to Combat AI Model Copying Through Frontier Model Forum

Reviewed by Nidhi Govil


Rivals OpenAI, Anthropic, and Google are collaborating through the Frontier Model Forum to combat adversarial distillation by Chinese competitors. The rare partnership addresses concerns that unauthorized use of US AI models costs Silicon Valley billions annually while posing national security risks.

OpenAI, Anthropic, and Google Collaboration Takes Shape

In a rare display of unity, fierce competitors OpenAI, Anthropic, and Alphabet's Google have joined forces to address a mounting threat in the global AI race. The three companies are sharing information through the Frontier Model Forum, an industry nonprofit they founded with Microsoft in 2023, to detect adversarial distillation attempts that violate their terms of service [1]. This collaboration underscores growing concerns that Chinese bids to clone models are creating imitation AI products that could undercut US firms on price while siphoning away customers [2].

Understanding Adversarial Distillation and Its Economic Impact

AI model copying through distillation involves using an older "teacher" model to train a newer "student" model that replicates its capabilities at a fraction of the original development cost. While some forms of distillation are accepted, such as when companies create smaller versions of their own models, the practice becomes controversial when third parties, particularly in adversary nations like China or Russia, use it to replicate proprietary work without authorization [1]. US officials estimate that unauthorized use of US AI models costs Silicon Valley labs billions of dollars in annual profit, according to sources familiar with the findings. The economic impact is amplified by the fact that most Chinese models are open weight, making them publicly available and cheaper to use: a direct challenge to US companies that have invested hundreds of billions of dollars in data centers and infrastructure while keeping their models proprietary.

DeepSeek Controversy Catalyzes Information Sharing

The issue gained significant attention following DeepSeek's January 2025 release of its R1 reasoning model. Microsoft and OpenAI subsequently investigated whether the Chinese startup had improperly extracted large amounts of data from US models to create R1 [1].

Source: Bloomberg


In February, OpenAI warned the House Select Committee on China that DeepSeek continued using increasingly sophisticated tactics to extract results from US models, accusing the firm of attempting to "free-ride on the capabilities developed by OpenAI and other US frontier labs." OpenAI confirmed its participation in the information sharing effort on adversarial distillation through the Frontier Model Forum and referenced its congressional memo on the practice.

National Security Risks Drive Urgency to Protect Intellectual Property

Beyond economic concerns, leading US AI labs warn that foreign adversaries could use model extraction techniques to develop AI systems stripped of safety guardrails. Those guardrails typically prevent users from creating deadly pathogens or pursuing other dangerous applications, making their removal a serious national security risk [1]. The collaboration to protect intellectual property mirrors standard practice in the cybersecurity industry, where firms regularly exchange data on attacks and adversaries' tactics to strengthen network defenses. By working together, the AI companies aim to more effectively detect unauthorized distillation, identify the responsible parties, and prevent future attempts.

Trump Administration Signals Support for Information Sharing Center

The Trump administration has indicated openness to fostering collaboration among AI companies to combat adversarial distillation. The AI Action Plan unveiled by President Donald Trump called for creating an information sharing and analysis center partly for this purpose [1]. However, current information sharing remains limited by uncertainty over what companies can disclose under existing antitrust guidance while countering the competitive threat from China. As AI innovation accelerates and price undercutting from imitation products intensifies, the industry faces critical decisions about balancing competitive interests with collective defense against unauthorized model copying that threatens both commercial viability and national security.
