Google and OpenAI warn distillation attacks threaten AI models as rivals probe for IP theft

Reviewed by Nidhi Govil


Google and OpenAI revealed that competitors are using distillation attacks to clone their AI models through legitimate access. One campaign used over 100,000 prompts to extract Gemini's reasoning capabilities. Both companies warn this intellectual property theft poses risks to the AI industry, though critics note the irony given their own data scraping practices.

Google and OpenAI Sound Alarm on Distillation Attacks Targeting AI Models

Both Google and OpenAI issued warnings this week that competitors, including Chinese LLM providers such as DeepSeek, are actively probing large language models to extract their underlying reasoning capabilities and replicate them in their own systems. Google calls the practice a distillation attack, describing it as a form of intellectual property theft that violates its terms of service. John Hultquist, chief analyst at Google Threat Intelligence Group, told The Register that threat actors from private-sector companies across the globe are targeting valuable model IP [1]. "Your model is really valuable IP, and if you can distill the logic behind it, there's very real potential that you can replicate that technology - which is not inexpensive," Hultquist explained.

Source: Futurism

How Cloning AI Models Works Through Legitimate Access

Google detected one campaign that used over 100,000 prompts attempting to replicate the model's reasoning ability across various tasks in non-English languages [1]. The company's systems recognized the attack in real time and protected Gemini's internal reasoning traces [2]. This method of model distillation exploits legitimate API access to public-facing LLMs, making it significantly cheaper for competitors to develop their own chatbots without spending billions on training. As Google's report notes, adversaries no longer need conventional computer intrusion to steal trade secrets; they can simply use legitimate service access to clone an AI model's capabilities [2].
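To make the mechanics concrete, the sketch below shows the data-collection step of API-based distillation: a script queries a public model endpoint at scale and saves each prompt-response pair as a supervised training example for a smaller "student" model. This is a minimal illustration assuming the official `openai` Python client; the model name, prompt list, and output file are placeholders, not details from either company's report.

```python
# Minimal sketch of the collection phase of API-based distillation.
# Assumes the official `openai` Python client (>= 1.0); the model
# name, prompts, and file name are illustrative placeholders.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A real campaign would use a far larger, systematically generated
# prompt set; Google's example involved over 100,000 prompts.
prompts = [
    "Explain step by step why the sky appears blue.",
    "Résume et justifie : « Tous les modèles sont faux, certains sont utiles. »",
]

with open("distill_train.jsonl", "w", encoding="utf-8") as out:
    for prompt in prompts:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # the "teacher" model being probed
            messages=[{"role": "user", "content": prompt}],
        )
        answer = resp.choices[0].message.content
        # Each pair becomes one fine-tuning example for the student.
        out.write(json.dumps({"prompt": prompt, "response": answer},
                             ensure_ascii=False) + "\n")
```

Nothing here requires intrusion: every call is ordinary, paid API usage, which is exactly why providers describe detection, not access control, as the hard part.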

OpenAI Blames DeepSeek and Calls for Government Intervention

OpenAI, in a Thursday memo to the House Select Committee on China, specifically blamed DeepSeek, along with Chinese universities, for copying ChatGPT and other frontier models from U.S. firms [1]. The company noted that China's distillation methods have grown more sophisticated over the past year, evolving beyond chain-of-thought extraction to multi-stage operations involving synthetic-data generation and large-scale data cleaning. OpenAI warned that illicit model distillation poses a risk to "American-led, democratic AI" and called for U.S. government intervention to address adversary access to AI infrastructure [1].
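The multi-stage operations OpenAI describes add a cleaning pass over raw teacher outputs before training. As a rough illustration only, the filter below drops refusals, very short replies, and duplicates from the JSONL file produced in the earlier sketch; the heuristics and thresholds are invented, not drawn from OpenAI's memo.

```python
# Illustrative "data cleaning" stage over collected teacher outputs.
# Heuristics and thresholds are assumptions for illustration only.
import json

def clean(in_path: str, out_path: str) -> int:
    """Filter raw prompt/response pairs into a training set; return count kept."""
    kept, seen = 0, set()
    with open(in_path, encoding="utf-8") as src, \
         open(out_path, "w", encoding="utf-8") as dst:
        for line in src:
            example = json.loads(line)
            text = example["response"].strip()
            # Drop refusals, trivially short replies, and exact duplicates.
            if len(text) < 40 or text.lower().startswith(("i can't", "i cannot")):
                continue
            if text in seen:
                continue
            seen.add(text)
            dst.write(json.dumps(example, ensure_ascii=False) + "\n")
            kept += 1
    return kept

print(clean("distill_train.jsonl", "distill_clean.jsonl"), "examples kept")
```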

AI Ecosystem Security Requires Industry-Wide Approach

Both companies acknowledge that individual labs cannot solve this problem alone. OpenAI argues that AI ecosystem security requires an industry-wide approach, stating that "it is not enough for any one lab to harden its protection because adversaries will simply default to the least protected provider" [1]. The company suggests U.S. government policy could help by sharing intelligence, developing best practices on distillation defenses, closing API router loopholes, and restricting adversary access to U.S. compute and cloud infrastructure. Hultquist warned that as more organizations develop models trained on internal, sensitive data, the risk spreads beyond tech giants to financial institutions and other businesses [1].
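On the defensive side, a first-cut heuristic might look like the toy check below: flag accounts whose query volume and prompt diversity resemble systematic extraction rather than normal use. The thresholds are invented for illustration; neither Google nor OpenAI has published its actual detection logic.

```python
# Toy distillation-detection heuristic. Thresholds are invented;
# real systems would combine many more signals (timing, prompt
# templates, language coverage, reasoning-trace requests).
def looks_like_distillation(prompts: list[str],
                            volume_threshold: int = 10_000,
                            min_unique_ratio: float = 0.95) -> bool:
    """Flag accounts with very high volumes of almost entirely unique prompts."""
    if len(prompts) < volume_threshold:
        return False
    # Extraction campaigns avoid repeats: the goal is broad coverage,
    # so nearly every prompt in the stream is unique.
    return len(set(prompts)) / len(prompts) >= min_unique_ratio
```

Even this crude version captures why a single account issuing 100,000 near-unique prompts stands out, and why banning it afterward is only whack-a-mole: the next account starts fresh.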

Hypocrisy Accusations Shadow AI Industry Complaints

The complaints have sparked criticism given that both Google and OpenAI built their AI models by scraping vast amounts of internet content without permission or compensation, facing numerous copyright infringement lawsuits in the process [2]. Critics point out the double standard: while these companies characterize distillation as intellectual property theft, they have shown little regard for others' IP rights. The AI industry now faces a vulnerability that may be impossible to eliminate: public-facing models remain widely accessible, and enforcement against abusive accounts becomes a game of whack-a-mole [1]. Google can ban accounts for terms-of-service violations or pursue legal action, but the fundamental nature of LLMs leaves them open to probing. As smaller entities break through with lower upfront costs, much as DeepSeek did in early 2025, the stakes for protecting proprietary reasoning capabilities continue to rise [2].
