Anthropic accuses Chinese AI labs of extracting Claude's capabilities through massive distillation

Anthropic claims three Chinese AI companies created over 24,000 fraudulent accounts to generate 16 million exchanges with Claude AI, targeting its reasoning and coding abilities. The accusations fuel debates over AI chip exports to China as DeepSeek, Moonshot, and MiniMax allegedly used distillation to train their own models at a fraction of the cost.

Anthropic Exposes Industrial-Scale Copying Campaign

Anthropic has accused three Chinese AI labs—DeepSeek, Moonshot, and MiniMax—of conducting what it describes as industrial-scale copying campaigns against its Claude AI model. The San Francisco-based company claims these firms created over 24,000 fraudulent accounts and generated more than 16 million exchanges with Claude to improve their own models through a technique called distillation [1]. While distillation is a legitimate training method that AI labs use to create smaller, cheaper versions of their own models, Anthropic argues that competitors can exploit it to essentially copy the homework of rival companies [1].
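
In its textbook form, distillation trains a small student model to mimic a larger teacher's full output distribution rather than just its final answers. The sketch below shows the standard soft-target loss; it is a minimal illustration of the general technique, not a claim about any lab's actual pipeline.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution; a higher
    temperature softens the distribution, exposing more of the
    teacher's relative preferences between options."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.
    The student is trained to minimize this, i.e. to reproduce the
    teacher's entire output distribution."""
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [4.0, 1.0, 0.5]
print(distillation_loss(teacher, [4.0, 1.0, 0.5]))  # ~0.0: student matches teacher
print(distillation_loss(teacher, [0.5, 1.0, 4.0]))  # positive: student diverges
```

Because the student learns from the teacher's probabilities over all options, not just its top answer, large volumes of captured model exchanges make unusually rich training material.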

Source: Benzinga

The allegations come as the U.S. debates how strictly to enforce export controls on advanced AI chips, a policy designed to curb China's AI development [1]. Anthropic's claims add fuel to growing concerns about Chinese firms improperly gaining an edge in the global AI race, particularly as DeepSeek prepares to release its V4 model, which reportedly outperforms both Claude and OpenAI's ChatGPT in coding tasks [1].

Source: TechCrunch

How Chinese AI Labs Allegedly Extracted Claude's Capabilities

According to Anthropic, the three companies followed a consistent pattern: they used commercial proxy services that resell access to frontier AI models and built what the company calls "hydra clusters"—sprawling networks of fraudulent accounts that distribute traffic across Anthropic's API as well as third-party cloud providers [3]. In one case, a single proxy network managed more than 20,000 fraudulent accounts simultaneously [3]. The company pinpointed the three firms with "high confidence" based on internet protocol addresses, metadata, and corroboration from industry partners who observed the same actors on their platforms [5].

The scale of each attack differed significantly. DeepSeek generated over 150,000 exchanges that targeted foundational logic and alignment, specifically around censorship-safe alternatives to politically sensitive questions about dissidents, party leaders, or authoritarianism [1]. Moonshot AI, known for its Kimi models, had more than 3.4 million exchanges targeting agentic reasoning and tool use, coding and data analysis, computer-use agent development, and computer vision [1]. MiniMax conducted the largest campaign, with over 13 million exchanges focused on agentic coding and orchestration [4]. Anthropic observed MiniMax redirecting nearly half its traffic to siphon capabilities from the latest Claude model within 24 hours of its launch [1].

National Security Risks and the Export Controls Debate

Anthropic warns that illicitly distilled models are "unlikely" to retain existing safeguards built into American AI systems [2]. The company argues that foreign labs that distill American models can remove these protections, feeding model capabilities into military, intelligence, and surveillance systems [2]. This enables authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance [1]. Anthropic and other U.S. companies build systems that prevent state and non-state actors from using AI to develop bioweapons or carry out malicious cyber activities, creating national security risks when those protections are stripped out [1].

Source: Engadget

The accusations arrive at a critical moment for U.S. policy. Last month, the Trump administration formally allowed U.S. companies like Nvidia to export advanced AI chips such as the H200 to China [1]. Anthropic contends that executing distillation at this scale "requires access to advanced chips," reinforcing the rationale for export controls, since restricted chip access limits both direct model training and the scale of illicit distillation [1]. Dmitri Alperovitch, chairman of the Silverado Policy Accelerator and co-founder of CrowdStrike, told TechCrunch that "part of the reason for the rapid progress of Chinese AI models has been theft via distillation of US frontier models," adding that this should provide "even more compelling reasons to refuse to sell any AI chips" to these companies [1].

Claims of Hypocrisy Cloud the Accusations

While Anthropic calls for coordinated action from the AI industry, cloud providers, and policymakers, the company faces criticism over what some perceive as hypocrisy [3]. Anthropic itself has a controversial record of scraping data from across the internet to train its own models, allegedly including numerous copyrighted books used without the authors' permission [3]. Programmer Gergely Orosz, who publishes a newsletter for software engineers, tweeted that "Anthropic can't have it both ways," noting that the company trained Claude on copyrighted books and only paid copyright holders after a lawsuit [3]. Elon Musk also weighed in, questioning how Chinese companies could "steal the stuff Anthropic stole from human coders" [3].

Despite the backlash, Anthropic says it is strengthening defenses to make large-scale distillation harder to carry out and easier to detect [4]. The company has deployed classifiers and behavioral fingerprinting systems to identify extraction patterns in API traffic, including chain-of-thought elicitation and coordinated multi-account activity [4]. It is also sharing technical indicators of large-scale distillation operations with other AI labs, cloud providers, and authorities, while tightening verification for educational, research, and startup accounts often used to create fraudulent access [4]. The company acknowledges that "no company can solve this alone" and that addressing distillation attacks at this scale requires coordinated industry and policy action [5]. OpenAI similarly warned U.S. lawmakers earlier this month about DeepSeek's "ongoing efforts to free-ride on the capabilities developed by OpenAI and other U.S. frontier labs" [2]. These campaigns are growing in intensity and sophistication, Anthropic warns, with the window to act narrowing as the threat extends beyond any single company or region [5].
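
As an illustration only, the kind of behavioral fingerprinting Anthropic describes can be sketched as grouping API requests by shared traits and flagging fingerprints that span implausibly many accounts. Every field name, value, and threshold below is hypothetical, not Anthropic's actual detection schema.

```python
from collections import defaultdict

def flag_hydra_clusters(requests, account_threshold=100):
    """Toy behavioral-fingerprinting pass over API request logs.

    Groups requests by a crude fingerprint (client /24 subnet plus a
    prompt-template label) and flags any fingerprint shared by
    suspiciously many distinct accounts. All fields are illustrative.
    """
    accounts_by_fp = defaultdict(set)
    for req in requests:
        subnet = ".".join(req["ip"].split(".")[:3])  # first three octets
        fingerprint = (subnet, req["prompt_template"])
        accounts_by_fp[fingerprint].add(req["account_id"])
    # Keep only fingerprints spanning at least `account_threshold` accounts.
    return {fp: accts for fp, accts in accounts_by_fp.items()
            if len(accts) >= account_threshold}

# Hypothetical logs: 150 accounts in one subnet reusing one prompt template.
logs = [{"account_id": f"acct-{i}",
         "ip": f"203.0.113.{i % 250}",
         "prompt_template": "explain-then-solve-v2"}
        for i in range(150)]
for fp, accts in flag_hydra_clusters(logs, account_threshold=100).items():
    print(fp, "->", len(accts), "accounts")
```

A real system would combine far richer signals (timing, payload structure, chain-of-thought elicitation patterns), but the core idea is the same: individual accounts look legitimate, while the cluster-level view does not.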

TheOutpost.ai

© 2026 Triveous Technologies Private Limited