Anthropic accuses Chinese AI labs of industrial-scale theft as chip export debate intensifies

Reviewed by Nidhi Govil


Anthropic alleges three Chinese AI companies—DeepSeek, Moonshot, and MiniMax—created over 24,000 fake accounts to extract Claude's capabilities through 16 million exchanges. The accusations spotlight distillation techniques and fuel debates over AI chip export controls, with critics noting the irony given Anthropic's own controversial data scraping practices.

Anthropic Alleges Massive AI Model Theft Campaign

Anthropic has accused three Chinese AI labs of orchestrating what it calls "industrial-scale campaigns" to steal capabilities from its Claude AI model [1]. The San Francisco-based company claims that DeepSeek, Moonshot, and MiniMax created over 24,000 fraudulent accounts and generated more than 16 million exchanges with Claude to improve their own AI systems [2]. The accusations arrive at a critical moment as U.S. policymakers debate the future of AI chip export controls, a policy designed to limit China's AI development.

Source: DT

The alleged AI model theft targeted Claude's most advanced features, including agentic reasoning, tool use, and coding capabilities [1].

Source: Benzinga

Anthropic says the Chinese AI labs employed model distillation, a legitimate training method that AI companies use on their own models to create smaller, more efficient versions. When competitors deploy this approach against a rival's model, however, it effectively allows them to copy that model's capabilities at a fraction of the development time and cost [2].
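Distillation itself is simple to illustrate. In the toy sketch below (illustrative only, not any lab's actual pipeline; all logit values are invented), a "student" model is scored on how closely its output distribution matches the "teacher's" temperature-softened outputs:

```python
import math

def softmax(logits, temperature=1.0):
    # Higher temperature spreads probability mass across more options,
    # exposing the teacher's "dark knowledge" about near-miss answers.
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between softened teacher and student distributions;
    # the student is trained to minimize this, mimicking the teacher.
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return sum(ti * math.log(ti / si) for ti, si in zip(t, s))

teacher = [4.0, 1.0, 0.5]        # hypothetical teacher outputs for one query
good_student = [3.8, 1.1, 0.4]   # already close to the teacher
bad_student = [0.5, 4.0, 1.0]    # disagrees with the teacher

assert distillation_loss(teacher, good_student) < distillation_loss(teacher, bad_student)
```

In practice a lab would collect millions of teacher responses (here, the alleged Claude exchanges) and minimize this loss over its own model's parameters, which is why raw query volume matters so much.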

How the Industrial-Scale Copying Operation Worked

The perpetrators used sophisticated methods to bypass Anthropic's regional access restrictions and commercial blocks on China [3]. They tapped commercial proxy services that resell access to frontier AI models at scale, building what Anthropic calls "hydra clusters": sprawling networks of fraudulent accounts that distribute traffic across APIs and third-party cloud providers [4]. In one case, a single proxy network managed more than 20,000 fraudulent accounts simultaneously [3].

Source: SiliconANGLE

The scale of illicit model extraction varied significantly across the three companies. DeepSeek conducted over 150,000 exchanges that targeted foundational logic and alignment, specifically seeking censorship-safe alternatives to politically sensitive queries about dissidents, party leaders, and authoritarianism [1]. Anthropic analyzed API traffic metadata and traced requests directly to staffers at DeepSeek and Moonshot AI.

Moonshot AI, which recently released its open-source Kimi K2.5 model and a coding agent, accounted for more than 3.4 million exchanges targeting agentic reasoning and tool use, coding and data analysis, computer-use agent development, and computer vision [1]. MiniMax conducted the largest campaign, with over 13 million exchanges focused on agentic coding and orchestration [4]. Anthropic observed MiniMax redirecting nearly half its traffic to extract capabilities from Claude's latest model within 24 hours of its launch [1].

National Security Risks and AI Chip Export Controls Debate

Anthropic argues that extraction at this scale requires access to advanced chips, reinforcing the rationale for AI chip export controls [1]. The company warns that illicit model extraction creates national security risks because models built through distillation are unlikely to retain AI safeguards that prevent malicious actors from using AI to develop bioweapons or carry out cyber operations [1]. Foreign labs that distill American models can strip out protections and feed capabilities into military, intelligence, and surveillance systems, enabling authoritarian governments to deploy frontier AI for offensive operations, disinformation campaigns, and mass monitoring [2].

Dmitri Alperovitch, chairman of the Silverado Policy Accelerator and co-founder of CrowdStrike, told TechCrunch the findings confirm what many suspected: "It's been clear for a while now that part of the reason for the rapid progress of Chinese AI models has been theft via distillation of US frontier models. Now we know this for a fact" [1]. He argued this provides compelling reasons to refuse selling AI chips to these companies.

The accusations come a month after the Trump administration formally allowed U.S. companies like Nvidia to export advanced AI chips such as the H200 to China [1]. Critics have argued that loosening export controls increases China's AI computing capacity at a critical moment in the global race for AI dominance.

Hypocrisy Claims and Enforcement Challenges

Even as it presses these allegations, Anthropic faces accusations of hypocrisy on social media, given its own controversial record of scraping data across the internet to train Claude, allegedly including copyrighted books used without authors' permission [3]. Programmer Gergely Orosz tweeted, "Sorry but Anthropic can't have it both ways. Also let's not forget how Anthropic itself trained Claude: on copyrighted books, only paying copyright holders after a lawsuit" [3]. Elon Musk also weighed in, questioning how Anthropic could complain about theft when it allegedly trained AI on projects from software developers without permission [3].

Legal experts note that the violations center on breaches of end-user license agreements and terms of service rather than straightforward intellectual property theft [5]. Many legal scholars argue that generative AI companies probably don't own rights over either their models or the output of those models [5]. This makes enforcement across international borders particularly challenging, as Silicon Valley confronts the reality that laws which are unenforceable and profitable to break will likely be ignored [5].

Industry Response and Future Implications

Anthropic says it will invest in defenses that make distillation attacks harder to execute and easier to identify, deploying classifiers and behavioral fingerprinting systems to detect extraction patterns in API traffic [4]. The company is sharing technical indicators with other AI labs, cloud providers, and authorities, while tightening verification for the educational, research, and startup accounts often used for fraudulent account creation [4].
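Anthropic has not published how its detection works; the sketch below is a toy illustration of behavioral fingerprinting under invented signals and thresholds. It flags accounts that send many near-identical prompts, a crude proxy for automated bulk extraction (real systems would also use timing, network origin, and embedding-based similarity):

```python
from collections import Counter

# Hypothetical per-account API logs: (account_id, prompt) pairs.
LOGS = [
    ("acct-1", "translate: hello"), ("acct-1", "translate: goodbye"),
    ("acct-2", "solve step by step: task 1"), ("acct-2", "solve step by step: task 2"),
    ("acct-2", "solve step by step: task 3"), ("acct-2", "solve step by step: task 4"),
]

def flag_suspicious(logs, min_requests=4, max_diversity=0.6):
    # Group prompts by account, then flag accounts that combine high
    # volume with low prompt diversity (many reuses of one template).
    by_account = {}
    for acct, prompt in logs:
        by_account.setdefault(acct, []).append(prompt)
    flagged = []
    for acct, prompts in by_account.items():
        # Crude "template" fingerprint: the first three words of each prompt.
        templates = Counter(" ".join(p.split()[:3]) for p in prompts)
        diversity = len(templates) / len(prompts)
        if len(prompts) >= min_requests and diversity <= max_diversity:
            flagged.append(acct)
    return flagged

print(flag_suspicious(LOGS))  # flags acct-2 only
```

The harder problem, which this sketch ignores, is that hydra clusters spread the same workload thinly across thousands of accounts, so per-account thresholds like these must be combined with cross-account correlation.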

However, Anthropic acknowledges that countering attacks at this scale requires "a coordinated response across the AI industry, cloud providers, and policymakers" [1]. The company warns these campaigns are growing in intensity and sophistication, with the window to act narrowing [3].

As AI becomes more important and revenue-generating, governments and companies in other parts of the world will likely refuse to comply with Silicon Valley's expectations and Washington's demands [5]. The more frontier labs try to monetize their competitive edge, the more likely companies in the developing world will attempt to swim across their moat. AI companies may need to follow the music industry's example: ending piracy not by complaining about the law, but by making products more accessible and raising revenues elsewhere [5].

TheOutpost.ai

© 2026 Triveous Technologies Private Limited