11 Sources
[1]
Anthropic accuses Chinese AI labs of mining Claude as US debates AI chip exports | TechCrunch
Anthropic is accusing three Chinese AI companies of setting up more than 24,000 fake accounts to mine its Claude AI model and improve their own models. The labs -- DeepSeek, Moonshot AI, and MiniMax -- allegedly generated more than 16 million exchanges with Claude through those accounts using a technique called "distillation." Anthropic said the labs "targeted Claude's most differentiated capabilities: agentic reasoning, tool use, and coding." The accusations come amid debates over how strictly to enforce export controls on advanced AI chips, a policy aimed at curbing China's AI development. Distillation is a common training method that AI labs use on their own models to create smaller, cheaper versions, but competitors can use it to essentially copy the homework of other labs. OpenAI sent a memo to House lawmakers earlier this month accusing DeepSeek of using distillation to mimic its products. DeepSeek first made waves a year ago when it released its open-source R1 reasoning model, which nearly matched American frontier labs in performance at a fraction of the cost. DeepSeek is expected to soon release its latest model, DeepSeek V4, which reportedly can outperform Anthropic's Claude and OpenAI's ChatGPT in coding. The attacks differed in scale. Anthropic tracked more than 150,000 exchanges from DeepSeek that seemed aimed at improving foundational logic and alignment, specifically around censorship-safe alternatives to policy-sensitive queries. Moonshot AI had more than 3.4 million exchanges targeting agentic reasoning and tool use, coding and data analysis, computer-use agent development, and computer vision. Last month, the firm released a new open-source model, Kimi K2.5, and a coding agent. MiniMax's 13 million exchanges targeted agentic coding, tool use, and orchestration. Anthropic said it was able to observe MiniMax in action as the company redirected nearly half its traffic to siphon capabilities from the latest Claude model when it launched.
Anthropic says it will continue to invest in defenses that make distillation attacks harder to execute and easier to identify, but is calling on "a coordinated response across the AI industry, cloud providers, and policymakers." The distillation attacks come at a time when American chip exports to China are still hotly debated. Last month, the Trump administration formally allowed U.S. companies like Nvidia to export advanced AI chips (like the H200) to China. Critics have argued that this loosening of export controls increases China's AI computing capacity at a critical time in the global race for AI dominance. Anthropic says that the scale of extraction DeepSeek, MiniMax, and Moonshot performed "requires access to advanced chips." "Distillation attacks therefore reinforce the rationale for export controls: restricted chip access limits both direct model training and the scale of illicit distillation," per Anthropic's blog. Dmitri Alperovitch, chairman of the Silverado Policy Accelerator think-tank and co-founder of CrowdStrike, told TechCrunch he's not surprised to see these attacks. "It's been clear for a while now that part of the reason for the rapid progress of Chinese AI models has been theft via distillation of US frontier models. Now we know this for a fact," Alperovitch said. "This should give us even more compelling reasons to refuse to sell any AI chips to any of these [companies], which would only advantage them further." Anthropic also said distillation doesn't only threaten to undercut American AI dominance, but could also create national security risks. "Anthropic and other U.S. companies build systems that prevent state and non-state actors from using AI to, for example, develop bioweapons or carry out malicious cyber activities," reads Anthropic's blog post. "Models built through illicit distillation are unlikely to retain those safeguards, meaning that dangerous capabilities can proliferate with many protections stripped out entirely." 
Anthropic pointed to authoritarian governments deploying frontier AI for things like "offensive cyber operations, disinformation campaigns, and mass surveillance," a risk that is multiplied if those models are open-sourced. TechCrunch has reached out to DeepSeek, MiniMax, and Moonshot for comment.
[2]
Anthropic accuses DeepSeek and other Chinese firms of using Claude to train their AI
Anthropic claims DeepSeek and two other Chinese AI companies misused its Claude AI model in an attempt to improve their own products. In an announcement on Monday, Anthropic says the "industrial-scale campaigns" involved the creation of around 24,000 fraudulent accounts and more than 16 million exchanges with Claude, as reported earlier by The Wall Street Journal. The three companies -- DeepSeek, MiniMax, and Moonshot -- are accused of "distilling" Claude, or training a smaller AI model based on a more advanced one. Though Anthropic says that distillation is a "legitimate training method," it adds that it can "also be used for illicit purposes," including "to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently." Anthropic adds that illicitly distilled models are "unlikely" to carry over existing safeguards. "Foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence, and surveillance systems -- enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance," Anthropic writes. DeepSeek, which caused a stir in the AI industry for its powerful but more efficient models, held over 150,000 exchanges with Claude and targeted its reasoning capabilities, according to Anthropic. It's also accused of using Claude to generate "censorship-safe alternatives to politically sensitive questions about dissidents, party leaders, or authoritarianism." In a letter to lawmakers last week, OpenAI similarly accused DeepSeek of "ongoing efforts to free-ride on the capabilities developed by OpenAI and other U.S. frontier labs." Moonshot and MiniMax had more than 3.4 million and 13 million exchanges with Claude, respectively. 
Anthropic is calling on other members of the AI industry, cloud providers, and lawmakers to address distillation, adding that "restricted chip access" could limit model training and "the scale of illicit distillation."
[3]
Anthropic Slams China for AI Theft, But Critics Say the Outrage Is Hypocritical
Anthropic is accusing Chinese developers of stealing Claude chatbot trade secrets, but critics note that Anthropic itself has a record of scraping the internet for AI training. On Monday, the San Francisco company said the Chinese firms behind DeepSeek, Moonshot AI, and MiniMax "created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models." The company claims the Chinese developers are essentially trying to clone Claude by tricking the chatbot into revealing "the internal reasoning behind a completed response." Last year, OpenAI accused DeepSeek of doing the same, but for its own AI models. In Anthropic's case, the company already blocks commercial access to China for national security reasons. However, the Chinese AI developers allegedly bypass those restrictions by tapping "commercial proxy services which resell access to Claude and other frontier AI models at scale." These "hydra clusters" then use "sprawling networks of fraudulent accounts that distribute traffic across our API as well as third-party cloud platforms," the company says. In one case, a single proxy network managed more than 20,000 fraudulent accounts simultaneously. Anthropic examined the metadata on Claude chatbot requests and traced them to staffers at DeepSeek and Moonshot AI. "These campaigns are growing in intensity and sophistication," the company warned. "The window to act is narrow, and the threat extends beyond any single company or region. Addressing it will require rapid, coordinated action among industry players, policymakers, and the global AI community." But rather than receiving support, Anthropic is facing claims of hypocrisy on social media since it has a controversial record of scraping data across the internet to train its AI. This has allegedly included using numerous copyrighted books without the authors' permission to develop Claude. 
"Sorry but Anthropic can't have it both ways," tweeted programmer Gergely Orosz, who publishes a newsletter for software engineers. "Also let's not forget how Anthropic itself trained Claude: on copyrighted books, only paying copyright holders after a lawsuit." Elon Musk, who founded xAI, also chimed in, tweeting, "How dare they [China] steal the stuff Anthropic stole from human coders??" alleging that the company has been training its AI on projects from software developers without their permission. Still, the news from Anthropic suggests the company plans on taking stronger action to prevent China from accessing its technology. In addition to shutting off access, Anthropic also called for more export controls to prevent advanced US chips from falling into China's hands. "The apparently rapid advancements made by these [Chinese] labs are incorrectly taken as evidence that export controls are ineffective and able to be circumvented by innovation. In reality, these advancements depend in significant part on capabilities extracted from American models, and executing this extraction at scale requires access to advanced chips," Anthropic says. DeepSeek and the developers of Moonshot AI and MiniMax didn't immediately respond to a request for comment.
[4]
Anthropic accuses DeepSeek, other Chinese AI developers of 'industrial-scale' copying -- Claims 'distillation' included 24,000 fraudulent accounts and 16 million exchanges to train smaller models
Anthropic on Monday accused three leading Chinese developers of frontier AI models of using large-scale distillation of Claude's capabilities to improve their own models. In total, DeepSeek, Moonshot, and MiniMax made 16 million exchanges using 24,000 fraudulent accounts. Distillation is a machine learning technique in which a smaller or less capable model is trained on the outputs of a stronger model rather than on raw training data. It can save time, create cheaper and more specialized models, extract capabilities from competitors, and lower hardware requirements. While distillation is generally a legitimate technique, when a heavily restricted China-based entity does it, it violates both U.S. export controls and Anthropic's end-user license agreement. "Distillation can be legitimate: AI labs use it to create smaller, cheaper models for their customers," a statement by Anthropic published on X reads. "But foreign labs that illicitly distill American models can remove safeguards, feeding model capabilities into their own military, intelligence, and surveillance systems." American companies like OpenAI have long accused DeepSeek of using distillation to train some of its frontier models on the outputs of ChatGPT and other services, but, unlike Anthropic, have not presented a detailed explanation.
How Chinese companies use distillation from American AI models
According to Anthropic, the perpetrators followed the same pattern: they used commercial services that resell access to frontier models and built what the company calls 'hydra cluster' networks -- large pools of accounts that spread traffic across Anthropic's API and third-party clouds. In one case, a single proxy setup allegedly controlled more than 20,000 fraudulent accounts at once. To avoid raising flags, it mixed extraction traffic with ordinary use requests.
However, its prompt patterns stood out: very high volumes, tightly focused on specific capabilities, and highly repetitive. Such behavior is consistent with model training, not typical end-user interaction. DeepSeek alone generated over 150,000 exchanges that targeted reasoning tasks, rubric-based grading suitable for reinforcement learning reward models, and censorship-safe rewrites of politically sensitive queries, according to Anthropic. Anthropic also observed prompts designed to produce step-by-step internal reasoning and therefore reveal chain-of-thought training data. Moonshot, known for its Kimi models, accounted for more than 3.4 million exchanges, according to Anthropic. Its focus areas included agentic reasoning, tool use, coding, data analysis, computer-use agents, and computer vision. Moonshot allegedly used hundreds of fraudulent accounts spanning multiple access pathways and later tried to extract and reconstruct Claude's reasoning traces. MiniMax conducted the largest campaign, with over 13 million exchanges targeting agentic coding and orchestration. Anthropic says it detected this operation while it was still ongoing, as MiniMax was training a model that had yet to be released, giving the American company a unique view into the lifecycle of an extraction campaign. After Anthropic introduced a new Claude model, MiniMax allegedly redirected nearly half its traffic within 24 hours to capture capabilities from the latest model.
Anthropic's response
To fight future distillation attempts, Anthropic says it is strengthening defenses to make large-scale distillation harder to carry out and easier to detect. The company has deployed classifiers and behavioral fingerprinting systems to identify extraction patterns in API traffic, including chain-of-thought elicitation and coordinated multi-account activity.
The company is also sharing technical indicators of large-scale distillation operations with other AI labs, cloud providers, and authorities, as well as tightening verification for educational, research, and startup accounts often used to create fraudulent access. In parallel, it is developing product-, API-, and model-level safeguards to reduce the usefulness of outputs for illicit training without harming legitimate users. At the same time, the company admits that countering attacks at this scale requires coordinated industry and policy action.
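The detection approach described here, classifiers that flag very high-volume, highly repetitive, capability-focused traffic spread across coordinated accounts, can be caricatured with a toy heuristic. Everything below (the function name, the thresholds, the traffic format) is an illustrative assumption, not Anthropic's actual fingerprinting system.

```python
from collections import Counter

def flag_suspect_accounts(requests, volume_threshold=1000, repetition_threshold=0.5):
    """Flag accounts whose traffic looks like extraction: unusually high
    volume concentrated on a few near-identical prompt templates.
    `requests` is a list of (account_id, prompt_template) pairs; both
    thresholds are made-up illustrative values."""
    by_account = {}
    for account, template in requests:
        by_account.setdefault(account, []).append(template)
    suspects = []
    for account, templates in by_account.items():
        if len(templates) < volume_threshold:
            continue  # ordinary usage volume, ignore
        # Fraction of traffic taken up by the single most common template.
        most_common_count = Counter(templates).most_common(1)[0][1]
        if most_common_count / len(templates) >= repetition_threshold:
            suspects.append(account)  # repetitive, training-like traffic
    return suspects

# Synthetic traffic: one "hydra"-style account hammering a single
# chain-of-thought elicitation template, one normal account with varied prompts.
traffic = [("acct_hydra", "explain your reasoning step by step for: ...")] * 1500
traffic += [("acct_normal", f"question {i}") for i in range(200)]
suspects = flag_suspect_accounts(traffic)  # → ['acct_hydra']
```

A real system would of course combine many more signals (IP and billing metadata, timing, cross-account coordination), but the volume-plus-repetition shape of the check mirrors the behavioral pattern Anthropic describes.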
[5]
Anthropic Says DeepSeek, MiniMax Distilled AI Models for Gains
Anthropic PBC said three leading artificial intelligence developers in China worked to "illicitly extract" results from its AI models to bolster the capabilities of rival products, adding to growing concerns in the US about Chinese firms improperly gaining an edge. The San Francisco-based company said DeepSeek, MiniMax Group Inc. and Moonshot violated its terms of service by generating more than 16 million exchanges in total with its Claude models using thousands of fraudulent accounts. With this tactic, known as distillation, Chinese AI labs can rapidly improve their models by training them on the outputs from more powerful systems, Anthropic said. Anthropic rival OpenAI warned US lawmakers earlier this month that DeepSeek had used distillation techniques as part of "ongoing efforts to free-ride on the capabilities developed by OpenAI and other US frontier labs." Other US officials, including White House AI czar David Sacks, have also expressed concerns that DeepSeek used this method. "These campaigns are growing in intensity and sophistication," Anthropic said in a blog post on Monday. "The window to act is narrow, and the threat extends beyond any single company or region." Representatives for DeepSeek, MiniMax and Moonshot did not respond to requests for comment after normal business hours. DeepSeek upended the AI landscape a year ago with the release of R1, a competitive model purportedly built at a fraction of the cost of leading US alternatives. Since then, China has flooded the market with a wave of affordable text, video and image models. That has threatened to undercut adoption of AI software from US firms, making it harder to monetize the technology. MiniMax made its public market debut in January.
Moonshot, meanwhile, is targeting a $10 billion valuation in a new financing round. Anthropic said the three firms each used "fraudulent accounts and proxy services to access Claude at scale while evading detection." Proxy networks can be used to provide a cloak of invisibility on the internet, allowing users in prohibited markets to create large numbers of accounts for various online services. DeepSeek generated more than 150,000 exchanges with Claude, and MiniMax generated more than 13 million, Anthropic said, with a focus on reconstructing Claude's more advanced capabilities. The Claude maker said it was able to pinpoint the three companies with "high confidence" based on information on internet protocol addresses, metadata and "corroboration from industry partners who observed the same actors and behaviors on their platforms." Anthropic said it has built better detection and verification systems to mitigate the risk of distillation campaigns. It's also sharing information with other AI labs to help address the issue. "No company can solve this alone," the company said. "Distillation attacks at this scale require a coordinated response across the AI industry, cloud providers and policymakers."
[6]
Chinese companies used Claude to improve own models, Anthropic says
Feb 23 (Reuters) - Three Chinese companies have tried to use Claude to improperly obtain capabilities to improve their own models, the chatbot's creator Anthropic said in a blog post on Monday. DeepSeek, Moonshot and MiniMax created more than 16 million interactions with Claude using roughly 24,000 fake accounts, in violation of Anthropic's terms of service and regional access restrictions. They used a technique called "distillation," which involves training a newer model on the outputs of an older, more established and powerful AI model, effectively transferring the older model's learnings, Anthropic said. "These campaigns are growing in intensity and sophistication. The window to act is narrow, and the threat extends beyond any single company or region." DeepSeek AI and MiniMax did not immediately respond to requests for comment. Reporting by Juby Babu in Mexico City; Editing by Alan Barona
[7]
Anthropic accuses three Chinese AI labs of abusing Claude to improve their own models
Anthropic is issuing a call to action against AI "distillation attacks" after accusing three AI companies of misusing its Claude chatbot. On its website, Anthropic claimed that DeepSeek, Moonshot and MiniMax have been conducting "industrial-scale campaigns...to illicitly extract Claude's capabilities to improve their own models." Distillation in the AI world refers to training a less capable model on the responses of a more powerful one. While distillation isn't a bad thing across the board, Anthropic said these types of attacks can be used in a more nefarious way. According to Anthropic, the three Chinese AI firms were responsible for more than "16 million exchanges with Claude through approximately 24,000 fraudulent accounts." From Anthropic's perspective, the competing companies were using Claude as a shortcut to develop more advanced AI models, which could also lead to circumventing certain safeguards. Anthropic said in its post that it was able to link each of the distillation campaigns to the specific companies with "high confidence" thanks to IP address correlation, request metadata and infrastructure indicators, along with corroboration from others in the AI industry who have noticed similar behaviors. Early last year, OpenAI made similar claims of rival firms distilling its models and banned suspected accounts in response. As for Anthropic, the company behind Claude said it would upgrade its systems to make distillation attacks harder to execute and easier to identify. While Anthropic is pointing fingers at these other firms, it is also facing a lawsuit from music publishers who accuse the AI company of using illegal copies of songs to train its Claude chatbot.
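The mechanics of distillation described above can be illustrated with a toy sketch: a "student" is fitted to a "teacher" model's softened output distribution by minimizing KL divergence. The logits, temperature, and learning rate below are illustrative stand-ins for a single query, not any lab's actual training setup; at scale, the teacher targets would come from millions of harvested API responses.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature T > 1 softens the distribution, exposing more of the
    # teacher's relative preferences between options ("dark knowledge").
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q): how far the student's distribution q is from the teacher's p.
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Toy teacher logits for one query over three answer options.
teacher_logits = np.array([4.0, 1.0, 0.2])
student_logits = np.zeros(3)  # untrained student: uniform distribution

T = 2.0
target = softmax(teacher_logits, T)  # softened teacher distribution

# Gradient descent on the student's logits. For L = KL(p || softmax(z / T)),
# the gradient with respect to z is (softmax(z / T) - p) / T.
lr = 2.0
for _ in range(500):
    q = softmax(student_logits, T)
    student_logits -= lr * (q - target) / T

loss = kl_divergence(target, softmax(student_logits, T))  # near zero after fitting
```

The point of the sketch is only that the student never sees the teacher's weights or original training data: matching its output distributions is enough to transfer behavior, which is why API access alone suffices for this kind of extraction.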
[8]
Anthropic Accuses 3 Chinese Companies of Harvesting Its Data
The San Francisco artificial intelligence start-up Anthropic has accused three Chinese companies of improperly harvesting large amounts of data from its A.I. technologies in an effort to accelerate the development of their own systems. Anthropic said in a blog post that DeepSeek, Moonshot and MiniMax -- three prominent Chinese start-ups -- used about 24,000 fraudulent accounts to generate over 16 million conversations with its Claude chatbot that could be used to teach skills to their own chatbots. Using data from one A.I. system to train another -- a process called distillation -- is common in A.I. work. But Anthropic's terms of service forbid anyone from surreptitiously harvesting data for distillation and do not allow its technologies to be used in China. OpenAI, Anthropic's primary rival, has also accused Chinese companies of lifting large amounts of data from its chatbot, ChatGPT, for similar purposes. In a memo sent to the House Select Committee on China last week, OpenAI said that DeepSeek and other Chinese start-ups were using new and "obfuscated" distillation methods as part of their "ongoing efforts to free-ride" on technologies developed by OpenAI and other U.S. companies. Like OpenAI, Anthropic said the practice was a national security risk, adding that it could allow China to build A.I. technologies to create bioweapons or tools for mass surveillance. The start-up has guardrails on its technologies designed to prevent them from being used in those ways, but the guardrails can be stripped away during distillation. Anthropic called on government officials and other A.I. companies to help prevent Chinese companies from distilling American models. "These campaigns are growing in intensity and sophistication," Anthropic said in its post. "The window to act is narrow, and the threat extends beyond any single company or region. Addressing it will require rapid, coordinated action among industry players, policymakers and the global A.I. community."
DeepSeek, Moonshot and MiniMax did not immediately respond to requests for comment. Anthropic published its post amid a tussle with the Defense Department over the Pentagon's use of its technologies. The Pentagon has approved Anthropic's technologies for use with classified tasks, but it is threatening to sever ties with the start-up because Anthropic does not want its technologies used in situations involving autonomous weapons or domestic surveillance. Last year, DeepSeek spooked Silicon Valley tech companies and sent the U.S. financial markets into a tailspin after releasing A.I. technologies that matched the performance of anything else on the market. Until then, the prevailing wisdom in Silicon Valley had been that the most powerful systems could not be built without billions of dollars in specialized computer chips. But DeepSeek said it had created its technologies using far fewer resources. Like U.S. companies, DeepSeek, Moonshot and MiniMax build their A.I. technologies using computer code and data corralled from across the internet. A.I. companies across the globe lean heavily on a practice called open sourcing, which means they freely share the code that underpins their technologies and reuse code shared by others. They see this as a way of accelerating technological development. A.I. companies also need enormous amounts of online data to train their A.I. systems. The leading systems learn their skills by analyzing just about all of the text on the internet. Distillation is often used to train new systems, and open source licenses often allow it. But if a company takes data from proprietary technology, the practice may be legally problematic. Anthropic, which is now valued at $380 billion, is facing multiple lawsuits accusing it of illegally using copyrighted internet data to train its systems.
In September as part of a landmark legal settlement, Anthropic agreed to pay $1.5 billion to a group of authors and publishers after a judge ruled it had illegally downloaded and stored millions of copyrighted books. It was the largest payout in the history of U.S. copyright cases. OpenAI and other A.I. companies face similar suits, including a lawsuit brought by The New York Times against OpenAI and its partner Microsoft. That suit contends that millions of articles published by The Times were used to train automated chatbots that now compete with the news outlet as a source of reliable information. Both OpenAI and Microsoft deny the claims.
[9]
Anthropic says DeepSeek, other Chinese AI firms extracted Claude data
Anthropic has accused three major Chinese AI firms of using fraudulent accounts to extract data from its Claude models in an effort to boost rival systems. The San Francisco-based company said DeepSeek, MiniMax Group Inc. and Moonshot collectively generated more than 16 million exchanges with Claude by creating thousands of fake accounts and using proxy services to avoid detection. The tactic, known as distillation, allows developers to train their own models on the outputs of more advanced systems.
[10]
Anthropic accuses Chinese AI firms of data copying using fake accounts and AI distillation methods
U.S. AI company Anthropic has accused three Chinese AI firms of secretly taking data from its Claude AI system. The three companies named are DeepSeek, Moonshot AI, and MiniMax. Anthropic said these companies created more than 24,000 fake accounts on its platform to access Claude. Using those accounts, they sent over 16 million prompts to Claude to collect responses and learn from them. Anthropic claims this was done to train and improve their own AI models faster and cheaper. The activity is called "distillation," which means copying knowledge from a powerful AI to build another model quickly, as reported by The Wall Street Journal. Anthropic said distillation itself is legal and useful when companies use it for their own systems, but it can be misused to copy competitors. According to Anthropic, the scale of activity differed from company to company. Representatives of the three Chinese companies did not respond to requests for comment. Earlier this month, OpenAI also accused DeepSeek of using similar data-copying methods in a memo sent to U.S. lawmakers. Many Chinese AI firms, including Moonshot and MiniMax, recently launched advanced AI models with strong coding and reasoning abilities. DeepSeek is also preparing to release its next-generation AI model soon. When DeepSeek became popular last year, experts worried China could catch up quickly with U.S. AI companies even without top-level chips. In a past research paper, DeepSeek said it trained its model using webpages and ebooks, but some of those pages contained AI-generated answers, per The Wall Street Journal. This means its model may have indirectly learned from other powerful AI systems through internet data. Synthetic data, including data produced through distillation, is becoming common because AI companies are running out of high-quality training data. Moonshot itself confirmed in a technical report that it used synthetic data to train one of its AI models.
Anthropic warned this issue could become a national security risk for the U.S. The company said copied AI technology could potentially be used in military, intelligence, or surveillance systems. Overall, the case shows that the global AI race between U.S. and Chinese companies is becoming more intense.
Q1. Why did Anthropic accuse Chinese AI companies?
Anthropic said some Chinese AI firms created fake accounts and used millions of prompts to take data from its Claude AI system.
Q2. What is AI distillation and why is it controversial?
AI distillation means copying knowledge from a powerful AI model to train another model faster and cheaper, which can raise fairness and security concerns.
[11]
Anthropic Says Chinese Labs Used 24,000 Fake Accounts To Rip Off Claude: Here's What It Means For AMZN, PLTR - Alphabet (NASDAQ:GOOGL)
Anthropic accused three Chinese AI companies of running 24,000 fraudulent accounts to siphon capabilities from its Claude chatbot, in what may be the largest documented case of AI model theft to date. DeepSeek, Moonshot AI and MiniMax generated over 16 million exchanges with Claude, violating terms of service and geographic access restrictions. The labs used a technique called distillation, where a weaker model trains on the outputs of a stronger one, to extract Claude's most advanced reasoning, coding and tool-use capabilities. Anthropic's head of threat intelligence Jacob Klein said the company has "high confidence these labs were conducting distillation attacks at scale."
What The Labs Were After
DeepSeek ran over 150,000 exchanges specifically designed to make Claude walk through its reasoning step-by-step, generating chain-of-thought training data that could be fed directly into a competing model. Moonshot AI hit 3.4 million exchanges targeting reasoning and coding. MiniMax was the heaviest user at 13 million exchanges. All three accessed Claude through proxy services that resell API access in China, where Anthropic doesn't operate commercially.
What Prediction Markets Say
Polymarket traders currently give Anthropic a 92% chance of holding the top-ranked AI model at the end of February, based on the Chatbot Arena leaderboard. That market has drawn $19.6 million in volume. DeepSeek, despite the alleged distillation campaigns, sits at just 1%. A separate Polymarket contract pricing an AI bubble burst by end of 2026 sits at 19%, with $2 million in volume. One of the trigger conditions is Anthropic filing for bankruptcy, which traders clearly aren't worried about. The company just closed a $30 billion Series G at a $380 billion valuation.
Why Investors Should Care
Anthropic is private, but its backers are not. The risk for investors is straightforward.
If Chinese labs can replicate Claude's most valuable capabilities for a fraction of the cost, it erodes the competitive moat that justifies Anthropic's $380 billion valuation and the billions poured into training infrastructure behind it. Amazon.com Inc (NASDAQ:AMZN) has invested $8 billion and provides the classified cloud infrastructure through which Claude reaches Pentagon networks. Alphabet holds an estimated 10-14% stake after more than $3 billion in total investment. Palantir Technologies Inc (NASDAQ:PLTR) could also be worth watching. Palantir delivers Claude to classified defense environments through its Maven Smart System, and the Pentagon is already reviewing Anthropic's $200 million contract amid a separate dispute over military use restrictions. If that relationship fractures, Palantir needs a replacement frontier model. If it holds, the distillation revelations may actually strengthen the case for keeping Claude inside the classified stack. Wedbush's Dan Ives called it an "AI Ghost Trade" and said the fears are a "huge disconnected overhang" on the sector, arguing cybersecurity is "the next frontier for the AI Revolution."
Anthropic claims three Chinese AI companies created over 24,000 fraudulent accounts to generate 16 million exchanges with Claude AI, targeting its reasoning and coding abilities. The accusations fuel debates over AI chip exports to China as DeepSeek, Moonshot, and MiniMax allegedly used distillation to train their own models at a fraction of the cost.
Anthropic has accused three Chinese AI labs (DeepSeek, Moonshot, and MiniMax) of conducting what it describes as industrial-scale copying campaigns against its Claude AI model. The San Francisco-based company claims these firms created over 24,000 fraudulent accounts and generated more than 16 million exchanges with Claude to improve their own models through a technique called distillation [1]. While distillation is a legitimate training method that AI labs use to create smaller, cheaper versions of their own models, Anthropic argues that competitors can exploit it to essentially copy the homework of rival companies [1].
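Distillation, as described above, trains a smaller "student" model to imitate a stronger "teacher" by matching the teacher's output distributions. The following is a minimal sketch of the standard soft-target distillation loss; the function names, temperature value, and toy logits are illustrative only, not any lab's actual training pipeline:

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; a higher T softens the distribution,
    # exposing more of the teacher's relative preferences among answers.
    zs = [z / T for z in logits]
    m = max(zs)                              # subtract max for numerical stability
    exps = [math.exp(z - m) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, T=2.0):
    # KL(teacher || student) over temperature-softened outputs.
    # Gradient descent on this loss pulls the student toward the teacher.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that already matches the teacher incurs zero loss.
teacher = [2.0, 1.0, 0.1]
print(distillation_loss(teacher, teacher))  # prints 0.0
```

In the extraction scenario alleged here, the millions of Claude exchanges would supply the teacher outputs; chain-of-thought prompts matter because full reasoning traces give the student far richer training targets than final answers alone.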
Source: Benzinga
The allegations come as the U.S. debates how strictly to enforce export controls on advanced AI chips, a policy designed to curb China's AI development [1]. Anthropic's claims add fuel to growing concerns about Chinese firms improperly gaining an edge in the global AI race, particularly as DeepSeek prepares to release its V4 model, which reportedly can outperform both Claude and OpenAI's ChatGPT in coding tasks [1].
Source: TechCrunch
According to Anthropic, the three companies followed a consistent pattern: they used commercial proxy services that resell access to frontier AI models and built what the company calls "hydra clusters," sprawling networks of fraudulent accounts that distribute traffic across Anthropic's API as well as third-party cloud providers [3]. In one case, a single proxy network managed more than 20,000 fraudulent accounts simultaneously [3]. The company pinpointed the three firms with "high confidence" based on IP addresses, metadata, and corroboration from industry partners who observed the same actors on their platforms [5].

The scale of each attack differed significantly. DeepSeek generated over 150,000 exchanges that targeted foundational logic and alignment, specifically around censorship-safe alternatives to politically sensitive questions about dissidents, party leaders, or authoritarianism [1]. Moonshot AI, known for its Kimi models, had more than 3.4 million exchanges targeting agentic reasoning and tool use, coding and data analysis, computer-use agent development, and computer vision [1]. MiniMax conducted the largest campaign with over 13 million exchanges focused on agentic coding and orchestration [4]. Anthropic observed MiniMax redirecting nearly half its traffic to siphon capabilities from the latest Claude model within 24 hours of its launch [1].

Anthropic warns that illicitly distilled models are "unlikely" to retain existing safeguards built into American AI systems [2]. The company argues that foreign labs that distill American models can remove these protections, feeding model capabilities into military, intelligence, and surveillance systems [2]. This enables authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance [1]. Anthropic and other U.S. companies build systems that prevent state and non-state actors from using AI to develop bioweapons or carry out malicious cyber activities; stripping those protections out creates national security risks [1].
Source: Engadget
The accusations arrive at a critical moment for U.S. policy. Last month, the Trump administration formally allowed U.S. companies like Nvidia to export advanced AI chips such as the H200 to China [1]. Anthropic contends that executing distillation at this scale "requires access to advanced chips," reinforcing the rationale for export controls, since restricted chip access limits both direct model training and the scale of illicit distillation [1]. Dmitri Alperovitch, chairman of the Silverado Policy Accelerator and co-founder of CrowdStrike, told TechCrunch that "part of the reason for the rapid progress of Chinese AI models has been theft via distillation of US frontier models," adding that this should provide "even more compelling reasons to refuse to sell any AI chips" to these companies [1].
While Anthropic calls for coordinated action from the AI industry, cloud providers, and policymakers, the company faces criticism over what some perceive as hypocrisy [3]. Anthropic itself has a controversial record of scraping data across the internet to train its own models, allegedly including numerous copyrighted books without the authors' permission [3]. Programmer Gergely Orosz, who publishes a newsletter for software engineers, tweeted that "Anthropic can't have it both ways," noting that the company trained Claude on copyrighted books and only paid copyright holders after a lawsuit [3]. Elon Musk also weighed in, questioning how Chinese companies could "steal the stuff Anthropic stole from human coders" [3].

Despite the backlash, Anthropic says it is strengthening defenses to make large-scale distillation harder to carry out and easier to detect [4]. The company has deployed classifiers and behavioral fingerprinting systems to identify extraction patterns in API traffic, including chain-of-thought elicitation and coordinated multi-account activity [4]. It is also sharing technical indicators of large-scale distillation operations with other AI labs, cloud providers, and authorities, while tightening verification for educational, research, and startup accounts often used to create fraudulent access [4]. The company acknowledges that "no company can solve this alone" and that addressing distillation attacks at this scale requires coordinated industry and policy action [5]. OpenAI similarly warned U.S. lawmakers earlier this month about DeepSeek's "ongoing efforts to free-ride on the capabilities developed by OpenAI and other U.S. frontier labs" [2]. These campaigns are growing in intensity and sophistication, Anthropic warns, and the window to act is narrowing as the threat extends beyond any single company or region [5].
Summarized by Navi