6 Sources
[1]
Anthropic blocks Chinese-controlled firms from Claude AI -- cites 'legal, regulatory, and security risks'
Anthropic says that it wants to ensure that "authoritarian" and "adversarial" regimes cannot access its models. Anthropic has updated its terms of service to block access to its Claude AI models for any company that's majority-owned or controlled by Chinese entities, regardless of where those companies are based. The company says this decision is about "legal, regulatory, and security risks" and ensuring that "authoritarian" regimes do not have access to its cutting-edge models. Chinese entities "could use our capabilities to develop applications and services that ultimately serve adversarial military and intelligence services," Anthropic said in its press release published on September 5. The decision covers all Claude models, including Claude 3.5 Sonnet, and all developer-facing tools. It also includes subsidiaries and joint ventures that fall under Chinese ownership. In practice, this means firms like ByteDance, Tencent, and Alibaba, as well as any portfolio companies or foreign-incorporated divisions, are now cut off. Anthropic has acknowledged that this will impact revenue in the "low hundreds of millions of dollars," but maintains the policy is necessary to protect against the misuse of U.S. AI technology in sensitive or strategic contexts. This isn't the first time that Chinese users have been blocked from U.S.-developed models, but it is the first instance of a provider pre-emptively cutting off access based on corporate ownership rather than geography or specific use cases. Within hours of the block coming to light, Chinese AI startup Zhipu released a migration toolkit aimed at Claude users. It reportedly offers plug-and-play switching to its GLM-4.5 API, alongside a developer package that costs a fraction of Claude's pricing. According to Zhipu, the package includes 20 million free tokens and throughput three times higher than Claude's base tier. The company has also promised support for large context windows and compatibility with existing Claude workflows. Last year, after OpenAI restricted API access for developers in China, Alibaba Cloud launched a migration program encouraging developers to switch to its Qwen-plus model, offering free tokens and competitive pricing compared to ChatGPT. But the stakes are higher now. Many enterprise users are already building on Claude for tasks like customer service and internal code generation. Users affected by Claude's restrictions now face a choice of either rebuilding around local models or seeking exemptions through multicloud setups. Anthropic has not yet clarified how it will enforce the policy around global cloud resellers.
[2]
Anthropic Clamps Down on AI Services for Chinese-Owned Firms
Anthropic is blocking its services from Chinese-controlled companies, saying it's taking steps to prevent a US adversary from advancing in AI and threatening American national security. The San Francisco-based startup is widening existing restrictions on "authoritarian" regimes to cover any company that's majority-owned by entities from countries such as China. That includes their overseas operations, it said in a statement. Foreign-based subsidiaries could be used to access its technology and further military applications, the startup added.
[3]
Anthropic to stop selling AI services to majority Chinese-owned groups
Anthropic will stop selling artificial intelligence services to groups majority owned by Chinese entities, in the first such policy shift by an American AI company. The San Francisco-based developer of Claude AI is trying to limit the ability of Beijing to use its technology to benefit China's military and intelligence services, according to an Anthropic executive who briefed the Financial Times. The policy, which takes effect immediately, will apply to Chinese companies from ByteDance and Tencent to Alibaba. "We are taking action to close a loophole that allows Chinese companies to access frontier AI," said the executive, who added that the policy would also apply to US adversaries including Russia, Iran and North Korea. The executive said the policy was designed "to align with our broader commitment that transformational AI capabilities advance democratic interests in US leadership in AI". The shift reflects rising concerns in the US about Chinese groups setting up subsidiaries abroad in an effort to conceal their attempts to obtain American technology. Direct customers and groups that access Anthropic's services via cloud services will also be affected. The executive said the impact on Anthropic's global revenues would be in the "low hundreds of millions of dollars". He said Anthropic understood that it would lose some business to rivals, but said the company felt that the move was necessary to highlight that the issue was a "significant problem". It comes as concerns rise in the US about China using AI for military purposes ranging from hypersonic weapons to nuclear weapons modelling. Chinese start-up DeepSeek sent shockwaves through the AI industry earlier this year when it released its open-source R1 model, which is considered comparable to leading US models. OpenAI later said it had evidence that DeepSeek had accessed its models inappropriately to train R1. DeepSeek has not commented on the claims. The Biden administration imposed sweeping export controls in an effort to make it harder for China to obtain American AI. The Trump administration has so far implemented almost no new controls as President Donald Trump tries to secure a meeting with China's President Xi Jinping. One person familiar with the situation said the policy was partly aimed at the growing number of Chinese subsidiaries in Singapore that companies on the mainland are using to access US technology with less scrutiny. It reflects the fact that groups in China must share data with the government when asked, posing a national security risk to the US. It also points to concerns about China appropriating American AI technology in ways that give it a commercial advantage over AI groups in the US. "This move could potentially impact companies like ByteDance, Alibaba and Tencent," said one person familiar with the situation. Anthropic was founded in 2021 by former OpenAI employees who wanted to prioritise AI safety. The company on Tuesday announced that it had raised $13bn in fresh funding, valuing it at $170bn. Earlier this year, Anthropic chief executive Dario Amodei advocated for strengthening export controls on China. Anthropic's main rival, OpenAI, has also offered support for controls to "protect" the US's lead in AI. Access to US chatbots -- such as Claude, OpenAI's ChatGPT, Google's Gemini and Meta's AI -- is banned in China. But users can access the technology by using virtual private networks, which is against the platforms' terms of service.
[4]
US AI giant Anthropic bars Chinese-owned entities
Anthropic is barring Chinese-run companies and organizations from using its artificial intelligence services, the US tech giant said, as it toughened restrictions on "authoritarian regions." The startup, heavily backed by Amazon, is known for its Claude chatbot and positions itself as focused on AI safety and responsible development. Companies based in China, as well as in countries including Russia, North Korea and Iran, are already unable to access Anthropic's commercial services over legal and security concerns. ChatGPT and other products from US competitor OpenAI are also unavailable within China -- spurring the growth of homegrown AI models from Chinese companies such as Alibaba and Baidu. Anthropic said in a statement dated Friday that it was going a step further in an update to its terms of service. Despite current restrictions, some groups "continue accessing our services in various ways, such as through subsidiaries incorporated in other countries," the US firm said. So "this update prohibits companies or organizations whose ownership structures subject them to control from jurisdictions where our products are not permitted, like China, regardless of where they operate." Anthropic -- valued at $183 billion -- said that the change would affect entities more than 50% owned, directly or indirectly, by companies in unsupported regions. "This is the first time a major US AI company has imposed a formal, public prohibition of this kind," said Nicholas Cook, a lawyer focused on the AI industry with 15 years of experience at international law firms in China. "The immediate commercial effect may be modest, since US AI providers already face barriers to operating in this market and relevant groups have been self-selecting for their own locally developed AI tech," he told AFP. But "taking a stance like this will inevitably lead to questions as to whether others will or should take a similar approach." An Anthropic executive told the Financial Times that the move would have an impact on revenues in the "low hundreds of millions of dollars." The San Francisco-headquartered company was founded in 2021 by former executives from OpenAI. It announced this week it had raised $13 billion in its latest funding round, saying it now has more than 300,000 business customers. And the number of accounts on pace to generate more than $100,000 annually is nearly seven times larger than a year ago, Anthropic said Tuesday. Some users in China do access US generative AI chatbots such as ChatGPT or Claude by using VPN services. Assumptions that the US was far ahead of China in the fast-moving AI sector were upended this year when Chinese start-up DeepSeek unveiled a chatbot that matched top American systems for an apparent fraction of the cost.
[5]
US AI giant Anthropic bars Chinese-owned entities
San Francisco (United States) (AFP) - Anthropic is barring Chinese-run companies and organizations from using its artificial intelligence services, the US tech giant said, as it toughened restrictions on "authoritarian regions." The startup, heavily backed by Amazon, is known for its Claude chatbot and positions itself as focused on AI safety and responsible development. Companies based in China, as well as in countries including Russia, North Korea and Iran, are already unable to access Anthropic's commercial services over legal and security concerns. ChatGPT and other products from US competitor OpenAI are also unavailable within China -- spurring the growth of homegrown AI models from Chinese companies such as Alibaba and Baidu. Anthropic said in a statement dated Friday that it was going a step further in an update to its terms of service. Despite current restrictions, some groups "continue accessing our services in various ways, such as through subsidiaries incorporated in other countries," the US firm said. So "this update prohibits companies or organizations whose ownership structures subject them to control from jurisdictions where our products are not permitted, like China, regardless of where they operate." Anthropic -- valued at $183 billion -- said that the change would affect entities more than 50 percent owned, directly or indirectly, by companies in unsupported regions. "This is the first time a major US AI company has imposed a formal, public prohibition of this kind," said Nicholas Cook, a lawyer focused on the AI industry with 15 years of experience at international law firms in China. "The immediate commercial effect may be modest, since US AI providers already face barriers to operating in this market and relevant groups have been self-selecting for their own locally developed AI tech," he told AFP. But "taking a stance like this will inevitably lead to questions as to whether others will or should take a similar approach." An Anthropic executive told the Financial Times that the move would have an impact on revenues in the "low hundreds of millions of dollars". The San Francisco-headquartered company was founded in 2021 by former executives from OpenAI. It announced this week it had raised $13 billion in its latest funding round, saying it now has more than 300,000 business customers. And the number of accounts on pace to generate more than $100,000 annually is nearly seven times larger than a year ago, Anthropic said Tuesday. Some users in China do access US generative AI chatbots such as ChatGPT or Claude by using VPN services. Assumptions that the US was far ahead of China in the fast-moving AI sector were upended this year when Chinese start-up DeepSeek unveiled a chatbot that matched top American systems for an apparent fraction of the cost.
[6]
Anthropic Service Policy Update Bans AI Access to Chinese Firms
On September 5, 2025, Anthropic announced a new update to its Terms of Service, restricting the use of its AI services in certain regions due to legal, regulatory, and security risks. While the company did not provide a comprehensive list of unsupported jurisdictions, it explicitly mentioned China as a key example of an "unsupported region". The policy prohibits companies or organisations whose ownership structures subject them to control from jurisdictions where Anthropic's products are not permitted, such as China, regardless of where they operate. "This includes entities that are more than 50% owned, directly or indirectly, by companies headquartered in unsupported regions", said the press release. Anthropic says its decision reflects growing concerns over national security risks, such as potential coercion to share data with intelligence services, and the fear that access to Anthropic's AI could further authoritarian military or intelligence objectives. Anthropic's September 5, 2025, update to its Terms of Service marked a significant escalation in its regional access policy. Previously, the company had imposed restrictions on services to unsupported regions, but the new policy now prohibits services to companies more than 50% owned by entities in jurisdictions like China, Russia, Iran, and North Korea, regardless of their operational location. In comparison, other major AI companies have also implemented region-based access controls, but with varying approaches. OpenAI restricts access to its services in certain countries, including China, Russia, and Iran, citing legal and regulatory reasons. Similarly, Google's Gemini is also not available in Iran, Russia, and China. Furthermore, Meta AI has faced regulatory challenges in Europe, which have led the company not to offer its AI assistant there. Since mid-2025, the US has intensified its trade restrictions to curb China's AI ambitions. Notably, Nvidia and AMD agreed to pay the US government 15% of their revenue from AI chip sales to China in return for export licences for advanced chips such as Nvidia's H20 and AMD's MI308, following a temporary chip export ban citing national security concerns. China had also implemented a 125% tariff on all US products but rolled back the tariffs on semiconductors in April 2025. However, the country also emphasised the need for multilateral governance of AI. At the World Artificial Intelligence Conference in Shanghai, Chinese Premier Li Qiang proposed forming a global AI cooperation organisation and unveiled a 13-point Global AI Governance Action Plan. The framework highlights inclusive digital infrastructure, open-source collaboration, and international standard-setting through bodies like the International Telecommunication Union (ITU), the International Organisation for Standardisation (ISO), and the International Electrotechnical Commission (IEC). By contrast, the Trump administration's AI Action Plan prioritises domestic growth, rapid deregulation, and strengthening US strategic dominance. It emphasises infrastructure expansion, private sector-led innovation, and securing export-control leverage, aiming to maintain technological superiority. Anthropic's decision to ban Chinese-owned firms from accessing its AI services underscores the tightening restrictions on technology flows between the United States and China.
By expanding its policy to cover entities majority-owned by companies in unsupported jurisdictions, Anthropic has closed loopholes that previously allowed subsidiaries or overseas branches to access advanced AI models. This move aligns Anthropic with other leading AI developers, such as OpenAI and Google's Gemini, which already blocked access in China, Russia, and Iran. At the same time, the announcement reflects the broader context of the escalating US-China technology confrontation. Washington has intensified controls on AI-related products, most notably by requiring Nvidia and AMD to share 15% of revenue from chip sales to China in return for export licences. Beijing has responded by imposing and later partially rolling back high tariffs on American microchips, while also proposing a Global AI Governance Action Plan focused on international cooperation and technical standards. Together, these developments highlight how commercial policies, national security concerns, geopolitics, and global AI governance efforts are converging to shape access to advanced AI technologies.
Anthropic, a leading US AI company, has updated its terms of service to prohibit Chinese-controlled entities from accessing its AI models, including Claude, citing legal, regulatory, and security concerns.
Anthropic, a prominent US-based AI company, has implemented a significant policy change by blocking Chinese-controlled firms from accessing its AI services, including the Claude AI models. This decision, effective immediately, is aimed at preventing "authoritarian" and "adversarial" regimes from exploiting US AI technology [1].
The updated terms of service prohibit any company that is majority-owned or controlled by Chinese entities from using Anthropic's AI models, regardless of where these companies are based [2]. The ban extends to subsidiaries, joint ventures, and foreign-incorporated divisions that fall under Chinese ownership, covering firms such as ByteDance, Tencent, and Alibaba [1].
Anthropic cites several reasons for this policy change: legal, regulatory, and security risks; the concern that Chinese entities could use its capabilities to develop applications and services serving adversarial military and intelligence services; and the need to close a loophole that lets Chinese companies reach frontier AI through overseas subsidiaries [1][3].
The policy change is expected to have significant financial implications: Anthropic acknowledges that the restrictions will affect revenue in the "low hundreds of millions of dollars", but maintains the policy is necessary to protect against the misuse of US AI technology in sensitive or strategic contexts [1].
In response to the ban, Chinese AI startup Zhipu has released a migration toolkit for Claude users, reportedly offering plug-and-play switching to its GLM-4.5 API, 20 million free tokens, and pricing at a fraction of Claude's, along with compatibility with existing Claude workflows [1]. A rough sketch of what such a switch typically looks like follows.
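Neither Anthropic's announcement nor the coverage above describes how the toolkit works under the hood. As a hypothetical sketch only (the environment variable names, base URL, and model identifier below are placeholders, not documented Zhipu values), "plug-and-play" compatibility with existing Claude workflows usually means the alternative provider exposes an Anthropic-style endpoint, so existing client code needs little more than a new base URL, API key, and model name:

```python
# Hypothetical sketch: repointing an Anthropic-style client at a third-party,
# Claude-compatible endpoint. The base_url and model name are placeholders.
import os

from anthropic import Anthropic  # official SDK; accepts a custom base_url

client = Anthropic(
    # Key issued by the alternative provider, not by Anthropic.
    api_key=os.environ["THIRD_PARTY_API_KEY"],
    # Provider's Anthropic-compatible endpoint (placeholder URL).
    base_url=os.environ.get(
        "THIRD_PARTY_BASE_URL",
        "https://example-provider.invalid/anthropic",
    ),
)

# Existing Claude workflow code stays unchanged apart from the model name.
message = client.messages.create(
    model="glm-4.5",  # placeholder model identifier
    max_tokens=256,
    messages=[{"role": "user", "content": "Summarize this support ticket."}],
)
print(message.content[0].text)
```

Whether a given migration is actually this simple depends on how faithfully the provider mirrors the Messages API; features such as tool use or long context windows may behave differently.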
This move by Anthropic reflects growing concerns in the US about Chinese entities accessing and potentially misusing American AI technology: Chinese groups have been setting up overseas subsidiaries, notably in Singapore, to obtain US technology with less scrutiny; companies in China must share data with the government when asked; and Washington worries that American AI could advance Chinese military applications ranging from hypersonic weapons to nuclear weapons modelling [3][5].
The policy shift by Anthropic may have far-reaching consequences: enterprise users already building on Claude for tasks like customer service and internal code generation must either rebuild around local models or seek exemptions through multicloud setups, while Chinese rivals court displaced developers with migration offers [1].
As the AI industry continues to evolve rapidly, this policy change by Anthropic marks a significant moment in the ongoing debate about AI access, national security, and global technological competition.