5 Sources
[1]
China mandates domestic firms source 50% of chips from Chinese producers -- Beijing continues to squeeze companies over reliance on foreign semiconductors
Questions remain about how native chips will work with existing AI infrastructure and software platforms like CUDA. China has mandated that all domestic data centers begin using more Chinese-produced processors as part of an intensifying initiative to make the country self-sufficient in silicon, as reported by SCMP. This comes at a time of enormous global investment in datacenters, AI, and processor production, with a growing drive towards self-sufficiency and nationalistic stances on cutting-edge technology.

Moving forward, publicly owned Chinese datacenter firms will reportedly be required to source more than 50% of their chips from domestic producers, according to people said to be familiar with the matter. The requirement stems from guidelines initially published last March by the Shanghai municipality. That order appears to have now been extended to the whole country and could have a large impact on Chinese chip adoption and investment, as well as the country's overall interest in US processors.

Major US companies like Nvidia and AMD have faced federal hurdles to selling their latest chips to Chinese firms for some time, and are currently only able to sell watered-down versions of their latest designs, all while giving the US government a 15% cut of any proceeds, per the terms of the latest White House deal. Although the US government seems to want to maintain favorable trading relationships with China and its domestic companies, the process has been fraught with turbulence in recent months, raising concerns that companies unable to source the chips they needed would turn to other avenues to acquire them. That has led to smuggling, but it may also be part of the drive towards more local chip production in China.

Just as Nvidia and other US companies have launched initiatives to spend hundreds of billions of dollars on datacenter construction and US-based chip fabrication, China has announced a plethora of new datacenter projects in the past few years. Many of these were expected to be powered by some version of the latest GPUs from Nvidia and similar companies. However, amid on-again, off-again trade blockades and Chinese concerns over tracking hardware built into the graphics cards, the country is looking to drive more investment into locally produced chips. The news follows reports last week that China was discouraging the use of products like Nvidia's H20, hinting at security concerns.

The result of this policy may be a slowdown in Chinese AI innovation, though. While locally produced chips are said to be capable enough for AI inference - the day-to-day work of running models - they aren't as capable as the latest Nvidia and AMD GPUs for training. Limited access to the latest designs - and any future designs going forward - could severely limit Chinese innovation in this space. China's domestically produced GPUs are considered years behind the likes of Nvidia's latest developments. Notably, DeepSeek R2 was recently reported to have run into delays after it was trained on Huawei chips rather than Nvidia hardware at the behest of Beijing.

There's also the question of what software such Chinese chips might run on, and how that might work in conjunction with existing American processors. Nvidia chips can leverage the powerful and near-ubiquitous CUDA ecosystem, which Chinese-produced chips will not be able to take advantage of.
[2]
Fragmented ecosystems and limited supply: Why China cannot break free from Nvidia hardware for AI
Can China's attempts to use homegrown chips work for the country's AI ambitions? Last week saw major twists in China's AI landscape: Trump imposed a 15% sales tax on AMD and Nvidia hardware sold to China, Beijing froze new Nvidia H20 GPU purchases over security concerns, and DeepSeek dropped plans to train its R2 model on Huawei's Ascend NPUs -- raising doubts about China's ability to rely on domestic hardware for its AI sector.

As part of its recurring five-year strategic plans, China's long-stated goal has been to gain its own technological independence, particularly in new and emerging segments that it sees as key to its national security. However, after years of plowing billions into fab startups and its own nascent chip industry, the country still lags behind its Western counterparts and has struggled to build a truly insulated supply chain capable of creating AI accelerators. Additionally, the country lacks an effective software ecosystem to rival Nvidia's CUDA, creating even more challenges. Here's a closer look at how this is impacting the country's AI efforts.

China has had a self-sufficiency plan for its semiconductor industry in general since the mid-2010s. Over time, as the U.S. imposed sanctions against the People's Republic's high-tech sectors, the plan evolved to address supercomputers (including those capable of AI workloads) and fab tools. By 2025, China had created several domestic AI accelerators, and Huawei had even managed to develop its rack-scale CloudMatrix 384. However, ever since the AI Diffusion Rule was canned and the incumbent Trump administration banned sales of AMD's Instinct MI308 and Nvidia's HGX H20 to Chinese entities, the PRC has doubled down on its efforts to switch crucially important AI companies to domestic hardware.

As a result, when the U.S. government announced plans to grant AMD and Nvidia export licenses to sell their China-specific AI accelerators to clients in the People's Republic, U.S. President Trump announced an unprecedented 15% sales tax on AMD's and Nvidia's hardware sold to China. China's government then treated shipments of Nvidia's HGX H20 hardware as a strategic matter and instructed leading cloud service providers to halt new purchases of Nvidia's H20 GPUs while it examines alleged security threats, a move that could potentially bolster demand for domestic hardware. This may be good news for companies like Biren Technology, Huawei, Enflame, and Moore Threads.

There's a twist in this tale, though -- DeepSeek reportedly had to abandon training of its next-generation R2 model on Huawei's domestically developed Ascend platforms because of unstable performance, slower chip-to-chip connectivity, and limitations of Huawei's Compute Architecture for Neural Networks (CANN) software toolkit. This all raises the question: can China rely on its homegrown hardware for AI development?

Nvidia has been supplying high-performance AI GPUs fully supported by a stable and versatile CUDA software stack for a decade, so it's not surprising that many, if not all, of the major Chinese AI hyperscalers -- Alibaba, Baidu, and Tencent -- as well as smaller players like DeepSeek currently use Nvidia's hardware and software. Although Alibaba and Baidu develop their own AI accelerators (primarily for inference), they still procure tons of Nvidia's HGX H20 processors. SemiAnalysis estimated that Nvidia produced around a million HGX H20 processors last year, and almost all of them were purchased by Chinese entities.
No other company in China supplied a comparable number of AI accelerators in 2024. However, analyst Lennart Heim believes that Huawei managed to illegally obtain around three million Ascend 910B dies from TSMC in 2024, which is enough to build around 1.4 - 1.5 million Ascend 910C chips in 2024 - 2025. This is comparable to what Nvidia supplied to China in the same period. But while Huawei may have enough Ascend processors to train its Pangu AI models, other companies appear to have other preferences.

DeepSeek trained the R1 model on a cluster of 50,000 Hopper-series GPUs, consisting of 30,000 HGX H20s, 10,000 H800s, and 10,000 H100s. These chips were reportedly purchased by DeepSeek's investor, High-Flyer Capital Management. As a result, it's logical that the whole software stack of DeepSeek -- arguably China's most influential AI software developer -- is built around Nvidia's CUDA. However, when the time came to assemble a supercluster to train DeepSeek's upcoming R2 model, the company was reportedly persuaded by the authorities to switch to Huawei's Ascend 910-series processors. When it encountered unstable performance, slower chip-to-chip connectivity, and limitations of Huawei's CANN software toolkit, it decided to switch back to Nvidia's hardware for training, but to use Ascend 910 AI accelerators for inference. As for the exact accelerators involved, we do not know whether DeepSeek used Huawei's latest CloudMatrix 384, based on the latest Ascend 910C, or something else.

Since DeepSeek has not disclosed these challenges officially, we can only rely on a report from the Financial Times, which claims that Huawei's Ascend platforms did not work well for DeepSeek. Why they were deemed unstable is another question. It's a distinct possibility that DeepSeek only began to work with CANN this spring, so the company simply has not had enough time to port its programs from Nvidia's CUDA to Huawei's CANN toolkit.

It is extremely complicated to analyze high-tech industries in China, as companies tend to keep secrets closely guarded and fly under the U.S. government's radar. However, two developments that may have a drastic effect on the development of AI hardware in China occurred this summer. Firstly, the Model-Chip Ecosystem Innovation Alliance was formed, and secondly, Huawei made its CANN software stack open source.

The Model-Chip Ecosystem Innovation Alliance includes Huawei, Biren Technology, Enflame, Moore Threads, and others. The group aims to build a fully localized AI stack, linking hardware, models, and infrastructure, which is a clear step away from Nvidia or any other foreign hardware. Its success depends on achieving interoperability among shared protocols and frameworks to reduce ecosystem fragmentation. While low-level software unification may be difficult due to varied architectures (e.g., Arm, PowerVR, custom ISAs), mid-level standardization is more realistic. By aligning around common APIs and model formats, the group hopes to make models portable across domestic platforms. Developers could write code once -- e.g., in PyTorch -- and run it on any Chinese accelerator. This would strengthen software cohesion, simplify innovation, and help China build a globally competitive AI industry using its own hardware. There is also an alliance called the Shanghai General Chamber of Commerce AI Committee, which focuses on applying AI in real-world industries and likewise unites hardware and software makers.
Either as part of its commitment to the new alliance, or as part of a general attempt to make its Ascend 910 series the platform of choice among China-based companies, Huawei open-sourced CANN -- the software stack specifically optimized for AI and its Ascend hardware -- in early August. Until this summer, Huawei's AI toolkit for its Ascend NPUs was distributed in a restricted form. Developers had access to precompiled packages, runtime libraries, and bindings, which allowed TensorFlow, PyTorch, and MindSpore to run on the hardware. These pieces worked well enough to let users train and deploy models, but the underlying stack, such as compilers and libraries, remained closed.

Now, this barrier has been removed. The company released the source code for the full CANN toolchain; however, it did not formally confirm exactly what it has opened up, so we can only speculate. The list likely includes the compilers that convert model instructions into commands Ascend NPUs understand, low-level APIs, libraries of AI operators that accelerate core math functions, and a system-level runtime that manages memory, scheduling, and communication. This isn't officially confirmed, but merely an educated guess as to what CANN's open-sourcing might enable.

By opening up CANN, Huawei can attract a broad community of developers from academia, startups, and other enterprises to its platform, and enable them to experiment with performance tuning or framework integration (beyond TensorFlow and PyTorch). This will inevitably speed up CANN's evolution and bug fixing. Eventually, these efforts could bring CANN closer to what CUDA offers, which would be another string in Huawei's bow.

For Huawei, opening up CANN ahead of other Model-Chip Alliance members was beneficial, as it already had the most mature AI hardware platform in production and needed to position its Ascend platform as the baseline software ecosystem others could rely on. This move makes CANN the default foundation for domestic model and hardware developers (at least for now). By taking this first step, Huawei set a reference point for interoperability and signalled a commitment to shared standards, which could help reduce fragmentation in China's AI software stack.

But while unification of the software stack is a step in the right direction, there is an elephant in the room regarding China's AI hardware self-reliance. The People's Republic still cannot domestically produce hardware that is on par with AMD's or Nvidia's in volume. The hardware that can be made in China is years behind the processors developed on U.S. soil. All leading developers of AI accelerators in China, such as Biren, Huawei, and Moore Threads, are on the U.S. Department of Commerce's Entity List, which means they do not have access to the advanced fabrication capabilities of TSMC. Instead, they have to produce their chips at China-based SMIC, whose process technologies cannot match those offered by TSMC. While SMIC can produce chips on its 7nm-class fabrication process, Huawei had to obtain the vast majority of the silicon for its Ascend 910B and Ascend 910C processors by deceiving TSMC. Companies like Biren or Moore Threads do not disclose which foundry they use, but they do not have the luxury of choice. Of course, neither Huawei nor SMIC stands still.
The two companies are working to advance China's semiconductor industry and build a local fab-tools supply chain to replace the leading-edge equipment that SMIC cannot acquire. In the meantime, SMIC is expected to start building chips on its 6nm-class process technology and even a 5nm-class production node, so it may well build advanced AI processors for Huawei and other players. But the big question is whether volumes will be able to meet the demands of AI training and inference, especially if Nvidia hardware is largely unobtainable in China.

The maturity of Huawei's CANN (and competing stacks) lags behind Nvidia's CUDA largely because there has not been a broad, stable installed base of Ascend processors outside Huawei's own projects. Developers follow scale, and CUDA became dominant because millions of Nvidia GPUs were shipped and widely available, which justified investment in tuning, libraries, and community support. In contrast, Huawei and other Chinese developers have their own proprietary software stacks but cannot ship millions of Ascend NPUs or Biren GPUs due to U.S. government sanctions. On the other hand, even if Huawei and others managed to flood the market with Ascend NPUs or Moore Threads GPUs, a weak software stack makes them unattractive for developers. DeepSeek's attempt to train R2 on Ascend is a good example: performance instability, weaker interconnects, and CANN's immaturity reportedly made the project impractical, forcing a return to Nvidia hardware for training. Hardware volume alone will not change that.

The new Model-Chip Ecosystem Innovation Alliance is attempting to address the issue by setting common mid-level standards -- things like shared model formats, operator definitions, and framework APIs. The idea is that developers could write code once in PyTorch or TensorFlow and then run it on any Chinese AI accelerator, whether it is from Huawei, Biren, or another vendor. However, until these standards are actually in place, fragmentation means every company faces several problems at once, with its hardware and software competing across multiple fronts in a saturated market. As a result, the low volume of China-developed AI accelerators, the lack of common standards, and competition on various fronts will make it very hard for Chinese companies to challenge Nvidia's already dominant ecosystem.
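To make the "write code once, run on any accelerator" idea concrete, here is a minimal sketch of the kind of device-agnostic PyTorch code such mid-level standards are meant to enable. It assumes a domestic accelerator is exposed to PyTorch through a vendor backend plugin -- for example Huawei's torch_npu adapter, which is described as adding an "npu" device type -- and the exact plugin name, import, and availability check should be treated as assumptions rather than details confirmed by the reports above.

```python
import torch
import torch.nn as nn


def pick_device() -> torch.device:
    """Prefer an Nvidia GPU, then a domestic NPU exposed via a plugin, then CPU."""
    if torch.cuda.is_available():  # CUDA-capable Nvidia GPU
        return torch.device("cuda")
    try:
        # Assumed Ascend plugin (may not be installed); it is documented as
        # adding an "npu" device type and a torch.npu namespace after import.
        import torch_npu  # noqa: F401
        if torch.npu.is_available():
            return torch.device("npu")
    except (ImportError, AttributeError):
        pass
    return torch.device("cpu")


def train_one_step() -> None:
    device = pick_device()
    model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # One toy training step; the model code itself never names a specific vendor.
    x = torch.randn(64, 512, device=device)
    y = torch.randint(0, 10, (64,), device=device)
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()
    print(f"ran one training step on {device}")


if __name__ == "__main__":
    train_one_step()
```

Even with code like this, each vendor branch would still need its own tuning and validation on today's fragmented stacks, which is exactly the friction the alliance's shared operator definitions and model formats aim to remove.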
[3]
China has reportedly told its data center operators to source more than 50% of their chips from domestic manufacturers in an effort to break away from US tech
The Chinese government has reportedly mandated that domestic data center operators nationwide should source more than 50 per cent of their chips from domestic producers. The mandate is said to have originated in guidelines proposed last March for the Shanghai municipality, which stipulated that "adoption of domestic computing and storage chips at the city's intelligent computing centres should be above 50 per cent by 2025" (via the South China Morning Post). According to the SCMP, a source working as an adviser in the data center industry told the outlet that the Shanghai chip quotas for the city's data centers had since become a mandatory nationwide policy.

In the great AI race, China is often viewed as somewhat behind when it comes to computing power, despite efforts in recent years to build more than 500 new data centers across the country. Sources have told the SCMP that while Chinese chips are considered "usable" for inference work with AI models within these facilities, Nvidia chips are still the go-to choice. While the US government has recently decided to grant licenses to Nvidia to sell its H20 GPUs to the country once more, a requirement of 50% or more all-Chinese chips would likely affect potential sales -- although given the rate of expansion and the apparent popularity of Nvidia hardware in the country, I'd say it was still likely to shift a fair few units to Chinese shores.

The SCMP also reports that data centers are facing adaptation challenges in integrating Nvidia's hardware with domestic solutions. Nvidia's AI GPUs make use of Nvidia's CUDA software ecosystem, while Chinese models often use Huawei's CANN or similar. Integrating the two is apparently quite the technical challenge, particularly if firms wish to min/max the number of faster Nvidia chips they're allowed to use.

Despite these issues, it should be noted that Chinese developers were still able to create DeepSeek's AI models -- an open-source alternative that shook the industry considerably at the start of this year -- under previous chip restrictions. While China may be behind in the hardware game, that doesn't appear to have held it back as much as advocates for the previous US sanctions may have hoped, with Nvidia CEO Jensen Huang openly praising the Chinese AI industry last month.

In reference to fears of the Chinese military advancing its tech with US AI hardware, Huang said: "There's plenty of computing capacity in China already. If you just think about the number of supercomputers in China, built by amazing Chinese engineers, that are already in operation." He added: "They don't need Nvidia's chips, certainly, or American tech stacks, in order to build their military."

How much of China's current AI output is being trained and run on Chinese hardware, compared to US equivalents, is currently unclear. Still, this mandate looks to be an attempt by the Chinese government to put curbs on a potential future US tech dominance within its AI industry, and ensure that Chinese AI hardware remains at the forefront of its AI development.
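As a side note on the "min/max" point above: under a strict more-than-50%-domestic quota, the ceiling on foreign accelerators is simple arithmetic -- just under half of the installed total. The short Python sketch below is our own illustration of that constraint, not something taken from the SCMP report.

```python
# Back-of-the-envelope illustration of a ">50% domestic" chip quota.
# Not from the report; just the arithmetic the rule implies.

def max_foreign_chips(total_chips: int) -> int:
    """Largest foreign-chip count that keeps the domestic share above 50%.

    The constraint is foreign < total / 2, so the maximum whole number of
    foreign chips is (total - 1) // 2.
    """
    if total_chips < 1:
        raise ValueError("total_chips must be positive")
    return (total_chips - 1) // 2


if __name__ == "__main__":
    for total in (100, 1_000, 10_001):
        foreign = max_foreign_chips(total)
        domestic = total - foreign
        print(f"{total:>6} total chips -> at most {foreign} foreign, "
              f"at least {domestic} domestic")
```

In practice, the binding constraint is less the headcount than the software split described above, since every Nvidia card an operator keeps still has to coexist with the CANN-based share of the cluster.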
[4]
NVIDIA's GPU Software Makes Chinese Efforts To Switch To Domestic AI Chip Use Full Of Headaches, Says Report
Chinese AI data centers are finding it difficult to switch from NVIDIA's AI GPUs to Huawei's products due to software constraints, reports the South China Morning Post. The report outlines that the Chinese government has mandated that all publicly funded AI data centers use at least 50% domestic chips in order to reduce reliance on foreign chips. The chip mandate stemmed from the Shanghai municipality's guidelines last year, which required the city's computing centers to use 50% domestic chips, and these quotas were made mandatory nationwide across China this year, according to the SCMP's sources.

After the Trump administration allowed NVIDIA to sell its H20 GPUs to China, the chips were caught up in a controversy over claims that they contained backdoors, tracking software, or other vulnerabilities. While NVIDIA denied these reports, additional reports claimed that the Chinese government was wary of the hardware. Soon after, sources speculated that China was also wary of relying too much on foreign chips for its AI computing needs - a dynamic that NVIDIA CEO Jensen Huang had pointed to when arguing for the removal of export control restrictions on his firm's products.

The latest report from the SCMP suggests that China has now made it mandatory for state-run or state-owned computing infrastructure to rely on domestic chips as well. The details suggest that these centers will now have to use at least 50% domestically procured chips, a requirement that stems from the Shanghai municipality's rules, which were introduced in 2024.

The latest Chinese chips that can act as a substitute for NVIDIA's hardware are those designed by Huawei and manufactured by SMIC. SMIC is the only firm Huawei can turn to for making its chips, since US sanctions restrict it from relying on TSMC. As a result, the latest Chinese indigenous chips are limited to the 7-nanometer process, as shifting to more advanced technologies requires EUV equipment, which has also been sanctioned for sale to SMIC by the US.

NVIDIA's chips are also key for training new AI models, according to SCMP's sources. So while Huawei's chips can be used to run new AI models, the government's requirements have created headaches for cluster operators whose AI applications were developed with NVIDIA's chips. The troubles stem from the complementary software stacks required to run the chips: NVIDIA's GPUs run on the CUDA platform, while Huawei's chips rely on the CANN platform instead. As a result, data centers that are now required to use at least 50% domestic chips are struggling to make AI models that were trained with NVIDIA's chips compatible with Huawei's hardware.
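To illustrate the kind of lock-in the report describes, the snippet below collects a few common CUDA-specific touchpoints found in typical PyTorch codebases; each one has no drop-in equivalent under a different backend such as CANN and would have to be replaced and re-validated during a port. The tiny model is a stand-in, and the list is illustrative rather than exhaustive.

```python
# Illustrative only: CUDA-specific idioms that make a PyTorch codebase hard
# to move as-is to a non-CUDA backend (e.g. Huawei's CANN/Ascend stack).
import torch
import torch.nn as nn

model = nn.Linear(256, 256)  # tiny stand-in for a real workload

if torch.cuda.is_available():
    model = model.cuda()                         # hard-coded device transfer
    scaler = torch.cuda.amp.GradScaler()         # CUDA-specific mixed precision
    stream = torch.cuda.Stream()                 # explicit CUDA stream management
    torch.backends.cudnn.benchmark = True        # cuDNN autotuning flag
    props = torch.cuda.get_device_properties(0)  # CUDA device introspection
    print(f"tuned for {props.name} with {props.total_memory // 2**20} MiB")
else:
    print("no CUDA device: every line in the branch above would need a "
          "backend-specific replacement (or a device-agnostic rewrite)")
```

Custom kernels written directly in CUDA C++ and NCCL-based multi-GPU communication code are typically the hardest pieces to migrate, which matches the adaptation headaches the SCMP's sources describe.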
[5]
NVIDIA's Emerging AI Chip Rival in China, Cambricon, Plans to Raise $560 Million to Boost Competition as Beijing Moves to Mandate Homegrown AI Chips For Datacenters
China's AI industry has seen massive developments recently, as the government moves to shift the country's reliance toward domestic AI chips. The recent trade situation, particularly between the US and China, has prompted both nations to tighten their grip on their respective AI technologies due to their importance. We recently saw how Beijing is moving against NVIDIA's H20 AI chip by reportedly advising local tech giants not to buy foreign chips as they could contain security flaws. And now, China is pushing domestic AI chip efforts, as the nation imposes requirements on data center buildouts to use homegrown solutions, heavily benefiting the likes of Huawei and Cambricon.

The government is reportedly demanding that more than fifty percent of the AI chips in data centers come from domestic companies. The nation wants to reduce its reliance on NVIDIA, and considering that the US government reportedly has plans to impose security backdoors into chips flowing into China, the switch towards chips like Huawei's Ascend might become much more widespread. At the same time, domestic AI chips reportedly cannot deliver the performance required to train top-tier AI models, which is why DeepSeek's next R2 model is rumored to be delayed.

With the growing demand for Chinese AI chips, firms like Cambricon are capitalizing on the hype by raising capital to fund their ambitious projects. Cambricon is expected to raise around 4 billion yuan as it pushes its AI chips to replace the likes of AMD and NVIDIA. The Chinese AI firm offers options like the Siyuan series for data centers and cloud computing, and is currently developing advanced options for LLMs to allow domestic AI firms to train next-gen AI models. For now, though, the company has yet to make a solid breakthrough.

There are a few options available to Chinese AI customers, mainly coming from Huawei. The firm offers its Ascend AI chip lineup, which includes models like the Ascend 910B and 910C, with the latter rumored to beat NVIDIA's H100 in training performance. Similarly, Huawei also has a rack-scale solution called the CloudMatrix 384, which is said to rival NVIDIA's Blackwell NVL72 system. However, switching to domestic AI solutions isn't easy for Chinese firms, especially when no software matches the likes of NVIDIA's CUDA for now. China is looking for an alternative to NVIDIA's AI chips, but at least in the near term, the nation has to rely on American technology, given that domestic counterparts are still playing catch-up.
China has mandated that domestic data centers source over 50% of their chips from Chinese producers, aiming to reduce reliance on foreign semiconductors. This move poses challenges for NVIDIA's market position and highlights China's push for technological self-sufficiency in AI development.
China has taken a significant step towards technological self-sufficiency by mandating that domestic data centers source more than 50% of their chips from Chinese producers [1][2]. This move, initially proposed for Shanghai in March 2024, has now been extended nationwide, signaling Beijing's determination to reduce reliance on foreign semiconductors, particularly in the crucial field of artificial intelligence (AI) [1].
Source: PC Gamer
This policy shift poses a direct challenge to major US companies like NVIDIA and AMD, which have long dominated the AI chip market [1]. These firms have already faced federal hurdles in selling their latest chips to Chinese entities, with current sales limited to watered-down versions and subject to a 15% US government cut [1]. The new mandate could further restrict their market access in China, potentially slowing the country's AI innovation in the short term.
While the move aims to bolster China's domestic chip industry, it faces several hurdles: domestic accelerators still trail Nvidia's and AMD's latest GPUs for training, software ecosystems like Huawei's CANN remain far less mature than CUDA, and sanctioned chipmakers are limited to SMIC's 7nm-class manufacturing [2][4].
The mandate is expected to benefit domestic chip manufacturers such as Huawei, Biren Technology, Enflame, and Moore Threads [2]. Cambricon, an emerging NVIDIA rival in China, plans to raise $560 million to boost competition in this newly favorable market [5]. Huawei, in particular, has made significant strides with its Ascend AI chip lineup and the CloudMatrix 384 rack-scale solution [5].
Source: Tom's Hardware
The challenges of transitioning to domestic chips were highlighted by DeepSeek's experience. The company reportedly abandoned training its next-generation R2 model on Huawei's Ascend platforms due to unstable performance, slower chip-to-chip connectivity, and limitations of Huawei's Compute Architecture for Neural Networks (CANN) software toolkit [2]. This setback underscores the current limitations of Chinese AI hardware and software ecosystems.
China's push for chip independence comes amid increasing global investment in data centers, AI, and processor production [1]. While the country has made significant progress in developing its own AI accelerators, questions remain about their ability to match the performance of leading US-made chips [2][4].
Source: Wccftech
The success of this initiative will depend on China's ability to rapidly improve its chip technology and develop a robust software ecosystem to support AI development. As the situation evolves, it could potentially reshape the global AI chip market and influence international trade dynamics in high-tech sectors [3][5].