2 Sources
[1]
Top China silicon figure calls on country to stop using Nvidia GPUs for AI -- says current AI development model could become 'lethal' if not addressed
Wei Shaojun, vice president of the China Semiconductor Industry Association and a senior Chinese academic and government adviser, has called on China and other Asian countries to stop using Nvidia GPUs for AI training and inference. Speaking at a forum in Singapore, he warned that reliance on U.S.-origin hardware poses long-term risks for China and its regional peers, reports Bloomberg.

Wei criticized the current AI development model across Asia, which closely mirrors the American path of using compute GPUs from Nvidia or AMD to train large language models such as ChatGPT and DeepSeek. He argued that this imitation limits regional autonomy and could become 'lethal' if not addressed. According to Wei, Asia's strategy must diverge from the U.S. template, particularly in foundational areas like algorithm design and computing infrastructure.

The restrictions the U.S. government imposed in 2023 on the performance of AI and HPC processors that could be shipped to China created significant hardware bottlenecks in the People's Republic, slowing the training of leading-edge AI models. Despite these challenges, Wei pointed to the rise of DeepSeek as evidence that Chinese companies are capable of making significant algorithmic advances even without cutting-edge hardware. He also cited Beijing's stance against Nvidia's H20 chip as a sign of the country's push for true independence in AI infrastructure.

At the same time, he acknowledged that while China's semiconductor industry has made progress, it remains years behind America and Taiwan, so the chances that China-based companies can build AI accelerators with performance comparable to Nvidia's high-end offerings are slim. Wei therefore proposed that China develop a new class of processors tailored specifically to large language model training, rather than continuing to rely on GPU architectures, which were originally designed for graphics processing.
While he did not outline a concrete design, his remarks are a call for domestic innovation at the silicon level to support China's AI ambitions. However, he did not explain how China plans to catch up with Taiwan and the U.S. in the semiconductor production race. He concluded on a confident note, stating that China remains well funded and determined to keep building its semiconductor ecosystem despite years of export controls and political pressure from the U.S. The overall message was clear: China must stop following and start leading by developing unique solutions suited to its own technological and strategic needs.

Nvidia GPUs became dominant in AI because their massively parallel architecture is ideal for accelerating the matrix-heavy operations in deep learning, offering far greater efficiency than CPUs. The CUDA software stack, introduced in 2006, let developers write general-purpose code for GPUs, paving the way for deep learning frameworks such as TensorFlow and PyTorch to standardize on Nvidia hardware. Over time, Nvidia reinforced its lead with specialized hardware (Tensor Cores, mixed-precision formats), tight software integration, and widespread cloud and OEM support, making its GPUs the default compute backbone for AI training and inference. Nvidia's modern data-center architectures, such as Blackwell, are heavily optimized for AI training and inference and have almost nothing to do with graphics. By contrast, the special-purpose ASICs advocated by Wei Shaojun have yet to gain broad traction for either training or inference.
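To make the "matrix-heavy operations" point concrete, here is a minimal, purely illustrative sketch in Python with NumPy (toy shapes and variable names are this sketch's own, not from any real LLM): the core cost of a transformer-style layer is a matrix multiply, and each output element can be computed independently -- exactly the kind of work that maps onto thousands of parallel GPU cores.

```python
import numpy as np

# Illustrative only: toy shapes; real LLMs use dimensions in the thousands.
rng = np.random.default_rng(0)

batch, seq_len, d_model = 2, 8, 16
x = rng.standard_normal((batch, seq_len, d_model))   # token activations
w = rng.standard_normal((d_model, d_model))          # projection weights

# One "matrix-heavy" step: project every token's activation vector at once.
# Every element of y depends only on one row of x and one column of w, so
# a GPU can assign each output element to its own thread.
y = x @ w

# The same result via explicit loops -- the serial view of the workload.
y_loop = np.empty_like(y)
for b in range(batch):
    for t in range(seq_len):
        y_loop[b, t] = x[b, t] @ w

assert np.allclose(y, y_loop)
print(y.shape)  # (2, 8, 16)
```

The vectorized `x @ w` and the nested loops compute identical results; the difference is that the first formulation exposes the independence between output elements, which parallel hardware exploits.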
[2]
Top Beijing Adviser Says China Should Ditch Nvidia For Own Tech
China should develop chips to create AI that doesn't rely on the type of accelerators popularized by Nvidia Corp., a top government adviser said, warning that Asian companies in particular risk becoming beholden to US technology. Asian nations including China should reduce their dependence on the general-purpose graphics processing units now used around the world to train platforms from ChatGPT to DeepSeek, Wei Shaojun, a professor at Beijing-based Tsinghua University, told a forum in Singapore.