26 Sources
[1]
Huawei announces new AI infrastructure as Nvidia gets locked out of China
Tech giant Huawei unveiled new AI infrastructure meant to help boost compute power and allow the company to better compete with rival chipmaker Nvidia. At a keynote at its Huawei Connect conference on Thursday, Shenzhen, China-based Huawei announced new SuperPoD Interconnect technology that can link together up to 15,000 graphics cards, including Huawei's Ascend AI chips, to increase compute power. This tech appears to be a competitor to Nvidia's NVLink infrastructure, which facilitates high-speed communication between AI chips. Interconnect technology like this is critical if Huawei is to better compete with chipmakers like Nvidia. While Huawei's AI chips are less powerful than Nvidia's, being able to cluster them together will give its users access to more compute power, which is needed for training and scaling AI systems. The news also comes just a day after China banned domestic tech companies from buying Nvidia's hardware, including the RTX Pro 6000D chips designed specifically for the Chinese market. TechCrunch reached out to Huawei for more information.
[2]
Huawei reveals long-range Ascend chip roadmap -- three-year plan includes ambitious provision for in-house HBM with up to 1.6 TB/s bandwidth
Huawei's AI silicon roadmap promises in-house HBM and a future for Chinese compute beyond Nvidia. Huawei's AI silicon roadmap is no longer a state secret. Speaking at the Huawei Connect conference on September 18, rotating chairman Xu Zhijun outlined the company's first official long-range Ascend chip strategy, with four new parts scheduled across the next three years: Ascend 950PR and 950DT in early 2026, followed by Ascend 960 and 970 in 2027 and 2028, respectively. Huawei says its upcoming 950PR chip will ship in Q1 next year with in-house HBM designed to compete with the likes of SK hynix and Samsung. That's a pretty bold claim considering HBM supply and factors like packaging and bandwidth efficiency have arguably become the single biggest constraint on AI accelerator performance at scale. According to Huawei, the 950PR will feature 128GB of its in-house HBM delivering up to 1.6 TB/s of bandwidth, while the 950DT increases those figures to 144GB and 4 TB/s, but Huawei hasn't disclosed how its in-house HBM is manufactured, what packaging is used, or which foundry is producing the chip itself. Under U.S. sanction rules, Huawei is barred from accessing TSMC's advanced nodes and CoWoS packaging lines, both of which Nvidia uses to stack HBM around its top-end Hopper and Blackwell GPUs. If Huawei is working with SMIC or other domestic fabs, yields and bandwidth may prove to be hugely limiting factors. That hasn't stopped the company from talking scale, though. Alongside its chip roadmap, Huawei teased new so-called "supernodes" that will house thousands of Ascend chips. The Atlas 950 and 960 systems are positioned as next-gen AI compute clusters that, on paper, rival Nvidia's GB200 NVL72 configurations in deployment scale, with up to 15,488 Ascend accelerators in a single system. Huawei says Atlas 950 will debut in Q4 this year. But big numbers don't necessarily translate into performance. Nvidia's big advantage isn't just its silicon but NVLink and a tightly optimized software stack that keeps its clusters saturated across large model workloads. To challenge that, Huawei is going to need more than a boastful chip roadmap -- a roadmap that has landed conveniently alongside demands from the Chinese government to produce more domestic silicon and a ban on procuring Nvidia parts. Huawei will need a proven end-to-end platform that can match Nvidia in training, efficiency, and model throughput for its roadmap to succeed. Right now, it doesn't, and plans alone don't break bottlenecks.
[3]
Huawei lays out multi-year AI accelerator roadmap
On the same day that fellow Chinese giant Tencent said its overseas cloud clientele had doubled, Chinese tech giant Huawei kicked off its annual "Connect" conference by laying out a plan to deliver increasingly powerful AI processors that look to have enough power that Middle Kingdom users won't need to try getting Nvidia parts across the border. Huawei already offers the Ascend 910C accelerator that Chinese AI upstart DeepSeek is thought to have used to develop its impressively efficient models. At Connect today, Huawei promised four successors. First off the rank, in the first quarter of 2026, will be the Ascend 950PR which, according to slideware shown at the conference, will boast one petaflop of performance with the 8-bit floating-point (FP8) computation units used for many AI inferencing workloads. The chip will also include 2 TB/s interconnect bandwidth and 128GB of 1.6 TB/s memory. In 2026's final quarter Huawei plans to deliver the 950DT, which will be capable of two petaflops of FP4 performance and will include 144GB of 4 TB/s memory. In 2027, Huawei plans the Ascend 960, which will include 288GB of 9.6 TB/s memory. 2028 will see the debut of the Ascend 970, in which memory will speed along at 14.4 TB/s. Those memory speeds suggest Huawei has created its own high-bandwidth memory, or sourced some from within China, and is confident enough to include it on a multi-year roadmap. As The Register has previously reported, users of Huawei's existing accelerators have reported results that suggest the parts don't always perform brilliantly or match claims that they are equivalent to kit from the likes of Nvidia and AMD. Such comparisons stopped mattering on Wednesday, when China forbade local companies from shopping for American accelerators. Now that Chinese buyers can't acquire chips from offshore, they'll have to learn to live with Huawei's wares. And so may some of the rest of us, because China's clouds continue to target overseas markets and appear to be enjoying considerable success: Tencent Cloud today revealed that its overseas client base has doubled since last year. Also today, Huawei made it easy for its compatriots to deploy Ascend accelerators by announcing a pair of "SuperPoD" rigs that can run 8,192 and 15,488 Ascend units apiece, plus "superclusters" that combine multiple SuperPoDs and scale to 500,000 and "over one million" Ascend parts. If the likes of Tencent Cloud do the patriotic thing and deploy Huawei's clusters outside China, accelerators from Huawei and other Middle Kingdom firms will become available to organizations around the world and create competition for US chipmakers. China's tech export playbook typically sees its vendors develop strong products that undercut established rivals on price, a strategy that came undone once national security organizations got a close look at Huawei's and ZTE's telecoms kit and the people who installed it. Denied access to the West, Beijing now targets developing nations in which price matters, scrutiny may be lighter, and AI hardware from Nvidia and AMD is even more terrifyingly expensive than it is elsewhere.
[4]
Key products in Huawei's AI chips and computing power roadmap
BEIJING, Sept 18 (Reuters) - China's Huawei (HWT.UL) has ended years of secrecy to detail its product roadmap for chips and computing power systems, discussing for the first time how it intends to challenge global industry leader Nvidia (NVDA.O). Here are some highlights from Huawei's presentation: ASCEND ARTIFICIAL INTELLIGENCE CHIPS The Ascend series is Huawei's main competitive offering to Nvidia's AI chips and its current chip on the market is the Ascend 910C. Huawei is planning three new series over the next three years, namely the Ascend 950, 960 and 970, with the 950 to be launched in the first quarter of next year. There will be two types of Ascend 950 chips, the 950PR and 950DT, with the former suitable for the prefill stage of inference and to run recommendations, while the latter will be optimised for the decode stage of inference and model training. The Ascend 960 will have twice the computing power and memory capacity of the 950 and the company aims to go even higher with the 970. Huawei, which is barred from working with the world's top chip foundry TSMC (2330.TW) by U.S. export controls, did not say who will manufacture its chips. However, analysts say the company works with chip equipment suppliers and China's largest foundry, Semiconductor Manufacturing International Corp (0981.HK). HIGH BANDWIDTH MEMORY Huawei said it now has its own proprietary high-bandwidth memory - technology currently dominated by South Korea's SK Hynix (000660.KS) and Samsung Electronics (005930.KS). The Ascend 950PR will integrate Huawei's HBM chip called HiBL 1.0. Huawei says its HBM chip is more cost-effective than industry-leading HBM3E and HBM4E, providing 128 gigabytes (GB) of memory and delivering 1.6 terabytes (TB) per second in memory bandwidth. The Ascend 950DT will feature a higher-performance HBM chip called HiZQ 2.0, providing 144 GB of memory and 4 TB/s memory bandwidth. SUPERNODE AND CLUSTERS Huawei has been using cluster computing, which puts multiple computers or "supernodes" to work together, in order to improve the performance of its chips. Its current main product is the Atlas 900 A3 SuperPoD, a system which uses 384 of Huawei's latest 910C chips and which one industry expert says rivals some of Nvidia's most advanced offerings. In June, Huawei showed off the system and the cloud service that runs on it, CloudMatrix 384, at a key industry show. Huawei plans to introduce the Atlas 950 SuperPoD in Q4 2026. The system will pack 8,192 Ascend 950DT chips and comprise 160 cabinets -- 128 compute cabinets and 32 communications cabinets -- deployable in a 1,000 square metre space (10,764 square feet). Eric Xu, Huawei's rotating chairman, said the Atlas 950 SuperPoD will have 6.7 times more computing power and 15 times more memory capacity than the NVL144 system that Nvidia intends to launch in 2026, and would continue to beat a successor system Nvidia is planning for 2027. Huawei is also planning a second supernode product, the Atlas 960 SuperPoD, which it said would pack up to 15,488 Ascend 960 chips and comprise 220 cabinets across a 2,200 square metre space. KUNPENG Besides the Ascend series, Huawei has another CPU chip product line called Kunpeng that is used for general servers. Huawei first announced the Kunpeng 920 chipset in 2019.
The company will roll out new versions, the Kunpeng 950 and 960, in 2026 and 2028, Xu said, adding that this would be accompanied by another new cluster computing product called the TaiShan 950 SuperPoD that he said would be focused on general-purpose computing. (Reporting by Che Pan and Brenda Goh, editing by Ed Osmond)
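As a quick sanity check on the Reuters figures above, the cabinet and floor-space numbers reduce to a few back-of-the-envelope ratios. The short Python sketch below is purely illustrative (it is not from Huawei or Reuters) and simply divides the stated totals.

```python
# Back-of-the-envelope ratios from the Reuters figures quoted above.
# Inputs are the published totals; the derived ratios are illustrative only.

atlas_950_chips = 8192            # Ascend 950DT chips per Atlas 950 SuperPoD
atlas_950_compute_cabinets = 128  # of 160 total cabinets (32 are communications)
atlas_950_area_m2 = 1000          # stated deployment footprint

atlas_960_chips = 15488           # Ascend 960 chips per Atlas 960 SuperPoD
atlas_960_area_m2 = 2200          # stated deployment footprint

chips_per_cabinet = atlas_950_chips / atlas_950_compute_cabinets
density_950 = atlas_950_chips / atlas_950_area_m2
density_960 = atlas_960_chips / atlas_960_area_m2

print(f"Atlas 950: {chips_per_cabinet:.0f} chips per compute cabinet")  # 64
print(f"Atlas 950: {density_950:.1f} chips per square metre")           # ~8.2
print(f"Atlas 960: {density_960:.1f} chips per square metre")           # ~7.0
```

A per-cabinet figure for the Atlas 960 isn't derivable here, since the excerpt doesn't split its 220 cabinets into compute and communications cabinets.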
[5]
Alibaba's AI chip goes head-to-head with Nvidia H20 in state-backed benchmark demo
A state TV broadcast puts Alibaba's PPU side-by-side with Nvidia's H20, but the claim rests entirely on optics. Alibaba's semiconductor unit, T-Head, has reportedly developed a new AI processor that it claims matches the performance of Nvidia's H20 -- the GPU built specifically for the Chinese market that's currently stuck in geopolitical purgatory. The demonstration aired Tuesday, September 16, on China Central Television (CCTV), during a broadcast covering Premier Li Qiang's visit to China Unicom's Sanjiangyuan Energy Intelligent Computing Centre in Qinghai. In the segment, T-Head's new "PPU" accelerator was directly compared with Nvidia's H20 and A800, as well as Huawei's Ascend 910B, with a chart implying performance parity between the Alibaba and Nvidia parts. The chip, an ASIC designed for AI workloads, features 96 GB of HBM2e, 700 GB/s chip-to-chip interconnect, PCIe support, and 400 W board power, according to the on-screen specs as reported by the South China Morning Post. While the broadcast didn't disclose the specifics of the testing methodology used or publish raw figures, it's the first public benchmark placing Alibaba's hardware in the same class as Nvidia's datacenter GPUs. According to Reuters, China Unicom has already deployed 16,384 of Alibaba's PPU cards across its infrastructure, accounting for more than half of the almost 23,000 domestic accelerators currently installed at the Qinghai facility. Together, the cards deliver 3,579 petaflops of compute, with the site expected to scale to more than 20,000 petaflops once all phases are complete. There's just as much geopolitical context behind the CCTV demonstration as there is technical substance. Nvidia's H20 was introduced to comply with U.S. export controls limiting the sale of high-performance silicon to China. Built on the Hopper architecture but cut down to meet restrictions, the H20 ships with 96 GB of HBM3 and roughly 4.0 TB/s of memory bandwidth. That lends some perspective to Alibaba's matching 96 GB HBM2e capacity, though not necessarily its real-world performance. The biggest unknown right now is on the software side. While Alibaba is understandably eager to show it can meet AI hardware needs in-house, the company has not disclosed details about frameworks, toolchains, or compatibility with existing model stacks. Until independent benchmarks and developer support materialize, the PPU's parity with Nvidia's hardware is just a claim backed by Chinese state TV and endorsed by the Chinese government.
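One figure in the excerpt invites a rough division: the Qinghai site's 3,579 petaflops spread across the installed accelerators. The sketch below is a hedged illustration only; the broadcast did not state the numeric precision (FP16 versus INT8, for example), and the excerpt is ambiguous about whether the aggregate covers only the PPU cards or the whole installation, so both readings are shown.

```python
# Rough per-card compute implied by the reported aggregate at the Qinghai site.
# The excerpt does not make clear whether 3,579 PFLOPS covers only the PPU
# cards or all ~23,000 domestic accelerators, so both readings are computed.

total_pflops = 3579          # reported aggregate compute at the site
ppu_cards = 16384            # Alibaba PPU cards reportedly deployed
all_accelerators = 23000     # "almost 23,000" domestic accelerators installed

per_ppu = total_pflops * 1000 / ppu_cards            # TFLOPS/card, PPU-only reading
per_accel = total_pflops * 1000 / all_accelerators   # TFLOPS/card, whole-site reading

print(f"If the aggregate covers only the PPUs: ~{per_ppu:.0f} TFLOPS per card")      # ~218
print(f"If it covers all installed accelerators: ~{per_accel:.0f} TFLOPS per card")  # ~156
```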
[6]
How Huawei plans to outperform global tech leaders with less powerful chips
BEIJING (AP) -- China's Huawei Technologies said Thursday that it would roll out the world's most powerful AI computing clusters over the next two years as it seeks to outperform global leaders despite relying on less powerful domestic semiconductors. China is racing to develop its own technology as America restricts what can be sold to China, including its most advanced chips. At the same time, the Chinese government has reportedly told companies to stop buying some American chips as it seeks to transform China into a global tech leader and one that is less reliant on imported components. Huawei, at the forefront of efforts to develop home-grown technology, said at an annual customer event in Shanghai that it would launch new "superpods" in late 2026 and late 2027. That's computer industry lingo for a group of interconnected computers that, in Huawei's case, combines the power of thousands of chips. That immense power is needed to run models in the burgeoning field of artificial intelligence, an area of hot competition between the U.S. and China. Huawei announced plans to release the Atlas 950 and 960 superpods over the next two years. Dozens of the "SuperPoDs," as Huawei brands them, could be connected to form what Huawei said would be the world's most powerful "SuperClusters." The 950 and 960 are the most powerful superpods in the world and would remain so for years to come, a company news release said, based on product road maps from others in the industry. The challenge for China is how to keep pace with American competitors such as Open AI and Google without access to the world's most powerful semiconductors, notably those from America's market-leading Nvidia. The answer has been to use many more chips and develop the architecture to make them work well together. "Our strategy is to create a new computing architecture, and develop computing SuperPoDs and SuperClusters, to sustainably meet long-term demand for computing power," Eric Xu, the current rotating chairman of Huawei, told the customer conference, according to a transcript provided by the company. Huawei, based in Shenzhen in southern China, also announced plans to launch new AI chips in its Ascend series over the next three years. The Atlas 950 and 960 superpods would be based on the Ascend 950 and 960 chips, due out in 2026 and 2027. A planned Ascend 970 chip could follow in 2028.
[7]
Huawei touts world's most powerful AI chip cluster as Nvidia's China challenges mount
BEIJING -- Chinese telecommunications giant Huawei announced Thursday new computing systems for powering artificial intelligence with its in-house Ascend chips, as it steps up pressure on U.S. rival Nvidia. The company said it plans to launch its new "Atlas 950 SuperCluster" as soon as next year. The U.S. has sought to cut China off from the most advanced semiconductors for training AI models. To cope, Chinese companies have turned more to grouping large numbers of less efficient, often homegrown, chips together to achieve similar computing capabilities. Under Huawei's AI computing infrastructure, a supercluster is made up of multiple superpods, which, in turn, are built from multiple supernodes. Supernodes, which form the base, are built on Ascend chips, using system design to overcome technical limitations imposed by U.S. sanctions. Huawei said its new Atlas 950 supernode would support 8,192 Ascend chips, and that the Atlas 950 SuperCluster would use more than 500,000 chips. A more advanced Atlas 960 version, slated for launch in 2027, would support 15,488 Ascend chips per node. The full supercluster would have more than 1 million Ascend chips, according to Huawei. It was not immediately clear how the systems compared with those powered by Nvidia chips. Huawei claimed in a press release that the new supernodes would be the world's most powerful by computing power for several years. "Huawei's announcement on its computing breakthrough is well timed with recent increasing emphasis by the Chinese government on self-reliance on China's own chip technologies," said George Chen, partner and co-chair, digital practice, The Asia Group. While he cautioned that Huawei might exaggerate its technical capabilities, Chen pointed out that the Chinese company's ambition to be a world AI leader "cannot be underestimated."
[8]
Huawei unveils chipmaking, computing power plans for the first time
SHANGHAI, Sept 18 (Reuters) - Huawei said on Thursday it would roll out four new iterations of its Ascend AI chip over the next three years, breaking years of secrecy to reveal its chipmaking progress and ambitions to compete against Nvidia (NVDA.O) for the first time. The Chinese technology giant has been one of the key players leading efforts to develop a domestic semiconductor manufacturing industry, aiming to reduce reliance on a supply chain dominated by the United States. After the launch of the Ascend 910C in the year's first quarter, Vice Chairman Eric Xu said the company plans to launch two variants of its successor, the Ascend 950, next year, and follow up with the 960 version in 2027 and the 970 in 2028. "Computing power has always been, and will continue to be, key to artificial intelligence, and even more so to China's AI," Xu told the annual Huawei Connect conference in the commercial hub of Shanghai, the company said. The Ascend 950 chip would be powered by the company's own proprietary high-bandwidth memory, he said, revealing that it had overcome a key bottleneck China faced in the technology, limited for years to South Korean and U.S. suppliers. Huawei also plans to roll out new computing power supernodes called the Atlas 950 and Atlas 960, which Xu described as the world's most powerful, supporting 8,192 and 15,488 Ascend chips respectively. These systems are successors to the Atlas 900, also known as the CloudMatrix 384, which uses 384 of Huawei's latest 910C chips. On some metrics, the Huawei product outperforms Nvidia's GB200 NVL72, which uses 72 B200 chips, research group SemiAnalysis has said. Huawei says the system uses "supernode" architecture that allows the chips to interconnect at super-high speeds. Reporting by Brenda Goh and Che Pan; Editing by Jacqueline Wong and Clarence Fernandez
[9]
Huawei unveils 'world's most powerful' AI cluster to rival Nvidia
Chinese tech giant Huawei is gearing up to launch four new iterations of its Ascend AI chip over the coming three years. In its announcement on Thursday at Huawei Connect in Shanghai, the company revealed the rollout timeline for each chip. Huawei aims to build a robust AI infrastructure to fuel its ambitions of reducing dependency on Nvidia and the US. With its proprietary high-bandwidth memory, the company claims to have overcome the bottleneck that had forced China to rely on US and South Korean suppliers for such chips.
[10]
Huawei throws massive 1-million NPU gauntlet at Nvidia and AMD as it positions itself as an alternative to US AI giants
Huawei has announced what it calls the "world's most powerful SuperPoDs and superclusters". In AI nomenclature, a SuperPOD is a group of racks that work as a single unit integrating compute, network, software, storage and management. A supercluster is a group of SuperPODs. Given that Nvidia trademarked SuperPOD, the Chinese company was smart enough to call its product SuperPoD (AMD has its own version called MegaPOD). Its Atlas 950 SuperPoD comprises 8,192 Ascend NPUs (essentially AI accelerators), with a superior version, the 960, delivering almost twice that number at 15,488 (89% more). The Atlas 950 will use the newly announced Ascend 950 chips, while the Ascend 960 series will go into the Atlas 960. The 950 series will be available in Q1 2026, while the 960 will come in Q4 2027 and - you've guessed it - there's an Ascend 970 planned for Q4 2028. Huawei's deputy chairman, Eric Xu, went on to claim that its SuperPoDs "are currently the most powerful SuperPoDs in the world, and will remain so for years to come", based on publicly available roadmaps from Nvidia and AMD (although he didn't name them). Xu went on to announce superclusters based on the Atlas 950 and Atlas 960, with more than 520,000 NPUs (64 SuperPoDs) and more than one million NPUs (at least 66 SuperPoDs), respectively. This, Xu posits, will outstrip xAI Colossus, currently the world's largest computing cluster. The caveat is that xAI is already working on Colossus 2, one of a number of gigawatt-class clusters that will almost certainly surpass the 950-based and 960-based superclusters. Oracle/OpenAI, Meta and AWS/Anthropic are building such (hyper)clusters as well. A third surprise announcement was the launch of UnifiedBus, Huawei's alternative to Nvidia's InfiniBand. The company is keen to create an open UnifiedBus ecosystem, but its press release doesn't mention whether this interconnect protocol will be open-sourced. Huawei's claims - if true - are impressive: 100x improved reliability for optical interconnect, with maximum range extended to more than 200 meters (almost 700 feet) and NPU-to-NPU latency reduced to 2.1ms, a 30% improvement over current technologies. Chinese companies and China's central government are almost certainly going to be the main customers of Huawei's SuperPoDs and superclusters for now. Pricing, performance and power consumption are yet to be detailed.
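The supercluster headcounts quoted above follow directly from the per-SuperPoD figures; the snippet below is a simple consistency check (illustrative arithmetic, not Huawei data) that multiplies them out.

```python
# Consistency check on the supercluster NPU counts quoted in the article.

atlas_950_pod_npus = 8192    # NPUs per Atlas 950 SuperPoD
atlas_960_pod_npus = 15488   # NPUs per Atlas 960 SuperPoD

cluster_950 = 64 * atlas_950_pod_npus   # "more than 520,000 NPUs (64 SuperPoDs)"
cluster_960 = 66 * atlas_960_pod_npus   # "more than one million NPUs (at least 66 SuperPoDs)"

print(cluster_950)   # 524288, i.e. more than 520,000
print(cluster_960)   # 1022208, i.e. just over one million
```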
[11]
Huawei's Atlas 950 supercomputing node to debut in Q4, media reports
SHANGHAI, Sept 18 (Reuters) - Huawei's Vice Chairman Eric Xu said on Thursday the company plans to launch the world's most powerful computing power node called the Atlas 950 in the fourth quarter of this year, local media China Star Market reported. Xu, who also serves as Huawei's rotating chairman, added that Huawei would launch the next generation, Atlas 960, in the fourth quarter of 2027, the online publication reported. The Atlas 950 and Atlas 960 can support 8,192 and 15,488 Ascend chips, respectively, Xu said. Chinese authorities have urged firms to prioritise Huawei Ascend AI chips and other domestic alternatives over Nvidia (NVDA.O) chips, which offer comparable computing performance. Xu said that the systems excel in metrics such as card count, total compute, memory capacity and interconnect bandwidth. Reporting by Brenda Goh and Che Pan; Editing by Jacqueline Wong
[12]
Huawei reveals its Ascend AI chip plans, taking the fight to Nvidia with new superclusters | Fortune Asia
Huawei, the company most connected to China's drive for tech self-sufficiency, unveiled its AI chip plans on Thursday, laying down the product roadmap for a semiconductor division that's become increasingly important to both the company and to the country more broadly. On Thursday, Huawei rotating chair Eric Xu revealed the next iterations of the company's Ascend AI chips. Huawei will release the Ascend 950PR in the first quarter of 2026, followed by the Ascend 950DT at the end of that same year. The Ascend 960 and the Ascend 970 would follow in Q4 2027 and Q4 2028 respectively. Xu noted that all the chips will come with Huawei-designed high-bandwidth memory. Huawei also unveiled its new SuperPoD cluster designs, which link together thousands of Ascend chips. Xu called the cluster "a single logical machine, made up of multiple physical machines that can learn, think, and reason as one," according to an official company transcript of his remarks. "We're pretty confident that, for the next few years, the Atlas 950 SuperPoD will remain the world's most powerful SuperPoD. And it will far exceed its counterparts across all major metrics," Xu said. Clustering chips together is one way that Huawei is trying to keep up with Nvidia, the market leader in AI chips. Due to U.S. sanctions, Huawei can't access the most advanced chip production techniques used by manufacturers like TSMC. Products like the SuperPod allow Huawei to connect less powerful chips to get higher performance. In June, Huawei founder Ren Zhengfei told state media that the company's chips were still a generation behind processors from the U.S., but packaging techniques could help close the performance gap. Huawei isn't the only Chinese company working to develop AI processors. Smaller chip startups like Cambricon are trying to manufacture processors domestically. China's big tech companies, like Baidu and Alibaba, are also designing their own AI chips in-house. Optimism over AI chips is helping to boost Chinese tech stocks. Shares in Chinese chip manufacturing "fab" Semiconductor Manufacturing International Corporation, which works with Huawei to make the Ascend chips, are up by around 35% in the past month. Cambricon and Baidu are both up by around 50% over the same period. Investors have piled into China's tech sector since the start of the year, when Hangzhou-based startup DeepSeek released its open-source AI model. "DeepSeek has come up with new ways to train models using significantly less computing power. But artificial general intelligence (AGI) and physical AI will still need a massive amount of computing power," Xu said Thursday. "We believe that computing power is -- and will continue to be -- key to AI." AI chips designed by Huawei and others have taken on greater importance in China due to the country's constrained access to Nvidia's top-of-the-line products. Since 2022, the U.S. has barred Nvidia and fellow chipmaker AMD from selling their most advanced processors to China. Both companies have consequently had to design special, less powerful China-focused versions of their processors that complied with U.S. restrictions. These modified chips were in demand in China, since they still were more powerful and easier-to-use than what local chipmakers could produce. But now, locally-made chips may be considered as good as what Nvidia can sell. Chinese officials are also looking more skeptically at Nvidia's chips. 
Earlier this week, Beijing reportedly ordered tech companies to stop using Nvidia's RTX Pro 6000D, one of the company's China-focused chips; that follows earlier scrutiny of the H20, another China-focused processor.
[13]
Huawei unveils new computing tech as China seeks AI strength
Chinese tech juggernaut Huawei plans to launch powerful computing setups that allow chips to connect at high speeds, an executive said Thursday, as Beijing looks to bolster domestic AI prowess and reduce reliance on Western firms. Geopolitical tensions between China and the United States have intensified technological competition between the countries, each seeking to achieve supremacy in the vital fields of artificial intelligence and advanced computer chips. Shenzhen-based Huawei and California-based Nvidia are among the tech giants that have repeatedly been caught up in the rivalry, each facing various restrictions on their overseas operations. Huawei's Deputy Chairman Eric Xu said Thursday that the firm intends to launch the Atlas 950 and Atlas 960 "SuperPoDs," part of efforts to meet "long-term computing demand," according to a press release. The products will be used to integrate thousands of Huawei chips, significantly enhancing the computing power that underpins various AI applications. They are expected to be launched in the fourth quarters of 2026 and 2027, respectively, according to a copy of Xu's speech seen by AFP. An earlier report by state-controlled Chinese business news outlet Jiemian incorrectly stated that the Atlas 950 would launch this year. "These two SuperPoDs will deliver an industry-leading performance across multiple key metrics, including the number of NPUs (neural processing units), total computing power, memory capacity, and interconnect bandwidth," said Xu, quoted in the press release. The announcement comes a day after a report by the Financial Times said China's internet regulator had instructed domestic tech giants, including Alibaba and ByteDance, to terminate orders for certain Nvidia products. According to the FT, citing unnamed sources, the Cyberspace Administration of China ordered companies to end all testing and purchase plans for Nvidia's RTX Pro 6000D chips, state-of-the-art processors made especially for the country. Nvidia chief executive Jensen Huang said Wednesday that he was "disappointed" by the report. Chinese foreign ministry spokesman Lin Jian did not confirm new restrictions when asked about the report at a regular press conference on Thursday. "We always oppose discriminatory practices targeting specific countries when it comes to economic, trade and technology issues," he said. "China is willing to maintain dialogue and cooperation with all parties to protect the stability of the global supply chain." Observers believe that Beijing's moves to wean Chinese tech companies off Nvidia's offerings are part of its effort to accelerate domestic production from companies like Huawei. The FT report also said that Beijing regulators have recently summoned Huawei and Cambricon -- another domestic chipmaker -- for discussions on how their products stack up against Nvidia's chips for the Chinese market.
[14]
What do we know about Huawei's 'most powerful' AI chip cluster?
Chinese telecommunications company Huawei says it has developed the world's 'most powerful' AI computing systems as it takes on US AI chip giant Nvidia. Chinese telecommunications company Huawei has announced a slew of new computing systems and artificial intelligence (AI) computing chips that it claims could be more powerful than Nvidia's. The company said there would be two new "logical machines" that are able to learn, think, and reason: the Atlas 950 SuperPoD and the Atlas 960 SuperPoD. Huawei said on Thursday that the SuperPoDs will be the "most powerful" in the world on several fronts, including computing power, memory, bandwidth, and the number of neural processing units (NPUs) that will accelerate AI and machine learning. "Computing power is - and will continue to be - key to AI. This is especially true in China," said Eric Xu, Huawei's deputy chairman. The company also announced it will be launching a new series of AI computing chips called Ascend starting in 2026. Both the SuperPoDs and the chips are part of a new "computing architecture" that Xu said will "sustainably meet long-term demand for computing power". The announcement comes as Huawei and other companies try to compete with US chip giant Nvidia. China is pressuring its tech companies to break their reliance on foreign chip makers so it can compete in the AI race. Xu claimed that the Atlas 950 SuperPoD will have "56.8 times" more NPUs, 6.7 times more computing power and 15 times more memory capacity than Nvidia's upcoming NVL144 system. It wasn't immediately clear how the two systems compare. Nvidia has faced increased restrictions on its chip exports to China under both former US president Joe Biden and current President Donald Trump. The United States banned Nvidia from selling its most powerful chips, including the Blackwell series, to China in April, arguing it was necessary to safeguard US national and economic security as the global AI race gains pace. The US appeared to reverse course after Nvidia agreed to pay the government 15 per cent of what it earns from chip sales to China. While selling the Blackwell chip to China is still up in the air, Nvidia did get clearance to export its watered-down H20 chip to China. Meanwhile, Nvidia CEO Jensen Huang said earlier this month that his company is discussing a potential new computer chip designed for China with the Trump administration. However, the Financial Times reported on Wednesday that China's internet regulator had banned local companies from buying Nvidia's RTX Pro 6000 chips, as Beijing tries to reduce dependence on foreign semiconductors. The Chinese government accused Nvidia earlier this week of violating the country's anti-monopoly laws when it bought Israeli technology company Mellanox Technologies in 2019 for $6.9 billion (€5.83 billion).
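The "56.8 times" NPU claim can be checked against counts quoted elsewhere in these sources: 8,192 NPUs in one Atlas 950 SuperPoD versus the 144 accelerators implied by Nvidia's NVL144 naming. The one-liner below is illustrative arithmetic only.

```python
# Ratio behind the "56.8 times more NPUs" claim, using counts quoted in these sources.
atlas_950_npus = 8192    # NPUs in one Atlas 950 SuperPoD
nvl144_gpus = 144        # accelerator count implied by Nvidia's NVL144 naming

print(f"{atlas_950_npus / nvl144_gpus:.1f}x")   # ~56.9x, in line with the quoted 56.8x figure
```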
[15]
Alibaba, Baidu begin using own chips to train AI models, The Information reports
Sept 11 (Reuters) - China's Alibaba (9988.HK) and Baidu (9888.HK) have started using internally designed chips to train their AI models, partly replacing those made by Nvidia (NVDA.O), The Information reported on Thursday, citing four people with direct knowledge of the matter. Alibaba has been using its own chips for smaller AI models since early this year, while Baidu is experimenting with training new versions of its Ernie AI model using its Kunlun P800 chip, the report said. Alibaba and Baidu did not immediately respond to Reuters requests for comment. The move is a significant shift in China's tech and AI landscape, where companies largely rely on Nvidia's powerful processors for AI development. Increasing U.S. export restrictions on the supply of advanced AI chips to China have led Chinese companies to ramp up their own arsenal of AI chips, with growing pressure from Beijing on companies to use home-grown technology. Neither Alibaba nor Baidu has fully abandoned Nvidia, the report said, with both companies using Nvidia chips to develop their most cutting-edge models. While Nvidia's H20 chip - the most powerful AI processor it is allowed to sell in China - does not have as much computing power as the H100 or Blackwell series, it still outpaces Chinese alternatives in performance. However, Alibaba's AI chip, the Zhenwu processing unit, is now good enough to compete with Nvidia's H20, The Information said in its report, citing three employees who have used the chip. The shift to home-grown chips in China would further dent Nvidia's China business. The company has already struck a deal with President Donald Trump for export licenses in exchange for 15% of China sales of its H20 AI chips. Late last month, CEO Jensen Huang said discussions with the White House to allow the company to sell a less advanced version of its next-generation chip to China will take time. Reporting by Deborah Sophia in Bengaluru; Editing by Tasim Zahid and Arun Koyyur
[16]
How Huawei plans to outperform global tech leaders with less powerful chips
BEIJING -- China's Huawei Technologies said Thursday that it would roll out the world's most powerful AI computing clusters over the next two years as it seeks to outperform global leaders despite relying on less powerful domestic semiconductors. China is racing to develop its own technology as America restricts what can be sold to China, including its most advanced chips. At the same time, the Chinese government has reportedly told companies to stop buying some American chips as it seeks to transform China into a global tech leader and one that is less reliant on imported components. Huawei, at the forefront of efforts to develop home-grown technology, said at an annual customer event in Shanghai that it would launch new "superpods" in late 2026 and late 2027. That's computer industry lingo for a group of interconnected computers that, in Huawei's case, combines the power of thousands of chips. That immense power is needed to run models in the burgeoning field of artificial intelligence, an area of hot competition between the U.S. and China. Huawei announced plans to release the Atlas 950 and 960 superpods over the next two years. Dozens of the "SuperPoDs," as Huawei brands them, could be connected to form what Huawei said would be the world's most powerful "SuperClusters." The 950 and 960 are the most powerful superpods in the world and would remain so for years to come, a company news release said, based on product road maps from others in the industry. The challenge for China is how to keep pace with American competitors such as Open AI and Google without access to the world's most powerful semiconductors, notably those from America's market-leading Nvidia. The answer has been to use many more chips and develop the architecture to make them work well together. "Our strategy is to create a new computing architecture, and develop computing SuperPoDs and SuperClusters, to sustainably meet long-term demand for computing power," Eric Xu, the current rotating chairman of Huawei, told the customer conference, according to a transcript provided by the company. Huawei, based in Shenzhen in southern China, also announced plans to launch new AI chips in its Ascend series over the next three years. The Atlas 950 and 960 superpods would be based on the Ascend 950 and 960 chips, due out in 2026 and 2027. A planned Ascend 970 chip could follow in 2028.
[17]
How Huawei plans to outperform global tech leaders with less powerful chips
BEIJING (AP) -- China's Huawei Technologies said Thursday that it would roll out the world's most powerful AI computing clusters over the next two years as it seeks to outperform global leaders despite relying on less powerful domestic semiconductors. China is racing to develop its own technology as America restricts what can be sold to China, including its most advanced chips. At the same time, the Chinese government has reportedly told companies to stop buying some American chips as it seeks to transform China into a global tech leader and one that is less reliant on imported components. Huawei, at the forefront of efforts to develop home-grown technology, said at an annual customer event in Shanghai that it would launch new "superpods" in late 2026 and late 2027. That's computer industry lingo for a group of interconnected computers that, in Huawei's case, combines the power of thousands of chips. That immense power is needed to run models in the burgeoning field of artificial intelligence, an area of hot competition between the U.S. and China. "This is a significant milestone," said Charlie Dai, a technology analyst at the research firm Forrester Research. "It signals a stronger push toward self-reliance and resilience in the face of export restrictions." Huawei announced plans to release the Atlas 950 and 960 superpods over the next two years. Dozens of the "SuperPoDs," as Huawei brands them, could be connected to form what Huawei said would be the world's most powerful "SuperClusters." The 950 and 960 are the most powerful superpods in the world and would remain so for years to come, a company news release said, based on product road maps from others in the industry. The challenge for China is how to keep pace with American competitors such as Open AI and Google without access to the world's most powerful semiconductors, notably those from America's market-leading Nvidia. The answer has been to use many more chips and develop the architecture to make them work well together. "Our strategy is to create a new computing architecture, and develop computing SuperPoDs and SuperClusters, to sustainably meet long-term demand for computing power," Eric Xu, the current rotating chairman of Huawei, told the customer conference, according to a transcript provided by the company. Huawei, based in Shenzhen in southern China, also announced plans to launch new AI chips in its Ascend series over the next three years. The Atlas 950 and 960 superpods would be based on the Ascend 950 and 960 chips, due out in 2026 and 2027. A planned Ascend 970 chip could follow in 2028.
[18]
How Huawei Plans to Outperform Global Tech Leaders With Less Powerful Chips
BEIJING (AP) -- China's Huawei Technologies said Thursday that it would roll out the world's most powerful AI computing clusters over the next two years as it seeks to outperform global leaders despite relying on less powerful domestic semiconductors. China is racing to develop its own technology as America restricts what can be sold to China, including its most advanced chips. At the same time, the Chinese government has reportedly told companies to stop buying some American chips as it seeks to transform China into a global tech leader and one that is less reliant on imported components. Huawei, at the forefront of efforts to develop home-grown technology, said at an annual customer event in Shanghai that it would launch new "superpods" in late 2026 and late 2027. That's computer industry lingo for a group of interconnected computers that, in Huawei's case, combines the power of thousands of chips. That immense power is needed to run models in the burgeoning field of artificial intelligence, an area of hot competition between the U.S. and China. Huawei announced plans to release the Atlas 950 and 960 superpods over the next two years. Dozens of the "SuperPoDs," as Huawei brands them, could be connected to form what Huawei said would be the world's most powerful "SuperClusters." The 950 and 960 are the most powerful superpods in the world and would remain so for years to come, a company news release said, based on product road maps from others in the industry. The challenge for China is how to keep pace with American competitors such as Open AI and Google without access to the world's most powerful semiconductors, notably those from America's market-leading Nvidia. The answer has been to use many more chips and develop the architecture to make them work well together. "Our strategy is to create a new computing architecture, and develop computing SuperPoDs and SuperClusters, to sustainably meet long-term demand for computing power," Eric Xu, the current rotating chairman of Huawei, told the customer conference, according to a transcript provided by the company. Huawei, based in Shenzhen in southern China, also announced plans to launch new AI chips in its Ascend series over the next three years. The Atlas 950 and 960 superpods would be based on the Ascend 950 and 960 chips, due out in 2026 and 2027. A planned Ascend 970 chip could follow in 2028.
[19]
How Huawei plans to outperform global tech leaders with less powerful chips
China's Huawei Technologies said Thursday that it would roll out the world's most powerful AI computing clusters over the next two years as it seeks to outperform global leaders despite relying on less powerful domestic semiconductors. China is racing to develop its own technology as America restricts what can be sold to China, including its most advanced chips. At the same time, the Chinese government has reportedly told companies to stop buying some American chips as it seeks to transform China into a global tech leader and one that is less reliant on imported components. Huawei, at the forefront of efforts to develop home-grown technology, said at an annual customer event in Shanghai that it would launch new "superpods" in late 2026 and late 2027. That's computer industry lingo for a group of interconnected computers that, in Huawei's case, combines the power of thousands of chips. That immense power is needed to run models in the burgeoning field of artificial intelligence, an area of hot competition between the U.S. and China. Huawei announced plans to release the Atlas 950 and 960 superpods over the next two years. Dozens of the "SuperPoDs," as Huawei brands them, could be connected to form what Huawei said would be the world's most powerful "SuperClusters." The 950 and 960 are the most powerful superpods in the world and would remain so for years to come, a company news release said, based on product road maps from others in the industry. The challenge for China is how to keep pace with American competitors such as Open AI and Google without access to the world's most powerful semiconductors, notably those from America's market-leading Nvidia. The answer has been to use many more chips and develop the architecture to make them work well together. "Our strategy is to create a new computing architecture, and develop computing SuperPoDs and SuperClusters, to sustainably meet long-term demand for computing power," Eric Xu, the current rotating chairman of Huawei, told the customer conference, according to a transcript provided by the company. Huawei, based in Shenzhen in southern China, also announced plans to launch new AI chips in its Ascend series over the next three years. The Atlas 950 and 960 superpods would be based on the Ascend 950 and 960 chips, due out in 2026 and 2027. A planned Ascend 970 chip could follow in 2028.
[20]
Huawei Unveils Next-Gen Atlas 950 & 960 SuperPoD AI Servers, Claiming to Surpass NVIDIA's Rubin Lineup
Huawei claims to be taking the competition in the rack-scale segment directly to NVIDIA's turf, as its latest announcement includes cutting-edge AI clusters. The Chinese firm has been at the forefront of competing with NVIDIA in China's AI market, particularly with rack-scale solutions. Huawei announced its CloudMatrix 384 AI system a few months ago, which reportedly surpassed NVIDIA's Blackwell AI system. Now, at Huawei Connect 2025, the firm has announced new iterations of its 'SuperPoD' AI clusters: the Atlas 950 and the Atlas 960, with the former featuring the new Ascend AI chips and, interestingly, set to compete with NVIDIA's Rubin lineup. We'll talk about the Atlas 950 SuperCluster ahead. Diving a bit into the specifications reported by Huawei, it is claimed that the Atlas 950 SuperPoD will feature 8,192 of the Ascend 950 AI chips, delivering a cumulative performance of eight EFLOPS of FP8 and 16 EFLOPS of FP4 compute, with a total interconnect bandwidth of a whopping 16.3 PB/s. Based on these on-paper specifications, the SuperPoD is expected to be on par with NVIDIA's NVL144 Vera Rubin AI rack, which means Huawei already has plans to level the competition with Team Green next year. Huawei then plans to join these Atlas 950 SuperPoDs together to create the Atlas 950 SuperCluster, which will feature a whopping 524,288 of the Ascend 950 AI chips. Huawei claims that the Atlas 950 SuperCluster will be the world's largest AI cluster in terms of chips onboard, since systems with 500K to 1 million 'dedicated' AI chips are a rare feat. Huawei also announced the Atlas 950 SuperCluster and Atlas 960 SuperCluster, which can scale up to support between 500,000 and 1 million processors, making them the "largest AI compute clusters" in the world, the company said. - SCMP Huawei has managed to reach such figures because the Chinese firm doesn't have to prioritize performance efficiency or cluster pricing; rather, it relies on squeezing out immense aggregate computing power to compete against Western alternatives. We don't yet know the complete specifications of Huawei's new AI clusters, but considering that a supercluster packs 500K-plus AI chips, the power consumption will be phenomenal, to say the least. There are many caveats to such announcements, but Huawei seems to be making them to ensure that it meets domestic computing demand. It is safe to say that Chinese AI firms are giving their all to reduce their dependence on the Western tech stack.
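The pod-level EFLOPS figures line up with the per-chip numbers quoted in other entries in this list (1 PFLOPS of FP8 and 2 PFLOPS of FP4 for the Ascend 950 series, per sources [3] and [22]). The sketch below is a back-of-the-envelope aggregation, not an official spec sheet.

```python
# Aggregate compute of the Atlas 950 SuperPoD, derived from per-chip figures
# quoted in other sources in this list (Ascend 950: 1 PFLOPS FP8, 2 PFLOPS FP4).

chips = 8192                 # Ascend 950 chips per Atlas 950 SuperPoD
fp8_pflops_per_chip = 1
fp4_pflops_per_chip = 2

fp8_eflops = chips * fp8_pflops_per_chip / 1000
fp4_eflops = chips * fp4_pflops_per_chip / 1000

print(f"FP8: ~{fp8_eflops:.1f} EFLOPS")   # ~8.2, matching the ~8 EFLOPS FP8 claim
print(f"FP4: ~{fp4_eflops:.1f} EFLOPS")   # ~16.4, matching the ~16 EFLOPS FP4 claim
```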
[21]
Huawei unveils AI chip road map to challenge Nvidia's lead
Huawei Technologies unveiled new technology from memory chips to AI accelerators on Thursday, outlining publicly for the first time its multiyear plan to challenge Nvidia's dominance in a growing market. The highlights of the company's presentation were new SuperPod cluster designs that will allow Huawei to link as many as 15,488 of its Ascend neural processing units for artificial intelligence and operate them as a coherent system, rotating chairman Eric Xu said at the event. Those SuperPod products will be built with new generations of Ascend chips from next year. The next-generation Ascend 950 series will be accompanied by new high-bandwidth memory designed by Huawei itself, Xu said, without elaborating on who will fabricate the semiconductors. Huawei also plans to roll out an Ascend 960 in late 2027, to be succeeded by a 970 model in late 2028.
[22]
Huawei Showcases Its 'Highly Competitive' AI Chip Roadmap; Ascend 950PR To Feature Self-Built HBM With Release Slated For Q1 2026
It seems that Huawei plans to ramp up the competition in the domestic market through its next-gen AI chips, as it unveils an optimistic roadmap. Huawei is one of the leading entities in China when it comes to introducing high-end 'homegrown' AI chips that compete with the likes of NVIDIA's H20, and, more importantly, the firm is explicitly focused on shifting to an internal tech stack, which has been the primary theme of today's roadmap showcase. Based on a report by MyDrivers, at Huawei Connect 2025 the firm revealed its AI chip offerings planned through 2028, including advanced options that will rely on self-built components. Starting with the successor of the Ascend 910C, Huawei plans to launch the Ascend 950PR, which will be the company's first option to offer self-built HBM technology. In terms of specifications, you are looking at support for low-precision data formats, up to FP8, with 1 PFLOPS of FP8 compute and 2 PFLOPS of FP4. The chip will feature an interconnect bandwidth of 2 TB/s, and the most interesting element of the 950PR is likely its use of in-house HBM. Huawei plans to integrate its 'HiBL 1.0' HBM, which offers a capacity of 128GB and a bandwidth of 1.6TB/s, into the 950PR. Huawei also has a second generation of its HBM planned, called 'HiZQ 2.0', which will come with a capacity of 144GB and a bandwidth of 4TB/s. The Ascend 950PR is claimed to be an inference-focused chip, targeted at prefill and recommendation performance. Apart from the 950PR, an Ascend 950DT is also planned, slated for Q4 2026. That chip is claimed to be a training-focused option, featuring HiZQ 2.0 memory with increased bandwidth and capacity relative to the 950PR. Huawei has also unveiled options for 2027 and 2028, with the Ascend 960 releasing in Q4 2027, offering 2.2 TB/s interconnect bandwidth along with 288 GB of memory (likely HiZQ 2.0 HBM) and 9.6 TB/s memory bandwidth. The chip will feature 2 PFLOPS of FP8 and 4 PFLOPS of FP4 compute, which shows that there are massive upgrades planned ahead. By 2028, Huawei will introduce the Ascend 970, with significant upgrades planned in both memory and compute, which means the company will have an extensive roadmap to cater to China's computing needs moving into the future.
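To put the roadmap's memory claims in perspective, the generation-over-generation jumps quoted above work out to the factors computed below. This is an illustrative calculation from the figures in this excerpt only; the Ascend 970's memory numbers aren't given here, so it is omitted.

```python
# Generation-over-generation memory scaling implied by the roadmap figures above.
# (Ascend 970 memory figures are not quoted in this excerpt, so it is left out.)

roadmap = {
    "Ascend 950PR (HiBL 1.0)": {"capacity_gb": 128, "bandwidth_tbs": 1.6},
    "Ascend 950DT (HiZQ 2.0)": {"capacity_gb": 144, "bandwidth_tbs": 4.0},
    "Ascend 960":              {"capacity_gb": 288, "bandwidth_tbs": 9.6},
}

names = list(roadmap)
for prev, curr in zip(names, names[1:]):
    cap_x = roadmap[curr]["capacity_gb"] / roadmap[prev]["capacity_gb"]
    bw_x = roadmap[curr]["bandwidth_tbs"] / roadmap[prev]["bandwidth_tbs"]
    print(f"{prev} -> {curr}: capacity x{cap_x:.2f}, bandwidth x{bw_x:.2f}")
# 950PR -> 950DT: capacity x1.12, bandwidth x2.50
# 950DT -> 960:   capacity x2.00, bandwidth x2.40
```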
[23]
Alibaba's AI Chip Reportedly Competes With NVIDIA's H20 & Is Already Being Used To Train AI Models; Competition in China is Growing For Team Green
Chinese Big Tech giants seem to be pushing towards adopting domestic computing solutions, as Alibaba's self-built Zhenwu chip is now claimed to feature H20 level performance. There has been a push towards pivoting away from the Western AI tech stack in China, part of which involves AI firms like Huawei, Alibaba, and Baidu developing their own custom solutions, specific to inference or training workloads. While the specific details about these chips aren't certain, a report by The Information claims that both Alibaba and Baidu have started to utilize in-house AI chips for model training. In particular, Alibaba's Zhenwu chip is said to feature performance similar to NVIDIA's H20 AI chip, which is a huge achievement for the Chinese tech giant. But Alibaba's AI chip, for instance, is now good enough to compete with Nvidia's H20 chip, the scaled-down chip designed for the Chinese market, according to three employees who have used Alibaba's Zhenwu chip. - The Information Relative to NVIDIA's solutions in general, Chinese firms are way behind, not just in terms of technological innovations, but also in production capacity. However, when you factor in China's push towards in-house solutions and how far the nation has come in a span of a few years, the Alibaba chip is indeed a massive feat for them. Not just Alibaba, but Baidu has also employed the Kunlun P800 chip for model training and inference workloads, which shows that NVIDIA's tech stack for Chinese customers is facing massive competition. For now, NVIDIA is facing troubles when it comes to accessing China's AI market. Beijing has placed the hurdles, which shows that the firm is in a tough spot right now. It is rumored that NVIDIA is pushing towards introducing a 'Blackwell-based' solution for the domestic markets, called the B40 AI chip, but for now, the plans aren't official, and it will likely require Jensen convincing the Trump administration to allow them to sell high-end solutions to China.
[24]
How Huawei plans to outperform global tech leaders with less powerful chips
BEIJING -- China's Huawei Technologies said Thursday that it would roll out the world's most powerful AI computing clusters over the next two years as it seeks to outperform global leaders despite relying on less powerful domestic semiconductors. China is racing to develop its own technology as America restricts what can be sold to China, including its most advanced chips. At the same time, the Chinese government has reportedly told companies to stop buying some American chips as it seeks to transform China into a global tech leader and one that is less reliant on imported components. Huawei, at the forefront of efforts to develop home-grown technology, said at an annual customer event in Shanghai that it would launch new "superpods" in late 2026 and late 2027. That's computer industry lingo for a group of interconnected computers that, in Huawei's case, combines the power of thousands of chips. That immense power is needed to run models in the burgeoning field of artificial intelligence, an area of hot competition between the U.S. and China. Huawei announced plans to release the Atlas 950 and 960 superpods over the next two years. Dozens of the "SuperPoDs," as Huawei brands them, could be connected to form what Huawei said would be the world's most powerful "SuperClusters." The 950 and 960 are the most powerful superpods in the world and would remain so for years to come, a company news release said, based on product road maps from others in the industry. The challenge for China is how to keep pace with American competitors such as Open AI and Google without access to the world's most powerful semiconductors, notably those from America's market-leading Nvidia. The answer has been to use many more chips and develop the architecture to make them work well together. "Our strategy is to create a new computing architecture, and develop computing SuperPoDs and SuperClusters, to sustainably meet long-term demand for computing power," Eric Xu, the current rotating chairman of Huawei, told the customer conference, according to a transcript provided by the company. Huawei, based in Shenzhen in southern China, also announced plans to launch new AI chips in its Ascend series over the next three years. The Atlas 950 and 960 superpods would be based on the Ascend 950 and 960 chips, due out in 2026 and 2027. A planned Ascend 970 chip could follow in 2028.
[25]
Key products in Huawei's AI chips and computing power roadmap
BEIJING (Reuters) - China's Huawei has ended years of secrecy to detail its product roadmap for chips and computing power systems, discussing for the first time how it intends to challenge global industry leader Nvidia. Here are some highlights from Huawei's presentation:

ASCEND ARTIFICIAL INTELLIGENCE CHIPS

The Ascend series is Huawei's main competitive offering to Nvidia's AI chips, and its current chip on the market is the Ascend 910C. Huawei is planning three new series over the next three years, namely the Ascend 950, 960 and 970, with the 950 to be launched in the first quarter of next year. There will be two types of Ascend 950 chips, the 950PR and 950DT, with the former suitable for the prefill stage of inference and for running recommendations, while the latter will be optimised for the decode stage of inference and model training. The Ascend 960 will have twice the computing power and memory capacity of the 950, and the company aims to go even higher with the 970.

Huawei, which is barred from working with the world's top chip foundry TSMC by U.S. export controls, did not say who will manufacture its chips. However, analysts say the company works with chip equipment suppliers and China's largest foundry, Semiconductor Manufacturing International Corp.

HIGH BANDWIDTH MEMORY

Huawei said it now has its own proprietary high-bandwidth memory - technology currently dominated by South Korea's SK Hynix and Samsung Electronics. The Ascend 950PR will integrate Huawei's HBM chip called HiBL 1.0. Huawei says its HBM chip is more cost-effective than industry-leading HBM3E and HBM4E, providing 128 gigabytes (GB) of memory and delivering 1.6 terabytes (TB) per second of memory bandwidth. The Ascend 950DT will feature a higher-performance HBM chip called HiZQ 2.0, providing 144 GB of memory and 4 TB/s of memory bandwidth.

SUPERNODE AND CLUSTERS

Huawei has been using cluster computing, which puts multiple computers or "supernodes" to work together, to improve the performance of its chips. Its current main product is the Atlas 900 A3 SuperPoD, a system which uses 384 of Huawei's latest 910C chips and which one industry expert says rivals some of Nvidia's most advanced offerings. In June, Huawei showed off the system and the cloud service that runs on it, CloudMatrix 384, at a key industry show.

Huawei plans to introduce the Atlas 950 SuperPoD in Q4 2026. The system will pack 8,192 Ascend 950DT chips and comprise 160 cabinets -- 128 compute cabinets and 32 communications cabinets -- deployable in a 1,000 square metre (10,764 square feet) space. Eric Xu, Huawei's rotating chairman, said the Atlas 950 SuperPoD will have 6.7 times more computing power and 15 times more memory capacity than the NVL144 system that Nvidia intends to launch in 2026, and would continue to beat a successor system Nvidia is planning for 2027. Huawei is also planning a second supernode product, the Atlas 960 SuperPoD, which it said would pack up to 15,488 Ascend 960 chips and comprise 220 cabinets across a 2,200 square metre space.

KUNPENG

Besides the Ascend series, Huawei has another CPU chip product line called Kunpeng that is used for general servers. Huawei first announced the Kunpeng 920 chipset in 2019. The company will roll out new versions, the Kunpeng 950 and 960, in 2026 and 2028, Xu said, adding that this would be accompanied by another new cluster computing product, the TaiShan 950 SuperPoD, focused on general-purpose computing.
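The split between the 950PR (prefill, recommendations) and the 950DT (decode, training) mirrors a common rule of thumb: prefill is compute-bound, while decode is limited by how quickly weights can be streamed from memory, which is why the decode-oriented part gets the higher-bandwidth HBM. The sketch below estimates that memory-bound ceiling using the bandwidth figures quoted above; the model size and precision are hypothetical assumptions, not anything Huawei has disclosed.

```python
# Back-of-envelope model: why decode throughput tracks memory bandwidth.
# The 1.6 TB/s and 4 TB/s figures are Huawei's stated 950PR/950DT HBM specs;
# the model size and BF16 precision below are hypothetical illustrations.

def decode_tokens_per_second(bandwidth_bytes_per_s: float,
                             model_bytes: float) -> float:
    """Upper bound on single-stream decode rate: each generated token
    requires streaming roughly all model weights from memory once."""
    return bandwidth_bytes_per_s / model_bytes

MODEL_BYTES = 70e9 * 2  # hypothetical 70B-parameter dense model in BF16
for name, bw in [("950PR-class, 1.6 TB/s", 1.6e12),
                 ("950DT-class, 4.0 TB/s", 4.0e12)]:
    rate = decode_tokens_per_second(bw, MODEL_BYTES)
    print(f"{name}: ~{rate:.0f} tokens/s (single stream, memory-bound ceiling)")
```

Under these assumptions the 4 TB/s part has roughly 2.5 times the single-stream decode ceiling of the 1.6 TB/s part, the same ratio as their bandwidths; real throughput also depends on batching, KV-cache traffic, and compute.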
[26]
Huawei Unveils AI Chip Pipeline as Tech Rivalry Heats Up
Huawei Technologies plans to roll out more artificial-intelligence chips in the next three years, giving a rare glimpse into its ambitions to compete with Nvidia in China. The first will be a new generation of its Ascend chips, seen as a potential challenger to Nvidia's offerings in the country. As the U.S. tech giant faces more difficulty in doing business in China, local companies are vying to step in to fill the void.

Huawei announced its chip pipeline on Thursday, which includes two variants of the new lineup, called Ascend 950, that handle different AI workloads. The chips will use high-bandwidth memory units that Huawei develops, and support a computing format that is increasingly popular among Chinese AI developers. "Computing power is, and will continue to be, key to AI. This is especially true in China," Huawei Deputy Chairman Eric Xu said at an event in Shanghai.

Huawei plans to release the first of the new chips in the first three months of next year, followed by another in the fourth quarter. It aims to roll out two upgraded lineups in 2027 and 2028. "We will follow a one-year release cycle and double compute with each release," Xu said. He said Huawei is using the chip-making technology that is "practically available" to China to address the country's surging demand for computing.

The Chinese company also launched technology that can bundle thousands of chips, an advance it said could help address a bottleneck in large-scale AI computing infrastructure. Washington has blacklisted Huawei since 2019, forcing the Chinese tech juggernaut to bet on its own alternatives. It has since become a key player in Beijing's national push for self-sufficiency, participating in projects to build a domestic semiconductor supply chain. That has pitted Huawei against Nvidia as proxies in the escalating technology battle between Beijing and Washington. Since 2022, the U.S. government has restricted China's access to advanced semiconductor technologies. That has caused headaches for AI developers in China, but also prompted them to find workarounds.

Huawei's current Ascend AI chips still lag behind Nvidia's best products, but the Chinese company has been investing in developing networking technology--an area it excelled in as a telecommunications equipment maker--to bundle more chips together to boost computing capabilities. Huawei's latest supernode products, Atlas 950 and Atlas 960, can link 8,192 and 15,488 Ascend chips, respectively, it said. The chip-bundling technology can be used to build big groups of computers, known as clusters, containing up to a million of Huawei's AI chips and working together as a single system, the company said. Building these types of clusters is key to training large-scale AI models. Nvidia's technology for connecting individual chips and facilitating data transfer has been a secret sauce that keeps global engineers--including those in China--addicted.

According to Xu, the new Huawei system built with the Atlas 950 supernode, which will be available in late 2026, will surpass a system that Nvidia plans to release in 2027. Nvidia didn't immediately respond to a request for comment.
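The article calls Nvidia's chip-to-chip networking its "secret sauce"; one rough way to see why is the gradient-synchronization step of data-parallel training, whose cost is set almost entirely by per-link bandwidth once clusters get large. The sketch below uses the standard ring all-reduce traffic formula; the gradient size and link bandwidths are hypothetical assumptions, while the chip counts echo the supernode sizes reported above.

```python
# Rough illustration of why interconnect bandwidth matters when bundling many
# chips: gradient synchronization cost in data-parallel training.
# Chip counts mirror figures in the articles above; the gradient size and
# per-link bandwidths are hypothetical assumptions.

def ring_allreduce_seconds(grad_bytes: float, n_chips: int,
                           link_bw_bytes_per_s: float) -> float:
    """Ideal ring all-reduce: each chip moves ~2*(N-1)/N of the gradient
    over its link, a volume that barely grows as the cluster scales."""
    traffic_per_chip = 2 * (n_chips - 1) / n_chips * grad_bytes
    return traffic_per_chip / link_bw_bytes_per_s

GRAD_BYTES = 70e9 * 2            # hypothetical 70B-parameter model, BF16 grads
for n in (384, 8_192, 15_488):   # supernode sizes mentioned in the articles
    for bw in (100e9, 400e9):    # hypothetical per-link bandwidths, bytes/s
        t = ring_allreduce_seconds(GRAD_BYTES, n, bw)
        print(f"{n:>6} chips, {bw / 1e9:.0f} GB/s links: ~{t:.2f} s per sync")
```

In this toy model the per-chip traffic approaches twice the gradient size regardless of cluster scale, so faster links (and lower latency, which the model ignores) are what keep thousands of bundled chips from idling during synchronization.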
Huawei announces new AI infrastructure and a multi-year roadmap for advanced AI chips, positioning itself as a domestic alternative to Nvidia in China's AI market. This move comes as China bans local tech companies from purchasing Nvidia hardware.
Chinese tech giant Huawei unveiled an aggressive multi-year roadmap for its Ascend AI chips, aiming to displace Nvidia's dominance, particularly in China. This move follows recent bans on Chinese companies buying Nvidia hardware, creating a crucial domestic market opportunity [1]. Source: Wccftech
Huawei's rotating chairman, Xu Zhijun, announced four new Ascend chip models through 2028: the 950PR and 950DT (2026), the 960 (2027), and the 970 (2028). A key innovation is Huawei's proprietary high-bandwidth memory (HBM): the 950PR will carry 128 GB of HBM at 1.6 TB/s of bandwidth, and the 950DT 144 GB at 4 TB/s, potentially disrupting the HBM market [2][4]. Source: TechRadar
To boost AI compute power, Huawei introduced "SuperPoD" technology capable of linking up to 15,488 AI chips. This includes the Atlas 950 SuperPoD (8,192 Ascend 950DT chips) and the Atlas 960 SuperPoD (up to 15,488 Ascend 960 chips), enhancing its AI infrastructure [1][4]. Source: AP NEWS
Despite its ambitious plans, U.S. sanctions restrict Huawei's access to advanced chip manufacturing, and its past accelerators have reportedly underperformed [2][3]. However, successful development could see Huawei's AI accelerators become globally available via Chinese cloud providers, intensifying competition. Success hinges on matching Nvidia in training efficiency and model throughput [3].