3 Sources
[1]
Nvidia GB200 production ramps up after suppliers tackle AI server overheating and liquid cooling leaks
There are also reports of software bugs and inter-chip connectivity problems.

Nvidia suppliers building its Blackwell AI server racks have reportedly solved a series of technical hurdles, allowing them to accelerate production of the GB200 AI rack. According to the Financial Times, suppliers including Foxconn, Inventec, Dell, and Wistron have made "a series of breakthroughs" that allowed shipments to kick off. Shipments of the GB200 had been delayed by technical issues that emerged at the end of last year, disrupting production.

According to the report, Nvidia's Taiwanese partners announced at Computex 2025 that shipments of the GB200 racks commenced at the end of Q1 2025, adding that production capacity was now being rapidly scaled up. An engineer at one of Nvidia's unnamed partner manufacturers reportedly told FT that internal testing revealed connectivity problems, which were resolved in collaboration with Nvidia two or three months ago. FT reports that supply chain partners spent "several months" tackling other challenges with the GB200 racks, including overheating and leaks in the liquid cooling systems. Other issues cited by engineers reportedly include "software bugs and inter-chip connectivity problems stemming from the complexity of synchronising such a large number of processors." One analyst told FT that "Nvidia had not allowed the supply chain sufficient time to be fully ready," and that inventory risk for the GB200 would ease in the second half of the year.

As Nvidia prepares for the rollout of the GB300 (expected in Q3), the report says the company has compromised on some facets of the rack's design. FT claims Nvidia has ditched the Cordelia chip board layout in favor of the older Bianca design used in the GB200. Two suppliers cited installation issues as the reason for the switch; however, the move precludes the replacement of individual GPUs in the system. This matches a report from earlier in May claiming Nvidia was delaying the introduction of the SOCAMM memory tech originally planned for the Blackwell Ultra GB300, with reports at the time citing the Cordelia-to-Bianca switch as the reason for the postponement. According to that earlier report and FT's latest story, Nvidia still plans to implement Cordelia in its next-generation Rubin chips.
[2]
Nvidia's suppliers resolve AI 'rack' issues in boost to sales
Nvidia's suppliers are accelerating production of its flagship AI data centre "racks" following a resolution of technical issues that had delayed shipments, as the US chipmaker intensifies its global sales push. The semiconductor giant's partners -- including Foxconn, Inventec, Dell and Wistron -- have made a series of breakthroughs that have allowed them to start shipments of Nvidia's highly anticipated "Blackwell" AI servers, according to several people familiar with developments at the groups.

The recent fixes are a boost to chief executive Jensen Huang, who unveiled Blackwell last year promising it would massively increase the computing power needed to train and use large language models. Technical problems that emerged at the end of last year had disrupted production, threatening the US company's ambitious annual sales targets.

The GB200 AI rack includes 36 "Grace" central processing units and 72 Blackwell graphics processing units, connected through Nvidia's NVLink communication system. Speaking at the Computex conference in Taipei last week, Nvidia's Taiwanese partners Foxconn, Inventec and Wistron said shipments of the GB200 racks began at the end of the first quarter. Production capacity is now being rapidly scaled up, they added.

"Our internal tests showed connectivity problems . . . the supply chain collaborated with Nvidia to solve the issues, which happened two to three months ago," said an engineer at one of Nvidia's partner manufacturers.

The development comes ahead of Nvidia's quarterly earnings on Wednesday, where investors will be watching for signs that Blackwell shipments are proceeding at pace following the initial technical problems. Saudi Arabia and the United Arab Emirates recently announced plans to acquire thousands of Blackwell chips during President Donald Trump's tour of the Gulf, as Nvidia looks beyond the Big Tech "hyperscaler" companies to nation states to diversify its customer base.

Nvidia's supply chain partners have spent months tackling several challenges with the GB200 racks, including overheating caused by its 72 high-performance GPUs, and leaks in the liquid cooling systems. Engineers also cited software bugs and inter-chip connectivity problems stemming from the complexity of synchronising such a large number of processors.

"This technology is really complicated. No company has tried to make this many AI processors work simultaneously in a server before, and in such a short timeframe," said Chu Wei-Chia, a Taipei-based analyst at consultancy SemiAnalysis. "Nvidia had not allowed the supply chain sufficient time to be fully ready, hence the delays. The inventory risk around GB200 will ease off as manufacturers increase rack output in the second half of the year," Chu added.

To ensure a smoother deployment for major customers such as Microsoft and Meta, suppliers have beefed up testing protocols before shipping, running more checks to ensure the racks function for AI workloads.

Nvidia is also preparing for the rollout of its next-generation GB300 AI rack, which features enhanced memory capabilities and is designed to handle more complex reasoning models such as OpenAI's o1 and DeepSeek R1. Huang said last week that the GB300 will launch in the third quarter. In a bid to accelerate deployment, Nvidia has compromised on aspects of the GB300's design. It had initially planned to introduce a new chip board layout, known as "Cordelia," allowing for the replacement of individual GPUs.
But in April, the company told partners it would revert to the earlier "Bianca" design -- used in the current GB200 rack -- due to installation issues, according to two suppliers. The decision could help Nvidia to achieve its sales targets. In February the company said it was aiming for around $43bn in sales for the quarter to the end of April, a record figure which would be up around 65 per cent year on year.

Analysts have said the Cordelia board would have offered the potential for better margins and made it easier for customers to do maintenance. Nvidia has not abandoned Cordelia and has informed suppliers it intends to implement the redesign within its next-generation AI chips, according to three people familiar with the matter.

Separately, Nvidia is working to offset revenue losses in China, following a US government ban on exports of its H20 chip -- a watered-down version of its AI processors. The company said it expects to incur $5.5bn in charges related to the ban, due to inventory write-offs and purchase commitments.

Last week Bank of America analyst Vivek Arya wrote that the China sales hit would drag down Nvidia's gross margins for the quarter from the 71 per cent previously indicated by the company to around 58 per cent. But he wrote that a faster than expected rollout of Blackwell, due to the company reverting to Bianca boards, could help offset the China revenue hit in the second half of the year.
[3]
Nvidia suppliers ramp up AI server production after technical snag - FT By Investing.com
Investing.com -- NVIDIA (NASDAQ:NVDA) suppliers, including industry leaders like Foxconn (SS:601138), Inventec, Dell (NYSE:DELL), and Wistron, have overcome technical challenges that previously delayed shipments of Nvidia's flagship AI data center racks, the Financial Times reported on Tuesday. This development marks a significant stride in the company's efforts to enhance its global sales initiatives.

The technical issues, which arose toward the end of last year, had impeded the production of the highly anticipated "Blackwell" AI servers. These servers are expected to substantially boost computing power for training and utilizing large language models, a promise made by Nvidia's CEO, Jensen Huang, upon the product's announcement last year.

The resolution of these technical difficulties is a welcome relief for Nvidia, as it aims to meet its ambitious annual sales targets. The suppliers' breakthroughs have enabled the commencement of shipments, which is likely to contribute positively to the company's performance.

Nvidia's Blackwell AI servers are part of a broader strategy to solidify the chipmaker's position in the rapidly growing field of artificial intelligence. The successful deployment of these servers is crucial for the company to maintain its competitive edge and fulfill the expectations set by its leadership. This latest advancement underscores Nvidia's resilience in addressing production challenges and reaffirms its commitment to delivering cutting-edge technology solutions to its customers. As shipments begin, the market is watching closely to see how this will translate into financial performance for the semiconductor giant in the coming quarters.

All eyes are on NVIDIA ahead of earnings on Wednesday after the close. Shares of the AI leader closed up 3.2% on Tuesday and are up 25% over the last month, as tariff and DeepSeek fears have subsided.
Nvidia's suppliers, including Foxconn, Inventec, Dell, and Wistron, have successfully resolved a series of technical issues that had previously delayed the production and shipment of the company's flagship AI data center racks [1]. This breakthrough comes as a significant boost to Nvidia's global sales push and ambitious annual targets.
Source: Tom's Hardware
The GB200 AI rack, part of Nvidia's highly anticipated "Blackwell" AI server line, includes 36 "Grace" central processing units and 72 Blackwell graphics processing units, connected through Nvidia's NVLink communication system [2]. Shipments of the GB200 racks began at the end of the first quarter of 2025, with production capacity now being rapidly scaled up.
Nvidia's supply chain partners spent several months tackling various challenges with the GB200 racks, including:
- overheating caused by the rack's 72 high-performance GPUs
- leaks in the liquid cooling systems
- software bugs
- inter-chip connectivity problems stemming from the complexity of synchronising such a large number of processors [2]
An engineer at one of Nvidia's partner manufacturers reported that internal tests revealed connectivity problems, which were resolved through collaboration with Nvidia two to three months ago [2].
The resolution of these technical issues is crucial for Nvidia as it prepares for its quarterly earnings report. Investors will be watching for signs that Blackwell shipments are proceeding at pace following the initial setbacks [2]. The development is particularly significant as Nvidia looks to diversify its customer base beyond Big Tech "hyperscaler" companies to include nation-states, with Saudi Arabia and the United Arab Emirates recently announcing plans to acquire thousands of Blackwell chips [2].
Source: Financial Times News
As Nvidia prepares for the rollout of its next-generation GB300 AI rack in the third quarter, the company has made some design compromises to accelerate deployment. It had initially planned to introduce a new chip board layout known as "Cordelia," but has reverted to the earlier "Bianca" design used in the current GB200 rack due to installation issues [1][2].
While this decision may help Nvidia achieve its sales targets, it forgoes the better margins and easier customer maintenance that the Cordelia board would have offered. However, Nvidia has not abandoned the Cordelia design entirely and plans to implement it in its next-generation AI chips [2].
The successful resolution of technical issues and the ramp-up in production of Blackwell AI servers are expected to have positive implications for Nvidia's market performance. The company's shares closed up 3.2% on Tuesday and have risen 25% over the last month [3]. As Nvidia continues to solidify its position in the rapidly growing field of artificial intelligence, the market will be closely watching how these developments translate into financial performance in the coming quarters.