3 Sources
[1]
Large-scale shipments of Nvidia GB300 servers tipped to start in September -- GB200 demand remains 'robust' despite widespread coolant leak reports
While Dell and probably some other Nvidia partners have initiated early production of their GB300-based servers, large-scale shipments of such machines are expected to begin only in September 2025, reports DigiTimes. The rollout is expected to proceed more smoothly than prior generations due to strategic design reuse and improved coordination across the supply chain. Yet liquid cooling still represents a challenge for original design manufacturers (ODMs).
One of the major factors enabling the faster transition is Nvidia's decision to retain the motherboard design used in the current GB200 platform, according to DigiTimes. But that's not all: Nvidia is also giving its partners considerably more freedom than before. For the GB300, Nvidia is shifting to a more modular approach, according to SemiAnalysis. Instead of delivering a fully assembled motherboard, Nvidia is said to provide the B300 GPU on an SXM Puck module, the Grace CPU in a separate BGA package, and the hardware management controller (HMC) from Axiado. Customers now source the remaining motherboard components themselves, and the CPU memory uses standard SOCAMM modules that are obtainable from various vendors. Nvidia continues to provide the switch tray and copper backplane as before. The reuse eliminates the need for a complete redesign, streamlining production processes and reducing risk.
With GB200, Nvidia provides the complete Bianca motherboard, which integrates the B200 GPU, Grace CPU, 512GB of LPDDR5X memory, and power delivery components on a single printed circuit board (PCB). Nvidia also supplies the switch tray and copper backplane for this system.
With the GB300 now in validation and early production, ODMs report no significant hurdles, DigiTimes claims. Feedback from partners indicates that component qualification is progressing as planned, and Nvidia is on track to increase output steadily throughout the third quarter. By the fourth quarter of 2025, shipment volumes are expected to ramp up significantly, according to DigiTimes. Wistron, a key supplier of compute boards, has indicated that revenue this quarter will remain flat due to the generational overlap between GB200 and GB300, the report says.
The good news is that the transition appears to be proceeding more smoothly than the move to the current platform, which faced multiple delays due to Nvidia's silicon problems, dense server layouts, and cooling requirements. By now, server ODMs seem to have learned how to manage the challenges that apply to them.
Although the GB200 is shipping in high volumes to data centers, it has faced persistent problems with its liquid cooling systems, according to DigiTimes. The primary failures occur at quick-connect fittings, which have shown a tendency to leak despite undergoing factory stress tests. Data center operators have responded with measures such as localized shutdowns and extensive leak testing, essentially prioritizing deployment speed and performance over hardware reliability.
Beyond GB300, Nvidia is preparing its next-generation AI server platform, codenamed Vera Rubin. This platform will roll out in two phases. The first phase will replace Grace CPUs with Vera CPUs and Blackwell GPUs with Rubin GPUs, but will retain the current Oberon rack, which will carry the NVL144 name (despite using 72 dual-compute-chiplet GPU packages). The second phase will introduce an all-new Kyber rack with Vera CPUs and Rubin Ultra GPUs featuring four compute chiplets.
As Rubin GPUs are expected to be more power hungry than Blackwell GPUs, the next-generation platform will further increase reliance on liquid cooling. While necessary for performance, this cooling method remains challenging to implement reliably, as the DigiTimes report makes clear. In GB200 systems, variability in plumbing setups and water pressure across deployments has made it difficult to eliminate leaks entirely, leading to significant post-deployment servicing requirements and labor costs.
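To make the supply split described in this report more concrete, here is a minimal, purely illustrative Python sketch of which parts Nvidia reportedly ships versus what ODMs source for GB200 and GB300, plus the package-to-chiplet arithmetic behind the NVL144 rack name. The dictionary layout and variable names are assumptions made for illustration only, not an Nvidia bill of materials.

```python
# Illustrative sketch only (assumed structure, not an Nvidia specification):
# it encodes the supply split described above -- GB200 ships as a complete
# Bianca board, while GB300 moves to modular parts that ODMs assemble.

NVIDIA_SUPPLIED = {
    "GB200": [
        "complete Bianca motherboard (B200 GPU, Grace CPU, 512GB LPDDR5X, power delivery)",
        "switch tray",
        "copper backplane",
    ],
    "GB300": [
        "B300 GPU on SXM Puck module",
        "Grace CPU in separate BGA package",
        "Axiado hardware management controller (HMC)",
        "switch tray",
        "copper backplane",
    ],
}

ODM_SOURCED = {
    "GB200": [],  # motherboard arrives fully assembled from Nvidia
    "GB300": [
        "remaining motherboard components",
        "SOCAMM CPU memory modules (available from multiple vendors)",
    ],
}

# Rack-naming arithmetic for the first Vera Rubin phase: the Oberon rack keeps
# 72 GPU packages, but each Rubin package carries two compute chiplets,
# which is where the NVL144 name comes from.
packages_per_rack = 72
chiplets_per_package = 2
print("Rubin NVL144 compute chiplets per rack:",
      packages_per_rack * chiplets_per_package)  # 144

for platform in ("GB200", "GB300"):
    print(f"\n{platform} -- Nvidia supplies:")
    for item in NVIDIA_SUPPLIED[platform]:
        print("  -", item)
    print(f"{platform} -- ODM sources:")
    for item in ODM_SOURCED[platform] or ["nothing (board arrives assembled)"]:
        print("  -", item)
```

Running the sketch simply prints the two component lists side by side, which highlights why GB300 shifts integration work (and risk) from Nvidia to the ODMs while keeping the switch tray and backplane unchanged.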
[2]
NVIDIA's next-gen GB300 AI servers now in production, will begin shipping in September
TL;DR: NVIDIA's next-gen GB300 "Blackwell Ultra" AI servers have entered production, leveraging the existing GB200 motherboard design to streamline manufacturing. Shipments are set to begin in the second half of 2025, promising smooth supply chain operations and steady volume production through Q4 2025.
NVIDIA's next-gen GB300 AI servers have entered production, with the new GB300 "Blackwell Ultra" AI servers to begin shipping in September... on time, and ready to rock and roll.
In a new report from DigiTimes picked up by insider @Jukanrosleve on X, we're hearing that NVIDIA's new GB300 "Blackwell Ultra" AI servers have entered production, according to supply chain sources. Industry sources add that they expect a smooth production trajectory into the second half of 2025, which they attribute to a strategic shift that is making life easier for AI server manufacturers.
NVIDIA decided to reuse the motherboard design from its current GB200 platform -- known as the Bianca board -- for its new GB300 platform. This move has significantly shortened the learning curve for suppliers, many of which were struggling to keep up with NVIDIA's incredibly fast product update cycle in the past.
One ODM representative noted: "there are no major issues with the GB300 at this stage. Shipments should proceed smoothly in the second half". One industry executive said: "NVIDIA's upgrade pace in AI servers is like a military blitz. Competitors can't even see their taillights, but the supply chain is feeling the strain".
At breakneck speed, NVIDIA has shifted from Hopper to the current Blackwell architecture, will soon move to the Blackwell Ultra architecture, and in 2026 we'll see the introduction of the next-generation Rubin architecture, which will also adopt next-gen HBM4 memory. Each generation has its own tweaks and changes, which has left AI server manufacturers struggling to keep up. GB200 dramatically compressed the server layout and saw multiple delays getting into mass production, but with GB300 reusing the existing infrastructure, suppliers won't face the same headaches.
ODMs are now actively testing NVIDIA's new GB300 "Blackwell Ultra", and early production results are "promising". The transition is expected to be seamless, with steady shipments projected through Q3 2025 and volume production in Q4 2025. Compute board supplier Wistron has confirmed that revenue this quarter won't grow sequentially due to the generational overlap, which DigiTimes reports is an indirect confirmation that GB300 production is underway.
[3]
NVIDIA's High-End "Blackwell Ultra" GB300 AI Servers to Begin Shipping by September as Design Changes Ease Pressure on the Supply Chain
NVIDIA's GB300 AI servers are finally expected to see volume shipments commence by September, as the supply chain has adjusted to Team Green's design changes. NVIDIA's "Blackwell Ultra" lineup of AI servers did see a hiccup when it was released in H1 2025, as Team Green introduced several design changes that the supply chain found difficult to adjust to. This resulted in low-volume shipments, and only NVIDIA's exclusive partners, such as Dell and Microsoft, managed to get the higher-end NVL72 AI clusters. Now, a report by DigiTimes reveals that NVIDIA plans to initiate volume production of its GB300 AI clusters by September, giving a larger segment of the market access to high-end clusters from the company.
For those unaware of what caused the production issues: with the GB300, NVIDIA originally employed a Cordelia board design, which introduced modular design features and the newer SOCAMM memory, neither of which had been adopted before. However, due to NVIDIA's short product update cycle and the issues with SOCAMM memory, the firm decided to switch to the Bianca architecture, which is also used in the GB200. While the markets saw this as a sign of vulnerability, it has turned out to be a significant step. It is claimed that the supply chain no longer feels pressured in ramping up GB300 supply to customers, since the adjustments are relatively minor compared to the GB200, which shares the same fundamentals.
NVIDIA's suppliers are currently testing low-volume GB300 shipments with the Bianca board, and volume is expected to catch up in the upcoming quarters, which means Q4 will be when "Blackwell Ultra" enters and potentially disrupts the AI industry. GB300 AI servers have already started to attract massive orders, particularly from NVIDIA's "Sovereign AI" initiative, so it is safe to say that the demand is there. Moreover, given how quickly NVIDIA is advancing its architectures, no one can compete with the company right now. With Rubin being introduced either by year-end or at the start of 2026, Team Green is currently operating on a six-to-eight-month product cycle, one of the fastest ever.
NVIDIA's next-generation GB300 'Blackwell Ultra' AI servers are now in production, with large-scale shipments expected to begin in September 2025. The new servers feature design improvements and reuse elements from the current GB200 platform, easing supply chain pressures.
NVIDIA, the leading AI chip manufacturer, has begun production of its next-generation GB300 'Blackwell Ultra' AI servers. According to industry sources, large-scale shipments are expected to commence in September 2025, marking a significant milestone in the company's AI infrastructure development [1][2].
A key factor in the smooth rollout of the GB300 servers is NVIDIA's decision to retain the motherboard design from the current GB200 platform, known as the Bianca board [1]. This strategic move has significantly reduced the learning curve for suppliers, many of whom had previously struggled to keep pace with NVIDIA's rapid product update cycle [2].
The GB300 platform introduces a more modular approach, with NVIDIA providing the B300 GPU on an SXM Puck module, the Grace CPU in a separate BGA package, and the hardware management controller from Axiado. This allows customers to source remaining components independently, streamlining the production process [1].
Early production of GB300-based servers has already begun, with companies like Dell leading the charge. The validation phase is progressing well, with no significant hurdles reported by Original Design Manufacturers (ODMs) [1]. Industry insiders expect a steady increase in output throughout the third quarter of 2025, with shipment volumes ramping up significantly in the fourth quarter [1][2].
Despite the optimistic outlook for GB300, the current GB200 platform has faced persistent issues with its liquid cooling systems, particularly in quick-connect fittings [1]. As AI servers continue to demand more power, reliable liquid cooling implementation remains a challenge for the industry.
Looking ahead, NVIDIA is already preparing its next-generation 'Vera Rubin' platform, which will feature even more powerful GPUs and increased reliance on liquid cooling [1]. This rapid pace of innovation underscores NVIDIA's dominance in the AI server market, with one industry executive noting, "NVIDIA's upgrade pace in AI servers is like a military blitz. Competitors can't even see their taillights" [2].
The transition to GB300 is expected to be smoother than previous generations, thanks to improved coordination across the supply chain [1][3]. Compute board supplier Wistron has indicated that revenue this quarter will remain flat due to the generational overlap, indirectly confirming that GB300 production is underway [2].
NVIDIA's "Sovereign AI" initiative has already generated significant orders for GB300 AI servers, indicating strong market demand [3]. With the company operating on an unprecedented six to eight-month product cycle, NVIDIA continues to solidify its position as the leader in AI infrastructure [3].
As the AI industry eagerly anticipates the widespread availability of GB300 servers, NVIDIA's ability to navigate supply chain challenges and maintain its rapid innovation pace will be crucial in shaping the future of AI computing infrastructure.