2 Sources
[1]
NVIDIA, Partners Drive Next-Gen Efficient Gigawatt AI Factories in Buildup for Vera Rubin
More than 50 NVIDIA MGX partners are gearing up for NVIDIA Vera Rubin NVL144; 20-plus companies will join NVIDIA's growing ecosystem of partners supporting 800 VDC data centers to enable future gigawatt AI factories.

At the OCP Global Summit, NVIDIA is offering a glimpse into the future of gigawatt AI factories. NVIDIA will unveil specs of the NVIDIA Vera Rubin NVL144 MGX-generation open-architecture rack servers, which more than 50 MGX partners are gearing up for, along with ecosystem support for NVIDIA Kyber, which connects 576 Rubin Ultra GPUs and is built to support increasing inference demands.

Some 20-plus industry partners are showcasing new silicon, components, power systems and support for the next-generation 800-volt direct current (VDC) data centers of the gigawatt era that will support the NVIDIA Kyber rack architecture. Foxconn provided details on Kaohsiung-1, its 40-megawatt Taiwan data center being built for 800 VDC. CoreWeave, Lambda, Nebius, Oracle Cloud Infrastructure and Together AI are among the other industry pioneers designing for 800-volt data centers. In addition, Vertiv unveiled its space-, cost- and energy-efficient 800 VDC MGX reference architecture, a complete power and cooling infrastructure architecture. HPE is announcing product support for NVIDIA Kyber as well as NVIDIA Spectrum-XGS Ethernet scale-across technology, part of the Spectrum-X Ethernet platform.

Moving from traditional 415 or 480 VAC three-phase systems to 800 VDC infrastructure offers increased scalability, improved energy efficiency, reduced materials usage and higher performance capacity in data centers. The electric vehicle and solar industries have already adopted 800 VDC infrastructure for similar benefits.

The Open Compute Project, founded by Meta, is an industry consortium of hundreds of computing and networking providers focused on redesigning hardware technology to efficiently support the growing demands on compute infrastructure.

The Vera Rubin NVL144 MGX compute tray offers an energy-efficient, 100% liquid-cooled, modular design. Its central printed-circuit-board midplane replaces traditional cable-based connections for faster assembly and serviceability, with modular expansion bays for NVIDIA ConnectX-9 800GB/s networking and NVIDIA Rubin CPX for massive-context inference.

The NVIDIA Vera Rubin NVL144 offers a major leap in accelerated computing architecture and AI performance, built for advanced reasoning engines and the demands of AI agents. Its fundamental design lives in the MGX rack architecture and will be supported by 50-plus MGX system and component partners. NVIDIA plans to contribute the upgraded rack and compute-tray innovations to the OCP consortium as an open standard. These standards for compute trays and racks let partners mix and match components in modular fashion and scale faster with the architecture.

The Vera Rubin NVL144 rack design features energy-efficient 45°C liquid cooling, a new liquid-cooled busbar for higher performance and 20x more energy storage to keep power steady. The MGX upgrades to the compute tray and rack architecture boost AI factory performance while simplifying assembly, enabling a rapid ramp-up to gigawatt-scale AI infrastructure. NVIDIA is a leading contributor to OCP standards across multiple hardware generations, including key portions of the NVIDIA GB200 NVL72 system electro-mechanical design.
The same MGX rack footprint supports GB300 NVL72 and will support Vera Rubin NVL144, Vera Rubin NVL144 CPX and Vera Rubin CPX for higher performance and fast deployments.

The OCP ecosystem is also preparing for NVIDIA Kyber, featuring innovations in 800 VDC power delivery, liquid cooling and mechanical design. These innovations will support the move to the NVIDIA Kyber rack server generation, the successor to NVIDIA Oberon, which will house a high-density platform of 576 NVIDIA Rubin Ultra GPUs by 2027.

The most effective way to counter the challenges of high-power distribution is to increase the voltage. Transitioning from a traditional 415 or 480 VAC three-phase system to an 800 VDC architecture offers various benefits, and the transition underway enables rack server partners to move from 54 VDC in-rack components to 800 VDC. An ecosystem of direct-current infrastructure providers, power-system and cooling partners, and silicon makers, all aligned on open standards for the MGX rack server reference architecture, attended the event.

NVIDIA Kyber is engineered to boost rack GPU density, scale up network size and maximize performance for large-scale AI infrastructure. By rotating compute blades vertically, like books on a shelf, Kyber fits up to 18 compute blades per chassis, while purpose-built NVIDIA NVLink switch blades are integrated at the back via a cable-free midplane for seamless scale-up networking. With 800 VDC, over 150% more power can be transmitted through the same copper, eliminating the need for 200-kg copper busbars to feed a single rack. Kyber will become a foundational element of hyperscale AI data centers, enabling superior performance, efficiency and reliability for state-of-the-art generative AI workloads in the coming years. NVIDIA Kyber racks offer a way for customers to reduce their copper use by tons, leading to millions of dollars in cost savings.

In addition to hardware, NVIDIA NVLink Fusion is gaining momentum, enabling companies to seamlessly integrate their semi-custom silicon into a highly optimized and widely deployed data center architecture, reducing complexity and accelerating time to market. Intel and Samsung Foundry are joining the NVLink Fusion ecosystem, which includes custom silicon designers, CPU and IP partners, so that AI factories can scale up quickly to handle demanding workloads for model training and agentic AI inference. More than 20 NVIDIA partners are helping deliver rack servers with open standards, enabling the future gigawatt AI factories.
[2]
NVIDIA Unveils New Partners & Plans across AI Networking, Compute, OCP | AIM
The company confirmed new NVLink Fusion partnerships with Intel and Samsung Foundry, as well as the expansion of its partnership with Fujitsu. NVIDIA detailed new developments in AI infrastructure at the Open Compute Project (OCP), revealing advances in networking, compute platforms and power systems. The company also revealed new benchmarks for its Blackwell GPUs and plans to introduce 800-volt direct current (DC) power designs for future data centres.

Speaking at a press briefing ahead of the OCP Summit, NVIDIA executives said the company aims to support the rapid growth of AI factories by coordinating "from chip to grid". Joe DeLaere, data centre product marketing manager at NVIDIA, said the surge in AI demand requires integrated solutions in networking, compute, power and cooling, and that NVIDIA's contributions will remain open to the OCP community.

Meta will integrate NVIDIA's Spectrum-X Ethernet platforms into its AI infrastructure, while Oracle Cloud Infrastructure (OCI) will adopt the same technology for large-scale AI training clusters. NVIDIA said Spectrum-X is explicitly designed for AI workloads, claiming it achieves "95% throughput with zero latency degradation".

On performance, NVIDIA highlighted new open-source benchmarks showing a 15-fold gain in inference throughput for its Blackwell GB200 GPUs compared with the previous Hopper generation. "A $5 million investment in Blackwell can generate $75 million in token revenue," the company said, linking performance efficiency directly to AI factory returns. NVIDIA also confirmed that the forthcoming Rubin and Rubin CPX systems will build on the MGX rack platform and are expected to launch in the second half of 2026.

A significant focus was the industry move towards 800V DC power delivery, which NVIDIA presented as a way to cut energy losses and support higher rack densities. The company is working with infrastructure providers, including Schneider Electric and Siemens, to develop reference architectures.

When asked by AIM how OCP contributions and Spectrum-X adoption by Meta and Oracle may affect smaller enterprises, NVIDIA said the technology is designed for all scales. "Spectrum-X becomes the infrastructure for AI; it serves enterprise, cloud and the world's largest AI supercomputers," said Gilad Shainer, SVP of marketing at NVIDIA.

The company confirmed new NVLink Fusion partnerships with Intel, Samsung Foundry and Fujitsu to expand custom silicon integration within MGX-compatible racks. NVIDIA will also publish a technical white paper on 800V DC design and present full architectural details during the OCP Summit.
NVIDIA showcases advancements in AI computing, networking, and power systems at the Open Compute Project Global Summit, introducing new partnerships and technologies for future gigawatt AI factories.
At the OCP Global Summit, NVIDIA unveiled its ambitious plans for the next generation of AI infrastructure, showcasing advancements in computing, networking, and power systems designed to support the growing demands of AI factories [1][2]. The company's strategy focuses on coordinating "from chip to grid" to enable the rapid growth of AI capabilities.

NVIDIA introduced the specifications for its Vera Rubin NVL144 MGX-generation open architecture rack servers. This new system offers [1]:

- An energy-efficient, 100% liquid-cooled, modular compute tray design
- A central printed-circuit-board midplane that replaces traditional cable-based connections for faster assembly and serviceability
- Modular expansion bays for NVIDIA ConnectX-9 800GB/s networking and NVIDIA Rubin CPX for massive-context inference
- Energy-efficient 45°C liquid cooling, a new liquid-cooled busbar and 20x more energy storage to keep power steady
The Vera Rubin NVL144 is built to meet the demands of advanced reasoning engines and AI agents, with over 50 MGX partners gearing up to support this new architecture.
NVIDIA announced significant partnerships and ecosystem growth [2]:

- Meta will integrate NVIDIA's Spectrum-X Ethernet platforms into its AI infrastructure, and Oracle Cloud Infrastructure will adopt the same technology for large-scale AI training clusters
- Intel and Samsung Foundry are joining the NVLink Fusion ecosystem, and NVIDIA is expanding its partnership with Fujitsu to broaden custom silicon integration within MGX-compatible racks
A major focus of NVIDIA's presentation was the industry move towards 800-volt direct current (VDC) power delivery for data centers. This transition offers several benefits [1]:

- Increased scalability and improved energy efficiency
- Reduced materials usage, with customers able to cut copper use by tons
- Higher capacity for performance, with over 150% more power transmitted through the same copper at 800 VDC
NVIDIA is collaborating with infrastructure providers like Schneider Electric and Siemens to develop reference architectures for 800V DC power systems.
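To see why raising the distribution voltage shrinks the copper requirement, a back-of-the-envelope sketch helps. This is an illustration, not NVIDIA's published math: the 1 MW rack load and the ideal power factor are assumptions chosen for round numbers.

```python
# Back-of-the-envelope: why 800 VDC needs less copper than lower-voltage
# distribution. Assumes a hypothetical 1 MW rack and ideal conditions
# (unity power factor, no derating); figures are illustrative only.

RACK_POWER_W = 1_000_000  # assumed 1 MW rack load

def dc_current(power_w: float, volts: float) -> float:
    """Current a DC feed must carry: I = P / V."""
    return power_w / volts

def three_phase_current(power_w: float, volts_ll: float, pf: float = 1.0) -> float:
    """Per-phase current for three-phase AC: I = P / (sqrt(3) * V_LL * pf)."""
    return power_w / (3 ** 0.5 * volts_ll * pf)

i_54vdc = dc_current(RACK_POWER_W, 54)             # legacy in-rack DC rail
i_415vac = three_phase_current(RACK_POWER_W, 415)  # traditional facility AC
i_800vdc = dc_current(RACK_POWER_W, 800)           # next-gen DC distribution

print(f"54 VDC:  {i_54vdc:,.0f} A")             # ~18,519 A -- impractically thick busbars
print(f"415 VAC: {i_415vac:,.0f} A per phase")  # ~1,391 A
print(f"800 VDC: {i_800vdc:,.0f} A")            # ~1,250 A

# Conductor resistive loss scales with I^2 * R, so halving the current cuts
# loss to a quarter for the same copper; equivalently, at a fixed current
# rating, deliverable power grows linearly with voltage (P = V * I).
print(f"800 VDC carries 1 MW with ~{i_54vdc / i_800vdc:.0f}x less current than 54 VDC")
```

The same P = V·I scaling is what lets a given conductor cross-section move substantially more power at 800 VDC than at lower voltages, which is the basis for the copper and cost savings NVIDIA cites.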
NVIDIA highlighted impressive performance gains with its new technologies [2]:

- New open-source benchmarks show a 15-fold gain in inference throughput for Blackwell GB200 GPUs compared with the previous Hopper generation
- The company claims "a $5 million investment in Blackwell can generate $75 million in token revenue"
- Spectrum-X Ethernet, per NVIDIA, achieves "95% throughput with zero latency degradation" for AI workloads
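The revenue claim is straightforward arithmetic on the throughput multiplier. A minimal sketch of the implied math, where the dollar figures are NVIDIA's marketing numbers and the baseline is only what those numbers imply:

```python
# Sketch of the arithmetic behind NVIDIA's "$5M in, $75M out" claim:
# token revenue scales with inference throughput, so a 15x throughput
# gain implies 15x the token revenue for the same capital outlay.

CAPEX_USD = 5_000_000   # NVIDIA's quoted Blackwell investment
THROUGHPUT_GAIN = 15    # GB200 vs. Hopper inference throughput (per NVIDIA)

# Implied baseline: for $75M = 15 x $5M to hold, an equivalent Hopper
# deployment would earn roughly its capex back in token revenue.
hopper_revenue = CAPEX_USD
blackwell_revenue = hopper_revenue * THROUGHPUT_GAIN

print(f"Implied Blackwell token revenue: ${blackwell_revenue:,.0f}")  # $75,000,000
```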
The company also announced that the forthcoming Rubin and Rubin CPX systems, built on the MGX rack platform, are expected to launch in the second half of 2026.
NVIDIA reaffirmed its commitment to the Open Compute Project (OCP) by contributing its upgraded rack and compute tray innovations as open standards. This move allows partners to mix and match components in a modular fashion, enabling faster scaling with the architecture [1].

As AI continues to evolve rapidly, NVIDIA's latest announcements demonstrate its dedication to developing the infrastructure necessary to support the next generation of AI technologies and applications.
[1] NVIDIA: "NVIDIA, Partners Drive Next-Gen Efficient Gigawatt AI Factories in Buildup for Vera Rubin"
[2] Analytics India Magazine: "NVIDIA Unveils New Partners & Plans across AI Networking, Compute, OCP"