NVIDIA Unveils Next-Gen AI Infrastructure Plans at OCP Global Summit

Reviewed by Nidhi Govil


NVIDIA showcases advancements in AI computing, networking, and power systems at the Open Compute Project Global Summit, introducing new partnerships and technologies for future gigawatt AI factories.


NVIDIA's Vision for Future AI Infrastructure

At the OCP Global Summit, NVIDIA unveiled its ambitious plans for the next generation of AI infrastructure, showcasing advancements in computing, networking, and power systems designed to support the growing demands of AI factories [1][2]. The company's strategy focuses on coordinating 'from chip to grid' to enable the rapid growth of AI capabilities.

Vera Rubin NVL144: A Leap in AI Computing

NVIDIA introduced the specifications for its Vera Rubin NVL144 MGX-generation open-architecture rack server. The new system offers:

  • 100% liquid-cooled, modular design
  • Central PCB midplane for faster assembly and serviceability
  • Modular expansion bays for NVIDIA ConnectX-9 800GB/s networking
  • Support for NVIDIA Rubin CPX for massive-context inference [1]

The Vera Rubin NVL144 is built to meet the demands of advanced reasoning engines and AI agents, with over 50 MGX partners gearing up to support this new architecture.

Ecosystem Expansion and Partnerships

NVIDIA announced significant partnerships and ecosystem growth:

  • Meta will integrate NVIDIA's Spectrum-X Ethernet platforms into its AI infrastructure
  • Oracle Cloud Infrastructure (OCI) will adopt Spectrum-X for large-scale AI training clusters
  • New NVLink Fusion partnerships with Intel, Samsung Foundry, and Fujitsu [2]

800V DC Power: The Future of AI Data Centers

A major focus of NVIDIA's presentation was the industry's move toward 800-volt direct current (VDC) power delivery for data centers. This transition offers several benefits:

  • Increased scalability and improved energy efficiency (illustrated in the sketch at the end of this section)
  • Reduced materials usage, since higher voltage means lower current and thinner conductors
  • Greater power capacity available for compute performance in data centers [1]

NVIDIA is collaborating with infrastructure providers like Schneider Electric and Siemens to develop reference architectures for 800V DC power systems.
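
The efficiency and materials claims come down to basic electrical arithmetic: delivering the same power at a higher voltage draws proportionally less current, and resistive losses in the conductors scale with the square of that current. The short Python sketch below works through the numbers; the rack power and conductor resistance figures are illustrative assumptions, not published NVIDIA specifications.

    # Back-of-envelope comparison of rack power distribution at 54 V vs 800 V DC.
    # All figures are illustrative assumptions, not NVIDIA specifications.

    RACK_POWER_W = 250_000             # assume a ~250 kW rack-scale system
    CONDUCTOR_RESISTANCE_OHM = 0.0002  # assumed end-to-end busbar resistance

    def distribution_stats(voltage_v: float) -> tuple[float, float]:
        """Return (current in amps, resistive loss in watts) for a given bus voltage."""
        current_a = RACK_POWER_W / voltage_v                 # I = P / V
        loss_w = current_a ** 2 * CONDUCTOR_RESISTANCE_OHM   # Joule heating: P_loss = I^2 * R
        return current_a, loss_w

    for volts in (54, 800):
        amps, loss = distribution_stats(volts)
        print(f"{volts:>4} V bus: {amps:>7.0f} A, {loss:>8.1f} W lost in the conductor")

Running the sketch shows the 800 V bus drawing roughly 15 times less current than a 54 V bus and dissipating on the order of 200 times less heat in the same conductor, which is the core of the efficiency and materials argument for the higher-voltage architecture.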

Performance Benchmarks and Future Projections

NVIDIA highlighted impressive performance gains with its new technologies:

  • Blackwell GB200 GPUs show a 15-fold gain in inference throughput compared to the previous Hopper generation
  • Spectrum-X Ethernet platforms achieve '95% throughput with zero latency degradation' [2]

The company also announced that the forthcoming Rubin and Rubin CPX systems, built on the MGX rack platform, are expected to launch in the second half of 2026.

Open Compute Project Contributions

NVIDIA reaffirmed its commitment to the Open Compute Project (OCP) by contributing its upgraded rack and compute tray innovations as open standards. This move allows partners to mix and match components in a modular fashion, enabling faster scaling with the architecture [1].

As AI continues to evolve rapidly, NVIDIA's latest announcements demonstrate its dedication to developing the infrastructure necessary to support the next generation of AI technologies and applications.
