NVIDIA Contributes Blackwell Platform Design to Open Compute Project, Advancing AI Infrastructure


NVIDIA has shared key components of its Blackwell accelerated computing platform design with the Open Compute Project (OCP), aiming to promote open, efficient, and scalable data center solutions for AI infrastructure.


NVIDIA's Contribution to Open Compute Project

NVIDIA has made a significant move in the AI infrastructure landscape by contributing key elements of its Blackwell accelerated computing platform design to the Open Compute Project (OCP). This initiative aims to drive the development of open, efficient, and scalable data center technologies.

Key Components Shared

The contribution includes critical design elements from the GB200 NVL72 system, such as:

  • Rack architecture
  • Compute and switch tray mechanicals
  • Liquid-cooling and thermal environment specifications
  • NVLink cable cartridge volumetrics

These components are essential for efficient data center operations, particularly in supporting high-density compute environments required for advanced AI workloads.

Blackwell Platform Specifications

The GB200 NVL72 system, at the heart of this contribution, is a rack-scale, liquid-cooled system featuring:

  • 36 NVIDIA Grace CPUs
  • 72 Blackwell GPUs
  • An NVLink domain that connects the 72 GPUs so they operate as a single, massive GPU
  • 130 TB/s of low-latency GPU communications

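The quoted 130 TB/s aggregate is consistent with each GPU contributing roughly 1.8 TB/s of fifth-generation NVLink bandwidth — a per-GPU figure taken from NVIDIA's published NVLink specs, not stated in this article. A quick back-of-envelope check:

```python
# Back-of-envelope check: aggregate NVLink bandwidth of a GB200 NVL72 rack.
num_gpus = 72                 # Blackwell GPUs per NVL72 system (from the article)
nvlink_bw_per_gpu_tb_s = 1.8  # fifth-gen NVLink per-GPU bandwidth (assumed spec)

aggregate_tb_s = num_gpus * nvlink_bw_per_gpu_tb_s
print(f"{aggregate_tb_s:.1f} TB/s")  # prints 129.6 TB/s, i.e. ~130 TB/s as quoted
```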

This system is designed to deliver substantial performance gains, with NVIDIA claiming up to 30 times faster real-time large language model inference than the NVIDIA H100 Tensor Core GPU.

Spectrum-X Ethernet Networking Platform

NVIDIA has also expanded support for OCP standards in its Spectrum-X Ethernet networking platform. This includes:

  • Alignment with OCP's Switch Abstraction Interface (SAI) and Software for Open Networking in the Cloud (SONiC) standards
  • Introduction of ConnectX-8 SuperNICs, supporting data speeds of up to 800 Gb/s
  • Optimization for large-scale AI workloads


Industry Collaboration and Impact

NVIDIA's initiative has garnered support from various industry players:

  • Collaboration with over 40 global electronics manufacturers
  • Partnership with Vertiv to develop a joint reference design for the GB200 NVL72
  • Meta's contribution of its Catalina AI rack architecture, based on NVIDIA's platform, to the OCP


These collaborations aim to accelerate the adoption of open computing standards and simplify AI factory development.

Future Implications

NVIDIA's contribution is expected to have far-reaching effects on the AI infrastructure landscape:

  • Enabling OCP members to build custom designs based on Blackwell GPUs
  • Potentially reducing deployment time for cloud service providers and data centers by up to 50%
  • Accelerating the development and implementation of AI infrastructure across the industry


As the world transitions from general-purpose to accelerated and AI computing, NVIDIA's open hardware initiative is poised to play a crucial role in shaping the future of data center technologies and AI infrastructure.

TheOutpost.ai


© 2025 Triveous Technologies Private Limited