Google Deploys NVIDIA's Blackwell GB200 NVL Racks for AI Cloud Platform


Google has begun deploying NVIDIA's cutting-edge Blackwell GB200 NVL racks to power its AI cloud platform, showcasing liquid-cooled high-performance computing capabilities.


Google Embraces NVIDIA's Blackwell Technology for AI Cloud Computing

Google, the third-largest cloud service provider with a 12% share of the global market, has announced the deployment of NVIDIA's state-of-the-art Blackwell GB200 NVL racks to enhance its AI cloud platform.[1] This move marks a major step in Google's strategy to expand its cloud operations and offer faster, more efficient AI services.

Technical Specifications and Performance

The GB200 NVL racks are built around NVIDIA's GB200 Grace Blackwell Superchips, each pairing one Grace CPU with two B200 Blackwell data center GPUs. Each superchip delivers 90 TFLOPS of FP64 performance.[1] While Google has stated that it is using "custom" GB200 NVL racks, the exact configuration remains undisclosed.

For comparison, NVIDIA's standard GB200 NVL72 configuration includes:[2]

  • 36 Grace CPUs paired with 72 Blackwell GPUs
  • Up to 130 TB/s of aggregate NVLink bandwidth
  • 3,240 TFLOPS of FP64 performance
  • 13.5 TB of HBM3e memory
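The rack-level figures follow directly from the per-superchip numbers, assuming each GB200 superchip contributes one Grace CPU, two B200 GPUs, and 90 TFLOPS of FP64, and that totals scale linearly across 36 superchips. A minimal sketch of that arithmetic:

```python
# Sketch: deriving GB200 NVL72 rack totals from per-superchip figures.
# Assumptions (not stated in the article): 1 Grace CPU + 2 B200 GPUs per
# superchip, 90 TFLOPS FP64 per superchip, 36 superchips per rack, and
# simple linear scaling with no interconnect overhead.

SUPERCHIPS_PER_RACK = 36
GPUS_PER_SUPERCHIP = 2
FP64_TFLOPS_PER_SUPERCHIP = 90

total_gpus = SUPERCHIPS_PER_RACK * GPUS_PER_SUPERCHIP
total_fp64_tflops = SUPERCHIPS_PER_RACK * FP64_TFLOPS_PER_SUPERCHIP

print(f"GPUs per rack: {total_gpus}")            # 72 Blackwell GPUs
print(f"FP64 per rack: {total_fp64_tflops} TFLOPS")  # 3,240 TFLOPS
```

Both results match the NVL72 figures listed above, which is a useful sanity check on the per-chip specifications.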

Liquid Cooling Technology

A notable feature of Google's new AI infrastructure is the use of liquid cooling for the GB200 chips. The company shared images on social media showcasing the sleek, liquid-cooled server racks.[2] This cooling method is crucial for managing the heat generated by these powerful AI processors, enabling optimal performance and energy efficiency.

Industry Trends and Competition

Google is not alone in adopting NVIDIA's Blackwell technology. Other tech giants and manufacturers have also begun integrating GB200 chips into their systems:

  1. Microsoft has similarly deployed GB200-based servers with liquid cooling.[1]
  2. Foxconn, the Taiwanese manufacturer, is using GB200 NVL72 racks to build Taiwan's fastest supercomputer.[2]

Implications for AI and Cloud Computing

The adoption of NVIDIA's Blackwell technology by major cloud providers and tech companies signals a significant advancement in AI and cloud computing capabilities. These powerful systems are expected to accelerate various applications, including:

  • Large Language Models (LLMs)
  • Medical research
  • Cloud storage
  • AI workloads

As NVIDIA's Blackwell chips enter full production, we can anticipate more systems leveraging these industry-leading AI processors, potentially reshaping the landscape of AI and high-performance computing.[1]

TheOutpost.ai


© 2025 Triveous Technologies Private Limited