Curated by THEOUTPOST
On Thu, 17 Oct, 1:01 PM UTC
2 Sources
[1]
Google Leverages NVIDIA Blackwell GB200 NVL Racks For Its AI Cloud Platform, Liquid Cooled Computing
Google is yet another player adopting custom NVIDIA Blackwell GB200 NVL racks to boost computing performance on its Cloud platform. The search giant today showed off its first GB200 NVL-based server on X, deployed to power Google Cloud. With a roughly 12% share of the worldwide cloud market, Google is the third-largest cloud service provider, and it now aims to expand its operations by offering faster services.

The official Google Cloud account on X announced that it has begun deploying NVIDIA's GB200 NVL racks for its cloud platform. It posted a picture showing a server with several GB200 NVL racks, which feature liquid cooling for the high-performance GB200 chips. Each GB200 superchip pairs one Grace CPU with two B200 Blackwell-based data center GPUs for an incredible 90 TFLOPS of FP64 performance.

Keep in mind that Google says it is using 'custom' GB200 NVL racks, so it is unclear whether the configuration matches the standard GB200 NVL72, which links 36 Grace CPUs and 72 B200 GPUs through a 72-GPU NVLink domain. The official NVIDIA GB200 NVL72 packs 36 Grace CPUs paired with 72 Blackwell GPUs per rack and offers a bandwidth of up to 130 TB/s. A full rack of 36 Grace CPUs and 72 B200 GPUs delivers a staggering 3240 TFLOPS of FP64 performance and up to 13.5 TB of HBM3e memory.

Google isn't the first to deploy GB200 NVL racks or GB200 chips; several other companies have already started using them, including giants such as Microsoft, which posted a similar liquid-cooled GB200-based server, and Foxconn, the Taiwanese manufacturer that has begun deploying GB200 NVL72 racks to build the fastest supercomputer in Taiwan. Through the powerful computing performance of the GB200 chips, companies are upgrading their existing systems for various uses, including LLMs, medical research, cloud storage, and other AI workloads.
NVIDIA's Blackwell is now in full production, so expect more systems to adopt these industry-leading AI chips.
[2]
Google shares photos of liquid-cooled NVIDIA Blackwell GB200 NVL racks for AI cloud platform
Google has teased photos of NVIDIA's new Blackwell GB200 NVL AI server racks for its AI cloud platform, complete with liquid-cooled GB200 AI GPUs. Check it out, because it's utterly gorgeous:

The official Google Cloud account shared the photo on X, with the US-based search giant showing off its first GB200 NVL-based server, deployed to power its AI cloud platform. Google is now deploying NVIDIA GB200 NVL racks with liquid-cooled, high-performance GB200 AI chips: each GB200 superchip features 1 x Grace CPU and 2 x B200 AI GPUs for up to 90 TFLOPS of FP64 compute performance.

Google is using custom GB200 NVL racks here, so we don't know the exact configuration -- the standard GB200 NVL72 packs 36 x Grace CPUs and 72 x B200 AI GPUs into a 72-GPU NVLink domain. NVIDIA's new Blackwell GB200 NVL72 AI server rack features up to 130TB/sec of bandwidth, with the 36 Grace CPUs and 72 B200 AI GPUs offering a mind-boggling 3240 TFLOPS of FP64 compute performance, and an oh-my-gosh 13.5TB of HBM3E memory.

Google isn't the first cloud service provider (CSP) to use Blackwell GB200 AI servers, with Foxconn deploying GB200 NVL72 racks alongside NVIDIA to build the fastest supercomputer in Taiwan. Google is now having some fun with Blackwell.
Google has begun deploying NVIDIA's cutting-edge Blackwell GB200 NVL racks to power its AI cloud platform, showcasing liquid-cooled high-performance computing capabilities.
Google, the third-largest cloud service provider with a 12% global market share, has announced the deployment of NVIDIA's state-of-the-art Blackwell GB200 NVL racks to enhance its AI cloud platform [1]. This move signifies a major step in Google's strategy to expand its cloud operations and offer faster, more efficient AI services.
The GB200 NVL racks feature NVIDIA's latest high-performance superchips, each pairing one Grace CPU with two B200 Blackwell-based data center GPUs. Each superchip boasts an impressive 90 TFLOPS of FP64 performance [1]. While Google has stated that it is using 'custom' GB200 NVL racks, the exact configuration remains undisclosed.
For comparison, NVIDIA's standard GB200 NVL72 configuration includes:

- 36 Grace CPUs paired with 72 B200 Blackwell GPUs in a 72-GPU NVLink domain
- Up to 130 TB/s of bandwidth
- Up to 3240 TFLOPS of FP64 compute performance
- Up to 13.5 TB of HBM3e memory
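Assuming the rack is built from 36 GB200 superchips (one Grace CPU plus two B200 GPUs each, at the quoted 90 TFLOPS of FP64 per superchip), the headline rack-level figures can be sanity-checked with a few lines of arithmetic. The variable names below are illustrative, not NVIDIA's:

```python
# Sanity check of the published GB200 NVL72 rack figures.
# All input numbers come from the article; the per-GPU memory
# split is derived from the 13.5 TB rack total.

GRACE_CPUS_PER_RACK = 36
B200_GPUS_PER_RACK = 72
FP64_TFLOPS_PER_SUPERCHIP = 90  # 1 Grace CPU + 2 B200 GPUs

# One Grace CPU per superchip, so 36 superchips per rack.
superchips_per_rack = GRACE_CPUS_PER_RACK

# Aggregate FP64 throughput across the rack.
rack_fp64_tflops = superchips_per_rack * FP64_TFLOPS_PER_SUPERCHIP

# HBM3e per GPU, working backwards from the 13.5 TB rack total.
hbm3e_gb_per_gpu = 13_500 / B200_GPUS_PER_RACK  # 187.5 GB per B200
rack_hbm3e_tb = B200_GPUS_PER_RACK * hbm3e_gb_per_gpu / 1000

print(rack_fp64_tflops)  # 3240 TFLOPS, matching the article
print(rack_hbm3e_tb)     # 13.5 TB of HBM3e
```

Note that the 36 x 90 = 3240 TFLOPS total only works out if each superchip carries two B200 GPUs, which is why the 36-CPU/72-GPU split matters.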
A notable feature of Google's new AI infrastructure is the implementation of liquid cooling for the GB200 high-performance chips. The company shared images on social media showcasing the sleek, liquid-cooled server racks [2]. This cooling method is crucial for managing the heat generated by these powerful AI processors, enabling optimal performance and energy efficiency.
Google is not alone in adopting NVIDIA's Blackwell technology. Other tech giants and manufacturers have also begun integrating GB200 chips into their systems:

- Microsoft, which showed off a similar liquid-cooled GB200-based server
- Foxconn, which is deploying GB200 NVL72 racks alongside NVIDIA to build the fastest supercomputer in Taiwan
The adoption of NVIDIA's Blackwell technology by major cloud providers and tech companies signals a significant advancement in AI and cloud computing capabilities. These powerful systems are expected to accelerate various applications, including:

- Large language models (LLMs)
- Medical research
- Cloud storage
- Other AI workloads
As NVIDIA's Blackwell chips enter full production, we can anticipate more systems leveraging these industry-leading AI processors, potentially revolutionizing the landscape of AI and high-performance computing [1].
© 2025 TheOutpost.AI All rights reserved