Curated by THEOUTPOST
On Sat, 15 Feb, 4:02 PM UTC
2 Sources
[1]
HPE Ships First NVIDIA Grace Blackwell GB200 NVL72 System with Advanced Liquid Cooling
Hewlett Packard Enterprise announced that it has shipped its first NVIDIA Blackwell family-based solution, the NVIDIA GB200 NVL72. This rack-scale system by HPE is designed to help service providers and large enterprises quickly deploy very large, complex AI clusters with advanced direct liquid cooling to optimize efficiency and performance.

"AI service providers and large enterprise model builders are under tremendous pressure to offer scalability, extreme performance, and fast time-to-deployment," said Trish Damkroger, senior vice president and general manager of HPC & AI Infrastructure Solutions, HPE. "As builders of the world's top three fastest systems with direct liquid cooling, HPE offers customers lower cost per token training and best-in-class performance with industry-leading services expertise."

The NVIDIA GB200 NVL72 features a shared-memory, low-latency architecture with the latest GPU technology, designed to hold extremely large AI models of over a trillion parameters in one memory space. GB200 NVL72 offers seamless integration of NVIDIA CPUs, GPUs, compute and switch trays, networking, and software, bringing together extreme performance to address heavily parallelizable workloads, like generative AI (GenAI) model training and inferencing, along with NVIDIA software applications.
[2]
HPE Announces Shipment of Its First NVIDIA Grace Blackwell System
GB200 NVL72 featuring industry-leading direct liquid cooling now available

Hewlett Packard Enterprise (NYSE: HPE) announced today that it has shipped its first NVIDIA Blackwell family-based solution, the NVIDIA GB200 NVL72. This rack-scale system by HPE is designed to help service providers and large enterprises quickly deploy very large, complex AI clusters with advanced direct liquid cooling to optimize efficiency and performance.

"AI service providers and large enterprise model builders are under tremendous pressure to offer scalability, extreme performance, and fast time-to-deployment," said Trish Damkroger, senior vice president and general manager of HPC & AI Infrastructure Solutions, HPE. "As builders of the world's top three fastest systems with direct liquid cooling, HPE offers customers lower cost per token training and best-in-class performance with industry-leading services expertise."

The NVIDIA GB200 NVL72 features a shared-memory, low-latency architecture with the latest GPU technology, designed to hold extremely large AI models of over a trillion parameters in one memory space. GB200 NVL72 offers seamless integration of NVIDIA CPUs, GPUs, compute and switch trays, networking, and software, bringing together extreme performance to address heavily parallelizable workloads, like generative AI (GenAI) model training and inferencing, along with NVIDIA software applications.

"Engineers, scientists and researchers need cutting-edge liquid cooling technology to keep up with increasing power and compute requirements," said Bob Pette, vice president of enterprise platforms at NVIDIA. "Building on continued collaboration between HPE and NVIDIA, HPE's first shipment of NVIDIA GB200 NVL72 will help service providers and large enterprises efficiently build, deploy and scale large AI clusters."
With escalating power requirements and data center density dynamics, HPE's five decades of liquid cooling expertise uniquely position the company to help customers deploy quickly and to provide extensive infrastructure support for complex liquid-cooled environments. This experience has enabled HPE to deliver eight of the top 15 supercomputers on the Green500 list, which ranks the world's most energy-efficient supercomputers. HPE is recognized as a leader in direct liquid cooling technology, having built seven of the world's 10 fastest supercomputers.

Features of NVIDIA GB200 NVL72 by HPE:
- 72 NVIDIA Blackwell GPUs and 36 NVIDIA Grace CPUs interconnected via high-speed NVIDIA NVLink
- Up to 13.5 TB total HBM3e memory with 576 TB/sec bandwidth
- HPE direct liquid cooling technology

Industry-leading services and support: HPE delivers AI solutions at global scale, with a proven ability to support massive, custom AI clusters with superior serviceability, including expert on-site support, customized services, sustainability services, and more. HPC & AI Custom Support Services are tailored to meet customer needs. With several levels of SLA coverage, HPE provides enhanced incident management with proactive support through dedicated remote engineers, ensuring rapid installation and faster time-to-value.

Available services include:
- Onsite engineering resources: Comprehensive on-site support through highly trained resident engineers who work closely with a customer's IT teams to ensure optimal system performance and availability.
- Performance and benchmarking engagements: An industry-leading team of experts who fine-tune solutions throughout the life of a system.
- Sustainability services: Energy and emissions reporting, sustainability workshops, and resource monitoring to reduce environmental impact.
The newly shipped NVIDIA GB200 NVL72 by HPE is one of a wide array of high-performance computing and supercomputing systems that address every use case for GenAI, scientific discovery, and other compute-intensive workloads. Learn more about our compute and supercomputing systems and other solutions in the NVIDIA AI Computing by HPE portfolio. About Hewlett Packard Enterprise Hewlett Packard Enterprise (NYSE: HPE) is a global technology leader focused on developing intelligent solutions that allow customers to capture, analyze, and act upon data seamlessly. The company innovates across networking, hybrid cloud, and AI to help customers develop new business models, engage in new ways, and increase operational performance. For more information, visit: www.hpe.com.
Hewlett Packard Enterprise announces the shipment of its first NVIDIA Blackwell family-based solution, the GB200 NVL72, designed for large-scale AI deployments with advanced liquid cooling technology.
Hewlett Packard Enterprise (HPE) has announced the shipment of its first NVIDIA Blackwell family-based solution, the NVIDIA GB200 NVL72 [1][2]. This rack-scale system represents a significant leap forward in AI infrastructure, designed to meet the growing demands of service providers and large enterprises for scalable, high-performance AI clusters.
A standout feature of the GB200 NVL72 is its advanced direct liquid cooling solution. This technology is crucial for optimizing efficiency and performance in the face of escalating power requirements and data center density dynamics [2]. HPE's expertise in liquid cooling, developed over five decades, positions the company uniquely to support complex liquid-cooled environments.
The NVIDIA GB200 NVL72 boasts impressive technical specifications [1][2]:
- 72 NVIDIA Blackwell GPUs and 36 NVIDIA Grace CPUs interconnected via high-speed NVIDIA NVLink
- Up to 13.5 TB of total HBM3e memory with 576 TB/sec of bandwidth
- HPE direct liquid cooling technology
This shared-memory, low-latency architecture is designed to handle extremely large AI models with over a trillion parameters in one memory space [1]. The system integrates NVIDIA CPUs, GPUs, compute and switch trays, networking, and software to deliver extreme performance for heavily parallelizable workloads such as generative AI model training and inferencing [1][2].
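As a rough illustration of why one memory space matters for trillion-parameter models, here is a back-of-envelope sizing sketch. The arithmetic is ours, not from the article: it assumes one byte per parameter (FP8 weight storage) and uses the article's figures of 13.5 TB of HBM3e across 72 GPUs.

```python
# Back-of-envelope sizing for the GB200 NVL72's shared memory space.
# Assumptions (not from the article): FP8 weights at 1 byte per parameter;
# optimizer state, activations, and KV caches would add substantially more.

TOTAL_HBM3E_TB = 13.5   # total HBM3e across the rack (from the article)
NUM_GPUS = 72           # Blackwell GPUs per NVL72 rack (from the article)
PARAMS = 1.0e12         # a one-trillion-parameter model
BYTES_PER_PARAM = 1     # FP8 storage; FP16 would double this

weights_tb = PARAMS * BYTES_PER_PARAM / 1e12   # model weights, in TB
per_gpu_gb = TOTAL_HBM3E_TB * 1000 / NUM_GPUS  # HBM3e share per GPU, in GB

print(f"Weights for a 1T-parameter model: ~{weights_tb:.1f} TB")
print(f"HBM3e per GPU: ~{per_gpu_gb:.1f} GB")
```

Under these assumptions the weights alone occupy about 1 TB, comfortably inside the rack's 13.5 TB pool, with roughly 187.5 GB of HBM3e behind each GPU; without a shared memory space, the same model would have to be manually sharded across many separate memory domains.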
The GB200 NVL72 is poised to address critical challenges in the AI industry. Trish Damkroger, Senior Vice President at HPE, emphasized the system's ability to offer "scalability, extreme performance, and fast time-to-deployment" [1]. This is particularly relevant for AI service providers and large enterprise model builders who face increasing pressure to scale their operations efficiently.
HPE's track record in high-performance computing is noteworthy. The company has built seven of the world's 10 fastest supercomputers and delivered eight of the top 15 supercomputers on the Green500 list, which ranks the world's most energy-efficient supercomputers [2]. HPE says this experience translates into lower cost per token for training and best-in-class performance for customers.
To complement the hardware, HPE offers a range of support services [2]:
- Onsite engineering resources: resident engineers who work closely with a customer's IT teams to ensure system performance and availability
- Performance and benchmarking engagements: experts who fine-tune solutions throughout the life of a system
- Sustainability services: energy and emissions reporting, sustainability workshops, and resource monitoring
These services are designed to ensure rapid installation, faster time-to-value, and optimal system performance throughout the lifecycle of the infrastructure.
The shipment of the GB200 NVL72 marks a significant milestone in the evolution of AI infrastructure. As Bob Pette from NVIDIA noted, this collaboration between HPE and NVIDIA will "help service providers and large enterprises efficiently build, deploy and scale large AI clusters" [2]. This development is likely to accelerate advancements in various AI applications, from scientific research to enterprise-scale generative AI implementations.
NVIDIA introduces the GB200 NVL4, a high-performance AI accelerator featuring four Blackwell GPUs and two Grace CPUs on a single board, offering significant improvements in AI and HPC workloads.
7 Sources
Hewlett Packard Enterprise introduces new high-performance computing and AI infrastructure, including liquid-cooled supercomputers and AI-optimized servers, to accelerate scientific research and AI development.
3 Sources
Hewlett Packard Enterprise introduces a groundbreaking 100% fanless direct liquid cooling system architecture, aimed at enhancing energy efficiency and cost-effectiveness for large-scale AI deployments.
2 Sources
Google has begun deploying NVIDIA's cutting-edge Blackwell GB200 NVL racks to power its AI cloud platform, showcasing liquid-cooled high-performance computing capabilities.
2 Sources
Supermicro introduces a new liquid-cooled AI supercomputer powered by NVIDIA's GB200 NVL72 platform, offering exascale computing capabilities in a single rack for enhanced energy efficiency in AI data centers.
3 Sources
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved