Curated by THEOUTPOST
On Thu, 14 Nov, 12:07 AM UTC
3 Sources
[1]
HPE Expands Direct Liquid-Cooled Supercomputing Solutions, Introduces Two AI Systems for Service Providers and Large Enterprises
New portfolio of compute, networking, storage and software advances leadership-class supercomputers and purpose-built AI training solutions
New products offer choice of air cooling or HPE's industry-first 100% fanless direct liquid cooling system architecture
Supercomputing and large-scale AI servers feature next-generation compute and accelerators from AMD, Intel and NVIDIA, providing customers with options to best suit performance needs and cost requirements
HPE expands portfolio of servers optimized for high-end AI training and tuning workloads

Today, Hewlett Packard Enterprise (NYSE: HPE) announces its new high performance computing (HPC) and artificial intelligence (AI) infrastructure portfolio, which includes leadership-class HPE Cray Supercomputing EX solutions and two systems optimized for large language model (LLM) training, natural language processing (NLP) and multi-modal model training. The new supercomputing solutions are designed to help global customers fast-track scientific research and invention.

"Service providers and nations investing in sovereign AI initiatives are increasingly turning to high-performance computing as the critical backbone enabling large-scale AI training that accelerates discovery and innovation," said Trish Damkroger, senior vice president and general manager, HPC & AI Infrastructure Solutions at HPE. "Our customers turn to us to fast-track their AI system deployment to realize value faster and more efficiently by leveraging our world-leading HPC solutions and decades of experience in delivering, deploying and servicing fully-integrated systems."

End-to-end portfolio of industry-leading HPC solutions: HPE Cray Supercomputing EX

HPE leads the way in providing some of the fastest and most energy-efficient supercomputers in the world.
Based on HPE Cray Supercomputing EX systems, HPE's net-new offerings for its entire leadership-class HPC portfolio are designed for research institutions entrusted with solving the world's biggest problems and government entities developing sovereign AI initiatives. The portfolio is based on the industry's first 100% fanless direct liquid cooling system architecture and spans every layer of HPE's supercomputing solutions, including compute nodes, networking and storage, which are supplemented by a new software offering.

HPE Cray Supercomputing EX4252 Gen 2 Compute Blade - Capable of delivering up to 98,304 cores in a single cabinet, the HPE Cray Supercomputing EX4252 Gen 2 Compute Blade delivers the most powerful one-rack-unit system available for supercomputing. Featuring eight 5th Gen AMD EPYC™ processors, this compute blade offers the benefit of CPU density, allowing customers to realize higher-performing compute within the same space. The HPE Cray Supercomputing EX4252 Gen 2 Compute Blade will be available Spring 2025.

HPE Cray Supercomputing EX154n Accelerator Blade - To drastically reduce the time it takes to complete a supercomputing workload, the HPE Cray Supercomputing EX154n Accelerator Blade can accommodate up to 224 NVIDIA Blackwell GPUs in a single cabinet. Featuring the NVIDIA GB200 Grace Blackwell NVL4 Superchip, each accelerator blade holds four NVIDIA NVLink™-connected Blackwell GPUs unified with two NVIDIA Grace CPUs over NVIDIA NVLink-C2C. General availability for the HPE Cray Supercomputing EX154n Accelerator Blade is expected by the end of 2025.

HPE Slingshot interconnect 400 - The next generation of HPE's exascale-capable interconnect portfolio offers network interface controllers (NICs), cables and switches at 400 gigabit-per-second speeds.
HPE Slingshot interconnect 400 delivers twice the line speed of the previous generation[i] while offering features like automated congestion management and adaptive routing for ultra-low tail latency, allowing customers to run large workloads with significantly less network infrastructure. This version of HPE Slingshot will be available for clusters based on HPE Cray Supercomputing EX systems beginning Fall 2025.

HPE Cray Supercomputing Storage Systems E2000 - This high-performance storage system designed for large-scale supercomputers more than doubles input/output (I/O) performance compared to the previous generation[ii]. It is based on the open source Lustre file system and enables better utilization of both CPU- and GPU-based compute nodes by reducing idle time during I/O operations. The HPC storage system will become generally available on HPE Cray Supercomputing EX systems in early 2025.

HPE Cray Supercomputing User Services Software - HPE is introducing a new software offering that improves the user experience of running compute-intensive workloads. Available now, HPE Cray Supercomputing User Services Software includes features that help customers optimize system efficiency, regulate power consumption, and flexibly run diverse workloads on supercomputing infrastructure.

New HPE ProLiant Compute XD server family optimized for AI model training and tuning

HPE continues the rollout of a new category of servers that enables customers to streamline deployment of large, highly performant AI clusters. Designed for service providers and large enterprises training their own AI models, HPE ProLiant Compute XD servers leverage the company's expertise in installing and deploying large AI systems. Optional HPE Services are available to support building, customization, integration, validation, and full testing of the solution within HPE's state-of-the-art manufacturing facility to expedite on-site deployment.
Only available on HPE ProLiant Compute servers, HPE Integrated Lights-Out (iLO) management technology gives select authorized personnel out-of-band remote access to servers, improving security over standard in-band network access.

HPE ProLiant Compute XD680 server - Optimized with price-for-performance in mind, the air-cooled HPE ProLiant Compute XD680 server is designed to address demanding AI training, tuning and inferencing workloads. An HPE-designed chassis houses eight Intel® Gaudi® 3 AI accelerators in a single compact node. The HPE ProLiant Compute XD680 server with Intel Gaudi 3 will be available in December 2024.

HPE ProLiant Compute XD685 server - For customers prioritizing performance, competitive advantage and energy efficiency, a new version of the HPE ProLiant Compute XD685 server will become available with NVIDIA GPUs to accelerate training for large, complex AI models. The server is powered by eight NVIDIA H200 SXM Tensor Core GPUs or NVIDIA Blackwell GPUs in a five rack-unit chassis and leverages HPE's decades of liquid-cooling expertise to efficiently cool GPUs, CPUs and switches. The NVIDIA HGX H200 8-GPU version of the HPE ProLiant Compute XD685 server will become available in early 2025, and HPE will be time-to-market with NVIDIA Blackwell GPUs. A version of the HPE ProLiant Compute XD685 server featuring eight AMD Instinct™ MI325X accelerators and two AMD EPYC™ CPUs was previously announced in October.

HPE ProLiant Compute XD servers are part of HPE's comprehensive AI offerings, which include HPE Private Cloud AI and HPE ProLiant Compute DL servers. As the needs of customers evolve, HPE continues to push the boundaries of innovation by meeting market demands and demonstrating why HPE is sought after to support the growing number of traditional supercomputing customers using AI models to enhance scientific discovery.
Find HPE solutions on display at SC24 (booth #2219) in Atlanta, November 17-22, for more information on the next generation of HPC and large-scale AI systems.

About Hewlett Packard Enterprise

Hewlett Packard Enterprise (NYSE: HPE) is the global edge-to-cloud company that helps organizations accelerate outcomes by unlocking value from all of their data, everywhere. Built on decades of reimagining the future and innovating to advance the way people live and work, HPE delivers unique, open, and intelligent technology solutions as a service. With offerings spanning Cloud Services, Compute, High Performance Computing & AI, Intelligent Edge, Software, and Storage, HPE provides a consistent experience across all clouds and edges, helping customers develop new business models, engage in new ways, and increase operational performance. For more information, visit: www.hpe.com
[2]
HPE debuts powerful new supercomputer platforms for AI and high-performance computing workloads - SiliconANGLE
Hewlett Packard Enterprise Co. said today it's updating its portfolio of high-performance computing platforms. The expanded lineup includes a couple of new HPE Cray Supercomputing EX systems, plus a pair of new HPE ProLiant servers that have been optimized for artificial intelligence workloads, including large language model training and fine-tuning.

The new HPE Cray Supercomputing EX systems are said to be designed for research institutions tasked with solving some of the world's toughest problems. They're aimed at more traditional HPC workloads, such as sequencing DNA and automating stock trading, as opposed to being focused exclusively on AI workloads. Leveraging expertise from Cray, the supercomputer manufacturing giant HPE acquired in 2019, they're also the first machines in their class to be built using a 100% fanless, direct liquid cooling system architecture that spans every layer of the machine, including the compute nodes, networking and storage.

They include the HPE Cray Supercomputing EX154n Accelerator Blade, which will launch toward the end of next year and has been built to drastically reduce the time it takes to complete supercomputing jobs. It's also designed to handle AI workloads, and to do this it accommodates up to 224 of Nvidia Corp.'s new Blackwell graphics processing units in a single cabinet. Each accelerator blade comes with an Nvidia Grace Blackwell NVL4 Superchip, holding a total of four NVLink-connected Blackwell GPUs paired with two Nvidia Grace central processing units over NVLink-C2C.

Coming sooner is the new HPE Cray Supercomputing EX4252 Gen 2 Compute Blade, which will be launched next spring and is more of a traditional supercomputing platform in the sense that it's optimized to power a broader range of computing applications.
It lacks GPU hardware, which may make it less useful for AI, but in terms of traditional workloads it's a beast, packing up to 98,304 CPU cores in a single cabinet, making it the most powerful one-rack-unit system of its kind. With eight 5th Gen EPYC CPUs made by Advanced Micro Devices Inc. per blade, it offers an extremely high density of CPUs, enabling customers to achieve higher-performance compute in a much smaller space than before.

To go with the new Cray Supercomputing EX models, HPE also unveiled the next generation of its exascale-capable interconnect portfolio, bundling network interface controllers, cables and switches that support speeds of up to 400 gigabits per second. In addition, there's a new storage system and services software to look forward to.

The new network infrastructure is called the HPE Slingshot interconnect 400, and it's said to deliver twice the line speed of its previous-generation interconnect. It also supports advanced features like automated congestion management and adaptive routing, meaning it can reroute and optimize connectivity on the fly to ensure the lowest latency possible for any given workload it supports. It'll be launched for clusters based on the latest HPE Cray systems in the fall of next year.

As for the HPE Cray Supercomputing Storage Systems E2000, it provides more than double the input/output (I/O) performance of HPE's predecessor storage systems for supercomputers. Under the hood, it leverages the open-source Lustre file system, which helps to reduce the idle time associated with I/O operations. It all adds up to much faster storage reads and writes than before, and it should significantly boost the performance of supercomputing operations when it launches early next year.
Last but not least is the new HPE Cray Supercomputing User Services Software, which is said to improve the user experience of its supercomputing platforms with new features for optimizing system efficiency, managing power consumption and more.

While the Cray supercomputers are optimized for a broader range of HPC workloads, the new HPE ProLiant Compute XD servers are built specifically for those all-important AI workloads that almost every enterprise is eager to embrace these days. Trish Damkroger, senior vice president and general manager of HPC & AI Infrastructure Solutions at HPE, said enterprises and governments are becoming more interested in "sovereign AI initiatives," as these enable them to retain full control over their AI models and training data. But for sovereign AI, those organizations need access to some extremely powerful hardware, which is exactly what the ProLiant Compute XD servers deliver.

HPE debuted its first batch of ProLiant Compute servers for AI in March, but the XD models are said to be an entirely new category of machines that are optimized to support the deployment of large, high-performance AI clusters. The company has been working very closely with Nvidia on these machines, fine-tuning them to support the most advanced LLMs.

The new models include the HPE ProLiant Compute XD685, the more powerful of the two, which is aimed at customers who prioritize performance over costs. It targets AI training and inference, and buyers can choose from either eight Nvidia H200 SXM Tensor Core GPUs or the same number of Nvidia Blackwell GPUs in a five rack-unit chassis, the company said. It's a liquid-cooled system and will go on sale early next year, at about the same time as the Blackwell GPUs are launched by Nvidia. Of course, customers have more options than just Nvidia's hardware.
HPE recently announced a separate edition of the HPE ProLiant Compute XD685 that features eight AMD Instinct MI325X accelerators and two AMD EPYC CPUs instead of Nvidia's hardware, which is also set to go on sale early next year.

As for the air-cooled HPE ProLiant Compute XD680 server, this is an alternative aimed at customers that would prefer to optimize for price-performance while still being able to handle the most demanding AI training, tuning and inference jobs. Instead of Nvidia's GPUs, it's powered by eight of Intel Corp.'s Gaudi 3 AI accelerators, which are squeezed into a single compact node. It'll go on sale sooner, with a launch date slated for next month.

Both of the new servers feature HPE's Integrated Lights-Out technology for remote management, enabling select authorized personnel to access them from any location and providing increased security compared to traditional in-band network access. HPE said the new ProLiant XD server class comes with optional services, such as installation, customization, integration and validation, along with full testing within the company's own manufacturing facilities, for customers that want to expedite on-site deployment.

"Our customers turn to us to fast-track their AI system deployment to realize value faster and more efficiently, leveraging our decades of experience in delivering, deploying and servicing fully-integrated systems," Damkroger said.
[3]
HPE crams 224 Nvidia Blackwell GPUs into latest Cray EX
Meanwhile, HPE's new ProLiant servers offer a choice of Gaudi, Hopper, or Instinct acceleration.

If you thought Nvidia's 120 kW NVL72 racks were compute dense with 72 Blackwell accelerators, they have nothing on HPE Cray's latest EX systems, which will pack more than three times as many GPUs into a single cabinet. Announced ahead of next week's Supercomputing conference (SC24) in Atlanta, Cray's EX154n platform will support up to 224 Nvidia Blackwell GPUs and 8,064 Grace CPU cores per cabinet. That works out to just over 10 petaFLOPS at FP64 for HPC applications, or over 4.4 exaFLOPS of sparse FP4 for AI and machine learning workloads, where precision usually isn't as big a deal.

Specifically, each EX154n accelerator blade will feature a pair of 2.7 kW Grace Blackwell Superchips (GB200), each of which is equipped with two Blackwell GPUs and a single 72-core Arm CPU. Those two Superchips will be interconnected in Nvidia's NVL4 reference configuration. At a rack level, the compute alone will consume upwards of 300 kW, so it goes without saying that, just like past EX systems, HPE's Blackwell blades will be liquid cooled. In fact, these systems are completely fanless, right down to the all-new Slingshot 400 family of Ethernet NICs, cables, and switches.

As the name suggests, Slingshot 400 represents a welcome upgrade over its predecessor, pushing bandwidth from 200 to 400 Gbps and bringing it in line with current-gen Ethernet and InfiniBand networking. HPE's prior-gen 200 Gbps Slingshot interconnects have become a mainstay of large-scale supercomputing platforms and are at the heart of the Frontier, Aurora, and Lumi machines, to name just a handful. Unfortunately, anyone looking to get their hands on Cray's super-dense Blackwell systems and speedy Slingshot 400 networking will have to wait a while. Neither is expected to ship until late in 2025.
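The per-cabinet figures above hang together arithmetically. A minimal sketch, where the GPU and core counts follow the NVL4 description and the ~45 teraFLOPS FP64 per Blackwell GPU is an assumption not stated in the article:

```python
# Back-of-the-envelope check of the EX154n cabinet-level figures.
GPUS_PER_CABINET = 224
GPUS_PER_GRACE_CPU = 2        # GB200: two Blackwell GPUs share one 72-core Grace CPU
GRACE_CORES_PER_CPU = 72
FP64_TFLOPS_PER_GPU = 45      # assumed per-GPU FP64 throughput

grace_cpus = GPUS_PER_CABINET // GPUS_PER_GRACE_CPU        # 112 Grace CPUs
grace_cores = grace_cpus * GRACE_CORES_PER_CPU             # 8,064 cores, as quoted
fp64_petaflops = GPUS_PER_CABINET * FP64_TFLOPS_PER_GPU / 1000  # just over 10 petaFLOPS

print(grace_cores, round(fp64_petaflops, 2))
```

Under those assumptions, the quoted 8,064 Grace cores and "just over 10 petaFLOPS at FP64" both fall straight out of the 224-GPU count.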
If conventional CPU-based HPC is more your thing, Cray's fifth-gen Epyc-based EX4252 Gen 2 compute blades are due out next spring and will pack up to eight 192-core Turin-C processors per blade for a total of 98,304 cores per cabinet. Cray will also begin shipping upgraded E2000 storage systems, which it claims will more than double I/O performance over prior generations thanks to faster PCIe 5.0-based NVMe storage. HPE expects to start shipping these storage arrays beginning early 2025.

While HPE's Cray EX platforms promise greater density than a typical server or rack, they aren't exactly the kind of systems that can be deployed in your average datacenter. So HPE is also rolling out a pair of new ProLiant Compute servers, which make use of its enterprise-focused iLO lights-out management system. These systems will be fairly familiar to anyone who's ever seen an Nvidia HGX platform, with both the XD680 and XD685 servers boasting support for eight accelerators of your choice.

Surprisingly, we aren't limited to just Nvidia and AMD GPUs as you might expect. The XD680 actually comes standard with eight Intel Gaudi 3 accelerators totaling 1 TB of HBM2e. As we reported in spring, Gaudi 3 is quite competitive with the current crop of accelerators. Each is capable of churning out 1.8 petaFLOPS of dense BF16 performance, giving it an edge in compute-bound workloads over the H100, H200, and AMD's MI300X.

Stepping up to HPE's XD685, you have the choice of either eight Nvidia H200s with a combined 1.1 TB of HBM3e or the upcoming Blackwell GPUs - presumably B200 - which should boost memory capacity to 1.5 TB. The former is due out in early 2025, while timing for the Blackwell-based systems remains rather vague. If Nvidia isn't your style, or you need more memory, HPE is also rolling out a version of the system with AMD's newly launched MI325X.
That system, announced alongside the accelerator in October, will boast up to 2 TB of HBM3e memory on board and is set to ship in the first quarter of 2025.
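The aggregate-memory figures quoted for these eight-accelerator XD systems line up with per-device HBM capacities. A quick sketch, where the per-device numbers are assumptions drawn from vendor spec sheets rather than from the article (Gaudi 3 ≈ 128 GB HBM2e, H200 ≈ 141 GB HBM3e, MI325X = 256 GB HBM3e):

```python
# Aggregate HBM per eight-accelerator XD server, using assumed
# per-device capacities from vendor spec sheets (not from the article).
ACCELERATORS = 8
capacities_gb = {"Gaudi 3": 128, "H200": 141, "MI325X": 256}

for name, gb in capacities_gb.items():
    total_tb = ACCELERATORS * gb / 1024
    print(f"{name}: ~{total_tb:.1f} TB")
```

Run under those assumptions, the totals come out to roughly 1.0 TB, 1.1 TB, and 2.0 TB, matching the figures quoted for the XD680 and XD685 variants.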
Hewlett Packard Enterprise introduces new high-performance computing and AI infrastructure, including liquid-cooled supercomputers and AI-optimized servers, to accelerate scientific research and AI development.
Hewlett Packard Enterprise (HPE) has unveiled a new lineup of high-performance computing (HPC) and artificial intelligence (AI) infrastructure solutions, aimed at accelerating scientific research and AI development. The announcement showcases HPE's commitment to advancing supercomputing capabilities for both traditional HPC workloads and emerging AI applications [1][2][3].
At the forefront of HPE's new offerings are the HPE Cray Supercomputing EX systems, designed for research institutions and government entities developing sovereign AI initiatives. These systems feature:
HPE Cray Supercomputing EX4252 Gen 2 Compute Blade: Set to launch in Spring 2025, this system can deliver up to 98,304 cores in a single cabinet, featuring eight 5th Gen AMD EPYC processors per blade [1][2].
HPE Cray Supercomputing EX154n Accelerator Blade: Expected to be available by the end of 2025, this blade can accommodate up to 224 NVIDIA Blackwell GPUs in a single cabinet, utilizing the NVIDIA GB200 Grace Blackwell NVL4 Superchip [1][3].
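The EX4252 density claim above can be checked with simple arithmetic; the 192-core count for 5th Gen EPYC "Turin" parts is taken from source [3], and the blade count per cabinet is derived rather than stated:

```python
# Check of the EX4252 Gen 2 density claim: eight EPYC CPUs per blade,
# assuming 192 cores per CPU (the Turin core count cited in source [3]).
CORES_PER_CPU = 192
CPUS_PER_BLADE = 8
CABINET_CORES = 98_304            # HPE's stated per-cabinet total

cores_per_blade = CPUS_PER_BLADE * CORES_PER_CPU      # 1,536 cores per blade
blades_per_cabinet = CABINET_CORES // cores_per_blade
print(blades_per_cabinet)
```

With those inputs the stated cabinet total divides evenly into 64 blades, so the headline figure is self-consistent.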
HPE is introducing the industry's first 100% fanless direct liquid cooling system architecture, spanning compute nodes, networking, and storage. This innovation is complemented by the HPE Slingshot interconnect 400, which doubles line speed to 400 Gbps, and the HPE Cray Supercomputing Storage Systems E2000, which more than doubles I/O performance over the previous generation [1][2].
For service providers and enterprises focused on AI model training and tuning, HPE is rolling out new ProLiant Compute XD servers:
HPE ProLiant Compute XD680: An air-cooled server housing eight Intel Gaudi 3 AI accelerators in a single compact node, optimized for price-for-performance and available in December 2024 [1][2].
HPE ProLiant Compute XD685: A liquid-cooled, five rack-unit server with eight NVIDIA H200 SXM Tensor Core GPUs or NVIDIA Blackwell GPUs, arriving in early 2025; a variant with eight AMD Instinct MI325X accelerators and two AMD EPYC CPUs was announced in October [1][3].
To improve the overall user experience and system management, HPE is introducing the HPE Cray Supercomputing User Services Software, available now, with features for optimizing system efficiency, regulating power consumption, and flexibly running diverse workloads, along with HPE Integrated Lights-Out (iLO) out-of-band remote management on ProLiant Compute servers [1][2].
These new offerings position HPE at the forefront of the rapidly evolving AI and HPC landscape. The company's focus on liquid cooling, increased GPU density, and flexible configurations caters to the growing demand for powerful, energy-efficient computing solutions in research and enterprise settings [1][2][3].
While some products, like the HPE ProLiant Compute XD685, are set to launch in early 2025, others, such as the Cray Supercomputing EX154n Accelerator Blade, are scheduled for release in late 2025. This staggered release allows HPE to align with the availability of next-generation GPUs and CPUs from partners like NVIDIA and AMD [1][2][3].
References
[1] HPE Expands Direct Liquid-Cooled Supercomputing Solutions, Introduces Two AI Systems for Service Providers and Large Enterprises
[2] HPE debuts powerful new supercomputer platforms for AI and high-performance computing workloads - SiliconANGLE
[3] HPE crams 224 Nvidia Blackwell GPUs into latest Cray EX