HPE adopts AMD's Helios rack architecture with custom Ethernet switch for 2026 AI systems


HPE becomes one of the first major OEM partners for AMD's Helios rack-scale AI platform, bringing 72-GPU systems to market in 2026. The collaboration features a purpose-built Juniper scale-up Ethernet switch developed with Broadcom, targeting up to 2.9 exaFLOPS per rack. Built on open standards, Helios positions itself as an alternative to Nvidia's GB200 NVL72 platform.

HPE and AMD Forge Path to Open Rack-Scale AI Infrastructure

HPE announced it will become one of the first major OEM partners to adopt AMD's Helios architecture, bringing complete rack-scale AI systems to market in 2026 [1][2]. The partnership deepens a collaboration spanning more than a decade and positions both companies to challenge Nvidia's dominance in the data center AI market. AMD CEO Lisa Su recently outlined a "very clear path" to capturing double-digit market share, with expectations to generate tens of billions of dollars in annual revenue from Instinct GPUs and related products by 2027 [3].

Source: DT

AMD's Helios architecture represents a comprehensive rack-scale AI system designed to compete directly with Nvidia's GB200 NVL72 platform. Each rack will pack 72 Instinct MI455X GPUs alongside next-generation EPYC "Venice" CPUs, delivering up to 2.9 exaFLOPS of FP4 performance and 31 TB of HBM4 memory [1][3]. Built on the Open Compute Project's Open Rack Wide mechanical standard, Helios embraces an open, standards-driven approach that contrasts sharply with proprietary alternatives.
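For a rough sense of scale, the headline rack figures can be divided across the 72 GPUs. The per-GPU numbers below are illustrative back-of-envelope estimates assuming an even split, not AMD-published specifications:

```python
# Back-of-envelope per-GPU figures for a Helios rack, assuming the
# headline rack numbers are spread evenly across 72 MI455X GPUs.
GPUS_PER_RACK = 72
RACK_FP4_EXAFLOPS = 2.9   # up to 2.9 exaFLOPS FP4 per rack
RACK_HBM4_TB = 31         # 31 TB of HBM4 per rack

fp4_pflops_per_gpu = RACK_FP4_EXAFLOPS * 1000 / GPUS_PER_RACK  # exa -> peta
hbm4_gb_per_gpu = RACK_HBM4_TB * 1000 / GPUS_PER_RACK          # TB -> GB

print(f"~{fp4_pflops_per_gpu:.0f} PFLOPS FP4 per GPU")  # ~40 PFLOPS
print(f"~{hbm4_gb_per_gpu:.0f} GB HBM4 per GPU")        # ~431 GB
```

That works out to roughly 40 PFLOPS of FP4 compute and around 430 GB of HBM4 per GPU, if the rack totals are shared evenly.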

Source: Tom's Hardware

Custom Juniper Scale-Up Ethernet Switch Powers GPU Interconnect

A defining feature of HPE's implementation is a purpose-built scale-up Ethernet switch developed through collaboration between HPE's Juniper Networks subsidiary and Broadcom [2][3]. The switch leverages Broadcom's Tomahawk 6 networking chip, which provides 102.4 Tbps of aggregate bandwidth, and implements Ultra Accelerator Link over Ethernet (UALoE) for high-speed GPU interconnect [2]. This approach enables 260 TBps of scale-up bandwidth across all 72 GPUs within a single rack [3].
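As a rough illustration, dividing the quoted aggregate evenly across the rack gives a per-GPU share of scale-up bandwidth. This assumes a uniform split across all 72 GPUs, which is an estimate rather than a published per-GPU figure:

```python
# Rough per-GPU scale-up bandwidth, assuming the quoted 260 TBps
# aggregate is shared evenly across the rack's 72 GPUs (illustrative).
GPUS_PER_RACK = 72
RACK_SCALE_UP_TBPS = 260

tbps_per_gpu = RACK_SCALE_UP_TBPS / GPUS_PER_RACK
print(f"~{tbps_per_gpu:.1f} TBps of scale-up bandwidth per GPU")  # ~3.6 TBps
```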

Rami Rahim, president and general manager of HPE's networking business and former CEO of Juniper Networks, emphasized the significance: "This is an industry first scale-up solution using Ethernet, standard Ethernet. So that means it's 100 percent open standard and avoids proprietary vendor lock-in" [2]. The decision to run the Ultra Accelerator Link protocol over a standard Ethernet fabric eliminates the need for specialized UALink hardware while maintaining the performance benefits. This strategy directly challenges Nvidia's NVLink-centric approach, in which Blackwell GPUs remain within a proprietary NVLink domain for scale-up connectivity [1].

Targeting Cloud Service Providers and Neoclouds

HPE positions its Helios-based systems primarily toward cloud service providers and emerging neoclouds, the smaller cloud providers focused on AI compute infrastructure [2][3]. The rack-scale systems target trillion-parameter model training and high-throughput inference. With Nvidia's comparable GB200 NVL72 systems reportedly selling for nearly $3.5 million each, pricing will likely remain a significant consideration for potential customers [2].

AMD continues to focus on "high-touch" engagements with major customers including Meta, OpenAI, Microsoft, and xAI rather than broad channel distribution [3]. Alexey Stolyar, CTO at International Computing Concepts, noted seeing "some momentum" for Instinct-based solutions among neocloud customers, though he cautioned about potential challenges in fully utilizing scale-up capabilities based on experiences with similar platforms [3].

Supercomputer Deployment Signals HPC Expansion

Beyond commercial AI deployments, the partnership extends into high-performance computing. The High-Performance Computing Center Stuttgart (HLRS) selected HPE's Cray GX5000 platform for its next flagship supercomputer, named Herder, scheduled for delivery in the second half of 2027 [1]. The system will feature MI430X GPUs and EPYC "Venice" CPUs across direct liquid-cooled blades, replacing the current Hunter system. HPE highlighted the environmental benefits, noting that waste heat from the GX5000 racks will warm buildings on the University of Stuttgart's Vaihingen campus [1].

Antonio Neri, president and CEO at HPE, stated: "For more than a decade, HPE and AMD have pushed the boundaries of supercomputing, delivering multiple exascale-class systems and championing open standards that accelerate innovation" [4]. The open rack-scale AI infrastructure approach, unified through AMD's ROCm software ecosystem and incorporating AMD Pensando networking technology, aims to provide flexibility across AI and HPC workloads while potentially reducing total cost of ownership [3][4]. As AMD and HPE prepare for worldwide availability in 2026, the success of this open standards approach will depend on whether customers prioritize flexibility and vendor independence over established ecosystems.
