AMD unveils Helios rack-scale AI system with 72 MI455X accelerators and 256-core EPYC Venice

AMD revealed its Helios rack-scale AI system at CES, featuring 72 Instinct MI455X accelerators with 31 TB of HBM4 memory and delivering 2.9 FP4 exaflops for AI inference. The system pairs with AMD's 256-core EPYC Venice CPUs and targets hyperscalers like OpenAI, xAI, and Meta. AMD also introduced the MI400-series family, including MI430X for HPC and MI440X for enterprise deployments.

AMD Introduces Helios Rack-Scale AI System at CES

AMD used its CES keynote to unveil Helios, the company's first rack-scale AI system designed to meet the escalating compute demands of generative AI applications. CEO Lisa Su presented the hardware against a backdrop that emphasized the scale of AI's growth, noting that the world used one zettaflop of computing power on AI in 2022 compared to 100 zettaflops in 2025 [4]. The Helios AI rack packs 72 Instinct MI455X accelerators with 31 TB of HBM4 memory and aggregate memory bandwidth of 1.4 PB/s, delivering up to 2.9 FP4 exaflops for AI inference and 1.4 FP8 exaflops for AI training [1]. Each rack weighs nearly 7,000 pounds and features 4,600 Zen 6 CPU cores and 18,000 GPU compute units [4]. OpenAI, xAI, and Meta are expected to deploy these systems at scale [2], positioning AMD to compete directly with Nvidia in the hyperscale AI market.

Source: Lifehacker


MI455X and the Broader MI400-Series Family

The MI455X accelerator at the heart of Helios represents a significant architectural leap for AMD. Lisa Su revealed the chip package on stage, showing 12 3D-stacked I/O and compute dies fabricated on TSMC's 2nm and 3nm process nodes, fed by what appear to be 12 stacks of 36 GB HBM4 [2]. Each MI455X promises around 40 petaFLOPS of dense FP4 inference performance or 20 petaFLOPS of FP8 for training, with 432 GB of HBM4 delivering 19.6 TB/s of memory bandwidth, plus 3.6 TB/s of interconnect bandwidth for chip-to-chip communications [2]. The broader Instinct MI400X family features compute chiplets produced on TSMC's N2 fabrication process, making them the first GPUs to use this manufacturing technology [1]. For the first time, the MI400X family splits across different subsets of the CDNA 5 architecture: the MI440X and MI455X are optimized for low-precision AI workloads such as FP4, FP8, and BF16, while the MI430X targets both sovereign AI and HPC with full FP32 and FP64 support [1]. This tailored approach allows AMD to eliminate redundant execution logic and improve silicon efficiency.
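As a quick sanity check (simple arithmetic on the figures quoted in this article, not additional AMD data), the per-GPU MI455X numbers line up with the rack-level Helios claims once multiplied across 72 accelerators:

```python
# Cross-check: per-GPU MI455X figures from the article, scaled to one
# 72-GPU Helios rack. Small gaps vs. AMD's headline numbers are rounding.

GPUS_PER_RACK = 72

hbm_per_gpu_gb = 12 * 36       # 12 stacks x 36 GB HBM4 = 432 GB per GPU
fp4_per_gpu_pflops = 40        # dense FP4 inference per GPU
fp8_per_gpu_pflops = 20        # FP8 training per GPU
bw_per_gpu_tbs = 19.6          # HBM4 bandwidth per GPU

rack_hbm_tb = GPUS_PER_RACK * hbm_per_gpu_gb / 1000          # ~31.1 TB (claimed: 31 TB)
rack_fp4_eflops = GPUS_PER_RACK * fp4_per_gpu_pflops / 1000  # 2.88 EF (claimed: 2.9)
rack_fp8_eflops = GPUS_PER_RACK * fp8_per_gpu_pflops / 1000  # 1.44 EF (claimed: 1.4)
rack_bw_pbs = GPUS_PER_RACK * bw_per_gpu_tbs / 1000          # ~1.41 PB/s (claimed: 1.4)

print(rack_hbm_tb, rack_fp4_eflops, rack_fp8_eflops, rack_bw_pbs)
```

The claimed 31 TB / 2.9 EF / 1.4 EF / 1.4 PB/s figures are thus consistent with the per-GPU specs, rounded.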

EPYC Venice Powers Helios Infrastructure

Helios employs AMD's next-generation EPYC Venice CPU, with one Venice processor for every four MI455X GPUs forming a compute node [2]. The most striking detail is Venice's configuration: 256 cores and 512 threads in a single processor package [5]. The chiplet breakdown points to a design using up to eight compute chiplets flanking centralized I/O silicon, with each CCD carrying 32 Zen 6 cores on a 2nm process [5]. Venice features a 16-channel DDR5 memory interface with 32 sub-channels, and the platform is expected to deliver twice the memory bandwidth and GPU bandwidth of previous generations [2]. This likely translates to 128 lanes of PCIe 6.0 connectivity and DDR5-8800 memory support [2]. The package uses two server I/O dies rather than one, simplifying physical routing and distributing memory controllers and high-speed interfaces more evenly across the substrate [5].
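The node layout described above also reconciles with the rack-level core count quoted earlier (again, just arithmetic on the article's figures, not additional AMD data):

```python
# Helios node math: one EPYC Venice CPU per four MI455X GPUs, 72 GPUs per
# rack, 256 cores / 512 threads per Venice package (figures from the article).

gpus_per_rack = 72
gpus_per_node = 4
cores_per_cpu = 256
threads_per_core = 2
cores_per_ccd = 32             # Zen 6 cores per compute chiplet

nodes_per_rack = gpus_per_rack // gpus_per_node           # 18 compute nodes
cpu_cores_per_rack = nodes_per_rack * cores_per_cpu       # 4608 cores
threads_per_rack = cpu_cores_per_rack * threads_per_core  # 9216 CPU threads
ccds_per_cpu = cores_per_cpu // cores_per_ccd             # 8 compute chiplets

print(nodes_per_rack, cpu_cores_per_rack, threads_per_rack, ccds_per_cpu)
```

That yields 18 Venice CPUs and 4,608 Zen 6 cores per rack, matching the "4,600 Zen 6 CPU cores" headline figure, and the 256-core package divides evenly into the eight 32-core CCDs described above.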

Source: Guru3D


Connectivity and Ecosystem Considerations

The MI430X, MI440X, and MI455X AI accelerators are expected to feature Infinity Fabric alongside UALink for scale-up connectivity, making them the first accelerators to support the new interconnect [1]. However, practical UALink adoption depends on ecosystem partners such as Astera Labs, Auradine, Enfabrica, and Xconn delivering UALink switching silicon in the second half of 2026 [1]. Without these switches, UALink-based systems may need to rely on UALink-over-Ethernet or traditional mesh configurations. For scale-out connectivity, AMD plans to offer Helios with Ultra Ethernet, leveraging existing networking adapters such as AMD's Pensando Pollara 400G and the forthcoming Pensando Vulcano 800G cards [1].

Enterprise and Sovereign AI Platforms

Beyond Helios, AMD introduced platforms tailored for different deployment scenarios. The MI440X powers AMD's new Enterprise AI platform, a standard rack-mounted server with one EPYC Venice CPU and eight MI440X GPUs, designed for on-premises enterprise AI deployments [1]. This system maintains drop-in compatibility with the power and cooling requirements of existing data centers. AMD will also offer a sovereign AI and HPC platform based on EPYC Venice-X processors, which add extra cache and additional single-thread performance, paired with Instinct MI430X accelerators that can process both low-precision AI data and high-precision HPC workloads [1].

Looking Ahead to MI500 and Competitive Landscape

AMD also teased its next-generation MI500-series accelerators, with Lisa Su claiming a 1,000x uplift in performance over the two-year-old MI300X GPUs [2]. However, AMD clarified that these estimates compare an eight-GPU MI300X node to an MI500 rack system with an unspecified number of GPUs [2]. The MI500-series will ship in 2027, pairing TSMC's 2nm process with AMD's CDNA 6 compute architecture and HBM4e memory [2]. To remain competitive with Nvidia, the MI500-series will need to match or exceed Nvidia's Rubin Ultra Kyber racks, which promise 15 exaflops of FP4 compute, 5 exaflops of FP8 for training, 144 TB of HBM4e, and 4.6 PB/s of memory bandwidth [2]. The announcements come as AI infrastructure faces scrutiny over power consumption, environmental impact, and the proliferation of AI-generated content that critics argue spreads misinformation [4].

Source: The Register

© 2026 TheOutpost.AI All rights reserved