



6 Sources
[1]

AMD debuts Helios rack-scale AI hardware platform at OCP Global Summit 2025 -- promises easier serviceability and 50% more memory than Nvidia's Vera Rubin
Combining AMD Instinct graphics chips, AMD Epyc CPUs, and AMD Pensando network hardware.

AMD has showcased its Helios next-generation AI hardware platform, built for the Meta Open Rack Wide form factor, at the Open Compute Project Global Summit in San Jose, California. Built for streamlined scalability in an AI-first data center environment, this is AMD's big pitch to compete directly with Nvidia's rack-based GPU and CPU combinations, like the Blackwell and Grace-powered GB300 NVL72.

AMD debuted the Helios platform earlier this year, highlighting its ground-up design using all-AMD hardware. It combines AMD's Epyc server CPUs with Instinct MI400-series GPUs, AMD Pensando networking interfaces, and AMD ROCm software, giving it what AMD claims is a big advantage for performance and efficiency in the rapidly scaling AI infrastructure industry.

To enhance that scalability, AMD has now demonstrated it as compliant with the new Open Rack Wide specification from Meta, which was developed to improve power, cooling, and serviceability for AI systems. Supporting this and other OCP standards will allow AMD partners to scale up faster and more effectively, AMD claims.

"Open collaboration is key to scaling AI efficiently," said Forrest Norrod, executive VP and GM of the Data Center Solutions Group, AMD. "With 'Helios,' we're turning open standards into real, deployable systems -- combining AMD Instinct GPUs, EPYC CPUs, and open fabrics to give the industry a flexible, high-performance platform built for the next generation of AI workloads."

Each of the MI450 GPUs deployed as part of the Helios system will have access to up to 432GB of HBM4 memory, with a total bandwidth of 19.6 TB/s. With 72 of those GPUs in each Helios system, it should be able to deliver 1.4 exaFLOPS of FP8 performance, with 31TB of HBM4 memory overall.
AMD even took a swipe at Nvidia in the announcement, claiming that this works out to 50% greater memory capacity than Nvidia's next-generation Vera Rubin will offer.

AMD also announced the first customer for the Helios system. In what's described as an "expansion of their long-standing, multi-generation collaboration," AMD has partnered with Oracle for the first deployment of the Helios rack, with Oracle deploying some 50,000 MI450 GPUs starting in Q3 2026 and more to come in 2027. That's a strong early vote of confidence from customers.

Although consumers will need to wait for Nvidia and AMD to launch next-generation graphics products designed for gaming, AI developers have a big head start and a better idea of what's coming down the pipe. Everyone is scrambling for any compute power they can get for AI data center buildouts, but if AMD's hardware couldn't offer at least credible competition to Nvidia's future Vera Rubin designs, it wouldn't be striking deals like this.
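The rack-level totals quoted above follow arithmetically from the per-GPU figures. As a quick sanity check (illustrative only, using the numbers in the article, not an independent measurement), they compose like this:

```python
# Per-GPU figures quoted for the Instinct MI450 (from the article)
hbm4_per_gpu_gb = 432      # GB of HBM4 per GPU
bw_per_gpu_tbs = 19.6      # TB/s of memory bandwidth per GPU
gpus_per_rack = 72         # GPUs per Helios rack

# Rack-level totals derived from the per-GPU figures
total_hbm4_tb = gpus_per_rack * hbm4_per_gpu_gb / 1000   # GB -> TB
total_bw_pbs = gpus_per_rack * bw_per_gpu_tbs / 1000     # TB/s -> PB/s

print(f"Total HBM4: {total_hbm4_tb:.1f} TB")             # 31.1 TB, matching the quoted ~31TB
print(f"Aggregate bandwidth: {total_bw_pbs:.2f} PB/s")   # 1.41 PB/s, matching the quoted 1.4 PB/s
```

The quoted rack totals are simply 72 times the per-GPU capacity and bandwidth, rounded down to two figures.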
[2]

AMD Helios MI450 rack stole the spotlight at OCP Summit 2025
Meta's custom Helios version contrasted with a distinct power and network strategy

At the recent OCP Summit 2025, AMD's Helios MI450 rack proved to be one of the standout systems on the show floor. ServeTheHome's Patrick Kennedy was there and snapped some photos of the glowing 72-GPU rack, which drew in any engineers and attendees who wandered too close to its orbit. "While the show floor was open, there were crowds of folks just staring at this every time I walked by," Kennedy said.

The Helios system on display was AMD's reference design, valued at roughly $3 million. At the top of the rack were the management switch and power shelves, followed by a stack of compute trays. The layout follows the OCP ORv3 wide rack standard, with a network switch section placed centrally between upper and lower compute layers. According to Kennedy, EDSFF E1.S SSDs were visible on both sides of the compute trays, reflecting the shift away from 2.5-inch U.2 connectors in the PCIe Gen6 generation. Beneath the trays were more power shelves, feeding the 72 GPUs arranged within the frame. The design appeared to be focused on data center deployment, not just display purposes, with clear attention to power delivery and service access.

"AMD has a deal with OpenAI for the MI400 series. It announced a 50,000 GPU deal with Oracle. Furthermore, Meta has a custom rack that at first glance might look similar to AMD's Helios rack, but the swapping of power and scale-out networking in the rack is very different," Kennedy observed.

Meta's custom rack, built on a Rittal frame, sat across from AMD's rack at the OCP Summit. It had four 64-port Ethernet switches at the top instead of power shelves, used more DACs than multimode fiber, and moved power delivery to a sidecar through a horizontal busbar connected at both the top and center of the rack. The contrasting layouts from AMD and Meta show how flexible the Helios concept can be when scaled across operators.
"What is clear is that the AMD Helios AI rack has convinced a number of large AI shops to invest in the solution," Kennedy concluded.
[3]

AMD's Helios AI rack sparks hype at OCP Summit
Liquid cooling and Ethernet fabric highlight the system's serviceability focus

At the recent 2025 Open Compute Project (OCP) Global Summit in San Jose, AMD presented its "Helios" rack-scale platform, built on Meta's newly introduced Open Rack Wide (ORW) standard. The design was described as an open, double-wide framework that aims to improve power efficiency, cooling, and serviceability for artificial intelligence systems. AMD positions "Helios" as a major step toward open and interoperable data center infrastructure, but how much this openness translates into practical industry-wide adoption remains to be seen.

Meta contributed the ORW specification to the OCP community, describing it as a foundation for large-scale AI data centers. The new form factor was developed to address the growing demand for standardized hardware architectures. AMD's "Helios" appears to serve as a test case for this concept, blending Meta's open-rack principles with AMD's own hardware. This collaboration signals a move away from proprietary systems, although reliance on major players like Meta raises questions about the standard's neutrality and accessibility.

The "Helios" system is powered by AMD Instinct MI450 GPUs based on the CDNA architecture, alongside EPYC CPUs and Pensando networking. Each MI450 reportedly offers up to 432GB of high-bandwidth memory and 19.6TB/s of bandwidth, providing capacity for data-hungry AI tools. At full scale, a "Helios" rack equipped with 72 of these GPUs is projected to reach 1.4 exaFLOPS in FP8 and 2.9 exaFLOPS in FP4 precision, supported by 31TB of HBM4 memory and 1.4PB/s of total bandwidth. It also provides up to 260TB/s of internal interconnect throughput and 43TB/s of Ethernet-based scaling capacity. AMD estimates up to 17.9 times higher performance than its previous generation and roughly 50% greater memory capacity and bandwidth than Nvidia's Vera Rubin system.
AMD claims this represents a leap in capacity for AI training and inference, yet these are engineering estimates and not field results. The rack's open design incorporates OCP DC-MHS, UALink, and Ultra Ethernet Consortium frameworks, allowing for both scale-up and scale-out deployment. It also includes liquid cooling and standards-based Ethernet for reliability.

The "Helios" design extends the company's push for openness from chip to rack level, and Oracle's early commitment to deploy 50,000 AMD GPUs suggests commercial interest. Yet broad ecosystem support will determine whether "Helios" becomes a shared industry standard or remains another branded interpretation of open infrastructure. As AMD targets 2026 for volume rollout, the degree to which competitors and partners adopt ORW will reveal whether openness in AI hardware can move beyond concept to genuine practice.

"Open collaboration is key to scaling AI efficiently," said Forrest Norrod, executive vice president and general manager, Data Center Solutions Group, AMD. "With 'Helios,' we're turning open standards into real, deployable systems - combining AMD Instinct GPUs, EPYC CPUs, and open fabrics to give the industry a flexible, high-performance platform built for the next generation of AI workloads."
[4]

AMD teases next-gen Helios rack-scale platform: new EPYC + Instinct chips, battles NVIDIA Rubin
TL;DR: AMD's next-gen Helios AI rack, powered by the CDNA architecture and 72 Instinct MI450 GPUs, delivers up to 2.9 exaFLOPS of performance with 31TB of HBM4 memory and 260TB/sec of interconnect bandwidth. Designed for scalable AI workloads, it offers 17.9x higher performance and enhanced efficiency for modern data centers.

AMD has showcased its next-gen Helios rack-scale AI solution, teaming with Meta and the Open Compute Project community for the most advanced rack-scale reference system from AMD. AMD's next-gen Helios AI rack is powered by the AMD CDNA architecture, with next-gen Instinct MI450 series GPUs rocking up to 432GB of next-gen HBM4 memory and up to 19.6TB/sec of memory bandwidth.

Inside, AMD's new Helios AI rack will house 72 MI450 series AI GPUs that deliver up to 1.4 exaFLOPS of FP8 and 2.9 exaFLOPS of FP4 performance, with 31TB of total HBM4 memory and 1.4PB/s of aggregate bandwidth - a generational leap that enables trillion-parameter training and large-scale AI inference. Helios also sports up to 260TB/sec of scale-up interconnect bandwidth, backed by 43TB/sec of Ethernet-based scale-out bandwidth, ensuring seamless communication between GPUs, nodes, and racks. AMD says its next-gen Helios AI system delivers up to an incredible 17.9x higher performance over previous-gen racks, and 50% more memory capacity and bandwidth than NVIDIA's next-gen Vera Rubin AI system.

Forrest Norrod, executive vice president and general manager, Data Center Solutions Group, explains: "With 'Helios,' we're turning open standards into real, deployable systems - combining AMD Instinct GPUs, EPYC CPUs, and open fabrics to give the industry a flexible, high-performance platform built for the next generation of AI workloads".

Purpose-Built for Modern Data Center Realities

AI data centers are evolving rapidly, demanding architectures that deliver greater performance, efficiency, and serviceability at scale.
"Helios" is purpose-built to meet these needs with innovations that simplify deployment, improve manageability, and sustain performance in dense AI environments:

* Higher scale-out throughput and HBM bandwidth compared to previous generations enable faster model training and inference.
* Double-wide layout reduces weight density and improves serviceability.
* Standards-based Ethernet scale-out ensures multipath resiliency and seamless interoperability.
* Backside quick-disconnect liquid cooling provides sustained, efficient thermal performance at high density.

Together, these features make the AMD "Helios" Rack a deployable, production-ready system for customers scaling to exascale AI - delivering breakthrough performance with operational efficiency and sustainability.
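The compute claims above compose the same way as the memory figures. A small sketch (again illustrative, using only numbers quoted in the article; the NVIDIA figure is a back-calculation from AMD's claim, not a published spec) derives the implied per-GPU throughput and what the "50% more memory" claim implies about a Vera Rubin rack:

```python
gpus = 72
rack_fp8_exaflops = 1.4
rack_fp4_exaflops = 2.9

# Implied per-GPU throughput (1 exaFLOPS = 1000 petaFLOPS)
fp8_per_gpu_pf = rack_fp8_exaflops * 1000 / gpus   # ~19.4 PFLOPS FP8 per GPU
fp4_per_gpu_pf = rack_fp4_exaflops * 1000 / gpus   # ~40.3 PFLOPS FP4 per GPU

# FP4 runs at roughly twice the FP8 rate, consistent with halved precision
fp4_to_fp8_ratio = rack_fp4_exaflops / rack_fp8_exaflops   # ~2.07

# AMD's "50% more memory than Vera Rubin" claim implies roughly
# 31 TB / 1.5 per Vera Rubin rack (back-calculated, not an NVIDIA figure)
implied_vera_rubin_tb = 31 / 1.5

print(f"FP8 per GPU: {fp8_per_gpu_pf:.1f} PFLOPS")                       # 19.4
print(f"FP4 per GPU: {fp4_per_gpu_pf:.1f} PFLOPS")                       # 40.3
print(f"Implied Vera Rubin HBM capacity: {implied_vera_rubin_tb:.1f} TB")  # 20.7
```

These back-of-envelope figures only restate AMD's own numbers; real delivered throughput depends on precision mode, sparsity, and sustained clocks.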
[5]

AMD Showcases Its "Helios" Rack-Scale Platform Featuring Next-Gen EPYC CPUs & Instinct GPUs; Ready to Target NVIDIA's Dominance
AMD has showcased its Helios 'rack-scale' product at the Open Compute Project (OCP) Global Summit, demonstrating the firm's design philosophy for its upcoming AI lineups. For those unaware, AMD announced at the Advancing AI 2025 event that it will be ramping up its rack-scale solutions, and the company claimed that the 'Helios' rack-scale platform will target the likes of NVIDIA's Rubin lineup. Now, at the OCP, the firm showcased a static display of the Helios rack, which is developed on the Open Rack Wide (ORW) specification introduced by Meta. The firm didn't reveal specific details about Helios, apart from the fact that Team Red is very optimistic that the platform will shape up to be a competitive product.

With 'Helios,' we're turning open standards into real, deployable systems -- combining AMD Instinct GPUs, EPYC CPUs, and open fabrics to give the industry a flexible, high-performance platform built for the next generation of AI workloads.

- Forrest Norrod, AMD's EVP and GM, Data Center Solutions Group

Let's discuss AMD's Helios for a bit. We know that the platform will feature next-gen technologies from the firm, such as the EPYC Venice CPUs and the Instinct MI400 AI accelerators. More importantly, for networking capabilities, the rack will employ AMD's Pensando for scale-out. In particular, at OCP, the firm announced that it will focus on an open stack of technologies; to that end, Helios will utilize UALink (scale-up) and UEC Ethernet (scale-out). Another interesting inclusion with Helios is the use of quick-disconnect liquid cooling, which enables high densities and simplifies field service.

The Helios images shared by Team Red at OCP show a 'double wide' ORW rack, with a central equipment bay and service space along the sides. You can also see horizontal compute sleds, a different approach from NVIDIA's Kyber, which we discussed earlier; these sleds account for 70-80% of the rack space.
The two fiber-type cables running along the rack are the 'Aqua' and 'Yellow' runs, which serve different purposes, but the routing here is definitely top-notch. NVIDIA hasn't seen much competition in the rack-scale segment; however, with AMD's Helios, things are expected to change drastically, with Team Red looking to rival NVIDIA's Rubin rack-scale platform. The Helios showcase was definitely a surprise, but it also shows that the AI industry is in for massive competition.
[6]

AMD Showcases "Helios" Rack-Scale AI Platform at OCP Global Summit 2025
AMD showcased a static display of its "Helios" rack-scale platform for the first time in public. Developed based on the new Open Rack Wide (ORW) specification introduced by Meta, "Helios" extends the AMD open hardware philosophy from silicon to system to rack, representing a major step forward in open, interoperable AI infrastructure. Extending AMD's leadership in AI and high-performance computing, "Helios" provides the foundation to deliver the open, scalable infrastructure that will power the world's growing AI demands. Designed to meet the demands of gigawatt-scale data centers, the new ORW specification defines an open, double-wide rack optimized for the power, cooling, and serviceability needs of next-generation AI systems. By adopting ORW and OCP standards, "Helios" provides the industry with a unified, standards-based foundation to develop and deploy efficient, high-performance AI infrastructure at scale.
AMD showcases its Helios rack-scale AI hardware platform at OCP Global Summit 2025, promising enhanced performance, efficiency, and serviceability for AI workloads.
AMD has made a significant move in the AI hardware space with the unveiling of its Helios rack-scale platform at the Open Compute Project (OCP) Global Summit 2025 in San Jose, California. This new offering is positioned to directly compete with NVIDIA's rack-based GPU and CPU combinations, such as the Blackwell and Grace-powered GB300 NVL72 [1].
The Helios platform combines AMD's latest hardware technologies, including:

* AMD Instinct MI450 GPUs
* AMD EPYC server CPUs
* AMD Pensando networking
* AMD ROCm software
Each MI450 GPU boasts up to 432GB of HBM4 memory with a total bandwidth of 19.6 TB/s. A fully equipped Helios system, containing 72 GPUs, is capable of delivering:

* 1.4 exaFLOPS of FP8 performance
* 2.9 exaFLOPS of FP4 performance
* 31TB of total HBM4 memory
* 1.4 PB/s of aggregate bandwidth [3]
AMD claims this configuration offers 50% greater memory capacity than NVIDIA's next-generation Vera Rubin system and up to 17.9 times higher performance than its previous generation [4].

Helios is built on Meta's newly introduced Open Rack Wide (ORW) standard, which aims to improve power efficiency, cooling, and serviceability for AI systems. The design features:

* An open, double-wide rack layout with improved serviceability
* Quick-disconnect liquid cooling for dense deployments
* UALink for scale-up and Ultra Ethernet Consortium standards for scale-out networking [3]
The introduction of Helios represents a significant challenge to NVIDIA's dominance in the AI hardware market. AMD has already secured a major partnership with Oracle, which plans to deploy 50,000 MI450 GPUs starting in Q3 2026, with more to follow in 2027 [1].

Forrest Norrod, Executive VP and GM of the Data Center Solutions Group at AMD, emphasized the importance of open collaboration in scaling AI efficiently. He stated, "With 'Helios,' we're turning open standards into real, deployable systems -- combining AMD Instinct GPUs, EPYC CPUs, and open fabrics to give the industry a flexible, high-performance platform built for the next generation of AI workloads" [5].
The Helios system garnered significant attention at the OCP Summit, with ServeTheHome's Patrick Kennedy noting large crowds gathering around the display. The reference design on show was valued at approximately $3 million [2].

While AMD's Helios platform shows promise, its success will ultimately depend on broad ecosystem support and whether it can truly establish itself as an industry standard rather than just another branded interpretation of open infrastructure [3].