HPE expands AI portfolio with NVIDIA Vera Rubin systems and Blackwell servers at GTC 2026

4 Sources

HPE unveiled a major expansion of its NVIDIA AI Computing by HPE portfolio at GTC 2026, introducing Vera Rubin-based systems, expanded Private Cloud AI capabilities, and new Blackwell servers. The announcements include rack-scale infrastructure for trillion-parameter models, air-gapped configurations for sovereign deployments, and scaling to 128 GPUs as enterprises shift from AI pilots to production-ready deployments.

HPE and NVIDIA Deepen Partnership with Production-Ready AI Infrastructure

Hewlett Packard Enterprise used NVIDIA Corp.'s GTC conference to roll out a comprehensive expansion of its AI portfolio, introducing new systems designed to help enterprises move AI from experimental pilots into production at scale [1]. The announcements span enterprise AI infrastructure, supercomputing, edge computing, and storage, reflecting what HPE executives describe as a fundamental shift in how organizations approach AI deployment [2].

Source: SiliconANGLE

"The AI race is fundamentally about speed, scale, and trust," said Antonio Neri, president and CEO of HPE. "Our industry leadership across cloud, networking, and AI enables organizations to operationalize AI securely, efficiently, and at an unprecedented scale"

3

. Jensen Huang, founder and CEO of NVIDIA, added that the partnership is "setting a new standard for enterprise AI infrastructure" by building AI factories and AI grids that embed intelligence into every workflow

3

.

Vera Rubin Architecture Powers Next-Generation AI Servers

Among the most significant announcements is the introduction of systems built on NVIDIA's Vera Rubin architecture. The NVIDIA Vera Rubin NVL72 by HPE is a rack-scale system designed for models exceeding one trillion parameters, featuring 72 NVIDIA Rubin GPUs, 36 NVIDIA Vera CPUs, sixth-generation NVIDIA NVLink networking, NVIDIA ConnectX-9 SuperNICs, and NVIDIA BlueField-4 DPUs [4]. The platform delivers up to a 10-fold reduction in inference token cost and a fourfold reduction in the number of GPUs required to train mixture-of-experts models compared to prior NVIDIA Blackwell platforms [1].

HPE also unveiled the HPE Cray Supercomputing GX240 Compute Blade, described as the industry's first liquid-cooled compute blade featuring up to 16 NVIDIA Vera CPUs [1]. The system can scale to 640 NVIDIA Vera CPUs with 56,320 ARM cores per rack [2]. The new HPE Compute XD700, an Open Compute Project-inspired AI server based on the NVIDIA HGX Rubin NVL8 liquid-cooled platform, supports up to 128 Rubin GPUs per rack, doubling the density of prior generations [1].

Source: CRN

HPE Private Cloud AI Scales to Meet Enterprise Demands

HPE is expanding HPE Private Cloud AI, its turnkey enterprise AI factory co-engineered with NVIDIA, with new scaling options and air-gapped configurations for regulated industries [1]. The system can now scale to 128 GPUs through networked expansion racks, up from 64 GPUs, providing enterprises with additional headroom for expanding AI workloads [2].

The new air-gapped configuration ensures that data remains private and is not exposed to the internet, targeting sovereign, finance, defense, healthcare, and other regulated industries. "This is about sovereign, finance, defense, health care, regulated industries -- more and more they are looking for solutions to have right-sized AI but actually be able to grow that in a completely private environment that is actually not connected to the public internet," said Dale Brown, global head of growth for AI solution sales at HPE [2].

HPE Private Cloud AI now supports NVIDIA's AI-Q agentic AI blueprint and Omniverse digital twin software, along with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs across all configurations [3]. Organizations such as the Ryder Cup, Danfoss, and the Dallas Cowboys are using the platform to accelerate AI initiatives [3].

From Pilots to Production: Addressing Enterprise AI Economics

The expansion reflects a broader shift in enterprise AI strategy, according to HPE executives. "The winners are not just optimizing around isolated projects," Brown said in a briefing. "They're standardizing how AI operates inside their enterprise" [1]. This shift is driven by growing concerns about AI economics and return on investment: enterprises increasingly want predictable paths from infrastructure spending to business outcomes rather than proofs of concept [1].

HPE is also broadening support for NVIDIA Blackwell GPUs across its portfolio. NVIDIA RTX PRO 4500 Blackwell Server Edition GPUs are now available in more ProLiant server models aimed at edge deployments, small language models, vector databases, and analytics [1]. Brown noted that edge deployments reflect a pragmatic view of enterprise AI: retail customers seeking loss-prevention solutions, for example, don't need a full-scale turnkey AI factory but can use hardened patterns built on larger systems and deployed in a small footprint [1].

Storage and Networking Enhancements Complete the Stack

On the storage side, HPE said its Alletra Storage MP X10000 is the first object-based platform to be certified by NVIDIA at the foundation level, with performance validated for configurations of up to 128 GPUs [1]. HPE asserted that AI infrastructure bottlenecks increasingly lie in the data pipeline and inference context layer, not just in raw computing [1].

The company added NVIDIA Quantum-X800 InfiniBand networking to its HPE Cray Supercomputing GX5000 platform, providing 144 ports of 800 Gb/s connectivity [4]. The portfolio also supports multi-tenancy through NVIDIA Multi-Instance GPU, NVIDIA Mission Control software, and Red Hat Enterprise Linux and OpenShift integration, along with HPE services for data center design, deployment, and operations [1].

The $29 billion technology company has seen revenue grow 14.5% over the last twelve months to $35.7 billion, reflecting strong demand for secure and scalable AI infrastructure [4]. Organizations using HPE AI infrastructure include Argonne National Laboratory, HLRS, Hudson River Trading, and the Korea Institute of Science and Technology Information [4]. The NVIDIA Vera Rubin NVL72 system will be available in December 2026, while the HPE Cray Supercomputing GX240 Compute Blade and NVIDIA Quantum-X800 InfiniBand networking will be available in 2027 [4].
