HPE expands AI portfolio with Nvidia Blackwell and Vera Rubin systems for enterprise deployment


Hewlett Packard Enterprise unveiled a major expansion of its AI infrastructure at Nvidia's GTC conference, introducing new systems built on Blackwell and upcoming Vera Rubin GPUs. The company is scaling its Private Cloud AI service to 128 GPUs and launching air-gapped configurations for sovereign deployments. HPE's Alletra Storage MP X10000 became the first object storage platform to achieve Nvidia-Certified Storage validation.

HPE and Nvidia Deepen Partnership with New AI Infrastructure

Hewlett Packard Enterprise has announced a significant expansion of its Nvidia-based AI portfolio at the GTC 2026 conference, introducing new systems built on the NVIDIA Blackwell and upcoming Vera Rubin GPU architectures.[1][2] The announcement reflects a strategic push to help enterprises move AI from experimental pilots into production environments with repeatable, governed systems that deliver measurable returns. HPE president and CEO Antonio Neri emphasized that "the AI race is fundamentally about speed, scale, and trust," positioning the company's industry leadership across cloud, networking, and AI as enabling organizations to operationalize AI securely and efficiently.[1]

Source: CRN

Private Cloud AI Scales to Meet Enterprise Demands

HPE is expanding its Private Cloud AI turnkey system, co-engineered with Nvidia, to support up to 128 GPUs through networked expansion racks, doubling the previous 64-GPU capacity.[3] Dale Brown, HPE's global head of growth for AI solution sales, explained that enterprises are no longer satisfied with isolated AI experiments and instead want standardized systems that deliver a predictable path from infrastructure spending to business outcomes.[2] The system now supports NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs across all configurations and incorporates the latest NVIDIA AI-Q blueprint for agentic AI and the Omniverse blueprint for digital twins.[4]

Air-Gapped Configuration Addresses Sovereignty Concerns

Addressing growing demand for secure, fully isolated deployments, HPE introduced an air-gapped configuration of its Private Cloud AI system.[3] This configuration ensures that sensitive data remains private and is never exposed to external networks, targeting sovereign, finance, defense, healthcare, and other regulated industries.[3] The company is building and installing HammerHAI, the supercomputer for the European Union AI Factory, which will allow organizations to scale AI initiatives while adhering to regional data sovereignty and compliance requirements.[1] Dr. Bastian Koller, managing director of the High-Performance Computing Center at the University of Stuttgart, noted that HammerHAI will offer an alternative to commercial cloud AI services in which data sovereignty is difficult to ensure.[1]

Vera Rubin Systems Target Trillion-Parameter Models

HPE unveiled the Nvidia Vera Rubin NVL72 by HPE, a rack-scale system designed for models exceeding 1 trillion parameters, featuring support for 72 Nvidia Rubin GPUs.[3] The Nvidia Rubin platform delivers up to a 10-fold reduction in inference token cost and a fourfold reduction in the number of GPUs required to train mixture-of-experts models compared to prior Blackwell platforms.[2] The company also introduced the HPE Compute XD700, an Open Compute Project-inspired AI server based on the Nvidia HGX Rubin NVL8 liquid-cooled AI platform, supporting up to 128 Rubin GPUs per rack and doubling the density of prior generations.[5]

Supercomputing Gets Liquid-Cooled Vera CPU Blade

The HPE Cray Supercomputing GX240 Compute Blade represents what the company believes is the industry's first liquid-cooled compute blade featuring up to 16 Nvidia Vera CPUs.[2] The system can scale to 40 blades per rack, for 640 Nvidia Vera CPUs and 56,320 Arm cores.[5] HPE also added Nvidia Quantum-X800 InfiniBand networking to its HPE Cray Supercomputing GX5000 platform, providing 144 ports at 800 Gb/s each.[5] These supercomputing offerings target model builders, service providers, and sovereign AI initiatives.
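The rack-scale figures above are internally consistent, as a quick back-of-the-envelope check shows (note that the per-CPU core count is derived from the article's totals, not stated directly in the text):

```python
# Sanity check of the HPE Cray GX240 / GX5000 rack figures quoted above.
BLADES_PER_RACK = 40
CPUS_PER_BLADE = 16               # up to 16 Nvidia Vera CPUs per GX240 blade

cpus_per_rack = BLADES_PER_RACK * CPUS_PER_BLADE
print(cpus_per_rack)              # 640, matching the article

CORES_PER_RACK = 56_320
cores_per_cpu = CORES_PER_RACK // cpus_per_rack
print(cores_per_cpu)              # 88 Arm cores per Vera CPU (derived, not stated)

# Quantum-X800 switching: 144 ports at 800 Gb/s each
aggregate_tbps = 144 * 800 / 1000
print(aggregate_tbps)             # 115.2 Tb/s aggregate per-switch bandwidth
```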

Storage Breakthrough with Nvidia Certification

HPE's Alletra Storage MP X10000 became the first object storage platform to achieve Nvidia-Certified Storage validation at the Foundation level.[1] Nvidia has validated and benchmarked the array's performance for workloads of up to 128 GPUs, conducted functional tests for enterprise-grade availability and reliability, and confirmed that the storage layer efficiently feeds data to accelerated computing resources.[1] HPE demonstrated a 17x improvement in index build time and an 8x improvement in total end-to-end pipeline transport using a single Nvidia H100 GPU and accelerated remote direct memory access (RDMA).[1] The company is evolving the X10000 to support the new Nvidia STX rack-scale reference architecture with Vera Rubin accelerators, BlueField-4 DPUs, Spectrum-X networking, and ConnectX NICs.[1]

Edge Deployments and Multi-Workload AI Solutions

HPE is broadening support for Nvidia Blackwell GPUs across its portfolio, with NVIDIA RTX PRO 4500 Blackwell Server Edition GPUs available in ProLiant server models aimed at edge deployments, small language models, vector databases, and analytics.[2] Brown explained that edge deployments reflect a pragmatic view of enterprise AI, noting that retail customers focused on loss prevention don't need a full-scale turnkey AI Factory in each location.[2] New multi-workload AI solutions co-designed with Nvidia simplify deployment of use cases for autonomous edge intelligence, retail shopping assistance, video search and summarization, and biomedical research.[4]

AI Grid Connects Distributed Inference Sites

HPE introduced an AI Grid, an end-to-end offering built on an Nvidia reference architecture to connect AI factories and distributed inference clusters across regional and far-edge sites.[1] The AI Grid enables service providers to deploy and operate thousands of distributed inference sites, turning AI installations into a single intelligent system and converting existing sites with power and connectivity into RAN-ready AI grids.[1] This infrastructure approach addresses the shift toward more operational, distributed, and tightly governed AI adoption that HPE executives are observing in the market.

Market Context and Customer Adoption

The announcements come as HPE, a $29 billion technology company, has seen revenue surge 14.5% over the last twelve months to $35.7 billion, reflecting strong demand for AI infrastructure.[5] Industry leaders including the Ryder Cup, Danfoss, and the Dallas Cowboys are leveraging HPE Private Cloud AI to accelerate transformative AI initiatives.[4] Organizations using HPE AI infrastructure also include Argonne National Laboratory, HLRS, Hudson River Trading, and the Korea Institute of Science and Technology Information.[5] Support for RTX PRO 4500 Blackwell Server Edition GPUs across the ProLiant Compute server portfolio will roll out in Q1 and Q2 2026, the HPE Cray Supercomputing GX240 Compute Blade will be available in 2027, and the Nvidia Vera Rubin NVL72 system will arrive in December 2026.[1][5]

Source: SiliconANGLE
