4 Sources
[1]
HPE broadens AI factory with new Blackwell servers, private cloud upgrades and Rubin infrastructure - SiliconANGLE
Hewlett Packard Enterprise Co. used Nvidia Corp.'s GTC conference today to roll out a broad expansion of its artificial intelligence portfolio, introducing new systems for enterprise, sovereign and high-performance computing deployments as it positions itself as a full-stack supplier for organizations moving AI from pilots into production. The announcements span private cloud systems, edge servers, large-scale AI factory infrastructure and storage. HPE said enterprises are no longer satisfied with isolated AI experiments and instead want repeatable, governed systems that can deliver measurable returns. "The winners are not just optimizing around isolated projects," Dale Brown, HPE's global head of growth for AI solution sales, said in a briefing. "They're standardizing how AI operates inside their enterprise." Brown said that shift is being driven by growing anxiety about the economics of AI. "It's a little bit more fragile how the return on investment is showing up," he said, adding that enterprises increasingly want predictable paths from infrastructure spending to business outcomes, rather than proofs of concept. "The real shift is from one-off projects to how to [repeat successful projects] again and again and again," he said. In response, HPE is expanding the Private Cloud AI turnkey system co-engineered with Nvidia, with new scaling options, air-gapped configurations for regulated industries and support for Nvidia's AI-Q agentic AI blueprint and Omniverse digital twin software. The system can now scale to 128 graphics processing units through networked expansion racks. It's positioned for both sovereign and regulated use cases such as finance, defense and healthcare. Brown said the design goal is to make enterprise AI easier to operationalize for organizations that may lack deep in-house expertise. 
"We've removed that level of friction of designing it, integrating it, testing it, deploying it, onboarding people," he said. "All of that is in Private Cloud AI." The company is also broadening support for Nvidia Blackwell GPUs across its portfolio. Nvidia's RTX Pro 6000 Blackwell Server Edition GPUs are now standardized across HPE AI factory configurations and RTX Pro 4500 Blackwell Server Edition GPUs are available in more ProLiant server models aimed at edge deployments, small language models, vector databases and analytics. Brown said edge deployments reflect a pragmatic view of enterprise AI. "If I'm a retail customer and I want to talk about loss prevention, I don't need a full-scale turnkey AI factory in a retail location," he said. A better solution is to use hardened and repeatable patterns built on larger systems and deployed in a small footprint. HPE also unveiled new infrastructure aimed at model builders, service providers and sovereign AI initiatives. Among them is the HPE Cray Supercomputing GX240 Compute Blade, which the company believes is "the industry's first liquid-cooled compute blade featuring up to 16 Nvidia Vera CPUs," said Chris Davidson, vice president of HPC and AI customer solutions. The company is also introducing the Nvidia Vera Rubin NVL72 by HPE, a rack-scale system built for models with more than a trillion parameters. The new HPE Compute XD700 is an Open Compute Project-inspired AI server based on the Nvidia HGX Rubin NVL8 liquid-cooled AI platform. Davidson said the server continues themes that run across today's launch: density, liquid cooling and time-to-value. The Nvidia Rubin platform is said to deliver up to a 10-fold reduction in inference token cost and a fourfold reduction in the number of GPUs required to train mixture-of-experts models compared to prior Blackwell platforms. 
The design allows up to 128 Rubin GPUs per rack and doubles the density of prior generations, Davidson said, translating into "more training, more inference and more performance per dollar and per watt." HPE is pairing the hardware introductions with software and services meant to make shared AI infrastructure easier to operate. The company said its AI Factory portfolio will support multi-tenancy models through Nvidia Multi-Instance GPU, Nvidia Mission Control software, and Red Hat Enterprise Linux and OpenShift integration, along with HPE services for data center design, deployment and operations. The strategy is to support customers "anywhere along their AI journey," Davidson said. On the storage side, HPE said its Alletra Storage MP X10000 is the first object-based platform to be certified by Nvidia at the foundation level, with performance validated for configurations of up to 128 GPUs. HPE asserted that AI infrastructure bottlenecks increasingly lie in the data pipeline and inference context layer, not just in raw computing. HPE executives said they see AI adoption becoming more operational, more distributed and more tightly governed. Brown said customers are under pressure from boards and executives to use AI, but want "low-risk decisions" and detailed, use-case-specific guidance. HPE's response is a sprawling but increasingly integrated set of systems, software and services meant to cover everything from an air-gapped enterprise rack to a sovereign supercomputing cluster.
[2]
Nvidia GTC 2026: HPE Unveils Vera Rubin Systems, Expands Private Cloud AI Portfolio
HPE made a major push at Nvidia GTC 2026, rolling out a broad set of new AI systems and services built around Nvidia's Vera Rubin architecture alongside expanded capabilities for its HPE Private Cloud AI cloud service -- including a new air-gapped configuration for sovereign and regulated environments. Among the highlights are new HPE Cray Supercomputing systems based on Nvidia Vera CPUs, a rack-scale Vera Rubin NVL72 system by HPE, a new Open Compute Project-inspired AI server, and expanded GPU scaling for HPE Private Cloud AI. Among the new Vera Rubin-based offerings is the HPE Cray Supercomputing GX240 compute blade for the Cray Supercomputing GX5000, based on the Nvidia Vera CPU. The liquid-cooled HPE Cray Supercomputing GX240 starts with 16 Nvidia Vera CPUs but scales up to a mind-boggling 640 Nvidia Vera CPUs with 56,320 Arm cores per rack. Designed for trillion-parameter AI models and beyond, the new Nvidia Vera Rubin NVL72 by HPE features support for 72 Nvidia Rubin GPUs, delivering up to a 10X reduction in inference token costs and a 4X reduction in the number of GPUs needed to train models compared with Nvidia Blackwell systems. The new HPE Compute XD700 -- billed as an Open Compute Project-inspired AI server -- is aimed at providing superior rack density and performance, supporting up to 128 Rubin GPUs per rack, HPE said. HPE is also launching an air-gapped configuration of its popular HPE Private Cloud AI turnkey cloud service for sovereign deployments. The new air-gapped AI configuration ensures that data remains private and is not exposed to the internet, said Dale Brown, global head of growth for AI solutions at HPE. 
"This is about sovereign, finance, defense, health care, regulated industries -- more and more they are looking for solutions to have right-sized AI but actually be able to grow that in a completely private environment that is actually not connected to the public internet," said Brown. HPE is also expanding its Private Cloud AI turnkey cloud service aimed at enterprise customers to support up to 128 GPUs, up from 64 GPUs. The support for additional CPUs allows enterprise customers to scale their enterprise solutions as they fine-tune their AI solutions. The additional GPU support delivers the ability for enterprise customers to "grow their AI solutions as they see fit," providing "headroom" for expanding AI solutions, said Brown. Ultimately, the HPE-Nvidia partnership is all about "removing friction, repeatable patterns, speeding time to value with proven best tools and techniques between our companies," he said. "We are super proud of how we have co-engineered these solutions consistently again and again." Here are the new HPE products and services unveiled at Nvidia GTC 2026.
[3]
HPE accelerates secure, scalable production-ready AI through new innovations with NVIDIA
New AI servers, turnkey systems, services, and software -- powered by NVIDIA Blackwell and Rubin acceleration -- deliver secure, predictable, repeatable outcomes. HPE Alletra Storage MP X10000 becomes the first NVIDIA-Certified Storage object-based platform, validating that it delivers the performance, security, and scale required by AI.
HPE today announced a significant expansion of the NVIDIA AI Computing by HPE portfolio, redefining how enterprises deploy, operationalize, and scale AI. Through its deep partnership and co-engineering with NVIDIA, HPE delivers an advanced portfolio of integrated and validated systems that speed time to value for AI while addressing scale, security, and governance requirements. "The AI race is fundamentally about speed, scale, and trust," said Antonio Neri, president and CEO, HPE. "Our industry leadership across cloud, networking, and AI enables organizations to operationalize AI securely, efficiently, and at an unprecedented scale. Together with NVIDIA, HPE delivers turnkey AI factories and networks that transform AI ambitions into real enterprise value." "NVIDIA and HPE are setting a new standard for enterprise AI infrastructure," said Jensen Huang, founder and CEO, NVIDIA. "HPE's leadership across private cloud, networking, and secure on-prem systems uniquely positions them to make AI a core enterprise capability. Together, we are building AI factories and AI grids -- foundational infrastructure to embed intelligence into every workflow."
HPE expands secure, repeatable, and efficient AI architectures to simplify enterprise adoption
While foundation models are driving massive scale in datacenters, enterprises face the challenge of securely operationalizing AI across the organization. HPE is addressing this challenge with a comprehensive refresh of security-focused AI solutions with NVIDIA for predictable, repeatable AI success for enterprises. 
HPE is expanding HPE Private Cloud AI, its turnkey enterprise AI factory co-engineered with NVIDIA, to deliver greater performance, scalability, and flexibility for enterprise inferencing. Industry leaders such as the Ryder Cup, Danfoss, and the Dallas Cowboys are leveraging HPE Private Cloud AI to accelerate transformative AI initiatives across their organizations.
* New network expansion racks enable HPE Private Cloud AI deployments to scale up to 128 GPUs for customers to run larger, more demanding AI workloads with the same consistent operational experience.
* To meet increasing demand for secure, fully isolated or sovereign deployments, the large HPE Private Cloud AI system is now available in an air-gapped configuration, ensuring sensitive data is not exposed to external networks.
* HPE ProLiant Compute DL380a Gen12 servers and HPE Private Cloud AI systems based on the DL380a are being certified for Fortanix Confidential AI, a joint solution leveraging NVIDIA Confidential Computing that enables secure on-premises deployment of AI models and processing of sensitive data without exposure.
* CrowdStrike delivers agentic security for HPE Private Cloud AI, providing AI-powered threat detection and response to protect AI infrastructure, models, and the AI agents operating across enterprise environments, enabling organizations to operationalize agentic AI securely and at scale.
* HPE Private Cloud AI delivers a pre-configured hardware and software stack featuring the latest NVIDIA AI Enterprise software and blueprints, including the updated NVIDIA AI-Q blueprint for AI agents and the new NVIDIA Omniverse blueprint for digital twins. The latest NVIDIA AI-Q blueprint enables developers to build fully customizable AI agents that they own, inspect, and control. 
* HPE is updating HPE Private Cloud AI, the latest HPE ProLiant servers, and HPE AI factories to support the latest NVIDIA Nemotron open models -- part of the NVIDIA Agent Toolkit -- to simplify deployment of secure, on-prem and sovereign infrastructure and quickly deliver scalable, production-ready outcomes.
* NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs are available across all configurations of HPE Private Cloud AI and HPE's AI factory solutions.
HPE adds new AI solutions for retail, medical research, and manufacturing
New NVIDIA co-designed multi-workload solutions simplify deployment of AI use cases for autonomous edge intelligence, retail shopping assistance, video search and summarization, and biomedical research. Each of these solutions combines HPE ProLiant Compute servers with NVIDIA accelerated computing, NVIDIA Spectrum-X Ethernet networking, BlueField DPUs, and ConnectX NICs, based on NVIDIA Enterprise Reference Architectures. The solutions also incorporate NVIDIA AI Enterprise software, NVIDIA CUDA-X libraries, blueprints, confidential computing, Multi-Instance GPU (MIG), and virtual GPU (vGPU) technologies with HPE chip-to-cloud security and AI-driven automation through HPE Compute Ops Management. To complement these multi-workload solutions, HPE is adding the new NVIDIA RTX PRO 4500 Blackwell Server Edition GPU to HPE ProLiant servers for edge deployments, small-language models, vector databases, and data analytics workloads. HPE is developing new solutions built on NVIDIA RTX PRO 4500 Blackwell GPUs, including integrating the NVIDIA Retail Shopping Assistant Blueprint to streamline deployment across the retail sector. HPE is also expanding the portfolio of HPE ProLiant Compute servers that feature the NVIDIA RTX PRO 6000 Blackwell Server Edition GPU. 
HPE advances AI networking and AI at scale with innovations at NVIDIA GTC 2026
In separate announcements at NVIDIA GTC, HPE is also introducing networking solutions focused on enabling service providers, sovereigns, and large enterprises to connect distributed AI deployments using HPE Juniper Networking routers along with coherent optics. HPE also expanded its at-scale and sovereign AI factories for service providers, sovereigns, and large enterprises, along with a new generation of systems built on the NVIDIA Vera Rubin architecture and designed to run the most demanding AI workloads.
HPE enhances AI data pipelines with expanded NVIDIA integration
As AI infrastructure moves into production, data pipelines -- and specifically inference context -- have emerged as a critical performance bottleneck. HPE is working closely with NVIDIA to accelerate every stage of the AI data lifecycle, from ingest and vectorization to inference and recovery. HPE continues to evolve the HPE Alletra Storage MP X10000 with the goal of centralizing intelligent data handling and optimizing how AI workloads ingest, process, and deliver data. HPE is the first vendor to achieve NVIDIA-Certified Storage validation for object-based systems at the Foundation level with the X10000. This certification signifies that NVIDIA has validated and benchmarked performance for loads of up to 128 GPUs, conducted functional tests for enterprise-grade availability and reliability, and confirmed that the storage layer efficiently feeds data to accelerated computing resources to deliver faster model training, lower-latency inference, and better overall utilization. To further expand the collaboration with NVIDIA, HPE will be supporting the new NVIDIA STX rack-scale reference architecture to develop new AI storage solutions powered by NVIDIA Vera Rubin, BlueField-4, Spectrum-X networking, ConnectX NICs, and NVIDIA AI software. 
New HPE services, financing, and Unleash AI ecosystem solutions simplify AI adoption
* HPE Services is offering a new agents hub to ensure structured and scalable enterprise adoption of agentic AI. Within the hub, HPE is developing and validating agents powered by NVIDIA Nemotron models to establish reusable patterns that help organizations adopt and operationalize intelligent agents.
* To optimize utilization and ROI across shared HPE AI factories, HPE Services and Protopia AI are creating a blueprint for trustworthy, multi-tenant AI factories that enables full use of sensitive data in regulated and sovereign environments without exposure.
* HPE Financial Services is making it easier to advance AI and modernization projects with a new 90/9 Advantage financing program, requiring no payments for the first 90 days, followed by monthly lease payments of 1 percent for the next 9 months. The offer is available across the networking, hybrid cloud, and compute server portfolios.
Availability
HPE support for NVIDIA RTX PRO 4500 Blackwell Server Edition GPUs across the HPE ProLiant Compute server portfolio will roll out in Q1 and Q2 2026. HPE Private Cloud AI with air-gapped deployment, support for NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs across each configuration, and NVIDIA AI-Q and Omniverse blueprints is available now. The new network expansion racks for HPE Private Cloud AI for scaling up to 128 GPUs will be available in July. The HPE and Protopia secure blueprint for trustworthy AI factories is planned for Q2 2026. Fortanix support with HPE ProLiant DL380a Gen12 systems is planned for Q3 2026.
[4]
HPE expands AI portfolio with NVIDIA Vera, Blackwell platforms By Investing.com
SAN JOSE, Calif. - Hewlett Packard Enterprise (NYSE:HPE) announced additions to its NVIDIA AI Computing portfolio on Monday, introducing new compute and networking offerings for AI factories and supercomputers. The $29 billion technology company has seen revenue surge 14.5% over the last twelve months to $35.7 billion, reflecting strong demand for AI infrastructure. HPE will offer one of the first NVIDIA Vera CPU compute blades through its HPE Cray Supercomputing GX240 Compute blade, featuring up to 16 NVIDIA Vera CPUs per blade. The system can scale to 40 blades with 640 NVIDIA Vera CPUs and 56,320 NVIDIA Olympus Arm-compatible cores per rack, according to a press release statement. The company added NVIDIA Quantum-X800 InfiniBand networking to its HPE Cray Supercomputing GX5000 platform, providing 144 ports of 800 Gb/s connectivity. HPE introduced the NVIDIA Vera Rubin NVL72 rack-scale system, designed for models exceeding 1 trillion parameters. The system includes 36 NVIDIA Vera CPUs, 72 NVIDIA Rubin GPUs, sixth-generation NVIDIA NVLink networking, NVIDIA ConnectX-9 SuperNICs, and NVIDIA BlueField-4 DPUs. The HPE Compute XD700, an Open Compute Project-inspired AI server built on NVIDIA HGX Rubin NVL8, supports up to 128 Rubin GPUs per rack. HPE stated this provides double the GPU density compared to the previous generation HPE ProLiant Compute XD685. NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs are now available across HPE's AI factories. The portfolio received NVIDIA Cloud Partner program endorsement and will support multi-tenancy through NVIDIA Multi-Instance GPU enabled by SUSE Virtualization and SUSE Rancher Prime Suite. HPE AI Factory supports Red Hat Enterprise Linux and OpenShift integration with NVIDIA AI Enterprise solutions. Organizations using HPE AI infrastructure include Argonne National Laboratory, HLRS, Hudson River Trading, and the Korea Institute of Science and Technology Information. 
The HPE Cray Supercomputing GX240 Compute blade and NVIDIA Quantum-X800 InfiniBand networking will be available in 2027. The NVIDIA Vera Rubin NVL72 system will be available in December 2026. In other recent news, Hewlett Packard Enterprise (HPE) reported its fiscal Q1 2026 earnings, with earnings per share (EPS) of $0.65, exceeding analyst expectations of $0.58 by 12.07%. The company achieved revenue of $9.3 billion, aligning with forecasts and marking an 18% increase compared to the previous year. Analysts from Bernstein SocGen maintained a Market Perform rating with a $24.00 price target, citing concerns related to AI servers. UBS also reiterated a Neutral rating with a $25.00 price target, noting improved orders and networking revenue despite slower server growth. Piper Sandler adjusted its price target to $23 from $24, maintaining a Neutral rating due to supply chain constraints impacting HPE's capacity to meet demand. The firm highlighted a strategic shift towards higher-margin products, such as networking equipment. These developments reflect the company's current market position and strategic adjustments amid supply challenges. This article was generated with the support of AI and reviewed by an editor.
HPE unveiled a major expansion of its NVIDIA AI Computing by HPE portfolio at GTC 2026, introducing Vera Rubin-based systems, expanded Private Cloud AI capabilities, and new Blackwell servers. The announcements include rack-scale infrastructure for trillion-parameter models, air-gapped configurations for sovereign deployments, and scaling to 128 GPUs as enterprises shift from AI pilots to production-ready deployments.
Hewlett Packard Enterprise used NVIDIA Corp.'s GTC conference to roll out a comprehensive expansion of its AI portfolio, introducing new systems designed to help enterprises move AI from experimental pilots into production at scale [1]. The announcements span enterprise AI infrastructure, supercomputing, edge computing, and storage, reflecting what HPE executives describe as a fundamental shift in how organizations approach AI deployment [2].
Source: SiliconANGLE
"The AI race is fundamentally about speed, scale, and trust," said Antonio Neri, president and CEO of HPE. "Our industry leadership across cloud, networking, and AI enables organizations to operationalize AI securely, efficiently, and at an unprecedented scale"
3
. Jensen Huang, founder and CEO of NVIDIA, added that the partnership is "setting a new standard for enterprise AI infrastructure" by building AI factories and AI grids that embed intelligence into every workflow3
.Among the most significant announcements is the introduction of systems built on NVIDIA's Vera Rubin architecture. The NVIDIA Vera Rubin NVL72 by HPE is a rack-scale system designed for models exceeding one trillion parameters, featuring 72 NVIDIA Rubin GPUs, 36 NVIDIA Vera CPUs, sixth-generation NVIDIA NVLink networking, NVIDIA ConnectX-9 SuperNIC, and NVIDIA BlueField-4 DPUs
4
. The platform delivers up to a 10-fold reduction in inference token cost and a fourfold reduction in the number of GPUs required to train mixture-of-experts models compared to prior NVIDIA Blackwell platforms1
.HPE also unveiled the HPE Cray Supercomputing GX240 Compute Blade, described as the industry's first liquid-cooled compute blade featuring up to 16 NVIDIA Vera CPUs
1
. The system can scale to 640 NVIDIA Vera CPUs with 56,320 ARM cores per rack2
. The new HPE Compute XD700, an Open Compute Project-inspired AI server based on the NVIDIA HGX Rubin NVL8 liquid-cooled platform, supports up to 128 Rubin GPUs per rack, doubling the density of prior generations1
.
Source: CRN
HPE is expanding HPE Private Cloud AI, its turnkey enterprise AI factory co-engineered with NVIDIA, with new scaling options and air-gapped configurations for regulated industries [1]. The system can now scale to 128 GPUs through networked expansion racks, up from 64 GPUs, providing enterprises with additional headroom for expanding AI workloads [2].
The new air-gapped configuration ensures that data remains private and is not exposed to the internet, targeting sovereign, finance, defense, healthcare, and other regulated industries. "This is about sovereign, finance, defense, health care, regulated industries -- more and more they are looking for solutions to have right-sized AI but actually be able to grow that in a completely private environment that is actually not connected to the public internet," said Dale Brown, global head of growth for AI solution sales at HPE [2].
HPE Private Cloud AI now supports NVIDIA's AI-Q agentic AI blueprint and Omniverse digital twin software, along with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs across all configurations [3]. Industry leaders such as the Ryder Cup, Danfoss, and the Dallas Cowboys are leveraging the platform to accelerate transformative AI initiatives [3].
The expansion reflects a broader shift in enterprise AI strategy, according to HPE executives. "The winners are not just optimizing around isolated projects," Brown said in a briefing. "They're standardizing how AI operates inside their enterprise" [1]. This shift is driven by growing concerns about AI economics and return on investment. Enterprises increasingly want predictable paths from infrastructure spending to business outcomes rather than proofs of concept [1].
HPE is also broadening support for NVIDIA Blackwell GPUs across its portfolio. NVIDIA RTX PRO 4500 Blackwell Server Edition GPUs are now available in more ProLiant server models aimed at edge deployments, small language models, vector databases, and analytics [1]. Brown noted that edge deployments reflect a pragmatic view of enterprise AI: retail customers seeking loss-prevention solutions don't need a full-scale turnkey AI factory but can use hardened patterns built on larger systems and deployed in a small footprint [1].
On the storage side, HPE said its Alletra Storage MP X10000 is the first object-based platform to be certified by NVIDIA at the foundation level, with performance validated for configurations of up to 128 GPUs [1]. HPE asserted that AI infrastructure bottlenecks increasingly lie in the data pipeline and inference context layer, not just in raw computing [1].
The company added NVIDIA Quantum-X800 InfiniBand networking to its HPE Cray Supercomputing GX5000 platform, providing 144 ports of 800 Gb/s connectivity [4]. The portfolio also supports multi-tenancy through NVIDIA Multi-Instance GPU, NVIDIA Mission Control software, and Red Hat Enterprise Linux and OpenShift integration, along with HPE services for data center design, deployment, and operations [1].
The $29 billion technology company has seen revenue surge 14.5% over the last twelve months to $35.7 billion, reflecting strong demand for secure and scalable AI infrastructure [4]. Organizations using HPE AI infrastructure include Argonne National Laboratory, HLRS, Hudson River Trading, and the Korea Institute of Science and Technology Information [4]. The HPE Cray Supercomputing GX240 Compute Blade and NVIDIA Quantum-X800 InfiniBand networking will be available in 2027, while the NVIDIA Vera Rubin NVL72 system will be available in December 2026 [4].