Red Hat launches AI Enterprise platform to accelerate hybrid cloud AI deployments at scale

Red Hat introduced Red Hat AI Enterprise, a unified AI platform designed to deploy and manage AI models, agents and applications across hybrid cloud environments. Alongside Red Hat AI 3.3 and the co-engineered Red Hat AI Factory with NVIDIA, the new metal-to-agent stack addresses enterprise challenges in scaling AI projects beyond the pilot phase through integrated lifecycle management.

Red Hat tackles enterprise AI scaling challenges with new platform

Red Hat announced Red Hat AI Enterprise, a unified AI platform built to deploy and manage AI models, AI agents and applications across hybrid cloud deployments. The IBM unit positions this launch as a direct response to a persistent enterprise problem: too many organizations remain trapped in the pilot phase, unable to scale AI projects due to fragmented tools and inconsistent infrastructure. Red Hat AI Enterprise unifies model and application lifecycles, allowing IT operations teams to manage AI as a standardized enterprise system rather than as isolated experiments.

Building the metal-to-agent AI infrastructure stack

The new platform forms part of what Red Hat calls a comprehensive "metal-to-agent" development stack, integrating underlying Linux and Kubernetes infrastructure with advanced inference and agentic capabilities. At its core sits Red Hat OpenShift, the company's hybrid cloud application platform, so developers work with familiar tools and frameworks. Red Hat AI Enterprise delivers fast, scalable and cost-effective AI inference powered by the vLLM inference engine and the llm-d distributed inference framework. The platform supports any AI model in any environment, whether cloud-based or on-premises, with integrated observability and lifecycle management to strengthen AI governance and mitigate risk.

Source: SiliconANGLE

Red Hat AI Factory with NVIDIA targets production readiness

Red Hat also unveiled the Red Hat AI Factory with NVIDIA, a co-engineered software platform combining Red Hat AI Enterprise and NVIDIA AI Enterprise. The collaboration arrives as enterprise AI spending is projected to exceed $1 trillion by 2029, driven largely by agentic AI applications. The AI Factory streamlines management of both traditional infrastructure and complex AI computing stacks, handling tasks such as provisioning underlying infrastructure for AI workloads and optimizing performance. Organizations gain immediate access to dozens of pre-configured models, including IBM's Granite family, NVIDIA Nemotron, and NVIDIA Cosmos open models delivered as NVIDIA NIM microservices. The platform maximizes infrastructure utilization through intelligent GPU orchestration, enabling on-demand access to GPUs with automatic checkpointing to protect long-running jobs.

Red Hat AI 3.3 expands model ecosystem and hardware support

Alongside Red Hat AI Enterprise, the company released Red Hat AI 3.3, bringing significant updates across its entire AI portfolio. The release expands the model ecosystem with validated, production-ready compressed versions of Mistral-Large-3, Nemotron-Nano and Apertus-8B-Instruct, plus deployment support for frontier models like Ministral 3 and DeepSeek-V3.2 with sparse attention. A technology preview of Models-as-a-Service allows IT teams to provide self-service access to privately hosted models via an API gateway, promoting scalable AI adoption within enterprises. Red Hat expanded hardware support with a technology preview of generative AI support on Intel CPUs for more cost-effective small language model inference, plus hardware certification for NVIDIA's Blackwell Ultra and AMD MI325X accelerators. The new Red Hat AI Python Index delivers hardened, enterprise-grade versions of critical tools, enabling teams to move from fragmented experimentation to repeatable, security-focused production pipelines.

Security and operational rigor drive enterprise adoption

Red Hat AI Vice President Joe Fernandes emphasized that AI must be operationalized as a core component of the enterprise software stack rather than run in standalone silos. Built on Red Hat Enterprise Linux, the platform inherits advanced security and compliance capabilities from the start, reducing risk and mitigating downtime. NVIDIA DOCA microservices create a zero-trust architecture, delivering AI runtime security across the infrastructure. Chief Technology Officer Chris Wright stated that the stable, high-performance foundation enables customers to own their AI strategy and scale with the same rigor they apply to core IT platforms. The Red Hat AI Factory with NVIDIA is supported on AI factory infrastructure from leading systems manufacturers, including Cisco, Dell Technologies, Lenovo and Supermicro, giving organizations architectural control from the datacenter to the public cloud. This approach addresses the enterprise AI landscape's rapid evolution from simple chat interfaces toward high-density, autonomous agentic workflows requiring deeper integration across the entire technology stack.

TheOutpost.ai
