Nvidia unveils 88-core Vera CPU to challenge Intel and AMD in AI-driven data center market

Reviewed by Nidhi Govil


Nvidia launched its Vera CPU at GTC 2026, featuring 88 custom Olympus cores designed specifically for agentic AI and reinforcement learning workloads. The chip delivers 50% faster performance and twice the efficiency compared to traditional CPUs. Nvidia also introduced a liquid-cooled rack system housing 256 Vera CPUs, claiming 6X gains in CPU throughput and positioning itself as a direct competitor to Intel and AMD in the data center CPU market.

Nvidia Enters Direct CPU Competition with Vera Launch

Nvidia announced comprehensive details about its Vera CPU at GTC 2026 in San Jose, California, marking the company's formal entry into direct competition with Intel and AMD in the traditional CPU market [1]. The 88-core Vera CPUs are purpose-built for agentic AI and reinforcement learning, delivering twice the efficiency and 50% faster performance than traditional rack-scale CPUs [4]. This strategic move comes after Meta revealed plans to deploy multiple generations of Nvidia CPU-only systems across its infrastructure, signaling broader market acceptance beyond GPU-focused deployments [1].

Source: SiliconANGLE

Custom Olympus Cores Drive Performance Gains

The Vera CPU features 88 custom Nvidia-designed Olympus cores with 144 threads, a substantial upgrade from the first-generation Grace CPU's 72 cores [1]. Nvidia claims a 1.5X improvement in instructions-per-cycle throughput, well beyond the single-digit or low-teens percentage gains typical of competing architectures [1]. The Olympus cores use Spatial Multithreading, which physically partitions pipeline resources rather than time-slicing them, allowing both threads to run truly simultaneously on a single core [1]. This approach delivers consistent, predictable performance well suited to multi-tenant AI data centers running many jobs concurrently [4].

Revolutionary Memory Bandwidth and AI Infrastructure

Nvidia equipped the Vera CPU with LPDDR5X memory delivering up to 1.2 TB/s of bandwidth per socket, roughly twice the bandwidth of general-purpose CPUs at half the power [2]. For comparison, Intel's top 6900P processors achieve 825 GB/s with 8800 MT/s MRDIMMs, while AMD's Turin processors reach between 560 and 600 GB/s [2]. The chip also features faster NVLink-C2C interconnects capable of up to 900 GB/s in each direction, advertised as 1.8 TB/s of bidirectional bandwidth [2]. All 88 cores operate in a single domain without latency-inducing NUMA complexity, improving latency, predictability, memory bandwidth, and ease of programming [1].
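The bandwidth claims above can be cross-checked with simple arithmetic. The sketch below uses only the figures quoted in this article (1.2 TB/s for Vera, 825 GB/s for Intel's 6900P, 600 GB/s as AMD Turin's upper bound, 900 GB/s per direction for NVLink-C2C) and computes the implied ratios:

```python
# Back-of-envelope check of the per-socket memory bandwidth figures
# quoted above (all values as reported in the article, in GB/s).
vera_lpddr5x = 1200        # Vera: up to 1.2 TB/s per socket
intel_6900p = 825          # Intel 6900P with 8800 MT/s MRDIMMs
amd_turin_max = 600        # AMD Turin upper bound

print(f"Vera vs Intel 6900P: {vera_lpddr5x / intel_6900p:.2f}x")    # ~1.45x
print(f"Vera vs AMD Turin:   {vera_lpddr5x / amd_turin_max:.2f}x")  # 2.00x

# NVLink-C2C: 900 GB/s in each direction; the "1.8 TB/s" marketing
# figure is simply the two directions summed.
nvlink_per_direction = 900
print(f"NVLink-C2C bidirectional: {nvlink_per_direction * 2 / 1000} TB/s")  # 1.8 TB/s
```

Note that the "twice the bandwidth of general-purpose CPUs" claim holds exactly against Turin's upper bound but is closer to 1.45X against Intel's MRDIMM configuration.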

Liquid-Cooled Rack Architecture Delivers Massive Scale

Nvidia introduced a new liquid-cooled rack system integrating 256 Vera CPUs alongside 64 BlueField-4 data processing units, providing more than 22,500 CPU cores and 400 TB of memory [2]. The rack-scale architecture claims a 6X gain in CPU throughput and twice the performance on advanced AI workloads, particularly agentic AI frameworks that require extensive orchestration and data movement [1]. Ian Buck, VP of Hyperscale and HPC at Nvidia, explained that agents rely on CPUs for tool calling, SQL queries, and code compilation, making fast CPUs essential to avoid bottlenecks [2].
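The rack-level totals follow directly from the per-chip numbers. A quick sanity check, assuming the 400 TB is spread evenly across the 256 CPU sockets (the article gives no per-socket memory figure, so that split is an assumption):

```python
# Sanity check of the rack-scale figures reported above.
sockets = 256          # Vera CPUs per rack
cores_per_cpu = 88     # Olympus cores per Vera CPU
rack_memory_tb = 400   # total rack memory, TB

total_cores = sockets * cores_per_cpu
print(total_cores)  # 22528, consistent with "more than 22,500 CPU cores"

# Implied per-socket memory under an even split across CPU sockets
# (an assumption; some memory may sit with the BlueField-4 DPUs).
per_socket_gb = rack_memory_tb * 1024 / sockets
print(per_socket_gb)  # 1600.0 GB per socket
```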

Source: NVIDIA

Strategic Positioning for Agentic AI Era

Jensen Huang, founder and CEO of Nvidia, emphasized the strategic shift: "The CPU is no longer simply supporting the model; it's driving it. With breakthrough performance and energy efficiency, Vera unlocks AI systems that think faster and scale further" [3]. As AI evolves from simple chatbots toward autonomous agents that reason, use third-party tools, and execute code, the underlying infrastructure requirements are changing [5]. Nvidia designed the Vera CPU to excel at AI "thinking" tasks, including compilers, analytics pipelines, and the orchestration services that coordinate GPUs [5]. Early adopter Redpanda reported that Vera delivered up to 5.5X lower latency on Apache Kafka-compatible workloads than other benchmarked systems [4].

Source: Interesting Engineering

Broad Industry Adoption and Market Implications

Leading hyperscalers collaborating with Nvidia to deploy Vera include Alibaba, CoreWeave, Meta, and Oracle Cloud Infrastructure, alongside system makers Dell Technologies, HPE, Lenovo, and Supermicro [4]. ByteDance, Lambda, Nebius, and NScale have also committed to deploying the chips in their data centers [2]. The chips are in full production and will be available to Nvidia's partners in the second half of this year [1]. Nvidia will offer its NVL8 HGX systems, which traditionally use x86 processors from Intel, with Vera CPUs for the Rubin generation, signaling its confidence in competing head-to-head with established CPU makers [2]. This broad ecosystem support positions Vera as a potential new standard for AI workloads, widening access to advanced AI infrastructure for developers, startups, and enterprises [4].


© 2026 Triveous Technologies Private Limited