Meta expands Nvidia partnership with multiyear deal for millions of AI chips and standalone CPUs

Reviewed by Nidhi Govil

Meta commits to deploying millions of Nvidia chips over the next few years in a deal worth tens of billions of dollars. The expanded Meta-Nvidia partnership covers current Blackwell GPUs and future Vera Rubin systems, and makes Meta the first major hyperscaler to deploy Nvidia Grace CPUs as standalone processors at scale, signaling a strategic shift toward AI inference workloads.

Meta Becomes First Hyperscaler to Deploy Nvidia Grace CPUs at Scale

Meta has become the first major hyperscaler to deploy Nvidia Grace CPUs as standalone processors at scale, marking a significant expansion of the Meta-Nvidia partnership announced Tuesday. The multiyear deal will see Meta deploy millions of Nvidia chips across its massive data center buildout, including current Blackwell GPUs, upcoming Vera CPUs from the Vera Rubin platform, and standalone Grace processors for AI workloads that don't require graphics processing units [1]. While financial terms weren't disclosed, Nvidia indicated the deal would contribute tens of billions of dollars to its bottom line, with analysts estimating that, at over $3.5 million per rack, a million GPUs alone could represent approximately $48 billion in value [1].

Source: SiliconANGLE
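That estimate is straightforward to sanity-check. The following is a back-of-the-envelope sketch, assuming the analyst figure refers to 72-GPU, NVL72-class racks; the rack size is an assumption, since the article quotes only the roughly $3.5 million per-rack price.

    # Back-of-the-envelope check of the analyst estimate cited above.
    # Assumption (not stated in the article): each rack is an NVL72-class
    # system with 72 GPUs, priced at roughly $3.5 million.
    gpus_total = 1_000_000
    gpus_per_rack = 72            # assumed NVL72-class rack size
    price_per_rack = 3.5e6        # USD, per the analyst figure

    racks = gpus_total / gpus_per_rack      # ~13,889 racks
    total_value = racks * price_per_rack    # ~$48.6 billion
    print(f"{racks:,.0f} racks -> ${total_value / 1e9:.1f}B")

At roughly 13,900 racks, the math lands at about $48.6 billion, in line with the figure analysts cite.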

Meta accounts for about 9% of Nvidia's revenue and is widely believed to be among four customers that made up 61% of the chipmaker's revenue in its most recent fiscal quarter [2][3]. Mark Zuckerberg stated the expanded partnership continues Meta's push "to deliver personal superintelligence to everyone in the world," a vision he announced in July [4].

Source: Japan Times

Strategic Shift Toward AI Inference and Agentic Workloads

The deployment of standalone CPUs represents a strategic shift in AI infrastructure, with Meta using Nvidia Grace CPUs to power general-purpose and agentic AI workloads alongside AI inference tasks. Ian Buck, Nvidia's VP and General Manager of Hyperscale and HPC, explained that Grace delivers twice the performance per watt on backend workloads compared to traditional processors [1][2]. Each Nvidia Grace CPU features 72 Arm Neoverse V2 cores clocked at up to 3.35 GHz, and the Grace CPU Superchip offers up to 960 GB of LPDDR5X memory and roughly 1 TB/s of memory bandwidth [1].

Source: The Register
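For a rough sense of what those figures mean per core, the sketch below divides the quoted Superchip totals across its cores. It assumes the Superchip's dual-die configuration of 144 cores (2 × 72); both the core-count assumption and the per-core split are ours, not figures from the article or from Nvidia.

    # Rough per-core breakdown of the Grace CPU Superchip figures quoted above.
    # Assumption: the Superchip pairs two 72-core Grace dies (144 cores total);
    # the capacity and bandwidth totals are the ones cited in the article.
    cores = 2 * 72                 # assumed dual-die Superchip configuration
    memory_gb = 960                # LPDDR5X capacity
    bandwidth_gb_s = 1000          # ~1 TB/s aggregate memory bandwidth

    print(f"~{memory_gb / cores:.1f} GB of memory per core")            # ~6.7 GB
    print(f"~{bandwidth_gb_s / cores:.1f} GB/s of bandwidth per core")  # ~6.9 GB/s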

The next-generation Vera CPUs, which Meta plans to deploy in 2027, boost core counts to 88 custom Arm cores and add support for simultaneous multithreading and confidential computing [1]. Meta will use the confidential computing capability for private processing and AI features in its WhatsApp encrypted messaging service [1].

Nvidia Encroaches on Intel and AMD Territory

Meta's adoption of standalone CPUs from Nvidia marks a direct challenge to Intel and AMD, which have traditionally dominated the data center CPU market. This move runs counter to the broader industry trend toward custom Arm CPUs like Amazon's Graviton or Google's Axion [1]. Buck emphasized that Grace CPUs excel at data manipulation and machine learning tasks, with Vera continuing down that path as "an excellent data center-only CPU for those high-intensity data processing back-end operations" [3].

The announcement sent Nvidia and Meta shares up more than 1% in late trading, while AMD shares fell more than 3% [2][5]. Ben Bajarin, chief executive and principal analyst at Creative Strategies, noted that "Meta doing this at scale is affirmation of the soup-to-nuts strategy that Nvidia's putting across both sets of infrastructure: CPU and GPU" [4].

Massive Capex Drives AI Infrastructure Spending

The deal aligns with Meta's projected capex of $115 billion to $135 billion for 2026, with Bajarin expecting "a good portion of Meta's capex to go toward this Nvidia build-out" [1][4]. Meta has committed to spending $600 billion in the U.S. by 2028 on data centers and supporting infrastructure, with plans for 30 data centers, 26 of them U.S.-based [4]. The company is building gigawatt-scale facilities in Louisiana, Ohio, and Indiana, including the 1-gigawatt Prometheus site in New Albany, Ohio, and the 5-gigawatt Hyperion site in Richland Parish, Louisiana [4].
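As a very rough illustration of why gigawatt-scale campuses line up with deployments measured in millions of chips, the sketch below assumes an all-in facility budget of roughly 1.4 kW per deployed accelerator once cooling, networking, and host CPUs are included; that per-chip figure, and the count it implies, are assumptions for illustration, not numbers from the article.

    # Back-of-the-envelope: relating gigawatt-scale sites to chip counts.
    # Assumption (not from the article): ~1.4 kW of facility power per
    # deployed accelerator, including cooling, networking, and host CPUs.
    site_power_gw = 5                    # e.g., the Hyperion site cited above
    watts_per_accelerator = 1_400        # assumed all-in power per accelerator

    accelerators = site_power_gw * 1e9 / watts_per_accelerator
    print(f"~{accelerators / 1e6:.1f} million accelerators")   # ~3.6 million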

Hyperscalers are on track to spend $650 billion this year on compute power and AI infrastructure [5]. The timing is strategic for Meta, as the deal locks in scarce next-generation compute at a time when Blackwell GPUs remain back-ordered and rivals scramble for supply [5].

Diversification Strategy Amid In-House Chip Development

Despite the expanded partnership, Meta isn't exclusively an Nvidia shop. The company maintains a significant fleet of AMD Instinct GPUs in its data centers and was directly involved in designing AMD's Helios rack systems, due out later this year [1]. Meta was also reportedly considering using Google's Tensor Processing Unit chips for AI work [3][5]. Buck acknowledged that companies should test alternatives but argued that only Nvidia offers the breadth of components, systems, and software needed to lead in AI. The agreement also includes Nvidia's Spectrum-X networking technology [1].
