Meta signs multibillion-dollar deal for millions of AWS Graviton chips to power AI workloads

Reviewed by Nidhi Govil


Meta has struck a multiyear, multibillion-dollar agreement with Amazon to deploy hundreds of thousands of AWS Graviton processors across its data centers. The deal marks a strategic shift as Meta diversifies its AI infrastructure beyond traditional GPUs, leveraging ARM-based CPUs for compute-intensive agentic AI tasks. This partnership strengthens Amazon's position in the AI chip market while Meta continues its aggressive spending to meet expanding AI demands.

Meta Secures Hundreds of Thousands of AWS Graviton Chips

Meta has signed a multibillion-dollar deal with Amazon to deploy hundreds of thousands of AWS Graviton processors across its infrastructure, marking one of the largest commitments to Amazon's homegrown AI chips [1]. The multiyear agreement, announced Friday, will see Meta utilize tens of millions of Graviton 5 CPU cores across its 32 data centers over the next three years [2]. While Amazon hasn't disclosed the exact value, the arrangement positions Meta among the top five Graviton customers and represents a significant win for AWS as cloud providers compete intensely for AI infrastructure deals [5].

Source: Wccftech

ARM-Based CPUs Target Agentic AI Workloads

The AWS Graviton represents a strategic pivot in AI infrastructure planning. Unlike the GPUs (graphics processing units) that dominate model training, these ARM-based CPUs excel at CPU-intensive workloads that emerge after models are trained [1]. Agentic AI workloads demand different computational capabilities, including real-time reasoning, code generation, search functionality, and the coordination required to manage agents through multi-step tasks. "As we scale the infrastructure behind Meta's AI ambitions, diversifying our compute sources is a strategic imperative," said Santosh Janardhan, Meta's head of infrastructure [2]. The Graviton 5 chips feature 192 cores and a cache five times larger than previous generations, reducing communication delays between cores by 33% while delivering 25% better performance [2].
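To illustrate why agentic workloads lean so heavily on CPUs, here is a minimal, hypothetical sketch of an agent loop. The model call itself (stubbed out below) would run on an accelerator, but everything between calls, parsing the chosen action, dispatching it to a tool, and carrying state across steps, is ordinary CPU work. All names and the toy tools are illustrative, not drawn from any Meta or AWS system.

```python
# Hypothetical multi-step agent loop: model_step stands in for an
# accelerator-bound LLM call; the rest is CPU-bound orchestration.

def model_step(task, history):
    # Stub for an LLM call. Here it just scripts a fixed plan:
    # look a number up, square it, then finish.
    plan = ["lookup", "square", "finish"]
    return plan[len(history)]

def run_tool(action, state):
    # CPU-side tool dispatch: lookups, computation, result handling.
    if action == "lookup":
        state["value"] = {"answer_seed": 6}["answer_seed"]  # toy "search"
    elif action == "square":
        state["value"] = state["value"] ** 2
    return state

def run_agent(task):
    history, state = [], {}
    while True:
        action = model_step(task, history)   # accelerator work (stubbed)
        if action == "finish":
            return state["value"]
        state = run_tool(action, state)      # CPU work between model calls
        history.append(action)

print(run_agent("square the answer seed"))  # -> 36
```

In a real deployment the loop body would also cover retrieval, parsing, and inter-agent coordination, which is the CPU-heavy glue the article describes.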

Source: The Register

Meta Moves to Diversify Chip Suppliers

This partnership reflects Meta's broader strategy to diversify chip suppliers and reduce dependence on any single vendor. The social media giant has recently signed deals worth a combined $48 billion with CoreWeave and Nebius for Nvidia GPU access [5]. Meta also maintains a six-year, $10 billion deal with Google Cloud announced last August, though the company had primarily been an AWS customer that also used Microsoft Azure [1]. Beyond external partnerships, Meta is developing in-house silicon with four iterations of its MTIA AI chip and an expanded partnership with Broadcom to design and build those chips. The company has also agreed to spend billions on AI chips and hardware from Nvidia and AMD, plus a multibillion-dollar deal for tensor processing units from Alphabet.

Amazon Showcases Chip Capabilities Amid Intense Competition

The timing of this announcement carries strategic significance. AWS revealed the Meta deal just as the Google Cloud Next conference wrapped up, a pointed reminder to competitors that Amazon remains a formidable force in AI infrastructure [1]. "The GPUs are useless if you don't have the CPUs next to them," said Nafea Bshara, AWS vice president and co-founder of Annapurna Labs, Amazon's chip unit. Most CPUs Amazon has deployed in data centers in recent years have been Graviton processors, a remarkable shift for a company once heavily reliant on Intel hardware. Amazon CEO Andy Jassy recently stated that the company's silicon unit was on pace to generate $20 billion in sales annually, and executives are considering selling the chips to other companies for use in their server farms.

Source: TechCrunch

Expanding AI Demands Drive Infrastructure Shifts

The Meta agreement allows Amazon to showcase a major customer as validation for its homegrown CPUs, which compete with Nvidia's new Vera CPU, also ARM-based and designed for agentic AI workloads [1]. The difference lies in distribution: Nvidia sells its AI chips and systems to enterprises and cloud providers including AWS, while AWS only sells access to its chips through its cloud service. Earlier this month, Amazon CEO Andy Jassy took aim at Nvidia and Intel in his annual shareholder letter, emphasizing that enterprises want better price-performance ratios for AI infrastructure [1]. Analysts at Counterpoint Research predict that by 2029, ARM-based CPUs will account for 90% of the AI ASIC server CPU market, with x86 architectures rapidly giving way to proprietary ARM-based designs [3]. This shift accelerated with the 2023 launch of Nvidia's Grace CPUs, which replaced x86-based parts from Intel and AMD in many GPU systems. AWS and Google have followed suit, swapping Intel CPUs for their own ARM-based processors in AI rack systems [3]. For post-training and inference tasks, Graviton delivers what AWS claims is the best performance for a given price while using 60% less energy [5].
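To make the energy claim concrete, here is the arithmetic behind "60% less energy" at equal throughput: energy per request scales with power draw, so a 60% power reduction cuts per-request energy cost to 40% of the baseline. The 60% figure is AWS's claim; the baseline wattage and throughput below are made-up numbers for illustration only.

```python
# Illustrative arithmetic only: the 60% energy figure is AWS's claim,
# and the baseline numbers below are hypothetical.

baseline_watts = 100.0                        # hypothetical x86 node power
graviton_watts = baseline_watts * (1 - 0.60)  # "60% less energy"

requests_per_sec = 1000.0                     # assume equal throughput

def energy_per_request_joules(watts, rps):
    # Energy per request = power (joules/second) / (requests/second).
    return watts / rps

base = energy_per_request_joules(baseline_watts, requests_per_sec)
grav = energy_per_request_joules(graviton_watts, requests_per_sec)
print(f"baseline: {base:.3f} J/req, graviton: {grav:.3f} J/req")
print(f"energy cost ratio: {grav / base:.2f}")  # -> 0.40
```

The same ratio carries through to energy cost per inference regardless of the absolute wattage chosen, which is why the claim is usually stated as a percentage rather than in watts.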


© 2026 Triveous Technologies Private Limited