Google and Meta strike multibillion-dollar AI chip deal as tech giants race to scale AI infrastructure

Reviewed by Nidhi Govil

Meta has signed a multibillion-dollar agreement to rent Google's Tensor Processing Units through Google Cloud for developing new AI models. The multi-year deal allows Meta to diversify its AI hardware beyond Nvidia and AMD as the company projects $135 billion in AI infrastructure spending by 2026. Meta is also in talks to purchase TPUs outright for its own data centers.

Meta Signs Multibillion-Dollar Deal to Rent Google's AI Chips

Meta Platforms has secured a multibillion-dollar AI chip deal with Google to rent Tensor Processing Units (TPUs) through Google Cloud for developing new AI models, according to reports from The Information [1]. The multi-year agreement marks a significant expansion of Meta's strategy to diversify its AI hardware as the social media giant races to build infrastructure capable of supporting its ambitious artificial intelligence initiatives [2].

Source: SiliconANGLE

The deal enables Meta to train new AI models using Google's custom processors, which serve as a compelling Nvidia alternative in the increasingly competitive AI chip market [2]. Meta is also in discussions with Google to purchase millions of TPUs outright for installation in its own data centers as early as next year, though no agreement has been reached yet [3].

Google's TPUs Emerge as Viable Alternative in AI Hardware Market

Google launched its most advanced TPU, Ironwood, in November, offering customers the ability to scale up to 9,216 chips into a single server pod linked by high-speed interconnects providing up to 9.6 terabits per second of bandwidth [2]. According to Google, Ironwood delivers more than 118 times the FP8 ExaFLOPS of its nearest competitor and four times better training and inference performance than Trillium, its previous-generation TPU [2].

Source: Reuters

TPU sales have become a crucial growth engine for Google Cloud as the company seeks to prove its AI investments generate tangible returns [5]. Google has historically restricted TPUs to internal use and select cloud customers, but is now pushing to make them a viable alternative to Nvidia's market-leading GPUs [5]. The company aims to capture as much as 10% of Nvidia's data center revenue in the coming years [2].

Meta's Multipronged Strategy to Diversify AI Hardware

The Google agreement represents the latest component of Meta's multipronged silicon strategy as the company projects AI infrastructure spending to reach as much as $135 billion in 2026 [3].

Source: Finextra Research

On February 24, Meta and AMD announced a multiyear deal valued at up to $60 billion over five years for up to six gigawatts of AMD's Instinct GPUs, including an equity warrant that could give Meta as much as 10% of AMD's stock [3].

A week earlier, Meta expanded its partnership with Nvidia to deploy millions of Blackwell and next-generation Vera Rubin processors [3]. Meta CEO Mark Zuckerberg said the AMD deal represents an important step for Meta to diversify its compute supply and deliver personal superintelligence [3].

Morningstar analysts described Meta's approach as leveraging Nvidia for frontier model training, AMD for inference, Google's TPUs for possible Llama workloads, and Meta's in-house MTIA chips for core recommendation algorithms [3]. By playing different chipmakers against one another, Meta can potentially secure favorable prices while pairing optimal silicon with each type of AI workload [2].

Implications for AI Hardware Competition and Infrastructure Buildout

The multibillion-dollar deal matters because it signals intensifying competition in the AI chip market as large enterprises spend billions on advanced processors to power AI workloads [2]. Google has also signed an agreement with an unidentified large investment firm to fund a joint venture that would lease TPUs to other customers, underscoring a broader push to challenge Nvidia's dominance [5].

In December, Google collaborated with Meta on a project called "TorchTPU" to ensure full compatibility between TPUs and the PyTorch software framework, laying the groundwork for the new partnership [3]. Google first developed TPUs more than a decade ago for internal AI workloads, and in October 2025 signed a deal with Anthropic worth tens of billions of dollars for access to up to one million TPUs [3].

Meta has plans for 30 data centers, 26 of them in the United States, to support its massive compute requirements [3]. The scale of Meta's operations demands multiple hardware suppliers, according to analysts quoted in recent reports [3]. Questions remain about what the Google deal means for Meta's own in-house chips, as the company has reportedly experienced technical challenges with next-generation MTIA chips optimized for AI model training that were expected to launch this year [2].
