Google's custom AI chips draw billions from Meta and Anthropic, challenging Nvidia's dominance

Reviewed by Nidhi Govil


Google's tensor processing units are emerging as a credible alternative to Nvidia's GPUs, with Meta reportedly negotiating a multibillion-dollar deal to rent and purchase TPUs. The shift could unlock a $900 billion revenue opportunity for Alphabet Inc. while reshaping the AI hardware supply landscape and forcing the industry to reconsider its dependence on a single chip supplier.

Google's TPUs Emerge as Serious Challenge to Nvidia

Google's tensor processing units are no longer just internal tools. They're becoming a focal point for companies seeking alternatives to Nvidia's dominance in AI chips. Meta is reportedly in advanced discussions to rent Google Cloud TPUs during 2026 and transition to direct purchases in 2027, a move that could represent a multibillion-dollar agreement [4]. Anthropic has already committed to spending tens of billions on Google's custom AI hardware, sending Alphabet Inc. stock on a rally that contributed to its 31% fourth-quarter surge [3][5]. These developments signal that major AI players are actively diversifying away from their traditional reliance on GPUs, creating a seismic shift in the data-center market.

Source: Bloomberg


Why Tensor Processing Units Outperform GPUs for AI Model Development

The technical advantage of TPUs lies in their specialized design. While Nvidia's GPUs were originally developed for computer graphics and gaming, tensor processing units were built exclusively around matrix multiplication, the core calculation needed for training and running large AI models [1]. This focus allows TPUs to handle AI workloads with greater efficiency, potentially saving tens or hundreds of millions of dollars compared to general-purpose chips. Google's seventh-generation TPU, called Ironwood, now powers the company's Gemini AI system and the protein-modeling system AlphaFold [1]. Independent comparisons show that TPU v5p pods can outperform high-end Nvidia systems on workloads tuned for Google's software ecosystem [2]. When chip architecture, model structure, and software stack align this closely, improvements in speed and efficiency follow naturally rather than being forced.
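To see why a chip built around matrix multiplication matters, consider that a neural network's dense layers reduce to exactly that operation. The sketch below is purely illustrative (plain Python, not Google's code, and the toy numbers are made up); it shows the multiply-accumulate pattern that TPU hardware executes in bulk:

```python
# Illustrative sketch: the operation a TPU accelerates is the matrix
# multiply at the heart of every dense layer. (Toy example, not TPU code.)
def matmul(a, b):
    """Multiply an (m x k) matrix by a (k x n) matrix, given as lists of rows."""
    k, n = len(b), len(b[0])
    return [[sum(row[i] * b[i][j] for i in range(k)) for j in range(n)]
            for row in a]

def dense_forward(x, weights, bias):
    """One dense-layer forward pass: y = x @ W + b."""
    y = matmul(x, weights)
    return [[v + bias[j] for j, v in enumerate(row)] for row in y]

# A large model's forward pass is, overwhelmingly, repeated calls like this:
x = [[1.0, 2.0]]                 # one input activation vector
w = [[0.5, -1.0], [0.25, 2.0]]   # a 2x2 weight matrix
b = [1.0, 1.0]
print(dense_forward(x, w, b))    # → [[2.0, 4.0]]
```

A production model performs this pattern billions of times per step over far larger matrices, which is why hardware dedicated to it can beat general-purpose designs on cost and energy.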

Source: The Conversation


New Revenue Stream Could Reach $900 Billion

Investors are increasingly confident that Google's cloud-computing business could grow into something much larger if the company aggressively pursues third-party chip sales. Gil Luria, head of technology research at DA Davidson, estimates that TPUs could capture 20% of the artificial intelligence market within a few years, potentially creating a $900 billion business [5]. Morgan Stanley analyst Brian Nowak sees signs of a "budding TPU sales strategy," expecting roughly five million TPUs to be purchased in 2027, up 67% from previous estimates, and seven million in 2028, a 120% increase [5]. Every 500,000 TPU chips sold to a third-party data center could add approximately $13 billion to Alphabet's 2027 revenue and 40 cents to its earnings per share. The possibility of this shift triggered immediate market reactions: Alphabet's valuation climbed close to the $4 trillion mark while Nvidia's stock declined by several percentage points as investors weighed the implications [4].
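The quoted figures can be sanity-checked with back-of-the-envelope arithmetic. The per-chip revenue below is merely implied by the article's numbers, not stated by any analyst:

```python
# Back-of-the-envelope check of the estimates above (illustrative only).
revenue_per_block = 13e9        # ~$13B per block of third-party chips (quoted)
chips_per_block = 500_000       # block size quoted in the article
implied_price = revenue_per_block / chips_per_block
print(f"Implied revenue per TPU: ${implied_price:,.0f}")  # → $26,000

# Morgan Stanley's volume path: ~5M units in 2027, ~7M in 2028.
units_2027, units_2028 = 5_000_000, 7_000_000
print(f"Implied 2027-to-2028 growth: {units_2028 / units_2027 - 1:.0%}")  # → 40%
```

At an implied ~$26,000 per chip, the five-million-unit 2027 estimate would correspond to roughly ten 500,000-chip blocks, consistent in scale with the $900 billion multi-year opportunity the analysts describe.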

Source: Benzinga


AI Hardware Supply Constraints Shape Strategic Decisions

The scale of demand for AI tools has created intense competition for supply, making hardware diversification a strategic necessity rather than a preference. Data-center operators continue to report shortages of GPUs and memory modules, with prices projected to rise through next year [4]. Organizations that rely exclusively on GPUs face high costs and increasing competition for availability. By developing and depending on its own hardware, Google gains more control over pricing, availability, and long-term strategy [2]. Meta is also exploring broader hardware options, including interest in RISC-V-based processors from Rivos, suggesting a wider push to diversify its compute base [4]. Most hyperscalers run internal chip-development programs, partly because GPU costs skyrocketed when demand outstripped supply [1]. Amazon already uses its own Trainium chips to train AI models, demonstrating that the shift toward custom accelerators extends beyond Google.

What This Means for the Competitive Landscape

The existence of credible alternatives pressures Nvidia to move faster, refine its offerings, and court customers who now see more than one viable path forward. Estimates from Google Cloud executives suggest a successful deal could allow Google to capture a meaningful share of Nvidia's data-center revenue, which exceeded $50 billion in a single quarter this year [4]. Nvidia retains significant advantages, however. Many organizations depend heavily on CUDA and the large ecosystem of tools and workflows built around it, making migration to alternative architectures a substantial engineering undertaking [2]. GPUs continue to offer unmatched flexibility for diverse workloads and will remain essential in many contexts. Yet the conversation around hardware has shifted: companies building cutting-edge AI models increasingly want specialized chips tuned to their exact needs and greater control over the systems that support them. Because AI workloads evolve rapidly, a chip's relevance can change dramatically, which explains why companies continue to diversify their compute strategies and explore multiple architectures [4].

TheOutpost.ai

© 2025 Triveous Technologies Private Limited