Meta scraps advanced AI training chip, pivots to AMD and Nvidia after design roadblocks


Meta has abandoned its most advanced in-house AI training chip after hitting significant design roadblocks, according to The Information. The company is now shifting focus to a simpler version while committing over $100 billion to partnerships with AMD, Nvidia, and Google for external AI chips to power its massive infrastructure expansion.

Meta abandons most advanced AI training chip amid development hurdles

Meta has scrapped its most advanced in-house AI training chip after encountering significant chip design challenges, forcing the company to pivot toward a simpler version and lean more heavily on external partners [1]. The setback underscores the formidable difficulty of developing custom silicon capable of rivaling industry leaders, and it raises serious questions about Meta's ability to reduce its reliance on external suppliers for critical AI infrastructure [2].

Source: Seeking Alpha

The company's internal AI chip development efforts "hit roadblocks," according to sources familiar with the matter, forcing the abandonment of its most ambitious training chip project [1]. Meta faces design complexities, internal skepticism, a lack of stable training software, the potential for costly delays, and difficulty scaling production of more advanced chips, making in-house solutions less viable for now [2].

History of setbacks in Meta Training and Inference Accelerator program

This isn't the first stumble for Meta's chip ambitions. The Meta Training and Inference Accelerator (MTIA) program has a documented history of setbacks [1]. The company previously scrapped an earlier inference chip after it underperformed in small-scale testing, pivoting in 2022 to billions of dollars' worth of Nvidia GPUs [1]. While Meta eventually deployed an MTIA chip for inference tasks on Facebook and Instagram, a training chip has proven far more elusive [1].

Meta Chief Product Officer Chris Cox last year described the company's chip development journey as a "walk, crawl, run situation" [1]. Meta had begun testing its first in-house AI training chip, manufactured by TSMC, after completing a tape-out, but analyst Jeff Pu noted in January that the company appeared to be scaling back its in-house ASIC program, turning to AMD instead of its own chips or Google TPUs [1].

Massive partnerships signal shift in strategy and reliance on external partners

Scrapping the advanced chip design has prompted Meta to secure massive deals with established chipmakers. On February 24, Meta announced a multiyear agreement with AMD worth more than $100 billion for up to six gigawatts of MI450 GPUs, with shipments scheduled to begin in the second half of this year [1]. Under the deal, AMD issued Meta a performance-based warrant for up to 160 million shares of its common stock [1].

A week earlier, Meta expanded its partnership with Nvidia to cover millions of next-generation Vera Rubin GPUs and Grace CPUs, a deal likely worth tens of billions of dollars [1]. Meta also signed a deal to rent Google TPUs for developing new AI models [1]. Together, these multibillion-dollar agreements give Meta access to leading AI chips, letting it power next-generation AI infrastructure with established, scalable technologies rather than its own unfinished silicon [2].

Implications for Meta's $135 billion AI infrastructure expansion

Meta has committed up to $135 billion in capital expenditures for 2026 to build out AI infrastructure, with plans to expand across more than 30 data centers [1]. Its decision to shift from advanced in-house chips to renting and partnering for established AI chips highlights its current difficulty matching Nvidia's capabilities, forcing greater reliance on competitors' products [2]. Meta co-developed its MTIA chips with Broadcom, which also partners with Google on its TPUs [1].

For tech observers, the key question becomes whether Meta will continue pursuing internal chip development or fully embrace a hybrid model that balances modest custom silicon efforts with strategic partnerships. The company's ability to execute on its AI ambitions now depends heavily on the performance and availability of chips from Nvidia, AMD, and Google—a position that could affect both costs and competitive positioning as the AI race intensifies.
