2 Sources
[1]
Meta scraps advanced AI training chip after design roadblocks
Meta scrapped its most advanced in-house AI training chip due to design challenges and is shifting to a simpler version, according to The Information. The setback highlights the difficulty of building custom silicon to rival Nvidia and raises questions about Meta's ability to reduce its dependence on external suppliers. The company's internal chip design efforts "hit roadblocks," forcing the abandonment of its most ambitious training chip.

The Meta Training and Inference Accelerator program has a history of setbacks. Meta scrapped an earlier inference chip after it underperformed in small-scale testing and pivoted in 2022 to billions of dollars' worth of Nvidia GPUs. The company eventually deployed an MTIA chip for inference tasks on Facebook and Instagram, but the training chip has proven more elusive. Meta began testing its first in-house AI training chip, manufactured by TSMC, after completing a tape-out. Analyst Jeff Pu noted in January that Meta appeared to be scaling back its in-house ASIC program, turning to AMD instead of its own chips or Google's TPUs.

Meta announced a multiyear agreement with AMD on February 24 worth more than $100 billion for up to six gigawatts of MI450 GPUs. Shipments are scheduled to begin in the second half of this year. AMD issued Meta a performance-based warrant for up to 160 million shares of its common stock under the deal. A week earlier, Meta expanded its partnership with Nvidia for millions of next-generation Vera Rubin GPUs and Grace CPUs. That deal is likely worth tens of billions of dollars. Meta also signed a deal to rent Google TPUs for developing new AI models.

Meta has committed up to $135 billion in capital expenditures for 2026 to build out AI infrastructure. The company plans to expand across more than 30 data centers. Meta co-developed its MTIA chips with Broadcom, which also partners with Google on its TPUs.
Meta Chief Product Officer Chris Cox described the company's chip development journey last year as a "walk, crawl, run situation." The Financial Times previously reported that Meta experienced technical challenges with its next-generation training chips.
[2]
Meta's internal chip design efforts face hurdles
Meta Platforms (META) is facing issues with AI chips being developed internally and has discarded its most advanced chip, shifting focus to a less complicated version, The Information reported, citing people with knowledge of the matter. The company scrapped the advanced chip last week. Meta's decision to shift from advanced in-house chips to renting and partnering for established AI chips highlights its current difficulty matching Nvidia's capabilities, forcing greater reliance on competitors' products. Meta faces design complexities, internal skepticism, a lack of stable training software, the potential for costly delays, and difficulties scaling production of more advanced chips, making in-house solutions less viable for now. Meta's multibillion-dollar deals and partnerships with these companies will provide access to leading AI chips, enabling it to power next-generation AI infrastructure through established, scalable technologies rather than its own unfinished chips.
Meta has abandoned its most advanced in-house AI training chip after hitting significant design roadblocks, according to The Information. The company is now shifting focus to a simpler version while committing over $100 billion to partnerships with AMD, Nvidia, and Google for external AI chips to power its massive infrastructure expansion.
Meta has scrapped its most advanced in-house AI training chip after encountering significant design challenges, forcing the company to pivot toward a simpler version and lean more heavily on external partners [1]. The setback underscores the formidable difficulty of developing custom silicon capable of rivaling industry leaders and raises serious questions about Meta's ability to reduce its reliance on external suppliers for critical AI infrastructure [2].
Source: Seeking Alpha
The company's internal AI chip development efforts "hit roadblocks," according to sources familiar with the matter, forcing the abandonment of its most ambitious training chip project [1]. Meta faces design complexities, internal skepticism, a lack of stable training software, the potential for costly delays, and difficulties scaling production of more advanced chips, making in-house solutions less viable for now [2].

This isn't the first stumble for Meta's chip ambitions. The Meta Training and Inference Accelerator (MTIA) program has a documented history of setbacks [1]. The company previously scrapped an earlier inference chip after it underperformed in small-scale testing and pivoted in 2022 to billions of dollars' worth of Nvidia GPUs [1]. While Meta eventually deployed an MTIA chip for inference tasks on Facebook and Instagram, the training chip has proven far more elusive [1].

Meta Chief Product Officer Chris Cox described the company's chip development journey last year as a "walk, crawl, run situation" [1]. The company had begun testing its first in-house AI training chip, manufactured by TSMC, after completing a tape-out, but analyst Jeff Pu noted in January that Meta appeared to be scaling back its in-house ASIC program, turning to AMD instead of its own chips or Google's TPUs [1].

Scrapping the advanced chip has prompted Meta to secure massive deals with established chip makers. On February 24, Meta announced a multiyear agreement with AMD worth more than $100 billion for up to six gigawatts of MI450 GPUs, with shipments scheduled to begin in the second half of this year [1]. AMD issued Meta a performance-based warrant for up to 160 million shares of its common stock under the deal [1].

A week earlier, Meta expanded its partnership with Nvidia for millions of next-generation Vera Rubin GPUs and Grace CPUs, a deal likely worth tens of billions of dollars [1]. Meta also signed a deal to rent Google TPUs for developing new AI models [1]. These multibillion-dollar deals and partnerships will give Meta access to leading AI chips, enabling it to power next-generation AI infrastructure through established, scalable technologies rather than its own unfinished silicon [2].

Meta has committed up to $135 billion in capital expenditures for 2026 to build out AI infrastructure, with plans to expand across more than 30 data centers [1]. The decision to shift from advanced in-house chips to renting and partnering for established AI chips highlights Meta's current difficulty matching Nvidia's capabilities, forcing greater reliance on competitors' products [2]. Meta co-developed its MTIA chips with Broadcom, which also partners with Google on its TPUs [1].

For tech observers, the key question is whether Meta will keep pursuing internal chip development or fully embrace a hybrid model that balances modest custom-silicon efforts with strategic partnerships. The company's ability to execute on its AI ambitions now depends heavily on the performance and availability of chips from Nvidia, AMD, and Google, a position that could affect both costs and competitive positioning as the AI race intensifies.
Summarized by Navi