Curated by THEOUTPOST
On Wed, 12 Mar, 12:05 AM UTC
15 Sources
[1]
Meta is reportedly testing in-house chips for AI training | TechCrunch
Meta is reportedly testing an in-house chip for training AI systems, a part of a strategy to reduce its reliance on hardware makers like Nvidia. According to Reuters, Meta's chip, which is designed to handle AI-specific workloads, was manufactured in partnership with Taiwan-based firm TSMC. The company is piloting a "small deployment" of the chip and plans to scale up production if the test is successful. Meta has deployed custom AI chips before, but only to run models -- not train them. As Reuters notes, several of the company's chip design efforts have been canceled or otherwise scaled back after failing to meet internal expectations. Meta expects to spend $65 billion on capital expenditure this year, much of which will go toward Nvidia GPUs. If the company manages to reduce even a fraction of that cost by shifting to in-house chips, it'd be a big win for the social media giant.
[2]
Meta is reportedly testing its first RISC-V-based chip for AI training
A Broadcom-designed processor. Image is for illustrative purposes only. (Image credit: Meta)

Meta was one of the first companies to build RISC-V-based chips for AI inference several years ago to cut costs and reduce its reliance on Nvidia. Reuters reports that the company has gone one step further and designed (presumably with Broadcom's assistance) an in-house accelerator for AI training. If the chip meets Meta's goals, it may reduce the company's reliance on high-end Nvidia AI GPUs -- such as the H100/H200 and B100/B200 -- for training advanced large language models.

Meta and Broadcom have taped out Meta's first AI training accelerator with TSMC; the foundry produced the first working samples of these chips, and the partners have successfully brought up the unit, according to the report. Meta has now started a limited deployment of the accelerator, assessing its performance before scaling up production and deployment. It is unclear whether Meta's engineers are merely running benchmarks on the new chip or whether it is already being used for productive work.

The chip's specifications are unknown, though AI training chips typically use a design known as a systolic array. This architecture consists of a structured network of identical processing elements (PEs) arranged in rows and columns; each unit handles computations involving matrices or vectors, and data flows sequentially through the network (a minimal simulation of the idea follows at the end of this entry). Since the processor is designed for AI training -- which means processing vast amounts of data -- expect it to feature HBM3 or HBM3E memory. Because this is a bespoke processor, Meta will have defined its supported data formats and instructions to optimize die size, power consumption, and performance. To be worthwhile, the accelerator has to offer performance-per-watt competitive with Nvidia's current AI GPUs, such as the H200 and B200, and possibly the next-generation B300.

The chip is the latest addition to Meta's Meta Training and Inference Accelerator (MTIA) program, which has faced various setbacks, including development being halted at similar stages. For example, Meta discontinued an internal inference processor after it failed to meet its performance and power targets during limited deployment tests. That failure led Meta to shift its strategy in 2022, placing large orders for Nvidia GPUs to meet its immediate AI processing requirements. Since then, Meta has become one of Nvidia's largest customers, acquiring tens of thousands of GPUs. These units have been critical in training AI models for recommendations, advertisements, and the Llama foundation model series; Nvidia's GPUs have also been employed for inference, supporting interactions for over three billion daily users across Meta's platforms, according to Reuters.

Despite these challenges, Meta has continued advancing its custom silicon program. Last year, Meta began using an MTIA chip for inference tasks, and looking ahead, Meta's leadership has outlined plans to start using its custom chips for AI training by 2026, gradually increasing usage if the chip meets performance and power targets. This is a critical component of Meta's long-term goal of designing more customized hardware for its data center operations.

One interesting thing to note is that MTIA's inference accelerators use open-source RISC-V cores. This lets Meta customize the instruction set architecture as it wishes, at its own cadence, without paying royalties to any third party. It is unclear whether MTIA's training accelerator is also based on the RISC-V ISA, but it is possible. If so, Meta may have developed one of the industry's highest-performing RISC-V-based chips.
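The systolic-array design described above is easy to mimic in software. Below is a minimal, purely illustrative Python simulation (our own construction, not anything disclosed about MTIA) of an output-stationary systolic matrix multiply, in which each processing element (i, j) accumulates one output value as skewed operand streams flow past it:

```python
import numpy as np

def systolic_matmul(A, B):
    """Simulate an output-stationary systolic array computing C = A @ B.

    PE (i, j) owns output C[i, j]. Row operands of A stream in from the
    left and column operands of B from the top, each skewed by one cycle
    per row/column so matching pairs meet at the right PE on the right
    cycle. In hardware all PEs fire in parallel each cycle; this loop
    nest just replays that schedule serially.
    """
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    for t in range(n + m + k - 2):          # global clock cycles
        for i in range(n):
            for j in range(m):
                s = t - i - j               # skewed stream index at PE (i, j)
                if 0 <= s < k:
                    C[i, j] += A[i, s] * B[s, j]
    return C

A = np.random.rand(4, 3)
B = np.random.rand(3, 5)
assert np.allclose(systolic_matmul(A, B), A @ B)
```

In real silicon the two inner loops collapse into a grid of multiply-accumulate units all firing on the same clock, which is why the layout suits the dense matrix math that dominates AI training.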
[3]
Exclusive: Meta begins testing its first in-house AI training chip
NEW YORK, March 11 (Reuters) - Facebook owner Meta (META.O) is testing its first in-house chip for training artificial intelligence systems, a key milestone as it moves to design more of its own custom silicon and reduce reliance on external suppliers like Nvidia (NVDA.O), two sources told Reuters. The world's biggest social media company has begun a small deployment of the chip and plans to ramp up production for wide-scale use if the test goes well, the sources said. The push to develop in-house chips is part of a long-term plan at Meta to bring down its mammoth infrastructure costs as the company places expensive bets on AI tools to drive growth. Meta, which also owns Instagram and WhatsApp, has forecast total 2025 expenses of $114 billion to $119 billion, including up to $65 billion in capital expenditure largely driven by spending on AI infrastructure. One of the sources said Meta's new training chip is a dedicated accelerator, meaning it is designed to handle only AI-specific tasks. This can make it more power-efficient than the integrated graphics processing units (GPUs) generally used for AI workloads. Meta is working with Taiwan-based chip manufacturer TSMC (2330.TW) to produce the chip, this person said. The test deployment began after Meta finished its first "tape-out" of the chip, a significant marker of success in silicon development work that involves sending an initial design through a chip factory, the other source said. A typical tape-out costs tens of millions of dollars and takes roughly three to six months to complete, with no guarantee the test will succeed. A failure would require Meta to diagnose the problem and repeat the tape-out step. Meta and TSMC declined to comment. The chip is the latest in the company's Meta Training and Inference Accelerator (MTIA) series. The program has had a wobbly start for years and at one point scrapped a chip at a similar phase of development. However, Meta last year started using an MTIA chip to perform inference, or the process involved in running an AI system as users interact with it, for the recommendation systems that determine which content shows up on Facebook and Instagram news feeds. Meta executives have said they want to start using their own chips by 2026 for training, or the compute-intensive process of feeding the AI system reams of data to "teach" it how to perform. As with the inference chip, the goal for the training chip is to start with recommendation systems and later use it for generative AI products like chatbot Meta AI, the executives said. "We're working on how would we do training for recommender systems and then eventually how do we think about training and inference for gen AI," Meta's Chief Product Officer Chris Cox said at the Morgan Stanley technology, media and telecom conference last week. Cox described Meta's chip development efforts as "kind of a walk, crawl, run situation" so far, but said executives considered the first-generation inference chip for recommendations to be a "big success." Meta previously pulled the plug on an in-house custom inference chip after it flopped in a small-scale test deployment similar to the one it is doing now for the training chip, instead reversing course and placing orders for billions of dollars worth of Nvidia GPUs in 2022.
The social media company has remained one of Nvidia's biggest customers since then, amassing an arsenal of GPUs to train its models, including for recommendations and ads systems and its Llama foundation model series. The units also perform inference for the more than 3 billion people who use its apps each day. The value of those GPUs has been thrown into question this year as AI researchers increasingly express doubts about how much more progress can be made by continuing to "scale up" large language models by adding ever more data and computing power. Those doubts were reinforced with the late-January launch of new low-cost models from Chinese startup DeepSeek, which optimize computational efficiency by relying more heavily on inference than most incumbent models. In a DeepSeek-induced global rout in AI stocks, Nvidia shares lost as much as a fifth of their value at one point. They subsequently regained most of that ground, with investors wagering the company's chips will remain the industry standard for training and inference, although they have dropped again on broader trade concerns. Reporting by Katie Paul in New York and Krystal Hu in San Francisco; Editing by Richard Chang
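To make the article's training-versus-inference distinction concrete, here is a hedged toy sketch in Python of the two phases for an embedding-based recommender; the two-tower model, names, and sizes are invented for illustration and have nothing to do with MTIA's actual workloads:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim, lr = 1000, 500, 16, 0.05
U = rng.normal(scale=0.1, size=(n_users, dim))   # user embeddings
V = rng.normal(scale=0.1, size=(n_items, dim))   # item embeddings

def train_step(user, item, clicked):
    """Training: one SGD step on a logistic loss for a logged
    (user, item, clicked) example -- the compute-heavy phase that
    runs on accelerators over reams of data."""
    score = U[user] @ V[item]
    p = 1.0 / (1.0 + np.exp(-score))              # predicted click probability
    g = p - clicked                               # d(log-loss)/d(score)
    grad_u, grad_v = g * V[item], g * U[user]     # grads before either update
    U[user] -= lr * grad_u
    V[item] -= lr * grad_v

def recommend(user, k=5):
    """Inference: a single forward pass ranking items for one user,
    the kind of per-request work an inference chip serves."""
    return np.argsort(U[user] @ V.T)[-k:][::-1]

for _ in range(10_000):                           # "teach" the model from logs
    u, i = rng.integers(n_users), rng.integers(n_items)
    train_step(u, i, clicked=float(rng.random() < 0.1))
print(recommend(0))
```

Even the toy shows the asymmetry: training touches the parameters thousands of times, while serving one recommendation is a single matrix-vector product, which helps explain why Meta shipped an inference chip before a training chip.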
[4]
Meta is reportedly testing its first in-house AI training chip
Breaking: A Big Tech company is ramping up its AI development. (Whaaat??) In this case, the protagonist of this now-familiar tale is Meta, which Reuters reports is testing its first in-house chip for AI training. The idea is to lower its gargantuan infrastructure costs and reduce its dependence on NVIDIA (a company that apparently brings out Mark Zuckerberg's "adult language" side). If all goes well, Meta hopes to use it for training by 2026. Meta has reportedly kicked off a small-scale deployment of the dedicated accelerator chip, which is designed to specialize in AI tasks (and is, therefore, more power-efficient than general-purpose NVIDIA GPUs). The deployment began after the company completed its first "tape-out," the phase in silicon development where a complete design is sent for a manufacturing test run. The chip is part of the Meta Training and Inference Accelerator (MTIA) series, the company's family of custom in-house silicon focused on generative AI, recommendation systems and advanced research. Last year, the company started using an MTIA chip for inference, a predictive process that happens behind the scenes in AI models. Meta began using the inference one for its Facebook and Instagram news feed recommendation systems. Reuters reports that it plans to start using the training silicon for that as well. The long-term plan for both chips is said to begin with recommendations and eventually use them for generative products like the Meta AI chatbot. The company is one of NVIDIA's biggest customers after placing orders for billions of dollars' worth of GPUs in 2022. That was a pivot for Meta after it bailed on a previous in-house inference silicon that failed a small-scale test deployment -- much like the one it's doing now for the training chip.
[5]
Meta Is Ready to Rock Nvidia's Boat With Its In-House AI Chip
Gradually, then suddenly, the big tech companies are replacing Nvidia's pricey AI chips. The adage "your arbitrage is my opportunity" could sum up Meta's push into building an in-house chip for AI training tasks. Reuters reports the company recently began a small deployment of the chips after successfully producing test silicon with Taiwan's TSMC (sorry, Intel). Meta is already using its chips for inference, or tailoring content to specific users after an AI model has been developed and trained. It wants to use them for training models by 2026. From the article: Even if consumer applications of generative AI, like chatbots, end up being an overhyped bubble, Meta can deploy the technology to improve content recommendations and ad targeting. The vast majority of Meta's revenue comes from advertising, and even small improvements in targeting capabilities can produce billions of dollars in new revenue as advertisers see better results. Despite some flops and lackluster results from the Reality Labs division, Meta has managed to build out strong hardware teams over the years and has seen some success with its Ray-Ban AI glasses. However, executives have warned teams internally that their hardware efforts still have not had the world-changing impact they are hoping for. Meta's VR headsets sell in the low millions annually. CEO Mark Zuckerberg has long sought to build out his company's own hardware platforms so it can reduce its reliance on Apple and Google. Major tech companies have paid billions of dollars to Nvidia since 2022 in order to stock up on its much sought-after GPUs, which have become the industry standard for AI processing. While the company has some competitors, like AMD, Nvidia has been lauded for offering not just the chips themselves but the CUDA software toolkit for developing AI applications on them. Late last year, Nvidia reported that nearly 50% of its revenue in one quarter came from just four companies. All of these companies have sought to build chips so they can cut out the middleman and drive down costs, and they can wait many years for a return; there is only so long that investors will tolerate heavy spending before they demand that Meta show it is paying off. Amazon has its own Inferentia chips, while Google has been developing its Tensor Processing Units (TPUs) for years. Nvidia's revenue concentration in a few customers that are building their own processors, along with the rise of efficient AI models like China's DeepSeek, has raised some concerns about whether Nvidia can keep up its growth forever, though CEO Jensen Huang has said he is optimistic that data center providers will spend $1 trillion over the next five years building out infrastructure, which could see his company continue to grow into the 2030s. And, of course, most companies will not be able to develop chips like Meta can.
[6]
TSMC emerges as key partner for Meta as it helps Facebook's mothership wean off dependence on Nvidia
Meta's shift to custom silicon aims to reduce its dependence on Nvidia hardware.

Like many of Nvidia's highest-spending customers, Meta is looking to slash its reliance on the GPU maker's expensive AI hardware by making its own silicon. In 2024, the social media giant began advertising for engineers to help build its own state-of-the-art machine learning accelerators, and now, according to an exclusive report from Reuters, Meta is at the testing stage for its first in-house chip designed for training AI systems. Sources told Reuters that following its first tape-out of the chip, Meta has started a limited deployment, and if testing goes well, it plans to scale production for wider use. According to Reuters, "Meta's new training chip is a dedicated accelerator, meaning it is designed to handle only AI-specific tasks. This can make it more power-efficient than the integrated graphics processing units (GPUs) generally used for AI workloads." Taiwan-based chipmaker TSMC produced the silicon for Meta as part of the Facebook owner's Meta Training and Inference Accelerator (MTIA) program, something which Reuters points out has had "a wobbly start for years and at one point scrapped a chip at a similar phase of development." In 2023, Meta unveiled its first-generation in-house AI inference accelerator, designed to power the ranking and recommendation systems for Facebook and Instagram, and then in April 2024 it debuted a new version that doubled the compute and memory bandwidth. At the 2024 Hot Chips symposium, Meta revealed that its inference chip was built on TSMC's 5nm process, with the processing elements on RISC-V cores. Like a growing number of tech firms, Meta has thrown its weight behind RISC-V to realize its AI ambitions, and although the Reuters report doesn't provide any details on the technical aspects of Meta's new AI training chip, it seems a fair bet that it too will be based on the open-source RISC-V architecture. The Reuters article does note that Meta executives say they want to start using their own chips for training by next year.
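RISC-V's appeal here is that the base ISA is open and explicitly reserves encoding space for vendor extensions, so a designer can add domain-specific instructions without royalties. As a hedged illustration, the opcode values below come from the published RISC-V specification, while everything about how Meta might use them is an assumption:

```python
# The four major opcodes the RISC-V spec reserves for vendor-defined
# ("custom") instructions. An implementer can map its own accelerator
# operations onto these encodings without colliding with the standard
# ISA. (Illustrative only; MTIA's actual extensions are not public.)
CUSTOM_OPCODES = {
    0b0001011: "custom-0",
    0b0101011: "custom-1",
    0b1011011: "custom-2/rv128",
    0b1111011: "custom-3/rv128",
}

def classify(instr: int) -> str:
    """Report which opcode space a 32-bit RISC-V instruction occupies."""
    major = instr & 0x7F        # bits [6:0] hold the major opcode
    return CUSTOM_OPCODES.get(major, f"standard opcode 0x{major:02x}")

print(classify(0x0000000B))     # -> custom-0
```

Because those slots are guaranteed never to be claimed by future standard extensions, custom hardware can coexist with a stock RISC-V toolchain, which is part of RISC-V's draw for chip designers.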
[7]
Meta in Talks with TSMC to Launch Its First In-House Chip
The company aims to use in-house chips by 2026 for both training and inference tasks, Reuters reported. Meta has started testing its first in-house chip for training its AI systems, according to Reuters. The move is part of the company's plan to reduce its reliance on chip suppliers like NVIDIA and lower its AI infrastructure costs. The chip is part of the Meta Training and Inference Accelerator (MTIA) series. If tests go well, Meta plans to increase production and use the chip more widely. The company is working with Taiwan's Taiwan Semiconductor Manufacturing Company (TSMC) to manufacture it. The report suggests that Meta's AI-related spending is a major part of its projected $114 billion to $119 billion expenses for 2025, including up to $65 billion in capital expenditures. The new chip is a dedicated AI accelerator, designed specifically for AI tasks. This makes it more efficient than general-purpose GPUs typically used for AI training. Meta has previously struggled with its chip programme. It scrapped an earlier inference chip after poor test results and went back to buying billions of dollars' worth of NVIDIA GPUs in 2022. However, Meta did deploy a custom chip last year for AI inference on recommendation systems for Facebook and Instagram. Executives revealed that they aim to use in-house chips by 2026 for both training and inference tasks. Last month, reports surfaced that OpenAI was working on developing its own custom AI chips to lessen its dependence on NVIDIA. The company was nearing completion of the design for its first in-house chip, which it plans to send to TSMC for fabrication in the coming months.
[8]
Meta partners with TSMC to test its first homegrown AI training chips - SiliconANGLE
Meta Platforms Inc. is forging ahead with its plans to reduce its reliance on Nvidia Corp.'s graphics processing units, and is currently testing its first in-house artificial intelligence training chip. According to Reuters, the test process involves manufacturing an initial small batch of chips, and if it proves successful, the company will look to ramp up production quickly. The in-house chip test is part of an effort by Meta to rein in its spending at a time when it's investing heavily in the AI infrastructure it believes will be necessary to stay at the forefront of the industry. The company, which owns Instagram and WhatsApp as well as Facebook, has said it will spend around $114 billion to $119 billion in 2025, with up to $65 billion of that amount going on capital expenditures, which are primarily directed at AI infrastructure. By making its own chips for AI training, Meta would not need to buy so many expensive GPUs from Nvidia and other suppliers. Several big technology companies, including cloud giants like Amazon Web Services Inc. and Google Cloud, already mass produce their own AI processors. OpenAI is trying to do the same, and two months earlier it said it aims to finalize its chip design later this year. An anonymous source told Reuters that Meta's new chip is a dedicated AI accelerator that's purpose-built for training large language models. This should mean it's more power-efficient than Nvidia's general-purpose GPUs. The company is working with Taiwan Semiconductor Manufacturing Co. to manufacture its new chip. The test comes following the successful completion of Meta's first "tape-out", a significant step that involves sending the initial design of the chip to a manufacturing partner in order to assess that it is feasible. The tape-out phase is extremely expensive, with costs typically running into tens of millions of dollars, and it often takes between three and six months to complete. While neither Meta nor TSMC was inclined to comment, Reuters' sources say that the new chip is part of Meta's Meta Training and Inference Accelerator series of chips, which have seen mixed success to date. The social media giant was forced to scrap an earlier MTIA chip design during the development process, but last year it managed to deploy its first processors, designed specifically for inference tasks. That chip is now powering Meta's AI-based recommendation systems, which determine the content that appears in users' Facebook and Instagram feeds. When Meta abandoned its first MTIA chip in 2022, it had no option but to double down on Nvidia's GPUs, and it has ordered billions of dollars' worth of those chips since then. The GPUs are used for both training and inference, as well as recommendations and ads. If the latest test is successful, Meta wants to start using its in-house chips to train its next-generation Llama LLMs, which will enable it to scale back its GPU purchases. The company's multi-billion-dollar investments in AI infrastructure have come under heavy scrutiny recently. Some AI researchers have questioned whether throwing more data and computing power at LLMs will lead to meaningful progress, and such doubts have gained traction with the debut of Chinese startup DeepSeek's R1 reasoning model, which was reportedly built at a much lower cost, using less advanced GPUs.
The arrival of DeepSeek sparked a big drop in the value of Nvidia's stock, and the market has since become even more volatile amid broader trade concerns.
[9]
Meta just tested its own AI chip: Is Nvidia in trouble?
Meta Platforms, Inc., the owner of Facebook, has begun testing its first in-house chip designed for training artificial intelligence systems, marking a pivotal development as the company aims to create more of its own silicon and decrease dependence on external suppliers like Nvidia, according to two sources cited by Reuters. The social media giant has initiated a small deployment of this new chip and plans to expand its production for widespread use if initial tests prove successful. This move is part of Meta's long-term strategy to reduce operational costs, particularly as it invests heavily in AI tools for growth. The company has forecasted total expenses for 2025 to be between $114 billion and $119 billion, which includes up to $65 billion in capital expenditures primarily driven by AI infrastructure investments. One source indicated that Meta's new training chip functions as a dedicated accelerator, designed specifically for AI tasks, which can enhance its power efficiency compared to standard graphics processing units that are typically employed for AI workloads. Meta is partnering with Taiwan-based chip manufacturer TSMC (2330.TW) for the production of this chip. The testing phase commenced after Meta completed its first "tape-out" of the chip -- a critical step in silicon design that involves submitting an initial design to a chip fabrication facility. This phase typically costs tens of millions of dollars and can take three to six months, with no guarantee of success. Should the testing fail, Meta would need to diagnose the issue and initiate another tape-out. The chip represents the latest iteration in Meta's Meta Training and Inference Accelerator (MTIA) series, which has experienced setbacks, including the abandonment of a previous chip during a similar stage of development. However, last year Meta began utilizing an MTIA chip for inference processes that govern content recommendations on Facebook and Instagram. Company executives have expressed intentions to implement their proprietary chips by 2026 for training, which involves processing vast amounts of data to "teach" the AI systems. Initially, the training chip will focus on recommendation systems, with future applications in generative AI products, such as the chatbot Meta AI. Chris Cox, Chief Product Officer, stated that the company is exploring training for recommender systems and the potential for training and inference in generative AI, characterizing the chip development process as a "walk, crawl, run situation." He noted that the first-generation inference chip is considered a significant success. Previously, Meta abandoned an in-house custom inference chip after an unsuccessful small-scale test, leading the company to order billions of dollars' worth of Nvidia GPUs in 2022. Since then, Meta has remained one of Nvidia's largest customers, acquiring an extensive array of GPUs for training its models, including those used for recommendation algorithms and ad systems, as well as its Llama foundation model series. These GPUs also facilitate inference for over 3 billion daily users of its applications. This year has seen scrutiny of the value of these GPUs as AI researchers question the efficacy of merely scaling large language models by increasing data and computational power.
Concerns intensified after the January release of competitively priced models by the Chinese startup DeepSeek, which achieve computational efficiency by leaning more heavily on inference than most incumbent models. Nvidia experienced a significant drop, losing up to a fifth of its value during a sell-off in AI stocks spurred by DeepSeek's innovations, although shares later recovered as investors predicted Nvidia's chips would remain essential for training and inference, despite subsequent declines tied to broader trade issues. Meta's latest chip development comes amid broader industry trends, with major players like OpenAI reportedly finalizing in-house designs in collaboration with companies like Broadcom and TSMC. Sources indicate that TSMC is responsible for producing the test batches for Meta, while other industry reports suggest Meta may be working with Broadcom on the tape-out process for its new AI training accelerator. The development of the MTIA series has been ongoing for years, with initial challenges leading to previous chip designs being discarded. Last year, Meta began employing an MTIA chip for inference, the process of running its AI systems as users interact with them. The urgency for custom silicon is evident as the company aims to have these systems operational for AI training by next year, though it remains unclear whether the new chip will utilize an open-source RISC-V architecture like previous MTIA hardware.
[10]
Exclusive-Meta Begins Testing Its First In-House AI Training Chip
NEW YORK (Reuters) - Facebook owner Meta is testing its first in-house chip for training artificial intelligence systems, a key milestone as it moves to design more of its own custom silicon and reduce reliance on external suppliers like Nvidia, two sources told Reuters. The world's biggest social media company has begun a small deployment of the chip and plans to ramp up production for wide-scale use if the test goes well, the sources said. The push to develop in-house chips is part of a long-term plan at Meta to bring down its mammoth infrastructure costs as the company places expensive bets on AI tools to drive growth. Meta, which also owns Instagram and WhatsApp, has forecast total 2025 expenses of $114 billion to $119 billion, including up to $65 billion in capital expenditure largely driven by spending on AI infrastructure. One of the sources said Meta's new training chip is a dedicated accelerator, meaning it is designed to handle only AI-specific tasks. This can make it more power-efficient than the integrated graphics processing units (GPUs) generally used for AI workloads. Meta is working with Taiwan-based chip manufacturer TSMC to produce the chip, this person said. The test deployment began after Meta finished its first "tape-out" of the chip, a significant marker of success in silicon development work that involves sending an initial design through a chip factory, the other source said. A typical tape-out costs tens of millions of dollars and takes roughly three to six months to complete, with no guarantee the test will succeed. A failure would require Meta to diagnose the problem and repeat the tape-out step. Meta and TSMC declined to comment. The chip is the latest in the company's Meta Training and Inference Accelerator (MTIA) series. The program has had a wobbly start for years and at one point scrapped a chip at a similar phase of development. However, Meta last year started using an MTIA chip to perform inference, or the process involved in running an AI system as users interact with it, for the recommendation systems that determine which content shows up on Facebook and Instagram news feeds. Meta executives have said they want to start using their own chips by 2026 for training, or the compute-intensive process of feeding the AI system reams of data to "teach" it how to perform. As with the inference chip, the goal for the training chip is to start with recommendation systems and later use it for generative AI products like chatbot Meta AI, the executives said. "We're working on how would we do training for recommender systems and then eventually how do we think about training and inference for gen AI," Meta's Chief Product Officer Chris Cox said at the Morgan Stanley technology, media and telecom conference last week. Cox described Meta's chip development efforts as "kind of a walk, crawl, run situation" so far, but said executives considered the first-generation inference chip for recommendations to be a "big success." Meta previously pulled the plug on an in-house custom inference chip after it flopped in a small-scale test deployment similar to the one it is doing now for the training chip, instead reversing course and placing orders for billions of dollars worth of Nvidia GPUs in 2022. The social media company has remained one of Nvidia's biggest customers since then, amassing an arsenal of GPUs to train its models, including for recommendations and ads systems and its Llama foundation model series. 
The units also perform inference for the more than 3 billion people who use its apps each day. The value of those GPUs has been thrown into question this year as AI researchers increasingly express doubts about how much more progress can be made by continuing to "scale up" large language models by adding ever more data and computing power. Those doubts were reinforced with the late-January launch of new low-cost models from Chinese startup DeepSeek, which optimize computational efficiency by relying more heavily on inference than most incumbent models. In a DeepSeek-induced global rout in AI stocks, Nvidia shares lost as much as a fifth of their value at one point. They subsequently regained most of that ground, with investors wagering the company's chips will remain the industry standard for training and inference, although they have dropped again on broader trade concerns. (Reporting by Katie Paul in New York and Krystal Hu in San Francisco; Editing by Richard Chang)
[11]
Meta Might Be Testing In-House Chips for AI Training
The company reportedly wants to reduce its reliance on Nvidia GPUs.

Meta has reportedly begun testing its first in-house chipsets that will be used to train artificial intelligence (AI) models. As per the report, the company has deployed a limited number of processors to test the performance and sustainability of the custom chipsets, and based on how well the tests go, it will begin large-scale production of the hardware. These processors are said to be part of the Menlo Park-based tech giant's Meta Training and Inference Accelerator (MTIA) family of chipsets. According to a Reuters report, the tech giant developed these AI chipsets in collaboration with the chipmaker Taiwan Semiconductor Manufacturing Company (TSMC). Meta reportedly completed the tape-out, the final stage of the chip design process, recently, and has now begun deploying the chips at a small scale. This is not the company's first AI-focused chipset: last year, it unveiled inference accelerators, processors designed for AI inference. However, Meta did not have any in-house hardware accelerators to train its Llama family of large language models (LLMs). Citing unnamed sources within the company, the publication claimed that Meta's larger vision behind developing in-house chipsets is to bring down the infrastructure costs of deploying and running complex AI systems for internal usage, consumer-focused products, and developer tools. Interestingly, in January, Meta CEO Mark Zuckerberg announced that the company's expansion of the Mesa Data Center in Arizona, USA, was finally complete and that the facility had begun running operations. It is likely that the new training chipsets are also being deployed at this location. The report stated that the new chipsets will first be used with Meta's recommendation engine that powers its various social media platforms, and later the use case will be expanded to generative AI products such as Meta AI. In January, Zuckerberg revealed in a Facebook post that the company plans to invest as much as $65 billion (roughly Rs. 5,61,908 crore) in 2025 on projects relating to AI. The expenses also account for the expansion of the Mesa Data Center and for hiring more employees for its AI teams.
[12]
Meta begins testing its first in-house AI training chip
Meta's push to develop its own chips is part of a long-term strategy to reduce its huge infrastructure costs, as the company makes significant investments in AI tools to drive growth. The company has started a small-scale trial of the chip and plans to increase production for wider use if the test is successful.

Facebook owner Meta is testing its first in-house chip for training artificial intelligence systems, a key milestone as it moves to design more of its own custom silicon and reduce reliance on external suppliers like Nvidia, two sources told Reuters. The world's biggest social media company has begun a small deployment of the chip and plans to ramp up production for wide-scale use if the test goes well, the sources said. The push to develop in-house chips is part of a long-term plan at Meta to bring down its mammoth infrastructure costs as the company places expensive bets on AI tools to drive growth. Meta, which also owns Instagram and WhatsApp, has forecast total 2025 expenses of $114 billion to $119 billion, including up to $65 billion in capital expenditure largely driven by spending on AI infrastructure. One of the sources said Meta's new training chip is a dedicated accelerator, meaning it is designed to handle only AI-specific tasks. This can make it more power-efficient than the integrated graphics processing units (GPUs) generally used for AI workloads. Meta is working with Taiwan-based chip manufacturer TSMC to produce the chip, this person said. The test deployment began after Meta finished its first "tape-out" of the chip, a significant marker of success in silicon development work that involves sending an initial design through a chip factory, the other source said. A typical tape-out costs tens of millions of dollars and takes roughly three to six months to complete, with no guarantee the test will succeed. A failure would require Meta to diagnose the problem and repeat the tape-out step. Meta and TSMC declined to comment. The chip is the latest in the company's Meta Training and Inference Accelerator (MTIA) series. The program has had a wobbly start for years and at one point scrapped a chip at a similar phase of development. However, Meta last year started using an MTIA chip to perform inference, or the process involved in running an AI system as users interact with it, for the recommendation systems that determine which content shows up on Facebook and Instagram news feeds. Meta executives have said they want to start using their own chips by 2026 for training, or the compute-intensive process of feeding the AI system reams of data to "teach" it how to perform. As with the inference chip, the goal for the training chip is to start with recommendation systems and later use it for generative AI products like chatbot Meta AI, the executives said. "We're working on how would we do training for recommender systems and then eventually how do we think about training and inference for gen AI," Meta's Chief Product Officer Chris Cox said at the Morgan Stanley technology, media and telecom conference last week. Cox described Meta's chip development efforts as "kind of a walk, crawl, run situation" so far, but said executives considered the first-generation inference chip for recommendations to be a "big success."
Meta previously pulled the plug on an in-house custom inference chip after it flopped in a small-scale test deployment similar to the one it is doing now for the training chip, instead reversing course and placing orders for billions of dollars worth of Nvidia GPUs in 2022. The social media company has remained one of Nvidia's biggest customers since then, amassing an arsenal of GPUs to train its models, including for recommendations and ads systems and its Llama foundation model series. The units also perform inference for the more than 3 billion people who use its apps each day. The value of those GPUs has been thrown into question this year as AI researchers increasingly express doubts about how much more progress can be made by continuing to "scale up" large language models by adding ever more data and computing power. Those doubts were reinforced with the late-January launch of new low-cost models from Chinese startup DeepSeek, which optimize computational efficiency by relying more heavily on inference than most incumbent models. In a DeepSeek-induced global rout in AI stocks, Nvidia shares lost as much as a fifth of their value at one point. They subsequently regained most of that ground, with investors wagering the company's chips will remain the industry standard for training and inference, although they have dropped again on broader trade concerns.
[13]
Meta Taps Taiwan Semiconductor To Build AI Chip, Aims To Cut Nvidia Dependence By 2026 - Meta Platforms (NASDAQ:META)
Meta Platforms Inc META is testing its first in-house chip for training artificial intelligence systems. The effort aims to reduce reliance on suppliers like Nvidia Corp NVDA and bring down the infrastructure costs of its bets on AI tools to drive growth. Meta tapped Taiwan Semiconductor Manufacturing Co TSM to produce the chip, Reuters reported, citing unnamed sources familiar with the matter. Meta executives have said they want to use the company's own chips for training by 2026. In 2024, Meta started using a Meta Training and Inference Accelerator (MTIA) chip to perform inference. As with the inference chip, the training chip aims to start with recommendation systems and later serve generative AI products like the chatbot Meta AI. The Facebook and Instagram parent company has begun a small deployment of the chip and plans to ramp up production for wide-scale use if the test goes well. Meta expects total 2025 expenses of $114 billion to $119 billion, including up to $65 billion in capital expenditures driven by AI infrastructure spending. Meta previously scrapped an in-house custom inference chip; since then, it has remained one of Nvidia's leading customers. The AI chip market was worth $61.45 billion in 2023 and is expected to reach $621.15 billion by 2032, growing at a CAGR of 29.4% over the forecast period 2024-2032. Big Tech giants remain engaged in efforts to develop in-house chips to reduce dependence on suppliers. OpenAI is reportedly nearly ready to finalize the design for its first proprietary AI chip, which could reduce its reliance on Nvidia, and reportedly aims for mass production at Taiwan Semiconductor by 2026. Apple Inc AAPL has reportedly collaborated with Taiwan Semiconductor to launch in-house Wi-Fi and Bluetooth chips by 2025. Price Action: META stock is up 1% at $604.00 at last check Tuesday.
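The market forecast quoted above implies a compound annual growth rate that is easy to sanity-check. Here is a quick check in Python; the dollar figures are the article's, and treating 2023 to 2032 as nine compounding years is our reading of the forecast window:

```python
# CAGR = (end_value / start_value) ** (1 / years) - 1
start, end, years = 61.45e9, 621.15e9, 2032 - 2023
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")   # -> 29.3%, consistent with the quoted 29.4% CAGR
```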
[14]
Meta Has Kicked Off Minor Deployment Of Its In-House AI Chip As Company Aims To Reduce Its Massive Infrastructure Cost; First Tape-Out Successful Using TSMC's Technology
AI infrastructure costs alone for Meta are said to rise to as much as $65 billion, with total expenditure forecast to be between $114 billion and $119 billion. To curb this rising sum, the social media giant started developing its first in-house AI chip, and the company is showing progress in this area, according to the latest report. A small deployment of the silicon has now kicked off, allowing Meta to reduce its reliance on NVIDIA and its pricey GPUs for training artificial intelligence, and it could lead to full-scale use if all tests go well. According to unnamed sources, Reuters reports that Meta's new AI chip is a dedicated accelerator, meaning that its sole purpose will be to tackle artificial-intelligence-related tasks. In addition to trimming a bill currently driven by purchases of ludicrously expensive NVIDIA graphics processors, Meta can substantially reduce its infrastructure's power consumption, since a chip designed for specific tasks can be more power-efficient. TSMC is expected to undertake the production of this custom silicon, but the report does not specify which of the Taiwanese semiconductor firm's manufacturing processes will be utilized. However, the details state that Meta has successfully finished its first tape-out of the AI chip, a step that can cost tens of millions of dollars and take up to six months to complete. Even then, there is no guarantee that the chip will work according to the company's requirements; a failure would force it to isolate and diagnose the problem and repeat the tape-out process, adding further to its development costs. There was a time when Meta decided not to pursue the development of its custom AI chip, likely due to development complications, but it appears that the company has managed to clear these hurdles. Executives hope to start leveraging the silicon's capabilities by 2026, with the goal of first training Meta's recommendation systems and later moving on to generative AI products such as the AI chatbot. NVIDIA continues to benefit from increased GPU sales, with Meta as one of its most lucrative customers, but experts are concerned about how much progress can be attained in scaling up LLMs by increasing raw GPU power. The transition to custom AI chips could also reduce the space required to house and cool this hardware, so let us wait and see how soon Meta comes up with the first unit.
[15]
Facebook owner Meta begins testing its first in-house AI training chip
Image credit: Getty Images Facebook owner Meta is testing its first in-house chip for training artificial intelligence systems, a key milestone as it moves to design more of its own custom silicon and reduce reliance on external suppliers like Nvidia, two sources told Reuters. The world's biggest social media company has begun a small deployment of the chip and plans to ramp up production for wide-scale use if the test goes well, the sources said. The push to develop in-house chips is part of a long-term plan at Meta to bring down its mammoth infrastructure costs as the company places expensive bets on AI tools to drive growth. Meta, which also owns Instagram and WhatsApp, has forecast total 2025 expenses of $114bn to $119bn, including up to $65bn in capital expenditure largely driven by spending on AI infrastructure. One of the sources said Meta's new training chip is a dedicated accelerator, meaning it is designed to handle only AI-specific tasks. This can make it more power-efficient than the integrated graphics processing units (GPUs) generally used for AI workloads. Meta is working with Taiwan-based chip manufacturer TSMC to produce the chip, this person said. The test deployment began after Meta finished its first "tape-out" of the chip, a significant marker of success in silicon development work that involves sending an initial design through a chip factory, the other source said. A typical tape-out costs tens of millions of dollars and takes roughly three to six months to complete, with no guarantee the test will succeed. A failure would require Meta to diagnose the problem and repeat the tape-out step. Meta and TSMC declined to comment. The chip is the latest in the company's Meta Training and Inference Accelerator (MTIA) series. The program has had a wobbly start for years and at one point scrapped a chip at a similar phase of development. However, Meta last year started using an MTIA chip to perform inference, or the process involved in running an AI system as users interact with it, for the recommendation systems that determine which content shows up on Facebook and Instagram news feeds. Meta executives have said they want to start using their own chips by 2026 for training, or the compute-intensive process of feeding the AI system reams of data to "teach" it how to perform. As with the inference chip, the goal for the training chip is to start with recommendation systems and later use it for generative AI products like chatbot Meta AI, the executives said. "We're working on how would we do training for recommender systems and then eventually how do we think about training and inference for gen AI," Meta's Chief Product Officer Chris Cox said at the Morgan Stanley technology, media and telecom conference last week. Cox described Meta's chip development efforts as "kind of a walk, crawl, run situation" so far, but said executives considered the first-generation inference chip for recommendations to be a "big success." Meta previously pulled the plug on an in-house custom inference chip after it flopped in a small-scale test deployment similar to the one it is doing now for the training chip, instead reversing course and placing orders for billions of dollars worth of Nvidia GPUs in 2022.
The social media company has remained one of Nvidia's biggest customers since then, amassing an arsenal of GPUs to train its models, including for recommendations and ads systems and its Llama foundation model series. The units also perform inference for the more than 3 billion people who use its apps each day. The value of those GPUs has been thrown into question this year as AI researchers increasingly express doubts about how much more progress can be made by continuing to "scale up" large language models by adding ever more data and computing power. Those doubts were reinforced with the late-January launch of new low-cost models from Chinese startup DeepSeek, which optimise computational efficiency by relying more heavily on inference than most incumbent models. In a DeepSeek-induced global rout in AI stocks, Nvidia shares lost as much as a fifth of their value at one point. They subsequently regained most of that ground, with investors wagering the company's chips will remain the industry standard for training and inference, although they have dropped again on broader trade concerns.
Meta has begun testing its first in-house chip for AI training, aiming to reduce reliance on Nvidia and cut infrastructure costs. The move marks a significant step in Meta's custom silicon development efforts.
Meta, the parent company of Facebook, Instagram, and WhatsApp, has taken a significant step in its artificial intelligence (AI) infrastructure development by beginning tests on its first in-house chip designed for AI training [1]. This move is part of Meta's long-term strategy to reduce its reliance on external suppliers like Nvidia and bring down its massive infrastructure costs [3].
The chip, developed under Meta's Training and Inference Accelerator (MTIA) program, is a dedicated accelerator designed specifically for AI-related tasks [3]. Manufactured in partnership with Taiwan Semiconductor Manufacturing Company (TSMC), the chip is currently undergoing a small-scale deployment to assess its performance [1][3].
Key features of the new chip include:

- A dedicated AI accelerator, built to handle only AI-specific tasks, which can make it more power-efficient than general-purpose GPUs [3]
- Manufacturing by TSMC, following a successful first tape-out of the design [3]
- Membership in the MTIA family, possibly built on the open-source RISC-V architecture like Meta's earlier inference silicon [2]

Meta's venture into custom silicon development has seen both successes and setbacks:

- An earlier in-house inference chip was scrapped after a failed small-scale test deployment, prompting billions of dollars of Nvidia GPU orders in 2022 [3]
- Last year, Meta successfully deployed an MTIA inference chip for the recommendation systems behind Facebook and Instagram feeds [3]
The company aims to start using its custom chips for AI training by 2026, gradually increasing usage if performance and power targets are met [2].
Meta's push for in-house chip development comes amid soaring infrastructure costs. The company expects to spend up to $65 billion on capital expenditure in 2025, largely driven by AI infrastructure [1][3]. By developing its own chips, Meta hopes to:

- Reduce its reliance on external suppliers such as Nvidia [3]
- Lower the cost and power consumption of training and serving its AI models [3]
- Tailor hardware to its own workloads, from feed recommendations to generative AI [2]
Meta's move into custom AI chip development reflects a broader trend among tech giants:

- OpenAI is reportedly finalizing the design of its first in-house AI chip for fabrication at TSMC [7]
- Amazon has its own Inferentia chips, and Google has long developed its Tensor Processing Units (TPUs) [5]
This trend, coupled with the emergence of efficient AI models like China's DeepSeek, has raised questions about the long-term growth prospects of established chip suppliers [5].
If successful, Meta's custom AI chips could significantly impact the company's AI capabilities and financial performance. The chips are expected to be used initially for recommendation systems and eventually for generative AI products like the Meta AI chatbot [3][4].
As Meta and other tech giants continue to invest in custom silicon, the landscape of AI hardware could see substantial changes in the coming years, potentially reshaping the dynamics of the AI industry.