2 Sources
[1]
Broadcom reveals its mystery $10 billion customer is Anthropic
A Broadcom sign is pictured as the company prepares to launch new optical chip tech to fend off Nvidia in San Jose, California, U.S., September 5, 2025.

Broadcom revealed during a September earnings call that it had signed a customer that had placed a $10 billion order for custom chips. At the time, Broadcom didn't say who it was, but on Thursday, CEO Hock Tan revealed that the mystery customer was AI lab Anthropic, which placed an order for the latest Google tensor processing units.

"We received a $10 billion order to sell the latest TPU Ironwood racks to Anthropic," said Tan, speaking on Broadcom's fourth-quarter earnings call on Thursday. He also said Anthropic had placed an additional $11 billion order with Broadcom in the company's latest quarter.

While Broadcom typically doesn't disclose its large customers, Tan's September remark drew significant investor attention amid the AI infrastructure boom. A Broadcom official told CNBC in October that the mystery customer wasn't OpenAI, which has its own agreement to purchase chips from the chipmaker.

Broadcom makes custom chips called ASICs, which some experts believe are more efficient for certain artificial intelligence algorithms than the market-dominating chips from Nvidia. Broadcom helps make Google's TPUs, and last month the search company bragged that it trained its state-of-the-art Gemini 3 model entirely on its TPUs.

The chipmaker calls its custom AI chips XPUs, and on Thursday Tan said his company was delivering entire server racks -- not just chips -- to Anthropic, which is Broadcom's fourth XPU customer. Broadcom on Thursday also said that it has secured a fifth customer for its custom chip business. That customer placed a $1 billion order during the fourth quarter, but once again, Broadcom did not reveal the customer.
[2]
Broadcom Reveals $21 Billion Google TPUs Order from Anthropic | AIM
Broadcom disclosed during its Q4 2025 earnings call that it received a $10 billion order in the previous quarter to supply Google's latest Tensor Processing Units (TPUs) to Anthropic. "In Q4, we received an additional $11 billion order from this same customer for delivery in late 2026," said Hock Tan, CEO of Broadcom. This brings Anthropic's total TPU orders to $21 billion. The company also revealed a $73 billion backlog of AI product orders, which are expected to ship over the next six quarters (18 months).

TPUs are specialised accelerators developed by Google for AI workloads. Now in their seventh generation, TPUs are available to customers through Google Cloud and power many of Google's internal systems, including the training and deployment of the Gemini family of models. Google designs the TPU architecture, while Broadcom converts those designs into manufacturable silicon and handles volume production. The relationship mirrors Google's long-standing strategy of controlling key AI hardware design while relying on semiconductor partners for fabrication expertise.

Anthropic, a long-term user of TPUs, recently announced plans to significantly scale its infrastructure. The company intends to deploy one million TPUs, backed by more than one gigawatt of new compute capacity coming online in 2026. This represents one of the largest dedicated AI compute buildouts in the industry. Several other companies have also confirmed their use of TPUs, including Meta, Cohere, Apple and Ilya Sutskever's startup, Safe Superintelligence (SSI). A report from The Information indicates that Meta is evaluating the deployment of TPUs in its data centres starting in 2027. The growing adoption of TPUs stems from their power efficiency and tight optimisation for AI training and inference, creating increasing competitive pressure on NVIDIA's GPU dominance.

Broadcom said it now has five TPU/XPU (custom AI accelerator) customers -- with Google and Anthropic named on the call.
Reports and industry analysis indicate that Meta and ByteDance are also among its custom AI chip customers, though Broadcom has not publicly confirmed the full roster.

According to new analysis from SemiAnalysis, TPU v7 has roughly 10% lower peak floating-point operations per second (FLOPs) and memory bandwidth than NVIDIA's GB200 platform, yet it still delivers a stronger performance-per-total-cost-of-ownership (TCO) profile. SemiAnalysis estimates that Google's internal cost to deploy Ironwood is about 44% lower than deploying an equivalent NVIDIA system. Even when priced for external customers, TPU v7 offers an estimated 30% lower TCO than NVIDIA's GB200, and roughly 41% lower TCO than the upcoming GB300.

SemiAnalysis notes that if Anthropic achieves around 40% model FLOPs utilisation (MFU) on TPUs -- a realistic figure given the company's compiler and systems expertise -- the effective training cost per FLOP could be 50-60% lower than what GB300-class GPU clusters are expected to deliver.
Broadcom CEO Hock Tan disclosed that Anthropic placed a $21 billion order for Google's Tensor Processing Units, including $10 billion for the latest TPU Ironwood racks. The AI lab plans to deploy one million TPUs backed by over one gigawatt of compute capacity in 2026, marking one of the largest AI infrastructure buildouts in the industry.
Broadcom CEO Hock Tan ended months of speculation during the company's fourth-quarter earnings call on Thursday, revealing that Anthropic is the mystery customer behind massive chip orders totaling $21 billion. The AI lab initially placed a $10 billion order in September for Google TPUs, specifically the latest TPU Ironwood racks, followed by an additional $11 billion order in the fourth quarter for delivery in late 2026 [1][2]. The disclosure came after Broadcom's September earnings call sparked significant investor attention amid the AI infrastructure boom, with the company initially declining to name the customer.

Broadcom manufactures custom AI chips called ASICs and XPUs, which experts believe offer superior efficiency for certain artificial intelligence algorithms compared to Nvidia's market-dominating GPUs. The company converts Google's TPU architecture designs into manufacturable silicon and handles volume production, while Google maintains control over the hardware design [2]. Tan emphasized that Broadcom is delivering entire server racks to Anthropic, not just chips, making the AI lab Broadcom's fourth XPU customer [1].
The chipmaker also secured a fifth customer during the fourth quarter with a $1 billion order, though the company has not disclosed this customer's identity.

Anthropic intends to deploy one million TPUs backed by more than one gigawatt of new compute capacity coming online in 2026, representing one of the largest dedicated AI compute buildouts in the industry [2]. This ambitious expansion reflects the growing demand for specialized AI accelerators as companies scale their machine learning operations. Tensor Processing Units are now in their seventh generation and are available to customers through Google Cloud, powering many of Google's internal systems, including the training and deployment of Gemini models. Google recently highlighted that it trained its state-of-the-art Gemini 3 model entirely on its TPUs [1].
Analysis from SemiAnalysis reveals compelling economics behind Anthropic's TPU investment. Although TPU v7 has roughly 10% lower peak floating-point operations per second and memory bandwidth than NVIDIA's GB200, it delivers a stronger performance-per-total-cost-of-ownership (TCO) profile [2]. Google's internal deployment cost for Ironwood is estimated at about 44% lower than an equivalent Nvidia system. Even when priced for external customers, TPU v7 offers approximately 30% lower TCO than the GB200 and roughly 41% lower than the upcoming GB300. SemiAnalysis notes that if Anthropic achieves around 40% model FLOPs utilization (MFU) on TPUs, the effective training cost per FLOP could be 50-60% lower than what GB300-class GPU clusters are expected to deliver [2].
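To see how a hardware TCO discount and a utilization advantage combine into the quoted savings on effective training cost, the arithmetic can be sketched as follows. All inputs here are normalized, illustrative assumptions: the GB300-class system is set to TCO = 1.0 and peak FLOPs = 1.0, the GPU-side MFU of 30% is an assumption (not from the article), and the TPU's 10% peak-FLOPs deficit is also assumed to carry over to the GB300 comparison.

```python
# Illustrative sketch: how TCO and MFU combine into cost per *useful* FLOP.
# Numbers are normalized assumptions, not SemiAnalysis figures.

def cost_per_effective_flop(hourly_tco: float, peak_flops: float, mfu: float) -> float:
    """Cost per unit of delivered compute: TCO divided by (peak throughput x utilization)."""
    return hourly_tco / (peak_flops * mfu)

# GB300-class baseline, normalized to TCO = 1.0 and peak = 1.0; 30% MFU is assumed.
gpu = cost_per_effective_flop(hourly_tco=1.0, peak_flops=1.0, mfu=0.30)

# TPU v7: ~41% lower TCO, ~10% lower peak FLOPs (assumed), ~40% MFU (per the article).
tpu = cost_per_effective_flop(hourly_tco=0.59, peak_flops=0.90, mfu=0.40)

savings = 1 - tpu / gpu
print(f"effective training-cost savings: {savings:.0%}")
```

Under these assumptions the savings come out to roughly 51%, which sits inside the 50-60% range SemiAnalysis projects; the exact figure moves with whatever GPU-side MFU one assumes for the baseline.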
Broadcom disclosed a $73 billion AI product backlog expected to ship over the next six quarters, signaling robust demand for custom AI chips [2]. Several major companies have confirmed their use of Google TPUs, including Meta, Cohere, Apple, and Ilya Sutskever's Safe Superintelligence (SSI). A report from The Information indicates that Meta is evaluating TPU deployment in its data centers starting in 2027 [2]. The growing adoption stems from TPUs' power efficiency and tight optimization for AI workloads, creating increasing competitive pressure on Nvidia's GPU dominance. Industry analysis suggests that Meta and ByteDance are also among Broadcom's custom AI chip customers, though the company has not publicly confirmed the full roster.

Summarized by Navi
11 Dec 2025•Business and Economy
