4 Sources
[1]
OpenAI extends chip spending spree with multibillion-dollar Broadcom deal
OpenAI has agreed to purchase 10 gigawatts of computer chips from US semiconductor giant Broadcom, the latest in a string of deals by the artificial intelligence start-up worth hundreds of billions of dollars. The mammoth chip order means OpenAI could spend another $350bn to $500bn on top of the roughly $1tn of chip and data centre deals it has signed in recent months as it races to secure the computing power to run services such as ChatGPT.

In September, OpenAI agreed to buy chips with a total power consumption of 10GW from Nvidia, and last week announced it would buy a further 6GW of chips from rival chip designer AMD. OpenAI also recently struck a data centre deal with Oracle that will cost $300bn over five years. The deals have bound some of the world's biggest tech groups to OpenAI's fortunes, despite questions about how OpenAI will fund them as the cost dwarfs the start-up's existing revenues. They also commit OpenAI to an ambitious infrastructure development project, as deploying the chips will require building vast new data centres.

Under the latest deal, OpenAI has co-designed custom chips with Broadcom specifically for running its own AI models, marking the first time the start-up has produced its own AI chips. When completed, the deals would bring OpenAI's total access to computing capacity to over 26GW -- equivalent to about 26 nuclear reactors.

In a podcast announcing the deal, OpenAI chief executive Sam Altman said his company had been working with Broadcom for 18 months to develop the custom chips, which would give it a "gigantic amount of computing infrastructure". He described the race to develop AI infrastructure as "the biggest joint industrial project in human history". The chips have been designed specifically for inference, the process whereby an AI responds to users' requests, Altman said. Inference is expected to become a far greater portion of the technology's needs as demand for AI computing progresses beyond training or creating models.

The Financial Times first revealed in September that OpenAI was working with Broadcom to mass-produce its own chips next year.

Hock Tan, chief executive of Broadcom, said: "This is like the railroad or the internet. [AI] is becoming a critical utility over time for 8bn people globally." He added: "But it cannot be done with just one party, it needs a lot of partnerships and collaboration across an ecosystem."

Unlike its deals with Nvidia and AMD, OpenAI has not secured any financial incentive from Broadcom for buying vast quantities of its chips. Nvidia agreed to invest $100bn in OpenAI, while AMD gave the start-up warrants entitling it to buy up to 10 per cent of the chip group for just a cent a share.

OpenAI's deal with Broadcom may cost less than $500bn as prices for chips come down over time due to greater competition in the market, which has so far been dominated by Nvidia. Each gigawatt of AI computing capacity costs about $50bn to deploy at today's prices, according to OpenAI executives. This includes roughly $35bn on chips and a further $15bn on the infrastructure to run them. Broadcom's chips are expected to cost less than Nvidia's.

The Broadcom and AMD deals are intended to reduce OpenAI's reliance on Nvidia, help it control costs and bring chip prices down over time, according to people close to the deals. Altman has said for months that a shortage of processors had slowed down his company's progress in releasing new versions of ChatGPT.
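As a rough illustration of where the $350bn-$500bn range comes from, here is a minimal back-of-the-envelope sketch, assuming the per-gigawatt figures cited by OpenAI executives (roughly $35bn of chips and about $50bn all-in per gigawatt) scale linearly across the 10GW order; actual pricing is not disclosed and, as the article notes, Broadcom's custom chips are expected to cost less than Nvidia's.

```python
# Back-of-the-envelope estimate for the 10GW Broadcom order, using the
# per-gigawatt figures quoted by OpenAI executives in the article.
# Illustrative assumptions only, not disclosed deal terms.

ORDER_GW = 10            # size of the Broadcom chip order, in gigawatts
CHIPS_BN_PER_GW = 35     # ~$35bn of chips per gigawatt of capacity
ALL_IN_BN_PER_GW = 50    # ~$50bn per gigawatt including data-centre build-out

chips_only = ORDER_GW * CHIPS_BN_PER_GW   # $350bn: chips alone
all_in = ORDER_GW * ALL_IN_BN_PER_GW      # $500bn: chips plus infrastructure

print(f"Estimated spend: ${chips_only}bn to ${all_in}bn")
```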
[2]
OpenAI partners with Broadcom to build custom AI chips, adding to Nvidia and AMD deals
OpenAI and Broadcom said Monday that they're jointly building and deploying 10 gigawatts of custom artificial intelligence accelerators as part of a broader effort across the industry to scale AI infrastructure. Broadcom shares soared 12% premarket following news of the deal. The companies didn't disclose financial terms.

While the companies have been working together for 18 months, they're now going public with plans to develop and deploy racks of OpenAI-designed chips starting late next year. OpenAI has announced massive deals in recent weeks with Nvidia, Oracle and Advanced Micro Devices, as it tries to secure the capital and compute necessary for its historically ambitious AI buildout plans.

"These things have gotten so complex you need the whole thing," OpenAI CEO Sam Altman said in a podcast with OpenAI and Broadcom executives that the companies released along with the news. The systems include networking, memory and compute -- all customized for OpenAI's workloads and built on Broadcom's Ethernet stack.

By designing its own chips, OpenAI can bring compute costs down and stretch its infrastructure dollars further. Industry estimates peg the cost of a 1-gigawatt data center at roughly $50 billion, with $35 billion of that typically allocated to chips -- based on Nvidia's current pricing.
[3]
OpenAI taps Broadcom to build its first AI processor in latest chip deal - The Economic Times
OpenAI has partnered with Broadcom to produce its first in-house artificial intelligence processors, the latest chip tie-up for the ChatGPT maker as it races to secure the computing power needed to meet surging demand for its services.

The companies said on Monday that OpenAI would design the chips, which Broadcom will develop and deploy starting in the second half of 2026. They will roll out 10 gigawatts' worth of custom chips, whose power consumption is roughly equivalent to the needs of more than 8 million US households or five times the electricity produced by the Hoover Dam.

The agreement is the latest in a string of massive AI chip investments that have highlighted the technology industry's surging appetite for computing power as it races to build systems that meet or surpass human intelligence. OpenAI last week unveiled a 6-gigawatt AI chip supply deal with AMD that includes an option to buy a stake in the chipmaker, days after disclosing that Nvidia plans to invest up to $100 billion in the startup and provide it with data-center systems with at least 10 gigawatts of capacity.

"Partnering with Broadcom is a critical step in building the infrastructure needed to unlock AI's potential," OpenAI CEO Sam Altman said in a statement. Financial details of the agreement were not disclosed and it was not immediately clear how OpenAI would fund the deal.

Custom chip boom

The tie-up with Broadcom, first reported by Reuters last year, places OpenAI among cloud-computing giants such as Alphabet-owned Google and Amazon.com that are developing custom chips to meet surging AI demand and reduce dependence on Nvidia's costly processors that are limited in supply. The approach is not a sure bet. Similar efforts by Microsoft and Meta have run into delays or failed to match the performance of Nvidia chips, according to media reports, and analysts believe custom chips do not pose a threat to Nvidia's dominance in the short term.

The rise of custom chips has, however, turned Broadcom - long known for its networking hardware - into one of the biggest winners of the generative AI boom, with its stock price rising nearly sixfold since the end of 2022. The company unveiled a blockbuster $10 billion custom AI chip order in September from an unnamed new customer that some analysts and market watchers speculated was OpenAI.

Broadcom and OpenAI said on Monday that the deployment of the new custom chips would be completed by the end of 2029, building on their existing co-development and supply agreements. The new systems will be scaled entirely using Broadcom's Ethernet and other networking gear, giving the company an edge over smaller rivals such as Marvell Technology and challenging Nvidia's InfiniBand networking solution.
[4]
OpenAI taps Broadcom to build its first AI processor in latest chip deal
(Reuters) - OpenAI has partnered with Broadcom to produce its first in-house artificial intelligence processors, the latest chip tie-up for the ChatGPT maker as it races to secure the computing power needed to meet surging demand for its services. Shares of Broadcom rose more than 12% in premarket trading.

The companies said on Monday that OpenAI would design the chips, which Broadcom will develop and deploy starting in the second half of 2026. They will roll out 10 gigawatts' worth of custom chips, whose power consumption is roughly equivalent to the needs of more than 8 million U.S. households or five times the electricity produced by the Hoover Dam.

The agreement is the latest in a string of massive AI chip investments that have highlighted the technology industry's surging appetite for computing power as it races to build systems that meet or surpass human intelligence. OpenAI last week unveiled a 6-gigawatt AI chip supply deal with AMD that includes an option to buy a stake in the chipmaker, days after disclosing that Nvidia plans to invest up to $100 billion in the startup and provide it with data-center systems with at least 10 gigawatts of capacity.

"Partnering with Broadcom is a critical step in building the infrastructure needed to unlock AI's potential," OpenAI CEO Sam Altman said in a statement. Financial details of the agreement were not disclosed and it was not immediately clear how OpenAI would fund the deal.

CUSTOM CHIP BOOM

The tie-up with Broadcom, first reported by Reuters last year, places OpenAI among cloud-computing giants such as Alphabet-owned Google and Amazon.com that are developing custom chips to meet surging AI demand and reduce dependence on Nvidia's costly processors that are limited in supply. The approach is not a sure bet. Similar efforts by Microsoft and Meta have run into delays or failed to match the performance of Nvidia chips, according to media reports, and analysts believe custom chips do not pose a threat to Nvidia's dominance in the short term.

The rise of custom chips has, however, turned Broadcom - long known for its networking hardware - into one of the biggest winners of the generative AI boom, with its stock price rising nearly six-fold since the end of 2022. The company unveiled a blockbuster $10 billion custom AI chip order in September from an unnamed new customer that some analysts and market watchers speculated was OpenAI.

Broadcom and OpenAI said on Monday that the deployment of the new custom chips would be completed by the end of 2029, building on their existing co-development and supply agreements. The new systems will be scaled entirely using Broadcom's Ethernet and other networking gear, giving the company an edge over smaller rivals such as Marvell Technology and challenging Nvidia's InfiniBand networking solution.

(Reporting by Max Cherney in San Francisco and Arsheeya Bajwa in Bengaluru; Editing by Shinjini Ganguli)
OpenAI announces a partnership with Broadcom to develop and deploy custom AI chips, marking a significant step in the company's ambitious AI infrastructure expansion. This deal follows recent agreements with Nvidia and AMD, highlighting the growing demand for AI computing power.
OpenAI, the company behind ChatGPT, has announced a groundbreaking partnership with Broadcom to design and produce its first in-house artificial intelligence processors. This collaboration marks a significant milestone in OpenAI's aggressive strategy to secure the computing power necessary for its AI services [1][2]. The agreement between OpenAI and Broadcom involves the development and deployment of 10 gigawatts' worth of custom AI chips, starting in the second half of 2026 [3]. This massive undertaking is equivalent to the power consumption of more than 8 million U.S. households or five times the electricity produced by the Hoover Dam [4].
The Broadcom deal is the latest in a series of significant AI chip investments by OpenAI:
- a 10-gigawatt chip agreement with Nvidia, which also plans to invest up to $100 billion in the startup [1]
- a 6-gigawatt chip supply deal with AMD that includes an option to buy a stake in the chipmaker [3]
- a $300 billion, five-year data center deal with Oracle [1]
OpenAI's move to develop custom chips aligns it with cloud-computing giants like Google and Amazon, who are also creating bespoke processors to meet the surging demand for AI computing power [4]. This strategy aims to reduce dependence on Nvidia's costly and supply-constrained processors.

The partnership has significantly boosted Broadcom's market position, with its stock price rising nearly sixfold since the end of 2022 [4]. The deal also gives Broadcom an edge over smaller rivals and challenges Nvidia's networking solutions [4].

While custom chip development offers potential benefits, it's not without risks. Similar efforts by Microsoft and Meta have faced delays or performance issues compared to Nvidia chips [4]. However, OpenAI CEO Sam Altman remains optimistic, describing the race to develop AI infrastructure as "the biggest joint industrial project in human history" [1].

As OpenAI continues to expand its computing capacity, questions remain about how the company will fund these massive investments, given that the costs far exceed its current revenues [1]. Nevertheless, this ambitious infrastructure push underscores the critical importance of computing power in the rapidly evolving AI landscape.

Summarized by Navi