46 Sources
[1]
OpenAI and Broadcom partner on AI hardware | TechCrunch
The AI research lab announced Monday it formed a partnership with semiconductor company Broadcom for 10 gigawatts worth of custom AI accelerator hardware. These AI accelerator racks will be deployed to OpenAI data centers and partner data centers starting in 2026 and running through 2029. "By designing its own chips and systems, OpenAI can embed what it's learned from developing frontier models and products directly into the hardware, unlocking new levels of capability and intelligence," the company said in a press release. While terms of the deal were not disclosed, the Financial Times estimated it could cost OpenAI $350 billion to $500 billion. This is just the latest infrastructure deal for OpenAI in recent weeks. Last week, OpenAI announced it was purchasing an additional six gigawatts of chips from AMD in a deal worth tens of billions of dollars. In September, Nvidia announced it was investing $100 billion into OpenAI alongside a letter of intent for OpenAI to tap 10 gigawatts worth of Nvidia hardware. OpenAI also reportedly signed a historic $300 billion cloud infrastructure deal with Oracle in September. Neither company has confirmed the deal. TechCrunch reached out to OpenAI for more information.
[2]
OpenAI Partners With Broadcom To Deploy Custom AI Chips
Imad is a senior reporter covering Google and internet culture. Hailing from Texas, Imad started his journalism career in 2013 and has amassed bylines with The New York Times, The Washington Post, ESPN, Tom's Guide and Wired, among others. OpenAI is collaborating with chip maker Broadcom to develop and deploy newly designed AI chips, the companies said in blog posts on Monday. The deal will involve OpenAI designing systems and accelerators, which are specialized hardware used for demanding calculations, and Broadcom deploying these chips. OpenAI says developing its own chips will allow it to use what it learned developing frontier AI models and better tune its hardware to unlock new capabilities. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) "Partnering with Broadcom is a critical step in building the infrastructure needed to unlock AI's potential and deliver real benefits for people and businesses," said Sam Altman, co-founder and CEO of OpenAI, in the blog post. "Developing our own accelerators adds to the broader ecosystem of partners all building the capacity required to push the frontier of AI to benefit all humanity." Broadcom, too, is excited about the OpenAI partnership. "Our partnership with OpenAI continues to set new industry benchmarks for the design and deployment of open, scalable and power-efficient AI clusters," said Charlie Kawwas, president of the semiconductor solutions group for Broadcom. "Custom accelerators combine remarkably well with standards-based Ethernet scale-up and scale-out networking solutions to provide cost and performance optimized next generation AI infrastructure." The server racks will include Broadcom's suite of Ethernet, PCIe and optical connectivity products.
Representatives for OpenAI and Broadcom did not immediately respond to requests for comment. Companies have been eager to make deals with OpenAI in recent months. The company's core product, ChatGPT, is now a household name and kicked off the AI revolution. It currently boasts 800 million active weekly users and its new Sora 2 generative video model has been flooding the internet with convincing AI clips. Last month, chip giant Nvidia invested $100 billion into OpenAI to build out 10 gigawatts worth of data centers. Earlier this month, AMD granted OpenAI warrants for 160 million of its shares, or about 10% of the company, in exchange for OpenAI buying six gigawatts of its upcoming MI450 chips, which are still being designed. These deals continue to drive AI hype, although some fear such news might be fueling an AI bubble. At the same time, many of these tech companies buy from one another, trading investments and driving up each other's values, along with the overall market. Some fear the stock market is quickly becoming an AI-built house of cards in which a significant shock could collapse the whole deck, hurting the economy. To put it into some perspective, AI investments have been so vast that they have surpassed US consumer spending as a driver of economic growth. As a result, the valuations of Nvidia, Microsoft and Google have vaulted past $3 trillion. In fact, Nvidia's valuation has already surpassed $4 trillion, which is greater than the entire equity markets of Italy, Germany, France, the UK and Canada. The problem is that the profitability of expensive AI systems still hasn't been proven.
[3]
OpenAI partners with Broadcom to produce its own AI chips
OpenAI is teaming up with Broadcom to produce its own computer chips to power its AI data centers. The deal is the latest in a series of partnerships designed to reduce the company's reliance on Nvidia and secure enough computing power to fuel apps like ChatGPT and Sora and realize its mission to develop superintelligent AI. OpenAI said designing its own chips allows it to "embed what it's learned from developing frontier models and products directly into the hardware, unlocking new levels of capability and intelligence." The partnership, announced Monday, will enable OpenAI to develop and deploy "10 gigawatts of custom AI accelerators" using its own chips and systems. To put that number in context, the output of a typical nuclear reactor is around one gigawatt. Broadcom is expected to start deploying racks of equipment in the second half of 2026 and the deal should finish by the end of 2029, the announcement said. OpenAI co-founder and CEO Sam Altman said the partnership "is a critical step in building the infrastructure needed to unlock AI's potential and deliver real benefits for people and businesses." The announcement comes after OpenAI struck a six gigawatt deal with AMD and a 10 gigawatt deal with Nvidia. The infrastructure partnerships have only recently become possible after OpenAI altered its exclusive arrangement with Microsoft for AI compute. Creating custom chips is part of a growing movement within the tech industry as major players like Meta, Google, and Microsoft work to bolster vital supply lines amid soaring demand and reduce their reliance on Nvidia's AI chips. To date, custom chip projects have not posed a plausible threat to the chipmaking behemoth, though they have handsomely benefited other chipmakers like Broadcom.
[4]
Report: Arm developing custom CPU for OpenAI's in-house accelerator -- core IP would underpin 10GW of installed AI capacity
The Information reports that Arm is developing a CPU for OpenAI's custom Broadcom-built accelerator, part of a sweeping expansion plan. OpenAI is reportedly working with SoftBank-owned Arm on a new CPU to complement the custom AI accelerator it is co-developing with Broadcom. The collaboration, first reported by The Information, would see Arm design a server-class CPU that anchors OpenAI's next-generation AI racks, potentially representing one of Arm's biggest steps into the data center market to date. The chip in question is OpenAI's in-house AI accelerator, part of plans announced on October 13 to deploy custom AI accelerators and rack systems in collaboration with Broadcom. The SoC, specialized for inference workloads, is expected to enter production in late 2026 and scale up to support roughly 10 gigawatts of compute capacity between 2026 and 2029. The Broadcom accelerator, said to be fabricated by TSMC, has been in development for roughly 18 months. According to The Information, Arm's new role goes well beyond supplying architectural blueprints. The company has recently started designing and manufacturing its own CPUs rather than just licensing cores to partners, and sees the OpenAI contract as a chance to expand its server ambitions. People familiar with the discussions told the outlet that OpenAI could use the Arm-designed CPU not only with its Broadcom chip, but also with systems from Nvidia and AMD. The potential revenue from OpenAI's CPU program could reach into the billions, the report also said, representing a major windfall for SoftBank, which owns nearly 90% of Arm and has borrowed heavily against its stake. SoftBank has also pledged to invest tens of billions of dollars into OpenAI's data center build-out and to buy AI technology from the startup to help accelerate Arm's own chip development cycle. Together with earlier agreements with Nvidia and AMD, OpenAI says its chip programs now total as much as 26GW of planned data center capacity. 
If successful, OpenAI's custom chip deployments could reach a total installed base that analysts estimate would cost more than $1 trillion in construction and equipment, in tandem with its Nvidia and AMD purchases. The OpenAI-Broadcom chip could also give the ChatGPT developer more leverage in pricing talks with Nvidia, whose H100 and newer Blackwell GPUs still dominate the AI training market. If Broadcom and TSMC can scale production, OpenAI's inference chips may offer a partial hedge against the tight GPU supply that has constrained AI labs for much of the past year.
[5]
OpenAI turns to Broadcom for 10GW of custom accelerators
Every human deserves their own accelerator, says ChatGPT creator
Broadcom has cuddled up with OpenAI as the ChatGPT outfit looks for ever more help building out the vast infrastructure it needs to deliver on its dreams of advanced intelligence - and possibly even a profit some day. The CEOs of both companies revealed today they have been jointly working on custom silicon, which they plan to begin deploying late next year, in a collaboration sized at 10 GW of custom AI accelerators. In a statement, Broadcom and OpenAI said: "By designing its own chips and systems, OpenAI can embed what it's learned from developing frontier models and products directly into the hardware, unlocking new levels of capability and intelligence." The racks will be "scaled entirely with Ethernet and other connectivity solutions from Broadcom... with deployments across OpenAI's facilities and partner datacenters." Sam Altman, co-founder and CEO of OpenAI, said: "Developing our own accelerators adds to the broader ecosystem of partners all building the capacity required to push the frontier of AI to provide benefits to all humanity." In a podcast accompanying the announcement and featuring both Hock Tan and Altman, the OpenAI chief said that 10 GW would "serve the needs of the world to use advanced intelligence." Altman said the agreement covered a full system, apparently geared towards inference. He added that it turned out Broadcom was also "incredible" at designing systems and 10 GW was an astonishing capacity on top of what it's already building. The GPUs of today were amazing but with the combination of model, chip, and rack, "we will be able to wring out so much more intelligence per watt," he continued. Charlie Kawwas, president of the Semiconductor Solutions Group for Broadcom Inc, added: "The racks include Broadcom's end-to-end portfolio of Ethernet, PCIe and optical connectivity solutions, reaffirming our AI infrastructure portfolio leadership."
OpenAI president Greg Brockman said it had been able to apply its own models to designing the chip. The model has come up with optimizations, he said. Although humans could have done this, he admitted, doing it this way accelerated the process. Brockman also envisaged a world where every human had their own accelerator working for them behind the scenes, and said partnering with Broadcom would bring this nirvana quicker. "There's 10 billion humans. We are nowhere near being able to build 10 billion chips, and so there's a long way to go before we are able to saturate not just the demand, but what humanity really deserves." Altman described the AI buildout as the biggest joint industrial project in human history. He drew a comparison with the proportion of global GDP that went into the construction of the Great Wall. Though maybe that's not the best comparison, as it was largely built by forced labor to keep the barbarians out, took centuries to deliver, and ultimately the barbarians found their way in, or around it, anyway. Unlike other recent OpenAI deals, it seems its purchases of Broadcom kit won't be linked to any other financial entanglements between the companies. Earlier this month, AMD announced a 6 GW agreement to power OpenAI's AI infrastructure across multiple generations of AMD Instinct GPUs. The contract was accompanied by a warrant for up to 160 million shares of AMD common stock, structured to pay out provided specific staged targets are met. In September, Nvidia announced a 10 GW deal with OpenAI, with an accompanying (up to) $100 billion investment by the chip maker. The same month, OpenAI said it will pay Oracle $300 billion over five years to build out 5 GW of capacity. OpenAI's ARR as of June was around $10 billion.
Spinning a web of interdependencies means multiple billion-dollar technology organizations have a vested interest in OpenAI succeeding - the genAI pioneer says it won't be cashflow positive for four more years, and expects to spend a lot more on DC infrastructure during those years. Market watchers are nervous that these sorts of deals indicate some sort of AI bubble, as companies bandy around phrases such as gigawatts and tokens, instead of boring old terms such as revenues or income. Some have even drawn parallels with the dotcom era, though clearly there's no comparison. Back then companies were bandying around words like eyeballs and stickiness. You don't need ChatGPT to tell you that we're clearly talking oranges and lemons here. ®
[6]
OpenAI extends chip spending spree with multibillion-dollar Broadcom deal
OpenAI has agreed to purchase 10 gigawatts of computer chips from US semiconductor giant Broadcom, the latest in a string of deals by the artificial intelligence start-up worth hundreds of billions of dollars. The mammoth chip order means OpenAI could spend another $350bn to $500bn on top of the roughly $1tn of chip and data centre deals it has signed in recent months as it races to secure the computing power to run services such as ChatGPT. In September, OpenAI agreed to buy chips with a total power consumption of 10GW from Nvidia, and last week announced it would buy a further 6GW of chips from rival chip designer AMD. OpenAI also recently struck a data centre deal with Oracle that will cost $300bn over five years. The deals have bound some of the world's biggest tech groups to OpenAI's fortunes, despite questions about how OpenAI will fund them as the cost dwarfs the start-up's existing revenues. They also commit OpenAI to an ambitious infrastructure development project as deploying the chips will require building vast new data centres. Under the latest deal, OpenAI has co-designed custom chips with Broadcom specifically for running its own AI models, marking the first time the start-up has produced its own AI chips. When completed, the deals would bring OpenAI's total access to computing capacity to over 26GW -- equivalent to about 26 nuclear reactors. In a podcast announcing the deal, OpenAI chief executive Sam Altman said his company had been working with Broadcom for 18 months to develop the custom chips, which would give it a "gigantic amount of computing infrastructure". He described the race to develop AI infrastructure as "the biggest joint industrial project in human history". The chips have been designed specifically for inference, the process whereby an AI responds to users' requests, Altman said. Inference is expected to become a far greater portion of the technology's needs as demand for AI computing progresses beyond training or creating models. 
The Financial Times first revealed in September that OpenAI was working with Broadcom to mass-produce its own chips next year. Hock Tan, chief executive of Broadcom, said: "This is like the railroad or the internet. [AI] is becoming a critical utility over time for 8bn people globally." He added: "But it cannot be done with just one party, it needs a lot of partnerships and collaboration across an ecosystem." Unlike its deals with Nvidia and AMD, OpenAI has not secured any financial incentive from Broadcom for buying vast quantities of its chips. Nvidia agreed to invest $100bn in OpenAI, while AMD gave the start-up warrants entitling it to buy up to 10 per cent of the chip group for just a cent a share. OpenAI's deal with Broadcom may cost less than $500bn as prices for chips come down over time due to greater competition in the market, which has so far been dominated by Nvidia. Each gigawatt of AI computing capacity costs about $50bn to deploy at today's prices, according to OpenAI executives. This includes roughly $35bn on chips and a further $15bn on the infrastructure to run them. Broadcom's chips are expected to cost less than Nvidia's. The Broadcom and AMD deals are intended to reduce OpenAI's reliance on Nvidia, help it control costs and bring chip prices down over time, according to people close to the deals. Altman has said for months that a shortage of processors had slowed down his company's progress in releasing new versions of ChatGPT.
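As a back-of-the-envelope check, the per-gigawatt figures attributed to OpenAI executives above can be combined to size the 10 GW deal. This is a sketch using the article's rough estimates, not disclosed contract terms:

```python
# Rough cost model from the per-gigawatt estimates cited above
# (reported approximations, not disclosed deal terms).
CHIPS_COST_PER_GW_BN = 35   # ~$35bn of chips per gigawatt
INFRA_COST_PER_GW_BN = 15   # ~$15bn of supporting infrastructure per gigawatt
COST_PER_GW_BN = CHIPS_COST_PER_GW_BN + INFRA_COST_PER_GW_BN  # ~$50bn in total

def deployment_cost_bn(gigawatts):
    """Estimated deployment cost in billions of dollars at today's prices."""
    return gigawatts * COST_PER_GW_BN

print(deployment_cost_bn(10))  # 500 -> the top of the FT's $350bn-$500bn range
```

Cheaper Broadcom silicon and falling chip prices, as the article notes, are why the final figure may land below that naive $500bn ceiling.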
[7]
OpenAI and Broadcom to co-develop 10GW of custom AI chips in yet another blockbuster AI partnership -- deployments start in 2026
The AI firm's latest hardware deal locks in another 10 gigawatts of capacity as it moves to design its own accelerators. OpenAI has signed a multi-year deal with Broadcom to co-develop and deploy 10 gigawatts of custom AI accelerators and rack systems, the companies announced on October 13. OpenAI will handle accelerator and system design, while Broadcom leads development and roll-out starting in the second half of 2026. Full deployment is targeted by the end of 2029. The agreement forms part of an ongoing, aggressive hardware push by OpenAI. In contrast to its current reliance on Nvidia GPUs, the new systems will be based on in-house accelerators paired with Broadcom's networking and hardware IP. The deal could mark a shift away from traditional GPU-centric clusters in favor of tightly integrated silicon tailored to OpenAI's training and inference workloads. The two companies have already been working together for over 18 months, and this formal agreement builds on that collaboration. Few technical details have been disclosed, but the joint announcement confirms that the systems will use Ethernet-based networking, suggesting a data-center architecture designed for scalability and vendor neutrality. OpenAI says deployments will be phased over several years, with the first racks going online in the second half of 2026. The new agreement adds to OpenAI's existing partnerships with Nvidia and AMD, bringing the company's total hardware commitments to an estimated 26 gigawatts, including roughly 10 gigawatts of Nvidia infrastructure and an undisclosed slice of AMD's upcoming MI series. Interestingly, OpenAI is not believed to be Broadcom's still-unknown $10 billion customer. Speaking with CNBC, Broadcom semiconductor president Charlie Kawwas appeared alongside OpenAI's Greg Brockman and joked, "I would love to take a $10 billion [purchase order] from my good friend Greg," adding, "He has not given me that PO yet."
WSJ reports that the deal is worth "multiple billions of dollars." OpenAI stands to gain a deep bench in ASIC design and proven supply chain maturity from Broadcom. The company already produces custom AI silicon for hyperscale customers, including Google's TPU infrastructure. By leveraging Broadcom's Ethernet and chiplet IP, OpenAI gets a path to differentiated hardware without building a silicon team from scratch. Meanwhile, for Nvidia, the deal adds to a growing list of partial defections among major AI customers exploring in-house silicon. Amazon, Google, Meta, and Microsoft are all now pursuing custom accelerators. What remains to be seen is how well these bespoke solutions perform at scale, and whether vendors like Broadcom can match the ecosystem maturity of CUDA. Neither company has disclosed foundry partners, packaging flows, or memory choices for the upcoming accelerators. Those decisions will shape delivery timelines just as much as wafer capacity. With deployment still a year out, the clock is ticking.
[8]
OpenAI's hyperscaler ambitions are being put to the test with its latest megadeals
Broadcom-OpenAI deal expected to be cheaper than current GPU options
OpenAI began with a simple bet that better ideas, not better infrastructure, would unlock artificial general intelligence. But that view shifted years ago, as Altman realized that more compute, or processing power, meant more capability -- and ultimately, more dominance. On Monday morning, he unveiled his latest blockbuster deal, one that moves OpenAI squarely into the chipmaking business and further into competition with the hyperscalers. OpenAI is partnering with Broadcom to co-develop racks of custom AI accelerators, purpose-built for its own models. It's a big shift for a company that once believed intelligence would come from smarter algorithms, not bigger machines. "In 2017, the thing that we found was that we were getting the best results out of scale," the OpenAI CEO said in a company podcast on Monday. "It wasn't something we set out to prove. It was something we really discovered empirically because of everything else that didn't work nearly as well." That insight -- that the key was scale, not cleverness -- fundamentally reshaped OpenAI. Now, the company is expanding that logic even further, teaming up with Broadcom to design and deploy racks of custom silicon optimized for OpenAI's workloads. The deal gives OpenAI deeper control over its stack, from training frontier models to owning the infrastructure, distribution, and developer ecosystem that turns those models into lasting platforms. Altman's rapid series of deals and product launches is assembling a complete AI ecosystem, much like Apple did for smartphones and Microsoft did for PCs, with infrastructure, hardware, and developers at its core.
[9]
OpenAI is making its own AI chips with Broadcom's help
OpenAI is hungry for as much compute power as it can get its hands on, and the company has signed another deal with a chipmaker to help make that happen. This time around, it's teaming up with Broadcom to make custom chips and systems for use in both OpenAI's infrastructure and its partners' data centers. OpenAI is designing the "AI accelerators" and systems. Broadcom will start deploying those racks in the second half of next year, the companies said. The aim is to complete the rollout by the end of 2029. The two companies are said to have started working together 18 months ago. The deal is for 10 gigawatts of chips and it's worth "multiple billions of dollars," according to The Wall Street Journal. It was reported last month that OpenAI and Broadcom were making custom chips together. For what it's worth, the latter's CEO said recently that a new, unnamed client had put in an order worth $10 billion. The Broadcom deal follows agreements that OpenAI recently struck with both NVIDIA and AMD. NVIDIA is investing $100 billion into OpenAI and will provide it with 10 gigawatts of AI infrastructure. The deal with AMD is for six gigawatts of compute power. OpenAI is said to be paying AMD tens of billions of dollars under that agreement and it could ultimately take up to a 10 percent stake in the company. As with the Broadcom rollout, both the NVIDIA and AMD deployments are expected to start in the second half of 2026. OpenAI also inked a deal with Oracle in July for 4.5 gigawatts of data center capacity as part of its Stargate Project. According to recent reports, OpenAI CEO Sam Altman told employees that he wanted the company to build out 250 gigawatts of compute power over the next eight years, significantly up from the 2GW it's expected to have by the end of this year. (For context, 250GW is about a fifth of the energy generation capacity of the entire US, which sits at around 1,200GW.)
As things stand, it would likely cost around $10 trillion to buy that much capacity. Altman said OpenAI would have to develop new financing tools to make that happen, but he hasn't elaborated much on what those might look like. Even its current deals have OpenAI on the hook for hundreds of billions of dollars. While the likes of NVIDIA and Microsoft have invested heavily into OpenAI, there isn't a backer on the planet that can plow $10 trillion into the company. As things stand, OpenAI is very, very far away from making up the difference in revenue too. It's reportedly expecting to make $13 billion in revenue this year.
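A similar sketch puts the reported 250 GW goal in perspective. The ~$50bn-per-gigawatt rate is an assumption carried over from the OpenAI executives' estimates cited elsewhere in this roundup; at today's prices it actually implies somewhat more than the ~$10 trillion figure, which presumably assumes chip prices fall over time:

```python
# Scale check on the reported 250 GW buildout target
# (all figures are the articles' estimates, not confirmed plans).
US_GENERATION_CAPACITY_GW = 1200  # approximate total US generation capacity
TARGET_GW = 250                   # Altman's reported eight-year goal
COST_PER_GW_BN = 50               # assumed cost per gigawatt, in $bn

share_of_us_grid = TARGET_GW / US_GENERATION_CAPACITY_GW
total_cost_tn = TARGET_GW * COST_PER_GW_BN / 1000  # convert $bn to $tn

print(f"{share_of_us_grid:.0%} of US generation capacity")  # 21% -> "about a fifth"
print(f"${total_cost_tn:.1f} trillion at today's prices")   # $12.5 trillion
```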
[10]
OpenAI partners with Broadcom to design its own AI chips
SAN FRANCISCO (AP) -- OpenAI said Monday it is working with chipmaker Broadcom to design its own artificial intelligence computer chips. The two California companies didn't disclose the financial terms of the deal but said they will start deploying the new racks of customized "AI accelerators" next year. The deal marks the latest between OpenAI, maker of ChatGPT, and the companies building the chips and data centers required to power AI. OpenAI CEO Sam Altman said in a statement that "developing our own accelerators adds to the broader ecosystem of partners all building the capacity required to push the frontier of AI to provide benefits to all humanity." Broadcom shares surged more than 8% on Monday after the morning announcement. Broadcom CEO Hock Tan said in a statement that "we are thrilled to co-develop and deploy 10 gigawatts of next generation accelerators and network systems to pave the way for the future of AI."
[11]
OpenAI and Broadcom team up on $10 billion custom AI chip deal
Forward-looking: OpenAI's push for in-house chip design is part of a broader trend among Big Tech companies, with firms like Google, Amazon, and Meta already developing custom hardware tailored to their AI workloads. However, OpenAI's approach stands out for the scale of its ambition, the speed of its implementation, and close collaboration with hardware partners to develop not only chips but also rack-scale systems that use the latest networking technologies. OpenAI and Broadcom are moving forward to develop and roll out 10 gigawatts of custom AI chips and computing systems over the next four years. This not only marks a significant stride in OpenAI's effort to expand its infrastructure but also signals its ambition to influence the design of specialized hardware directly tied to its research and deployment of advanced models such as GPT and Sora. This is part of the recently announced partnership, estimated at around $10 billion, specifically aimed at co-developing and deploying custom AI chips at massive scale with Broadcom. The collaboration follows months of negotiations and builds on an 18-month partnership in which OpenAI and Broadcom co-developed a new series of AI accelerators, focusing on chips tailored for AI inference. These chips are expected to be manufactured using advanced nodes from TSMC and are likely to feature innovations such as systolic array architectures and high-bandwidth memory, critical for handling complex matrix and vector operations required by modern AI models. OpenAI plans to deploy these custom systems in its own data centers and in facilities managed by third parties, with a large-scale introduction scheduled to begin in the second half of next year.
Although financial details of the agreement have not been made public, independent industry analyses estimate the contract to be worth at least $10 billion, with the potential to rise significantly depending on the scale and duration of the deployment. OpenAI's commitment brings its total secured computing capacity - spanning agreements with Broadcom, Nvidia, and AMD - to more than 26 gigawatts. For context, this figure surpasses New York City's summer electricity consumption several times over and is roughly equivalent to the output of about 26 nuclear reactors. In addition, OpenAI recently signed contracts to deploy 6 gigawatts of AMD Instinct GPUs and secured multi-gigawatt deals with Nvidia. Such Herculean infrastructure comes with enormous financial implications. Industry observers estimate that OpenAI could ultimately spend hundreds of billions of dollars, or even more, to meet its ambitious capacity targets. Reports indicate that OpenAI expects to generate approximately $13 billion in revenue this year. OpenAI and its leadership have emphasized that the current agreements are only the beginning. CEO Sam Altman has outlined plans for even more ambitious expansion, setting an internal goal of 250 gigawatts of new data center capacity by 2033. At current estimates, achieving this target could cost more than $10 trillion and would require building the equivalent of 250 nuclear power plants. This raises fundamental questions about funding, energy supply, and the feasibility of such rapid growth within a globally constrained supply chain. As OpenAI continues its rise as the world's most valuable private tech startup, currently valued at around $500 billion, its aggressive hardware strategy is rapidly reshaping expectations across both the technology and energy sectors.
Consultancy estimates from Bain & Co. suggest that the scale of OpenAI's planned computing infrastructure could drive global AI revenue to about $2 trillion annually by 2030, surpassing the combined 2024 revenues of Amazon, Apple, Alphabet, Microsoft, Meta, and Nvidia. While OpenAI's path presents significant operational and financial challenges, its executives remain convinced that continuously expanding computing resources is essential to advancing the next generation of artificial general intelligence. Whether these enormous investments will ultimately pay off remains uncertain, but for now, OpenAI and its partners are moving forward at a pace few in the industry can match.
[12]
OpenAI partners with Broadcom to build custom AI chips, adding to Nvidia and AMD deals
OpenAI and Broadcom said Monday that they're jointly building and deploying 10 gigawatts of custom artificial intelligence accelerators as part of a broader effort across the industry to scale AI infrastructure. Broadcom shares soared 12% premarket following news of the deal. The companies didn't disclose financial terms. While they have been working together for 18 months, they're now going public with plans to develop and deploy racks of OpenAI-designed chips starting late next year. OpenAI has announced massive deals in recent weeks with Nvidia, Oracle and Advanced Micro Devices, as it tries to secure the capital and compute needs necessary for its historically ambitious AI buildout plans. "These things have gotten so complex you need the whole thing," OpenAI CEO Sam Altman said in a podcast with OpenAI and Broadcom executives that the companies released along with the news. The systems include networking, memory and compute -- all customized for OpenAI's workloads and built on Broadcom's Ethernet stack. By designing its own chips, OpenAI can bring compute costs down and stretch its infrastructure dollars further. Industry estimates peg the cost of a 1-gigawatt data center at roughly $50 billion, with $35 billion of that typically allocated to chips -- based on Nvidia's current pricing.
[13]
Image shows OpenAI's growing alliance of chip and cloud partners
New image outlines trillion-dollar network linking OpenAI's tech partners

OpenAI CEO Sam Altman recently discussed the future of artificial intelligence with G42's Peng Xiao, covering topics ranging from infrastructure and chip design to global cooperation. The session at GITEX Global 2025 in Dubai touched on Stargate UAE, which OpenAI is developing in partnership with G42 as the first international site in its global infrastructure project. Peng spoke about the importance of collaboration, saying, "No company and no individual can do this alone, and that's why we're here sharing the stage with Sam as an equitable partner." OpenAI has expanded its network of industry partners rapidly in recent times, including making massive deals with Nvidia, AMD, and, most recently, Broadcom. Under its newest agreement, OpenAI will design custom chips with Broadcom, deploying processors that will consume 10 gigawatts of electricity, with rollout beginning in the second half of next year. "Partnering with Broadcom is a critical step in building the infrastructure needed to unlock AI's potential and deliver real benefits for people and businesses," Altman said. "Developing our own accelerators adds to the broader ecosystem of partners all building the capacity required to push the frontier of AI to provide benefits to all humanity." Broadcom's president and CEO, Hock Tan, enthused, "Broadcom's collaboration with OpenAI signifies a pivotal moment in the pursuit of artificial general intelligence. OpenAI has been in the forefront of the AI revolution since the ChatGPT moment, and we are thrilled to co-develop and deploy 10 gigawatts of next-generation accelerators and network systems to pave the way for the future of AI." The scale of OpenAI's expanding infrastructure network was revealed in an image shared on X by StockMarket.News. Described as "the most expensive AI infrastructure network in history," the network adds up to more than $1 trillion in combined deals.
The X thread highlighted Stargate, a $500 billion public-private project involving the U.S. government, Oracle, and SoftBank; Nvidia's $100 billion supply deal for tens of thousands of GPUs; Microsoft's $13 billion Azure partnership; and AMD's role, which could also reach $100 billion for 6 gigawatts of next-generation chips. StockMarket.News described OpenAI's strategy as "circular financing," where GPU vendors fund OpenAI, OpenAI drives demand, and vendors profit in return. It said OpenAI now manages every layer of the AI supply chain, from chips to cloud infrastructure and energy. The outlet noted OpenAI's valuation has surpassed $500 billion, linking its growth to nearly $3 trillion in expected data center spending through 2028, with $800 billion financed through private credit. The post ended with a pointed question: "How long can this model last before the money runs thin?"
[14]
OpenAI big chip orders dwarf its revenues -- for now
OpenAI is ordering hundreds of billions of dollars worth of chips in the artificial intelligence race, raising questions among investors about how the startup will finance these purchases. In less than a month, the San Francisco startup behind ChatGPT has committed to acquiring a staggering 26 gigawatts of sophisticated data processors from Nvidia, AMD, and Broadcom -- more than 10 million units that would consume power equivalent to 20 standard nuclear reactors. "They will need hundreds of billions of dollars to live up to their obligations," said Gil Luria, managing director at D.A. Davidson, a financial consulting firm. The challenge is daunting: OpenAI doesn't expect to be profitable until 2029 and is forecasting billions in losses this year, despite generating about $13 billion in revenue. OpenAI declined to comment on its financing strategy. However, in a CNBC interview, co-founder Greg Brockman acknowledged the difficulty of building sufficient computing infrastructure to handle the "avalanche of demand" for AI, noting that creative financing mechanisms will be necessary.

Creative financing

Nvidia, AMD, and Broadcom all declined to discuss specific deals with OpenAI. Silicon Valley-based Nvidia has announced plans to invest up to $100 billion in OpenAI over several years to build the world's largest AI infrastructure. OpenAI would use those funds to buy chips from Nvidia in a game of "circular financing," with Nvidia recouping its investment by taking a share in OpenAI, one of its biggest customers and the world's hottest AI company. AMD has taken a different approach, offering OpenAI options to acquire equity in AMD -- a transaction considered unusual in financial circles and a sign that it is AMD that is seeking to seize some of OpenAI's limelight with investors. "It represents another unhealthy dynamic," Luria said, suggesting the arrangement reveals AMD's desperation to compete in a market dominated by Nvidia.

Crash or soar?
The stakes couldn't be higher. OpenAI co-founder and CEO Sam Altman "has the power to crash the global economy for a decade or take us all to the promised land," Bernstein Research senior analyst Stacy Rasgon wrote in a note to investors this month. "Right now, we don't know which is in the cards." Even selling stakes in OpenAI at its current $500 billion valuation won't cover the startup's chip commitments, according to Luria, meaning the company will need to borrow money. One possibility: using the chips themselves as collateral for loans. Meanwhile, deep-pocketed competitors like Google and Meta can fund their AI efforts from massive profits generated by their online advertising businesses -- a luxury OpenAI doesn't have. The unbridled spending has sparked concerns about a speculative bubble reminiscent of the late 1990s dot-com frenzy, which collapsed and wiped out massive investments. However, some experts see key differences. "There is very real demand today for AI in a way that seems a little different than the boom in the 1990s," said Josh Lerner, a Harvard Business School professor of investment banking. CFRA analyst Angelo Zino pointed to OpenAI's remarkable growth and more than 800 million ChatGPT users as evidence that a partnership approach to financing makes sense. Still, Lerner acknowledges the uncertainty: "It's a real dilemma. How does one balance this future potential with the speculative nature" of its investments today?
[15]
OpenAI-Broadcom agreement sends shares of chipmaker soaring | Fortune
As part of the pact, OpenAI will design the hardware and work with Broadcom to develop it, according to a joint statement on Monday. The plan is to add 10 gigawatts' worth of AI data center capacity, with the companies beginning to deploy racks of servers containing the gear in the second half of 2026. By customizing the processors, OpenAI said it will be able to embed what it has learned from developing AI models and services "directly into the hardware, unlocking new levels of capability and intelligence." The hardware rollout should be completed by the end of 2029, according to the companies. For Broadcom, the move provides deeper access to the booming AI market. Monday's agreement confirms an arrangement that Broadcom Chief Executive Officer Hock Tan had hinted at during an earnings conference call last month. Investors sent Broadcom shares up as much as 11% on Monday, betting that the OpenAI alliance will generate hundreds of billions of dollars in new revenue for the chipmaker. But the details of how OpenAI will pay for the equipment aren't spelled out. While the AI startup has shown it can easily raise funding from investors, it's burning through wads of cash and doesn't expect to be cash-flow positive until around the end of this decade. OpenAI, the creator of ChatGPT, has inked a number of blockbuster deals this year, aiming to ease constraints on computing power. Nvidia Corp., whose chips handle the majority of AI work, said last month that it will invest as much as $100 billion in OpenAI to support new infrastructure -- with a goal of at least 10 GW of capacity. And just last week, OpenAI announced a pact to deploy 6 GW of Advanced Micro Devices Inc. processors over multiple years. As AI and cloud companies announce large projects every few days, it's often not clear how the efforts are being financed.
The interlocking deals also have boosted fears of a bubble in AI spending, particularly as many of these partnerships involve OpenAI, a fast-growing but unprofitable business. While purchasing chips from others, OpenAI has also been working on designing its own semiconductors. They're mainly intended to handle the inference stage of running AI models -- the phase after the technology is trained. There's no investment or stock component to the Broadcom deal, OpenAI said, making it different than the agreements with Nvidia and AMD. An OpenAI spokesperson declined to comment on how the company will finance the chips, but the underlying idea is that more computing power will let the company sell more services. A single gigawatt of AI computing capacity today costs roughly $35 billion for the chips alone, with 10 GW totaling upwards of $350 billion. But a chief reason OpenAI is working to develop its own chip is to bring down its costs, and it's unclear what price Broadcom's chips will command under the deal. OpenAI might be trying to emulate Alphabet Inc.'s Google, which made its own chips using Broadcom's technology and saw lower costs compared with other AI companies, such as Meta Platforms Inc., according to Bloomberg Intelligence analyst Mandeep Singh. Google's success with Broadcom might have steered OpenAI to that chipmaker, rather than suppliers such as Marvell Technology Inc., Singh added. In announcing the agreement, OpenAI CEO Sam Altman said that his company has been working with Broadcom for 18 months. The startup is rethinking technology starting with the transistors and going all the way up to what happens when someone asks ChatGPT a question, he said on a podcast released by his company. "By being able to optimize across that entire stack, we can get huge efficiency gains, and that will lead to much better performance, faster models, cheaper models." 
When Tan referred to the agreement last month, he didn't name the customer, though people familiar with the matter identified it as OpenAI. "If you do your own chips, you control your destiny," Tan said in the podcast Monday. Broadcom has increasingly been seen as a key beneficiary of AI spending, helping propel its share price this year. The stock had risen 40% this year through Friday's close, outpacing a 29% gain by the benchmark Philadelphia Stock Exchange Semiconductor Index. OpenAI, meanwhile, has garnered a $500 billion valuation, making it the world's biggest startup by that measure. By tapping Broadcom's networking technology, OpenAI is hedging its bets. Broadcom's Ethernet-based options compete with Nvidia's proprietary technology. OpenAI also will be designing its own gear as part of its work on custom hardware, the startup said. Broadcom won't be providing the data center capacity itself. Instead, it will deploy server racks with custom hardware to facilities run by either OpenAI or its cloud-computing partners. A single gigawatt is about the capacity of a conventional nuclear power plant. Still, 10 GW of computing power alone isn't enough to support OpenAI's vision of achieving artificial general intelligence, said OpenAI co-founder and President Greg Brockman. "That is a drop in the bucket compared to where we need to go," he said. Getting to the level under discussion isn't going to happen quickly, said Charlie Kawwas, president of Broadcom's semiconductor solutions group. "Take railroads -- it took about a century to roll it out as critical infrastructure. If you take the internet, it took about 30 years," he said. "This is not going to take five years."
[17]
OpenAI taps Broadcom to build its first AI processor in latest chip deal
OpenAI has partnered with Broadcom AVGO.O to produce its first in-house artificial intelligence processors, the latest chip tie-up for the ChatGPT maker as it races to secure the computing power needed to meet surging demand for its services. Shares of Broadcom rose more than 7% in early trading. The companies said on Monday that OpenAI would design the chips, which Broadcom will develop and deploy starting in the second half of 2026. They will roll out 10 gigawatts' worth of custom chips, whose power consumption is roughly equivalent to the needs of more than 8 million U.S. households or five times the electricity produced by the Hoover Dam. The agreement is the latest in a string of massive AI chip investments that have highlighted the technology industry's surging appetite for computing power as it races to build systems that meet or surpass human intelligence. OpenAI last week unveiled a 6-gigawatt AI chip supply deal with AMD AMD.O that includes an option to buy a stake in the chipmaker, days after disclosing that Nvidia NVDA.O plans to invest up to $100 billion in the startup and provide it with data-center systems with at least 10 gigawatts of capacity. "Partnering with Broadcom is a critical step in building the infrastructure needed to unlock AI's potential," OpenAI CEO Sam Altman said in a statement. Financial details of the agreement were not disclosed and it was not immediately clear how OpenAI would fund the deal. The tie-up with Broadcom, first reported by Reuters last year, places OpenAI among cloud-computing giants such as Alphabet-owned Google GOOGL.O and Amazon.com AMZN.O that are developing custom chips to meet surging AI demand and reduce dependence on Nvidia's costly processors that are limited in supply. The approach is not a sure bet. 
Similar efforts by Microsoft MSFT.O and Meta META.O have run into delays or failed to match the performance of Nvidia chips, according to media reports, and analysts believe custom chips do not pose a threat to Nvidia's dominance in the short term. The rise of custom chips has, however, turned Broadcom -- long known for its networking hardware -- into one of the biggest winners of the generative AI boom, with its stock price rising nearly six-fold since the end of 2022. The company unveiled a blockbuster $10 billion custom AI chip order in September from an unnamed new customer that some analysts and market watchers speculated was OpenAI. Broadcom and OpenAI said on Monday that the deployment of the new custom chips would be completed by the end of 2029, building on their existing co-development and supply agreements. The new systems will be scaled entirely using Broadcom's Ethernet and other networking gear, giving the company an edge over smaller rivals such as Marvell Technology MRVL.O and challenging Nvidia's InfiniBand networking solution.
[18]
OpenAI's Compute Dream Is Too Big for the World
"If OpenAI doesn't secure enough compute fast, it could be outpaced by giants with bigger balance sheets." OpenAI is on a compute spree, securing multi-billion-dollar deals over the past few months. The AI startup has locked in unprecedented chip partnerships with NVIDIA, AMD, and Broadcom, totalling more than 26 gigawatts of AI compute capacity and financial commitments potentially exceeding $1 trillion. The Broadcom deal is particularly exciting because OpenAI will be developing in-house chips through this partnership. The startup will develop and deploy 10 gigawatts of AI accelerators and networking systems. For Broadcom CEO Hock Tan, the partnership with OpenAI is the beginning of a new era in chip design. "GPT-5, 6, 7, on and on and each of them will require a different chip, a better chip, a more developed chip," he said in a podcast while announcing the partnership.
[19]
OpenAI partners with Broadcom to deploy 10 gigawatts of AI hardware - SiliconANGLE
Shares of Broadcom Inc. jumped more than 9% today after it announced a four-year infrastructure partnership with OpenAI. The initiative will see the ChatGPT developer deploy 10 gigawatts' worth of data center hardware over the next four years. According to OpenAI, the infrastructure will be powered by custom artificial intelligence processors co-developed with Broadcom. In a podcast released today, OpenAI president Greg Brockman said that the AI provider used its own neural networks to design the chips. "We've been able to get massive area reductions," Brockman detailed. "You take components that humans have already optimized and just pour compute into it, and the model comes up with its own optimizations." Off-the-shelf graphics cards are geared toward a broad range of customers, which means modules that are important for some users aren't needed by others. Building a custom processor makes it possible to remove unneeded modules, which saves power and space. That power and space can be reallocated to circuits optimized for a company's workloads. OpenAI plans to deploy its custom processors in racks that will likewise be based on an in-house design. The systems will be equipped with PCIe and Ethernet networking equipment from Broadcom. PCIe is mainly used to link together internal server components, while Ethernet is geared toward connecting servers with one another. Broadcom debuted a new AI-optimized Ethernet switch, the TH6-Davisson, last Wednesday. It can process 102.4 terabits of traffic per second, which the company says is double the throughput of the nearest competitor. The laser emitters that the TH6-Davisson uses to transmit data over the network are based on a field-replaceable design meant to ease maintenance. Usually, Ethernet switches are deployed alongside devices called pluggable transceivers.
Those devices are responsible for turning electronic data into light that can be transmitted over fiber-optic networks and vice versa. TH6-Davisson features a built-in transceiver that removes the need for an external optical module, which lowers costs. OpenAI didn't specify which of Broadcom's PCIe products it will use as part of the partnership. The chipmaker sells a line of PCIe switches called the PEX series. It also makes retimers, modules that prevent errors from finding their way into data while it travels across a PCIe link. "OpenAI and Broadcom have been working together for the last 18 months," OpenAI Chief Executive Officer Sam Altman said on the podcast the AI provider released today. "By being able to optimize across that entire stack, we can get huge efficiency gains and that will lead to much better performance, faster models, cheaper models." OpenAI and Broadcom plan to deploy the first data center racks developed through their partnership in the second half of 2026. According to the chipmaker, the remaining systems will go online through 2029. The appliances' expected power consumption of 10 gigawatts corresponds to the energy usage of several million homes. Broadcom and OpenAI didn't specify the expected price of the project. In August, Nvidia Corp. Chief Executive Officer Jensen Huang stated that one gigawatt of AI data center capacity costs $50 billion to $60 billion. He added that most of that sum goes to Nvidia hardware, which suggests Broadcom stands to generate billions in revenue from its new OpenAI partnership.
[20]
OpenAI partners with Broadcom to make an ideal AI chip (and get distance from Nvidia)
OpenAI and Broadcom have formed a multibillion-dollar partnership to develop OpenAI-designed chips. Under the deal, OpenAI will design the chips to its own specifications and Broadcom will manage the development and fabrication of the chips, as well as help with their deployment. The companies plan to deploy enough chips to require 10 gigawatts of electrical power beginning in mid-2026, and running through 2029. Broadcom stock jumped almost 10% on the announcement Monday. The deal marks the second major move by OpenAI to reduce its dependence on Nvidia, which now dominates the AI chip market -- the company announced a partnership with chipmaker AMD last week. The Broadcom partnership is part of a many-pronged effort by OpenAI to dramatically scale up the computing power it uses to train and operate AI models. Within its "Stargate" initiative to add data centers, the company has attracted large investments from SoftBank, Oracle, Nvidia, AMD, and MGX. CoreWeave, Microsoft, and ARM will play supporting roles. OpenAI also signed a $300 billion deal with Oracle to buy computing power within existing Oracle data centers.
[21]
OpenAI bets big on energy-hungry custom chips to scale ChatGPT and Sora
The 10-GW requirement equals the electricity use of roughly 8 million U.S. households.

OpenAI is partnering with chipmaker Broadcom to design and develop custom AI chips and systems requiring 10 gigawatts of power. The collaboration, involving OpenAI CEO Sam Altman, aims to build infrastructure to support the company's expanding user base and power-intensive applications. This partnership follows previous agreements OpenAI secured with high-profile chip companies Nvidia and AMD. These deals were part of a strategy to obtain more computing resources for its services, which have seen considerable growth. The company's ChatGPT application now serves 800 million weekly users. An OpenAI executive also noted on the social media platform X that the recently released Sora video-generation app is experiencing a faster growth rate than ChatGPT did. In a press release announcing the deal, Sam Altman, co-founder and CEO of OpenAI, stated, "Partnering with Broadcom is a critical step in building the infrastructure needed to unlock AI's potential and deliver real benefits for people and businesses." The deployment of the resulting AI accelerator and network systems is scheduled to begin in the second half of 2026.

[Photo: Sam Altman in Berlin, Germany, on September 25, 2025.]

The scale of the 10-gigawatt power requirement is comparable to the electricity consumption of a large city. According to Reuters, this level of power usage is equivalent to that of 8 million U.S. households, which has drawn attention to the environmental impact of AI development. Altman had previously commented on the energy demands of his company's products, stating that an average query on ChatGPT consumes as much energy as a lightbulb uses in a couple of minutes. He also indicated that generating realistic video clips with more advanced models, such as Sora 2, would be significantly more power-intensive.
By creating its own custom AI accelerators, or chips, OpenAI will assume a larger role in the hardware required to operate services like ChatGPT. An OpenAI press release noted that through this initiative, the company "can embed what it's learned from developing frontier models and products directly into the hardware, unlocking new levels of capability and intelligence." Following the public announcement of the partnership, Broadcom's shares experienced a 12% surge on Monday morning. The deal came after Broadcom CEO Hock Tan disclosed during a company earnings call that the firm had secured a new $10 billion customer. At the time of Tan's statement, it was believed that the new customer was OpenAI.
[22]
OpenAI partners with Broadcom to design its own AI chips
SAN FRANCISCO (AP) -- OpenAI said Monday it is working with chipmaker Broadcom to design its own artificial intelligence computer chips. The two California companies didn't disclose the financial terms of the deal but said they will start deploying the new racks of customized "AI accelerators" next year. The deal marks the latest between OpenAI, maker of ChatGPT, and the companies building the chips and data centers required to power AI. OpenAI CEO Sam Altman said in a statement that "developing our own accelerators adds to the broader ecosystem of partners all building the capacity required to push the frontier of AI to provide benefits to all humanity." Broadcom shares surged more than 8% on Monday after the morning announcement. Broadcom CEO Hock Tan said in a statement that "we are thrilled to co-develop and deploy 10 gigawatts of next generation accelerators and network systems to pave the way for the future of AI."
[24]
OpenAI partners with Broadcom to design its own AI chips
SAN FRANCISCO (AP) -- OpenAI said Monday it is working with chipmaker Broadcom to design its own artificial intelligence computer chips. The two California companies didn't disclose the financial terms of the deal but said they will start deploying the new racks of customized "AI accelerators" late next year. It's the latest big deal between OpenAI, maker of ChatGPT, and the companies building the chips and data centers required to power AI. OpenAI in recent weeks has announced partnerships with chipmakers Nvidia and AMD that will supply the AI startup with specialized chips for running its AI systems. OpenAI has also made big deals with Oracle, CoreWeave and other companies developing the data centers where those chips are housed. Many of the deals rely on circular financing, in which the companies are both investing in OpenAI and supplying the world's most valuable startup with technology, fueling concerns about an AI bubble. OpenAI doesn't yet turn a profit but says its flagship chatbot now has more than 800 million weekly users. OpenAI CEO Sam Altman said the work to develop a custom chip began more than a year ago. "Developing our own accelerators adds to the broader ecosystem of partners all building the capacity required to push the frontier of AI to provide benefits to all humanity," he said in a statement. Broadcom shares surged more than 9% on Monday after the morning announcement. Broadcom CEO Hock Tan said in a statement that "we are thrilled to co-develop and deploy 10 gigawatts of next generation accelerators and network systems to pave the way for the future of AI."
[26]
OpenAI to Deploy 10 Gigawatts of In-House AI Accelerators with Broadcom Partnership | AIM
Deployment of the racks is expected to begin in the second half of 2026 and conclude by the end of 2029.

OpenAI and Broadcom announced a multi-year strategic collaboration to co-develop and deploy 10 gigawatts of OpenAI-designed AI accelerators and networking systems, marking a major expansion in OpenAI's infrastructure capabilities. Under the partnership, OpenAI will design the accelerators and systems, while Broadcom will provide Ethernet and other connectivity solutions for large-scale deployment across OpenAI facilities and partner data centres. The collaboration will see both companies co-developing systems that integrate Broadcom's accelerators, Ethernet, PCIe, and optical connectivity technologies. According to the companies, the initiative aims to address the rising global demand for AI compute capacity and enhance the scalability of next-generation AI clusters. "Partnering with Broadcom is a critical step in building the infrastructure needed to unlock AI's potential and deliver real benefits for people and businesses," said Sam Altman, co-founder and CEO of OpenAI. "Developing our own accelerators adds to the broader ecosystem of partners all building the capacity required to push the frontier of AI to provide benefits to all humanity." Hock Tan, president and CEO of Broadcom, said the collaboration represents a pivotal moment in the pursuit of artificial general intelligence. "OpenAI has been in the forefront of the AI revolution since the ChatGPT moment, and we are thrilled to co-develop and deploy 10 gigawatts of next-generation accelerators and network systems to pave the way for the future of AI," he added. Greg Brockman, OpenAI's co-founder and President, said the move would allow the company to integrate its learning from building frontier AI models directly into hardware.
"By building our own chip, we can embed what we've learned from creating frontier models and products directly into the hardware, unlocking new levels of capability and intelligence," he said. Charlie Kawwas, president of Broadcom's Semiconductor Solutions Group, said the partnership continues to set new industry benchmarks for the design and deployment of open, scalable and power-efficient AI clusters. He noted that "custom accelerators combine well with standards-based Ethernet networking solutions to provide cost and performance-optimised next-generation AI infrastructure." The companies have signed a term sheet covering co-development, supply, and deployment of the AI accelerators and associated systems. OpenAI said the partnership supports its growing user base of over 800 million weekly active users, including enterprises, small businesses, and developers. The initiative aligns with OpenAI's mission to ensure artificial general intelligence benefits all of humanity.
[27]
OpenAI Wants to Design and Deploy AI Chips in Partnership With Broadcom
The two companies have signed a term sheet for the multi-year deal
OpenAI and Broadcom announced a multi-year strategic collaboration on Monday. As part of the deal, the two companies will jointly design and develop chips and systems to power the world's growing artificial intelligence (AI) compute demands. The two companies will collaborate to develop 10 gigawatts of custom AI accelerators. The move also marks the San Francisco-based AI giant's second official foray into the hardware space, after the under-development AI device, which is being built in partnership with Jony Ive.
OpenAI to Design AI Chips
In a post, the ChatGPT maker announced a strategic partnership with Broadcom to develop 10GW worth of custom AI accelerators, highlighting that the two companies have signed a term sheet for the same. In this collaboration, OpenAI will be designing the chipsets that can handle heavy AI workloads, while Broadcom will fabricate and deploy them. The San Jose-based semiconductor giant will also provide Ethernet solutions to help the AI system scale. OpenAI says that by designing its own chips and systems, it can apply its learnings from developing large language models (LLMs) and products such as the ChatGPT and Sora apps. Essentially, these systems will be optimised for the AI giant's workloads. Additionally, these systems will be deployed across the company's facilities and partner data centres to meet the compute demand of third parties as well. "Partnering with Broadcom is a critical step in building the infrastructure needed to unlock AI's potential and deliver real benefits for people and businesses. Developing our own accelerators adds to the broader ecosystem of partners, all building the capacity required to push the frontier of AI to provide benefits to all humanity," said Sam Altman, Co-Founder and CEO of OpenAI.
While the financial details of the deal were not disclosed, the post stated that Broadcom will exclusively provide the racks and the Ethernet networking solutions for the AI chips. Interestingly, this marks the fourth partnership forged by OpenAI to scale the compute for its AI needs. Oracle, Nvidia, and AMD are the other partners. OpenAI said that ChatGPT has grown to more than 800 million weekly active users, alongside the adoption of its services across enterprises, small businesses, and developers. The rapid growth has sharply increased the need for more servers to handle the workload.
[28]
It's Broadcom's Turn to Get a Stock Bump From Some OpenAI News
Today's example is Broadcom, which saw its stock jump on news of a deal with the ChatGPT maker. OpenAI isn't publicly traded. But when it acts, stocks often move. On Monday, it was Broadcom's (AVGO) turn. Shares of the chipmaker were recently up nearly 10%, approaching 2025 highs, following an announcement that the ChatGPT maker would team with the company to co-develop AI systems for delivery from 2026 to 2029. Today's climb in Broadcom shares is another example of the sustained popularity of the news-driven AI trade, which continues amid concerns about whether there's a bubble in the sector. It also highlights the power of OpenAI, considered the world's most valuable startup at about half a trillion dollars. The deal is "a pivotal moment in the pursuit of artificial general intelligence," Broadcom CEO Hock Tan said in a statement. OpenAI chief Sam Altman called it "a critical step in building the infrastructure needed to unlock AI's potential." As for Broadcom specifically, its shares recently changed hands at about $356, roughly 9% off the Visible Alpha mean of Wall Street analysts near $389. The stock has gained 54% since the start of 2025.
[29]
Broadcom stock (AVGO) soars after announcing custom AI chip deal with OpenAI
OpenAI CEO Sam Altman says more large-scale infrastructure partnerships are on the way, following multibillion-dollar deals with Nvidia and AMD that reshape the AI hardware market.
The Broadcom stock price surged on Monday after the company announced a major strategic partnership with OpenAI to co-develop and deploy up to 10 gigawatts of custom artificial intelligence accelerators. When trading opened on Monday, the AVGO stock price jumped more than 7%, reflecting strong investor confidence in the deal. However, in a surprising clarification, a Broadcom executive confirmed that OpenAI is not the mystery $10 billion customer the company announced during its September earnings call. The collaboration is a significant move for both companies, positioning them to build a massive amount of custom AI infrastructure. "Partnering with Broadcom is a critical step in building the infrastructure needed to unlock AI's potential and deliver real benefits for people and businesses," said OpenAI CEO Sam Altman in a statement. The announcement put to rest weeks of speculation that OpenAI was the large, unnamed customer that Broadcom CEO Hock Tan mentioned in September. However, Charlie Kawwas, president of Broadcom's semiconductor solutions group, stated that this was not the case. "I would love to take a $10 billion [purchase order] from my good friend Greg [Brockman]. He has not given me that PO yet." This clarification indicates that Broadcom has yet another major AI customer in its pipeline, a positive sign for the future of AVGO stock. This partnership is the latest in a series of massive infrastructure deals for OpenAI as it scales up its computing capacity. The company has recently signed multi-billion dollar agreements with both Nvidia and AMD. The deals have raised some concerns about a potential "circular AI trade," where chipmakers invest in AI companies, which then use that capital to buy chips from the investors.
This has fueled discussions about a possible AI bubble, though many, like Amazon founder Jeff Bezos, believe that some of these investments will ultimately pay off. The significant investment in custom silicon also shows a broader industry trend where major tech companies like Google, Amazon, and Microsoft are developing their own chips to reduce their reliance on a single hardware provider. While OpenAI will continue to use hardware from partners like Nvidia and AMD, this move gives it a larger role in designing the specific hardware needed to power its services like ChatGPT.
[30]
OpenAI big chip orders dwarf its revenues -- for now
OpenAI is ordering hundreds of billions of dollars worth of chips in the artificial intelligence race, raising questions among investors about how the startup will finance these purchases. In less than a month, the San Francisco startup behind ChatGPT has committed to acquiring a staggering 26 gigawatts of sophisticated data processors from Nvidia, AMD, and Broadcom -- more than 10 million units that would consume power equivalent to 20 standard nuclear reactors. "They will need hundreds of billions of dollars to live up to their obligations," said Gil Luria, managing director at D.A. Davidson, a financial consulting firm. The challenge is daunting: OpenAI doesn't expect to be profitable until 2029 and is forecasting billions in losses this year, despite generating about $13 billion in revenue. OpenAI declined to comment on its financing strategy. However, in a CNBC interview, co-founder Greg Brockman acknowledged the difficulty of building sufficient computing infrastructure to handle the "avalanche of demand" for AI, noting that creative financing mechanisms will be necessary.
Creative financing
Nvidia, AMD, and Broadcom all declined to discuss specific deals with OpenAI. Silicon Valley-based Nvidia has announced plans to invest up to $100 billion in OpenAI over several years to build the world's largest AI infrastructure. OpenAI would use those funds to buy chips from Nvidia in a game of "circular financing," with Nvidia recouping its investment by taking a share in OpenAI, one of its biggest customers and the world's hottest AI company.
AMD has taken a different approach, offering OpenAI options to acquire equity in AMD -- a transaction considered unusual in financial circles and a sign that it is AMD that is seeking to seize some of OpenAI's limelight with investors. "It represents another unhealthy dynamic," Luria said, suggesting the arrangement reveals AMD's desperation to compete in a market dominated by Nvidia.
Crash or soar?
The stakes couldn't be higher. OpenAI co-founder and CEO Sam Altman "has the power to crash the global economy for a decade or take us all to the promised land," Bernstein Research senior analyst Stacy Rasgon wrote in a note to investors this month. "Right now, we don't know which is in the cards." Even selling stakes in OpenAI at its current $500 billion valuation won't cover the startup's chip commitments, according to Luria, meaning the company will need to borrow money. One possibility: using the chips themselves as collateral for loans. Meanwhile, deep-pocketed competitors like Google and Meta can fund their AI efforts from massive profits generated by their online advertising businesses -- a luxury OpenAI doesn't have. The unbridled spending has sparked concerns about a speculative bubble reminiscent of the late 1990s dot-com frenzy, which collapsed and wiped out massive investments. However, some experts see key differences. "There is very real demand today for AI in a way that seems a little different than the boom in the 1990s," said Josh Lerner, a Harvard Business School professor of investment banking. CFRA analyst Angelo Zino pointed to OpenAI's remarkable growth and more than 800 million ChatGPT users as evidence that a partnership approach to financing makes sense. Still, Lerner acknowledges the uncertainty: "It's a real dilemma. How does one balance this future potential with the speculative nature" of its investments today.
[31]
OpenAI's Secret Weapon : New Custom AI Chips That Could Change Everything
What if the future of artificial intelligence wasn't just about smarter algorithms but also about the hardware that powers them? OpenAI's bold move into developing custom AI chips in partnership with Broadcom signals a seismic shift in how AI infrastructure is built and optimized. As AI systems grow more complex and demand for computational power skyrockets, the traditional reliance on off-the-shelf hardware is no longer enough. This initiative isn't just about keeping up; it's about redefining the playing field. By creating chips tailored for their advanced models, OpenAI is positioning itself to tackle the industry's most pressing challenges: scalability, efficiency, and cost. Could this be the blueprint for the next generation of AI innovation? In this breakdown of the news, Wes Roth explores how OpenAI's custom chip strategy could transform not only its own operations but also the broader AI ecosystem. From the integration of AI-driven chip design to the ripple effects on industries like healthcare and manufacturing, this initiative is packed with implications that extend far beyond technology. You'll discover why specialized hardware is becoming a cornerstone of AI development and how OpenAI's partnership with Broadcom reflects a larger trend toward innovation at the intersection of software and hardware. As we unpack the details, one question looms large: Is this the start of a new era in AI infrastructure? OpenAI's decision to partner with Broadcom is rooted in the need for tailored hardware solutions that align with the unique requirements of its advanced AI models. These custom chips are being developed to handle both training and inference processes with enhanced efficiency. Starting late next year, OpenAI plans to deploy 10 gigawatts of AI accelerators, seamlessly integrating these chips into its infrastructure.
This collaboration goes beyond hardware development; it represents a strategic alignment to ensure scalability, optimize performance, and maintain a competitive edge in the rapidly evolving AI landscape. The rapid advancement of AI technologies has created an unprecedented demand for computational resources. OpenAI's partnership with Broadcom directly addresses this challenge by focusing on hardware-software integration to support increasingly complex workloads. By optimizing this integration, the initiative aims to improve efficiency while keeping operational costs manageable. This approach is critical for OpenAI to remain competitive in an industry where compute capacity is a key factor in determining success. The deployment of custom chips is expected to significantly enhance the ability to process large-scale AI models, ensuring that OpenAI can meet the growing needs of its users. This collaboration with Broadcom complements OpenAI's existing partnerships with Nvidia and AMD, reflecting a broader industry trend toward customized hardware solutions. Across the AI sector, companies are investing heavily in infrastructure to support the rapid expansion of AI applications. The shift toward specialized hardware highlights its importance in achieving the performance and efficiency required for next-generation AI systems. By diversifying its hardware partnerships, OpenAI is positioning itself to use the best technologies available, ensuring resilience and adaptability in a competitive market. OpenAI's investment in custom chips and data centers is part of a larger financial strategy, with the Broadcom deal alone projected to cost between $350 billion and $500 billion, part of roughly $1 trillion in recent infrastructure agreements. These figures underscore the confidence in AI's transformative potential but also raise concerns about sustainability.
Analysts have cautioned about the possibility of an AI industry bubble, emphasizing the importance of strategic planning and long-term viability. OpenAI's focus on custom hardware is a calculated move to ensure its investments deliver lasting value while addressing the risks associated with such large-scale financial commitments. One of the most forward-thinking aspects of this initiative is the use of AI to optimize chip design. By using AI-driven techniques, OpenAI and Broadcom aim to achieve significant advancements in power efficiency and performance. These innovations are expected to materialize later this decade, potentially reshaping the AI hardware landscape. The ability to use AI in designing its own infrastructure represents a self-reinforcing cycle of innovation, allowing the development of disruptive technologies that could redefine the industry and set new benchmarks for efficiency. The development of custom AI chips has far-reaching implications for industries beyond technology. These chips are expected to accelerate the adoption of AI-driven solutions across sectors such as healthcare, finance, and manufacturing. By enhancing capabilities and reducing costs, businesses will gain access to more efficient and affordable AI tools. This shift could foster innovation, streamline operations, and unlock new opportunities for growth. The ripple effects of these advancements may extend to areas such as personalized medicine, automated financial analysis, and smart manufacturing systems, demonstrating the transformative potential of AI-powered infrastructure. OpenAI's partnership with Broadcom represents a strategic leap forward in the development of AI infrastructure. By addressing critical challenges such as compute capacity, performance optimization, and cost management, this initiative positions OpenAI to lead the next wave of AI innovation.
As the industry continues to invest in custom hardware and advanced technologies, the impact of these developments is likely to extend far beyond AI, shaping the future of technology and society. OpenAI's commitment to innovation and collaboration underscores its role as a driving force in the evolution of artificial intelligence.
[32]
Broadcom Stock Soars On AI Accelerator Deal With OpenAI - Broadcom (NASDAQ:AVGO)
Broadcom Inc (NASDAQ:AVGO) shares are trading higher Monday after the company and OpenAI announced a collaboration to deploy custom AI accelerators.
What To Know: Broadcom and OpenAI will work together to co-develop 10 gigawatts of OpenAI-designed accelerators. The racks will be scaled entirely with Ethernet and other connectivity solutions from Broadcom in an effort to meet surging global demand for AI. "Broadcom's collaboration with OpenAI signifies a pivotal moment in the pursuit of artificial general intelligence," said Hock Tan, president and CEO of Broadcom. "OpenAI has been in the forefront of the AI revolution since the ChatGPT moment, and we are thrilled to co-develop and deploy 10 gigawatts of next-generation accelerators and network systems to pave the way for the future of AI." Broadcom and OpenAI have signed a term sheet to deploy racks incorporating the AI accelerators and Broadcom networking solutions. The collaboration aims to help OpenAI advance its mission to ensure that artificial general intelligence benefits all of humanity.
How To Buy AVGO Stock
Besides going to a brokerage platform to purchase a share - or fractional share - of stock, you can also gain access to shares either by buying an exchange traded fund (ETF) that holds the stock itself, or by allocating yourself to a strategy in your 401(k) that would seek to acquire shares in a mutual fund or other instrument. For example, in Broadcom's case, it is in the Information Technology sector. An ETF will likely hold shares in many liquid and large companies that help track that sector, allowing an investor to gain exposure to the trends within that segment.
AVGO Price Action: Broadcom shares were up 9.79% at $356.42 at the time of publication on Monday, according to Benzinga Pro.
[33]
OpenAI Partners With Broadcom To Build Custom Chips For Future AI Models
OpenAI has officially partnered with semiconductor giant Broadcom to co-develop custom AI chips designed to supercharge its next generation of large language models. It's a strategic move that goes far beyond just making results appear faster, as OpenAI aims to gain control over the hardware that powers AI, reduce dependency on Nvidia, and build the foundation for the technology that drives the next generation of AI capabilities. OpenAI's decision to work with Broadcom marks a big step from software to hardware, and together, they are developing chips and networking systems designed specifically for AI training and performance. These chips will be built to handle massive workloads efficiently, cutting power usage while bolstering speed, which is a key factor as models are getting bigger and more complex. For its part, Broadcom will help with far more than just performance, as it will provide advanced networking, optical links, and other hardware to make OpenAI's data centers run faster and smoother. The first systems are planned for 2026, with a wider rollout by 2029. The news comes a few days after Sam Altman said that tech giants should rely on TSMC to expand chip capacity. In comparison to the competition, companies like Google, Amazon, and Meta are already designing their own custom chips, but OpenAI's approach will be different. Instead of building everything from scratch, it is working with an experienced partner to save time and reduce costs. This lets OpenAI keep control of the chip's design and performance while Broadcom will only provide production and infrastructure support. Developing custom chips for AI is not an easy feat, as it requires years of research and billions in investment. It also requires close coordination between hardware and software teams for seamless integration with existing models and those introduced in the future.
By creating its own hardware foundation, OpenAI is taking a major step forward toward long-term sustainability. The company says that designing its own custom chips will also allow it to "embed what it's learned from developing frontier models and products directly into the hardware, unlocking new levels of capability and intelligence." It will enable the company to deploy "10 gigawatts of custom AI accelerators" using its own chips. Will this partnership help OpenAI stay ahead in the fast-growing world of AI?
[34]
Broadcom skyrockets 9% on secretive OpenAI chip deal -- investors see next big AI gold rush
AVGO stock: Broadcom shares jumped 9% on Monday after the semiconductor giant and OpenAI officially announced a major partnership to build and deploy 10 gigawatts of custom artificial intelligence accelerators, a massive step in the race to expand global AI infrastructure, as per a report. The two companies, which have quietly been working together for 18 months, are now going public with plans to roll out racks of OpenAI-designed chips starting late next year, as per a CNBC report. While financial terms weren't disclosed, the deal signals a powerful new alliance as OpenAI looks to secure the computing muscle needed to support its growing lineup of AI products and research, according to the report. "These things have gotten so complex you need the whole thing," OpenAI CEO Sam Altman said during a podcast released alongside the announcement, as quoted by CNBC. He described the Broadcom deal as "a gigantic amount of computing infrastructure to serve the needs of the world to use advanced intelligence," adding, "We can get huge efficiency gains, and that will lead to much better performance, faster models, cheaper models -- all of that," as quoted in the report. The systems will be built on Broadcom's Ethernet stack, integrating custom networking, memory, and compute components tailored specifically for OpenAI's workloads, according to the CNBC report. Designing its own chips could help OpenAI significantly lower compute costs. Industry estimates suggest a 1-gigawatt data center can cost around $50 billion, with roughly $35 billion of that tied to chips based on Nvidia's current pricing, as per the report. The Broadcom partnership adds to a flurry of recent OpenAI deals with Nvidia, Oracle, and AMD, totaling about 33 gigawatts of computing commitments in just three weeks.
Altman said OpenAI currently operates on a little over 2 gigawatts, enough to power ChatGPT, video generator Sora, and its ongoing research, but demand is rising fast, as per the report. Altman said, "Even though it's vastly more than the world has today, we expect that very high-quality intelligence delivered very fast and at a very low price -- the world will absorb it super fast and just find incredible new things to use it for," as quoted by CNBC. Broadcom has become one of the biggest winners of the generative AI boom. Its custom AI chips, known as XPUs, are already in use by major tech players like Google, Meta, and ByteDance, according to analysts. The company's stock has surged 40% this year after more than doubling in 2024, pushing its market value past $1.5 trillion. OpenAI President Greg Brockman revealed that the company used its own AI models to optimize chip design. He said, "We've been able to get massive area reductions," adding, "You take components that humans have already optimized and just pour compute into it, and the model comes out with its own optimizations," as quoted by CNBC. Broadcom CEO Hock Tan called OpenAI's work "the most advanced" in the field and said designing its own chips would help it maintain control as it moves toward ever more powerful AI systems. He said, "You continue to need compute capacity -- the best, latest compute capacity -- as you progress in a road map towards a better and better frontier model and towards superintelligence," adding, "If you do your own chips, you control your destiny," as quoted by CNBC.
What is the Broadcom-OpenAI deal about?
They're teaming up to build 10 gigawatts of custom AI chips to support OpenAI's growing AI workloads.
Who uses Broadcom's AI chips?
Analysts say Google, Meta, and ByteDance are some of Broadcom's early AI clients.
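The per-gigawatt cost figures quoted by CNBC lend themselves to a quick back-of-envelope check. The sketch below is purely illustrative: the per-gigawatt numbers are the report's industry estimates, not disclosed deal terms, and the function name is our own.

```python
# Back-of-envelope arithmetic on the industry estimates quoted above.
# These are the report's approximations, not disclosed deal terms.

COST_PER_GW = 50e9       # ~$50 billion per 1-GW data center (industry estimate)
CHIP_COST_PER_GW = 35e9  # ~$35 billion of that tied to chips at current pricing

def datacenter_cost(gigawatts: float) -> tuple[float, float]:
    """Return (total_cost, chip_cost) in dollars for a given capacity."""
    return gigawatts * COST_PER_GW, gigawatts * CHIP_COST_PER_GW

# The 10-GW Broadcom deal implies roughly:
total, chips = datacenter_cost(10)
print(f"10 GW: ~${total / 1e9:.0f}B total, ~${chips / 1e9:.0f}B in chips")
# → 10 GW: ~$500B total, ~$350B in chips

# OpenAI's ~26 GW of commitments across Nvidia, AMD, and Broadcom:
total26, chips26 = datacenter_cost(26)
print(f"26 GW: ~${total26 / 1e9:.0f}B total, ~${chips26 / 1e9:.0f}B in chips")
```

Notably, the chip-only figure for 10 GW lands at the low end of the Financial Times' $350 billion to $500 billion estimate cited elsewhere in this roundup.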
[35]
Inside OpenAI & Broadcom's Bold Partnership : New AI Chips Announced
What happens when two giants in their respective fields join forces to reshape the future of technology? The recent partnership between OpenAI and Broadcom is poised to answer that question. In a bold move to address the surging demand for advanced AI infrastructure, these industry leaders are collaborating to design custom AI chips that promise to transform how artificial intelligence operates at scale. With plans to begin deploying a staggering 10 gigawatts of computing power late next year, an infrastructure rivaling the scale of major industrial projects, this partnership is not just about keeping pace with the AI boom; it's about setting a new standard. At its core, this collaboration represents a powerful fusion of expertise: OpenAI's mastery of AI workloads and Broadcom's leadership in semiconductor innovation. Together, they're tackling one of the most pressing challenges in tech today: how to make AI systems faster, more efficient, and globally scalable. In this OpenAI podcast, learn more about the transformative potential of this alliance, from the innovative technologies driving their custom chip designs to the broader implications for AI development. Readers will discover how innovations like 3D chip stacking and optical switching are poised to redefine AI hardware capabilities, allowing faster model training and inference. But the story doesn't stop at the technical level; this partnership also signals a shift in how AI infrastructure is conceived, with a focus on vertical integration and seamless optimization across hardware and software. Whether you're curious about the future of generative AI, the practical applications of these advancements, or the vision of AI as a foundational global utility, this collaboration offers a glimpse into the next chapter of technological evolution. As we unpack the details, one question lingers: could this partnership be the key to unlocking AI's full potential?
The collaboration between OpenAI and Broadcom is centered on developing custom AI chips specifically tailored to the unique demands of AI workloads. By integrating chip design with data center deployment, the partnership aims to establish a seamless, vertically integrated system. This approach ensures that every component, from hardware to software, is optimized to maximize performance and efficiency for AI applications. The overarching goal is to address the growing global demand for AI infrastructure while supporting the development of next-generation AI models. By focusing on scalability and efficiency, the partnership seeks to enable the widespread adoption of AI technologies, ensuring they remain accessible and practical for a broad range of applications. To achieve their ambitious objectives, OpenAI and Broadcom are using innovative technologies that redefine the boundaries of AI hardware, including 3D chip stacking and optical switching. These technological breakthroughs aim to deliver faster, more efficient, and cost-effective AI systems. By addressing key bottlenecks in AI hardware, the partnership is paving the way for applications that can transform industries ranging from healthcare to finance and beyond. A cornerstone of this partnership is the deployment of a massive computing infrastructure designed to meet the escalating global demand for AI capabilities. OpenAI and Broadcom plan to begin rolling out 10 gigawatts of computing capacity late next year, with a long-term vision to scale this infrastructure beyond 30 gigawatts. To put this into perspective, such infrastructure rivals the scale of major industrial projects, reflecting the ambition and complexity of this initiative. This level of capacity is critical for supporting the training and deployment of advanced AI models, including future iterations like GPT-6 and GPT-7.
By investing in scalable infrastructure, the partnership aims to ensure that AI technologies remain robust, reliable, and capable of addressing increasingly complex challenges. The collaboration between OpenAI and Broadcom is poised to unlock new possibilities in AI development, allowing the creation of more powerful and efficient generative AI models. These advancements will support a diverse range of applications. By driving innovation and expanding the potential of AI technologies, the partnership aims to provide widespread access to these tools, ensuring that users worldwide can benefit from their practical applications. The partnership uses the complementary expertise of OpenAI and Broadcom to achieve its ambitious goals. OpenAI brings deep knowledge of AI workloads and model development, while Broadcom contributes its leadership in semiconductor innovation and hardware design. Together, they are creating chips and systems optimized for AI applications, fostering a collaborative ecosystem that prioritizes efficiency, scalability, and innovation. This synergy ensures that the resulting technologies are not only innovative but also aligned with the broader mission of advancing AI for the benefit of humanity. By combining their strengths, OpenAI and Broadcom are setting a new standard for AI infrastructure development, emphasizing both technical excellence and practical utility. Looking ahead, the partnership between OpenAI and Broadcom is set to drive continued advancements in AI chip design and infrastructure. Future AI models, such as GPT-6 and GPT-7, will benefit from enhanced performance, scalability, and efficiency enabled by these innovations. The long-term vision is to position AI as a foundational global utility, akin to the internet or electricity, serving billions of users worldwide.
By prioritizing efficiency, scalability, and accessibility, OpenAI and Broadcom aim to ensure that AI technologies contribute to economic growth and improve quality of life for people across the globe. This partnership represents a pivotal moment in the evolution of AI infrastructure, redefining the role of AI in shaping a more connected, intelligent, and equitable world.
[36]
OpenAI Makes $500B Chip Bet: Is Broadcom The Next AI Winner? - Broadcom (NASDAQ:AVGO), Oracle (NYSE:ORCL)
OpenAI is accelerating its push to dominate the artificial intelligence landscape with a staggering $500 billion chip initiative involving Broadcom Inc. (NASDAQ:AVGO). On Monday, the company behind ChatGPT struck a multibillion-dollar agreement with Broadcom to co-design custom chips capable of powering its AI models, marking one of the most extensive semiconductor procurement campaigns in history. According to the Financial Times, the deal could cost OpenAI between $350 billion and $500 billion, bringing its total chip capacity plans to over 26 gigawatts -- comparable to the energy output of 26 nuclear reactors. Chief Executive Sam Altman said the company has worked with Broadcom for 18 months to develop silicon tailored for AI inference, the computational process of responding to user queries. "The biggest joint industrial project in human history" is how Altman described the AI infrastructure push. Broadcom CEO Hock Tan compared the AI revolution to the advent of the internet or railroads, calling it a "critical utility over time for 8 billion people globally." Shares of Broadcom Inc. rallied as much as 9% during premarket trading in New York, on track to fully recoup losses from Friday. Nvidia and AMD Deals Add to OpenAI's Chip Arsenal The Broadcom pact follows two other major deals OpenAI recently signed in its AI hardware arms race. In September, OpenAI committed to a 10-gigawatt order from Nvidia. Last month, it added another 6 gigawatts with AMD, bringing the total across all three vendors to 26 gigawatts. Nvidia's involvement includes a $100 billion equity investment into OpenAI, a move some analysts view as financially circular. James Schneider, an analyst at Goldman Sachs, noted that the structure resembles "circular revenue," with Nvidia funding OpenAI only to recognize revenue when OpenAI purchases its GPUs back.
Nvidia expects $13 billion in revenue from OpenAI in 2026 and will reinvest $10 billion of its gross profit. Schneider said, "When equity investment comes from a supplier, additional scrutiny is warranted." Meanwhile, AMD's agreement with OpenAI could be even more transformational. The deal includes up to 6 gigawatts of AMD Instinct GPU deployments, starting with 1 gigawatt of MI450 GPUs in late 2026. AMD will also issue OpenAI up to 160 million shares through performance-linked warrants -- potentially worth $75 billion -- if deployment milestones are met, including an AMD share price of $600. Goldman Sachs estimates the AMD deal represents a $135 billion revenue opportunity based on expected GPU pricing. AMD CFO Jean Hu said the partnership has the potential to generate "tens of billions of dollars in revenue" for the chipmaker. How Will OpenAI Fund This $1.5 Trillion Infrastructure Bet? Despite its relatively modest revenue base, OpenAI is committing to over $1.5 trillion in semiconductor and infrastructure spending over the next decade. That includes $300 billion in data center build-outs with Oracle Corp. (NYSE:ORCL). Each gigawatt of AI capacity requires about $50 billion in total investment -- $35 billion for chips and $15 billion for infrastructure. Broadcom's chips are expected to be cheaper than Nvidia's, but even so, the economics are eye-watering. The company's strategy signals a shift toward vertical integration, involving the design of its own chips, securing dedicated compute power, and minimizing reliance on third-party vendors. Still, the sheer size of the investment raises critical questions.
[37]
OpenAI and Broadcom to build 10 Gigawatts of AI Accelerators
Given the computing needs for OpenAI to run its platforms, the company has been making deals with chip players to improve its infrastructure and data center operations. Now, OpenAI has announced a partnership with Broadcom, which will help the company develop 10 gigawatts of custom AI accelerators. This multi-year partnership enables OpenAI and Broadcom to deliver accelerator and network systems for next-generation AI clusters. The main highlight here is that OpenAI will design the accelerators and systems, drawing on what it has learned from developing frontier models and translating those lessons directly into the hardware design of the chips. These new systems and chips, which OpenAI calls "AI accelerators and systems," will be developed, scaled completely with Ethernet and other connectivity solutions from Broadcom, and deployed by Broadcom across OpenAI's facilities and partner data centers. These massive AI accelerators and systems will consume a total power of 10 gigawatts. The racks include Broadcom's end-to-end portfolio of Ethernet, PCIe, and optical connectivity solutions. According to Charlie Kawwas, Ph.D., President of the Semiconductor Solutions Group for Broadcom, these systems will set new industry benchmarks for the design and deployment of open, scalable, and power-efficient AI clusters. Broadcom plans to deploy racks of AI accelerators and network systems starting in the second half of 2026, aiming for completion by the end of 2029. Regarding the partnership, Sam Altman, co-founder and CEO of OpenAI, said, "Partnering with Broadcom is a critical step in building the infrastructure needed to unlock AI's potential and deliver real benefits for people and businesses. Developing our own accelerators adds to the broader ecosystem of partners all building the capacity required to push the frontier of AI to provide benefits to all humanity."
Hock Tan, President and CEO of Broadcom, said, "Broadcom's collaboration with OpenAI signifies a pivotal moment in the pursuit of artificial general intelligence. OpenAI has been in the forefront of the AI revolution since the ChatGPT moment, and we are thrilled to co-develop and deploy 10 gigawatts of next generation accelerators and network systems to pave the way for the future of AI." Charlie Kawwas, Ph.D., President of the Semiconductor Solutions Group for Broadcom, said, "Our partnership with OpenAI continues to set new industry benchmarks for the design and deployment of open, scalable and power-efficient AI clusters."
[38]
OpenAI taps Broadcom to build its first AI processor in latest chip deal - The Economic Times
OpenAI has partnered with Broadcom to produce its first in-house artificial intelligence processors, the latest chip tie-up for the ChatGPT maker as it races to secure the computing power needed to meet surging demand for its services. The companies said on Monday that OpenAI would design the chips, which Broadcom will develop and deploy starting in the second half of 2026. They will roll out 10 gigawatts' worth of custom chips, whose power consumption is roughly equivalent to the needs of more than 8 million US households or five times the electricity produced by the Hoover Dam. The agreement is the latest in a string of massive AI chip investments that have highlighted the technology industry's surging appetite for computing power as it races to build systems that meet or surpass human intelligence. OpenAI last week unveiled a 6-gigawatt AI chip supply deal with AMD that includes an option to buy a stake in the chipmaker, days after disclosing that Nvidia plans to invest up to $100 billion in the startup and provide it with data-center systems with at least 10 gigawatts of capacity. "Partnering with Broadcom is a critical step in building the infrastructure needed to unlock AI's potential," OpenAI CEO Sam Altman said in a statement. Financial details of the agreement were not disclosed and it was not immediately clear how OpenAI would fund the deal. Custom chip boom The tie-up with Broadcom, first reported by Reuters last year, places OpenAI among cloud-computing giants such as Alphabet-owned Google and Amazon.com that are developing custom chips to meet surging AI demand and reduce dependence on Nvidia's costly processors that are limited in supply. The approach is not a sure bet. Similar efforts by Microsoft and Meta have run into delays or failed to match the performance of Nvidia chips, according to media reports, and analysts believe custom chips do not pose a threat to Nvidia's dominance in the short term. 
The rise of custom chips has, however, turned Broadcom - long known for its networking hardware - into one of the biggest winners of the generative AI boom, with its stock price rising nearly sixfold since the end of 2022. The company unveiled a blockbuster $10 billion custom AI chip order in September from an unnamed new customer that some analysts and market watchers speculated was OpenAI. Broadcom and OpenAI said on Monday that the deployment of the new custom chips would be completed by the end of 2029, building on their existing co-development and supply agreements. The new systems will be scaled entirely using Broadcom's Ethernet and other networking gear, giving the company an edge over smaller rivals such as Marvell Technology and challenging Nvidia's InfiniBand networking solution.
[39]
OpenAI Partners with Broadcom in Multibillion-Dollar Chip Deal to Power Next-Gen AI Revolution
OpenAI is now planning to design its own graphics processing units, or GPUs, to integrate powerful artificial intelligence models into the hardware for future systems. According to the announced agreement, the chips will be co-developed by OpenAI and Broadcom and deployed by the chip company starting in the second half of 2026. They will roll out 10 gigawatts' worth of custom chips, whose power consumption is roughly equivalent to the needs of more than 8 million US households or five times the electricity produced by the Hoover Dam. "Partnering with Broadcom is a critical step in building the infrastructure needed to unlock AI's potential," OpenAI CEO Sam Altman said in a statement. "Broadcom's collaboration with OpenAI signifies a pivotal moment in the pursuit of artificial general intelligence," said Hock Tan, President and CEO of Broadcom. "OpenAI has been at the forefront of the AI revolution since the ChatGPT moment, and we are thrilled to co-develop and deploy 10 gigawatts of next-generation accelerators and network systems to pave the way for the future of AI," he added. "Our collaboration with Broadcom will power breakthroughs in AI and bring the technology's full potential closer to reality," said OpenAI co-founder and President Greg Brockman. "By building our own chip, we can embed what we've learned from creating frontier models and products directly into the hardware, unlocking new levels of capability and intelligence," added Brockman.
[40]
OpenAI partners with Broadcom to design its own AI chips
OpenAI said Monday it is working with chipmaker Broadcom to design its own artificial intelligence computer chips. The two California companies didn't disclose the financial terms of the deal but said they will start deploying the new racks of customized "AI accelerators" late next year. It's the latest big deal between OpenAI, maker of ChatGPT, and the companies building the chips and data centers required to power AI. OpenAI in recent weeks has announced partnerships with chipmakers Nvidia and AMD that will supply the AI startup with specialized chips for running its AI systems. OpenAI has also made big deals with Oracle, CoreWeave and other companies developing the data centers where those chips are housed. Many of the deals rely on circular financing, in which the companies are both investing in OpenAI and supplying the world's most valuable startup with technology, fueling concerns about an AI bubble. OpenAI doesn't yet turn a profit but says its products now have more than 800 million weekly users. "What's real about this announcement is OpenAI's intention of having its own custom chips," said analyst Gil Luria, head of technology research at D.A. Davidson. "The rest is fantastical. OpenAI has made, at this point, approaching US$1 trillion of commitments, and it's a company that only has US$15 billion of revenue." OpenAI CEO Sam Altman said the work with Broadcom to develop a custom chip began about 18 months ago. Broadcom also works with other leading AI developers, including tech giants Amazon and Google. Altman said on a podcast announcing the deal that the computing power made possible through the Broadcom partnership will amount to 10 gigawatts, which he described as "a gigantic amount of computing infrastructure to serve the needs of the world to use advanced intelligence." Broadcom shares surged more than nine per cent on Monday. 
Broadcom CEO Hock Tan said on the same podcast that OpenAI needs more computing capacity as it progresses toward a "better and better frontier model and towards superintelligence." "If you do your own chips, you control your destiny," he added.
[41]
OpenAI working with Softbank's Arm on Broadcom chip deal- The Information By Investing.com
Investing.com-- OpenAI is working with Japan's SoftBank Group Corp. (TYO:9984) and its Arm (NASDAQ:ARM) unit on a recently announced deal with Broadcom Inc (NASDAQ:AVGO) to build out artificial intelligence data center chips, The Information reported on Monday. The AI startup is working with British chip designer Arm to develop a CPU chip that will work with OpenAI's server designs, the report said. OpenAI and Broadcom announced a deal on Monday to produce the start-up's first in-house AI processors, as the company seeks even more computing power to build out its AI ambitions. OpenAI will design the chips, while Broadcom will develop and deploy the processors in the second half of 2026. Arm is a British chip designer, and has come into increasing focus in recent years as a growing AI industry pushed up demand for chips. The company was publicly relisted by Softbank in 2023.
[42]
Broadcom soars after OpenAI pact for 10GW of AI chips By Investing.com
Investing.com -- Broadcom Inc. (NASDAQ:AVGO) stock surged 12% Monday morning following the announcement of a strategic collaboration with OpenAI to develop and deploy 10 gigawatts of custom AI accelerators. The multi-year partnership will see OpenAI design the accelerators while Broadcom handles development and deployment. The companies have signed a term sheet for deploying racks incorporating the AI accelerators and Broadcom's networking solutions, with deployment targeted to begin in the second half of 2026 and complete by the end of 2029. "Broadcom's collaboration with OpenAI signifies a pivotal moment in the pursuit of artificial general intelligence," said Hock Tan, President and CEO of Broadcom. "OpenAI has been in the forefront of the AI revolution since the ChatGPT moment, and we are thrilled to co-develop and deploy 10 gigawatts of next-generation accelerators and network systems to pave the way for the future of AI." The partnership reinforces Broadcom's position in the AI infrastructure market, particularly highlighting the importance of custom accelerators and Ethernet technology for networking in AI data centers. The racks will include Broadcom's end-to-end portfolio of Ethernet, PCIe, and optical connectivity solutions. Sam Altman, co-founder and CEO of OpenAI, noted that the partnership is "a critical step in building the infrastructure needed to unlock AI's potential." By designing its own chips, OpenAI aims to embed what it has learned from developing frontier models directly into hardware. OpenAI, which now boasts over 800 million weekly active users, will deploy these systems across its facilities and partner data centers to meet the growing global demand for AI computing power.
[43]
OpenAI and Broadcom Inc. Announce Strategic Collaboration to Deploy 10 Gigawatts of OpenAI- Designed AI Accelerators
OpenAI and Broadcom Inc. announced a collaboration for 10 gigawatts of custom AI accelerators. OpenAI will design the accelerators and systems, which will be developed and deployed in partnership with Broadcom. By designing its own chips and systems, OpenAI can embed what it's learned from developing frontier models and products directly into the hardware, unlocking new levels of capability and intelligence. The racks, scaled entirely with Ethernet and other connectivity solutions from Broadcom, will meet surging global demand for AI, with deployments across OpenAI's facilities and partner data centers. OpenAI and Broadcom have long-standing agreements on the co-development and supply of the AI accelerators. The two companies have signed a term sheet to deploy racks incorporating the AI accelerators and Broadcom networking solutions. For Broadcom, this collaboration reinforces the importance of custom accelerators and the choice of Ethernet as the technology for scale-up and scale-out networking in AI data centers. OpenAI has grown to over 800 million weekly active users and strong adoption across global enterprises, small businesses, and developers. This collaboration will help OpenAI advance its mission to ensure that artificial general intelligence benefits all of humanity.
[44]
OpenAI teams up with Broadcom to design custom AI chips
On Monday, OpenAI formalized a strategic partnership with Broadcom to design and deploy a new generation of chips dedicated to artificial intelligence. This collaboration, which began 18 months ago, aims to build up to 10 gigawatts of AI accelerators co-developed and optimized for OpenAI models. The agreement confirms speculation surrounding a $10bn contract mentioned in September by Broadcom with an unnamed customer, now identified as OpenAI. The first installations are scheduled for late 2026 and will include racks of "XPU" chips integrating networking and memory, built on Broadcom's Ethernet technology. Sam Altman, CEO of OpenAI, explained that this internal approach would significantly reduce computing costs while improving the speed and efficiency of the models. Greg Brockman, president of the company, said that OpenAI uses its own AI models to optimize component design. According to him, this method reduces the size and complexity of circuits while increasing their performance. This alliance comes amid a global race for computing power, dominated by Nvidia and AMD. OpenAI currently has just over 2 gigawatts of capacity but plans to increase its total infrastructure to 33 gigawatts with this new partnership. For Hock Tan, CEO of Broadcom, this initiative illustrates OpenAI's desire to "control its technological destiny." The announcement propelled Broadcom's stock up more than 12%, bringing its market capitalization to over $1.5 trillion. Altman said that the planned 10 gigawatts is only a first step in meeting the growing demand for ever faster and more affordable artificial intelligence.
[45]
OpenAI taps Broadcom to build its first AI processor in latest chip deal
(Reuters) -OpenAI has partnered with Broadcom to produce its first in-house artificial intelligence processors, the latest chip tie-up for the ChatGPT maker as it races to secure the computing power needed to meet surging demand for its services. Shares of Broadcom rose more than 12% in premarket trading. The companies said on Monday that OpenAI would design the chips, which Broadcom will develop and deploy starting in the second half of 2026. They will roll out 10 gigawatts' worth of custom chips, whose power consumption is roughly equivalent to the needs of more than 8 million U.S. households or five times the electricity produced by the Hoover Dam. The agreement is the latest in a string of massive AI chip investments that have highlighted the technology industry's surging appetite for computing power as it races to build systems that meet or surpass human intelligence. OpenAI last week unveiled a 6-gigawatt AI chip supply deal with AMD that includes an option to buy a stake in the chipmaker, days after disclosing that Nvidia plans to invest up to $100 billion in the startup and provide it with data-center systems with at least 10 gigawatts of capacity. "Partnering with Broadcom is a critical step in building the infrastructure needed to unlock AI's potential," OpenAI CEO Sam Altman said in a statement. Financial details of the agreement were not disclosed and it was not immediately clear how OpenAI would fund the deal. CUSTOM CHIP BOOM The tie-up with Broadcom, first reported by Reuters last year, places OpenAI among cloud-computing giants such as Alphabet-owned Google and Amazon.com that are developing custom chips to meet surging AI demand and reduce dependence on Nvidia's costly processors that are limited in supply. The approach is not a sure bet. 
Similar efforts by Microsoft and Meta have run into delays or failed to match the performance of Nvidia chips, according to media reports, and analysts believe custom chips do not pose a threat to Nvidia's dominance in the short term. The rise of custom chips has, however, turned Broadcom - long known for its networking hardware - into one of the biggest winners of the generative AI boom, with its stock price rising nearly six-fold since the end of 2022. The company unveiled a blockbuster $10 billion custom AI chip order in September from an unnamed new customer that some analysts and market watchers speculated was OpenAI. Broadcom and OpenAI said on Monday that the deployment of the new custom chips would be completed by the end of 2029, building on their existing co-development and supply agreements. The new systems will be scaled entirely using Broadcom's Ethernet and other networking gear, giving the company an edge over smaller rivals such as Marvell Technology and challenging Nvidia's InfiniBand networking solution. (Reporting by Max Cherney in San Francisco and Arsheeya Bajwa in Bengaluru; Editing by Shinjini Ganguli)
[46]
OpenAI is building its own AI chip jointly with Broadcom: Deal is worth billions
OpenAI's in-house chip project marks new phase in AI infrastructure In a bold move to redefine the future of AI infrastructure, OpenAI has entered into a multibillion-dollar partnership with chipmaker Broadcom to design and manufacture its own artificial intelligence processors. The deal marks one of the most significant shifts in OpenAI's hardware strategy to date, signaling its intent to reduce dependence on Nvidia's GPUs - the dominant force powering AI training worldwide. According to The Information, the collaboration involves not just Broadcom but also Arm, the chip design company owned by SoftBank, to integrate custom processing cores optimized for OpenAI's large-scale models. The project, internally referred to as a cornerstone of OpenAI's "compute independence" plan, could begin deployment as early as 2026, with scaling expected through the decade. Over the past year, OpenAI's rapid product expansion - from ChatGPT to advanced multimodal models - has strained access to GPU capacity. Nvidia's H100 chips remain the gold standard for training large language models, but global shortages and soaring costs have made self-reliance an attractive, if daunting, proposition. By teaming up with Broadcom, one of the world's largest semiconductor manufacturers, OpenAI gains access to cutting-edge chip design and production capabilities. Broadcom has a long history of fabricating advanced accelerators and networking hardware, which could prove crucial as OpenAI looks to build datacenters with tens of thousands of interconnected AI chips. While neither company disclosed financial terms, industry estimates peg the partnership's value at several billion dollars over multiple years, placing it on par with the scale of OpenAI's existing cloud contracts with Microsoft.
Broadcom will reportedly handle fabrication, testing, and integration, while OpenAI will take the lead on chip architecture and optimization for its AI workloads. The chips, expected to rival or exceed Nvidia's next-generation offerings, could power OpenAI's in-house training clusters and possibly Azure-based systems that serve enterprise clients. The inclusion of Arm in the collaboration hints at a hybrid compute design, where Arm CPUs work alongside AI accelerators. Such an approach mirrors trends seen in high-performance AI servers, where flexible CPU cores manage orchestration, memory allocation, and data routing while accelerators handle dense matrix computation. This announcement extends a broader pattern among AI leaders seeking to secure hardware independence. Google's Tensor Processing Units (TPUs) already power most of its AI workloads, while Meta and Amazon have both invested heavily in their own chips to cut costs and tailor performance. For OpenAI, however, the move carries an added dimension, reducing exposure to Nvidia, whose dominance in the GPU market gives it immense pricing power. Custom chips could offer OpenAI significant savings and deeper control over efficiency, latency, and performance per watt. Still, the path is fraught with challenges. Designing, validating, and manufacturing chips at scale can take years and cost billions in R&D. Even for giants like Apple or Google, achieving competitive yields and software compatibility remains complex. Analysts caution that any misstep in execution could delay OpenAI's rollout or blunt its competitive edge. OpenAI CEO Sam Altman has long emphasized that access to massive compute will define who leads the next era of AI. Reports suggest OpenAI aims to deploy 10 gigawatts of AI accelerator capacity by 2029, making it one of the largest compute buildouts ever attempted by a private company.
If successful, this partnership could transform OpenAI from a model-building lab into a full-stack AI ecosystem, spanning chips, data centers, and software. It would also position the company to compete directly not just in AI models, but in the foundational hardware layer that powers them. As AI systems become more multimodal, real-time, and embodied in devices and robots, OpenAI's push into chipmaking could mark the start of a new era: one where intelligence isn't just written in code, but etched into silicon.
OpenAI announces a partnership with Broadcom to develop and deploy 10 gigawatts of custom AI accelerators, marking a significant step in the company's efforts to build its own AI infrastructure and reduce reliance on existing chip suppliers.
OpenAI, the artificial intelligence research laboratory, has announced a groundbreaking partnership with semiconductor giant Broadcom to develop and deploy custom AI accelerator hardware [1][2]. This collaboration marks a significant step in OpenAI's efforts to build its own AI infrastructure and reduce reliance on existing chip suppliers.
The partnership will result in the production of 10 gigawatts worth of custom AI accelerator hardware, which will be deployed across OpenAI's data centers and partner facilities starting in 2026 and running through 2029 [1][3]. By designing its own chips and systems, OpenAI aims to embed the knowledge gained from developing frontier models and products directly into the hardware, potentially unlocking new levels of capability and intelligence [1].
Reports suggest that Arm, the British semiconductor design company, is developing a custom CPU for OpenAI's in-house accelerator [4]. This collaboration could represent one of Arm's most significant steps into the data center market. The CPU might be used not only with the Broadcom chip but also with systems from Nvidia and AMD, further expanding OpenAI's hardware ecosystem.

The Broadcom deal is part of OpenAI's broader strategy to secure vast computing power for its AI ambitions. Recent partnerships include:

- A 10-gigawatt hardware agreement with Nvidia, which plans to invest up to $100 billion in OpenAI [1]
- A 6-gigawatt chip supply deal with AMD, announced the previous week [1][5]
- A reported $300 billion cloud infrastructure deal with Oracle [1]
These agreements collectively represent over 26 gigawatts of planned data center capacity, with a potential cost exceeding $1 trillion in construction and equipment [4]. The partnership between OpenAI and Broadcom reflects a growing trend in the tech industry, with major players like Meta, Google, and Microsoft also working to develop custom chips and bolster their AI supply chains [3]. However, this massive influx of investment into AI infrastructure has raised concerns about a potential AI bubble, with some analysts drawing parallels to the dotcom era [2][5].

As OpenAI continues to expand its infrastructure and partnerships, questions remain about the long-term profitability and sustainability of these investments. The company reportedly expects to achieve positive cash flow in four years, but the scale of its infrastructure spending has raised eyebrows in the financial community [5].