23 Sources
[1]
OpenAI links up with Broadcom to produce its own AI chips
OpenAI is set to produce its own artificial intelligence chip for the first time next year, as the ChatGPT maker attempts to address insatiable demand for computing power and reduce its reliance on chip giant Nvidia. The chip, co-designed with US semiconductor giant Broadcom, would ship next year, according to multiple people familiar with the partnership. Broadcom's chief executive Hock Tan on Thursday referred to a mystery new customer committing to $10 billion in orders.

OpenAI's move follows the strategy of tech giants such as Google, Amazon and Meta, which have designed their own specialised chips to run AI workloads. The industry has seen huge demand for the computing power to train and run AI models. OpenAI planned to put the chip to use internally, according to one person close to the project, rather than make it available to external customers. Last year it began an initial collaboration with Broadcom, according to reports at the time, but the timeline for mass production of a successful chip design had previously been unclear.

On a call with analysts, Tan announced that Broadcom had secured a fourth major customer for its custom AI chip business, as it reported earnings that topped Wall Street estimates. Broadcom does not disclose the names of these customers, but people familiar with the matter confirmed OpenAI was the new client. Broadcom and OpenAI declined to comment. Tan said the deal had lifted the company's growth prospects by bringing "immediate and fairly substantial demand," shipping chips for that customer "pretty strongly" from next year.

The prospect that custom AI chips will take a growing share of the booming AI infrastructure market has helped propel Broadcom's shares more than 30 percent higher this year. They rallied almost 9 percent in pre-market trading in New York on Friday.
HSBC analysts have recently noted that they expect to see a much higher growth rate from Broadcom's custom chip business compared with Nvidia's chip business in 2026. Nvidia continues to dominate the AI hardware space, with the Big Tech "hyperscalers" still representing a significant share of its customer base. But its growth has slowed relative to the astronomical figures it saw at the start of the AI investment boom. OpenAI chief executive Sam Altman has been vocal about the demand for more computing power to serve the number of businesses and consumers using products such as ChatGPT, as well as to train and run AI models. The company was one of the earliest customers for Nvidia's AI chips and has since proven to be a voracious consumer of its hardware. Last month, Altman said the company was prioritising compute "in light of the increased demand from [OpenAI's latest model] GPT-5" and planned to double its compute fleet "over the next 5 months."
[2]
OpenAI could launch its own AI chip next year
Unnamed sources tell the outlet that OpenAI designed the chip with US semiconductor giant Broadcom, which announced a $10 billion chip order from an unnamed customer on Thursday. Industry analysts widely believe that new client is OpenAI, a partnership that has been something of an open secret for months. OpenAI will deploy the custom chips internally, rather than sell them to external customers, the report says. The move mirrors the approach of other tech giants like Google and Amazon, which have designed custom chips to cut costs, shore up supply and reduce their dependence on chipmaking giant Nvidia. Tech titans like OpenAI have been pumping billions into bolstering AI infrastructure and securing chips to run their data-hungry AI models.
[3]
Report: OpenAI will launch its own AI chip next year
Many AI companies are launching their own chipmaking operations. OpenAI is gearing up to launch its own AI chip, part of a broader industry effort to gain independence from third-party semiconductor companies. The ChatGPT-maker will start mass-producing its first in-house graphics processing unit (GPU) in partnership with US chipmaker Broadcom next year, according to a Thursday report from the Financial Times. The chip will reportedly be used internally by OpenAI rather than sold to other businesses.

The report followed a Thursday statement from Broadcom CEO Hock Tan to investors that the company had secured a $10 billion deal with an unnamed new customer to build new AI chips. Many analysts suspected that the new client was OpenAI. OpenAI and Broadcom did not immediately respond to ZDNET's request for comment. (Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Originally invented to render video game graphics, GPUs have become the technological cornerstone of the ongoing generative AI boom, fueling chatbots like ChatGPT, Gemini, and Claude. The market has thus far been dominated by Nvidia, currently the most valuable company in the world thanks to the surging demand for its industry-leading chips like the A100 and H100. Other chipmakers, like AMD and Broadcom, have also seen significant growth. Broadcom crossed the trillion-dollar valuation mark in December and is rumoured to also partner with Google, Meta, and ByteDance.

The dominance of the AI market by a small handful of chipmakers has led to a situation in which tech developers often have to wait months to receive orders of new chips. The strain on supply, meanwhile, has caused prices to spike.
Leading AI companies like Microsoft, Amazon, and Meta have therefore sought to reduce their dependency on Nvidia and other third-party chipmakers by launching their own chip-manufacturing efforts. Recently, the Trump administration allowed Nvidia to sell certain chips to China again after a Biden-era ban meant to curb China's access to advanced AI hardware.

OpenAI CEO Sam Altman has been adamant about the need for his own company to develop its own chips, but it hasn't yet been clear exactly how that vision would take shape. (The company has thus far been dependent upon Nvidia and AMD.) Reuters reported last year that the company was working with Broadcom and Taiwan Semiconductor Manufacturing Co. (TSMC) to build its own AI chips. In March, TSMC announced a $100 billion investment in chip manufacturing in the US.

The industry-wide pressure to secure a private source of AI chips has been exacerbated by a simultaneous drive to build increasingly massive data centers, which power AI systems and require vast quantities of water and electricity.
[4]
OpenAI widely thought to be Broadcom's mystery $10 billion custom AI processor customer -- order could be for millions of AI processors
Broadcom this week announced that it had signed an agreement to supply $10 billion worth of AI data center hardware to an undisclosed customer. The hardware includes custom-designed AI accelerators tailored for workloads specific to the client, as well as other gear from the company. It is widely rumored that the customer is OpenAI, which reportedly intends to use the AI processors for inference workloads, and the sheer scale of the order could amount to several million AI processors. As always, Broadcom isn't tipping its hand on the mystery customer.

"Last quarter, one of these prospects released production orders to Broadcom, and we have accordingly characterized them as a qualified customer for XPUs and, in fact, have secured over $10 billion of orders of AI racks based on our XPUs," said Hock Tan, President and CEO of Broadcom, during the company's earnings conference call.

While the $10 billion number is a big deal, there is a lot more in the announcement. Broadcom stated the customer is now 'qualified,' confirming that the chip and possibly the associated platform passed customer validation. That means that whoever this client is, they are happy with the functionality and performance (and so the design has been locked). Meanwhile, the phrase 'released production orders' means the customer has moved from evaluation or prototyping to full-scale commercial procurement, though Broadcom will recognize revenue at a later point, possibly once the hardware is delivered or deployed.

Broadcom does not reveal the names of its customers, but it said it would deliver 'racks based on its XPUs' in Q3 2026, which ends in early August 2026. Although the company calls its products 'racks,' these are not racks supplied by companies like Dell. What Broadcom intends to deliver to its undisclosed client is custom AI accelerators (XPUs) and networking chips, along with reference AI rack platforms that integrate these components.
These are not full systems, but building blocks for large customers to assemble their own AI infrastructure at scale. Assuming that Broadcom delivers its hardware to the undisclosed client in June or July 2026, that client will be able to deploy it within a few months (i.e., in fall 2026), depending on various factors. Such a timeframe aligns with reports suggesting that OpenAI's first deployment of its custom AI processors, developed in collaboration with Broadcom, was expected between late 2026 and early 2027.

OpenAI's custom AI processor is expected to use a systolic array architecture -- a grid of identical processing elements (PEs) optimized for matrix and/or vector operations, arranged in rows and columns -- a design similar to that of other AI accelerators. The chip will reportedly feature HBM memory, though it is unclear whether it will adopt HBM3E or HBM4, and is likely to be manufactured on TSMC's N3-series (3nm-class) process technology.

A $10 billion hardware investment is a massive commitment for OpenAI (especially given that it will be delivered over a single quarter) and places it firmly in hyperscaler territory. For context, Meta's entire CapEx for 2025 is $72 billion, and the lion's share of that is reportedly assigned to AI hardware. AWS and Microsoft are also investing tens of billions of dollars in AI hardware every year. Assuming average costs of $5,000 to $10,000 per accelerator, the $10 billion could translate to 1-2 million XPUs, likely distributed across thousands of racks and tens of thousands of nodes. That scale rivals or exceeds many of today's largest AI inference clusters (including OpenAI's own gargantuan cluster), which will put OpenAI in a very competitive position going forward.
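The systolic-array layout mentioned above can be illustrated with a short simulation. This is a generic sketch of how such a grid of processing elements computes a matrix product, not a description of OpenAI's actual design; the function name and matrix sizes are illustrative.

```python
# Minimal sketch of an output-stationary systolic array: each processing
# element (PE) at grid position (i, j) owns one output of C = A x B and
# accumulates into it as operands stream past.

def systolic_matmul(A, B):
    n, k = len(A), len(A[0])
    m = len(B[0])
    # One accumulator per PE in the n x m output grid.
    C = [[0] * m for _ in range(n)]
    # In hardware, rows of A flow left-to-right and columns of B flow
    # top-to-bottom, skewed by one cycle per row/column; functionally,
    # each PE performs k multiply-accumulate (MAC) operations.
    for step in range(k):            # one "wavefront" per cycle
        for i in range(n):
            for j in range(m):
                C[i][j] += A[i][step] * B[step][j]
    return C

print(systolic_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# -> [[19, 22], [43, 50]]
```

The appeal of this arrangement for AI workloads is that operands are reused as they travel across the grid, so the memory system only feeds the array's edges rather than every multiplier.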
OpenAI, which has historically run its workloads on Microsoft Azure using AMD or Nvidia GPUs, now appears to be shifting toward in-house infrastructure with custom Broadcom silicon, a strategic move aimed at cost control and inference optimization. As an added bonus, $10 billion not only buys a million (or two) custom AI accelerators, but also gives OpenAI considerable leverage when negotiating prices with AMD and Nvidia. That is assuming, of course, that the $10 billion deal with Broadcom was indeed signed by OpenAI. Notably, there is no official confirmation yet.
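The back-of-envelope unit count in this source is easy to reproduce. Both per-accelerator figures are the article's assumptions, not known prices:

```python
# $10B order divided by an assumed per-accelerator cost of $5,000-$10,000.
order_usd = 10_000_000_000
units_low = order_usd // 10_000    # at $10,000 per accelerator
units_high = order_usd // 5_000    # at $5,000 per accelerator
print(f"{units_low:,} to {units_high:,} accelerators")
# -> 1,000,000 to 2,000,000 accelerators
```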
[5]
OpenAI Teams With Broadcom on AI Chip to Challenge Nvidia | The Pulse 9/05/2025
Broadcom is helping OpenAI design and produce an artificial intelligence accelerator from 2026, getting into a lucrative sphere dominated by Nvidia. The two firms plan to ship the first chips in that lineup starting next year, a person familiar with the matter said, asking to remain anonymous discussing a private deal. OpenAI will initially use the chip for its own internal purposes, the Financial Times reported earlier. In trade news, China slapped preliminary duties on pork imports from the European Union, a move set to disrupt shipments from one of the world's biggest suppliers and further stoke trade tensions between Brussels and Beijing. Today's guests: Emmanuel Cau, Barclays Head of European Equity Strategy, Sander Tordoir, Centre for European Reform Chief Economist, Michael Haigh, SGCIB Head of FIC & Commodities Research (Source: Bloomberg)
[6]
OpenAI reportedly taps Broadcom to launch AI chips in 2026
Whatever happened to that Baltra thing Tan and crew were helping Apple cook up? Analysis OpenAI is allegedly developing a custom AI accelerator with the help of Broadcom in an apparent bid to curb its reliance on Nvidia and drive down the cost of its GPT family of models. Citing, you guessed it, unnamed sources familiar with the matter, the Financial Times reports that Broadcom's $10 billion mystery customer teased by CEO Hock Tan on Thursday's earnings call was none other than Sam Altman's AI hype factory OpenAI. While Broadcom isn't in the habit of disclosing its customers, it's an open secret that the company's intellectual property forms the foundation of much of the custom cloud silicon out there. Speaking to analysts on Thursday's call, Tan said Broadcom is currently serving three XPU customers with a fourth on the way. "Last quarter, one of these prospects released production orders to Broadcom, and we have accordingly characterized them as a qualified customer for XPUs and, in fact, have secured over $10 billion of orders of AI racks based on our XPUs," he said. "And reflecting this, we now expect the outlook for our fiscal 2026 AI revenue to improve significantly from what we had indicated last quarter." OpenAI has been rumored to be developing an in-house alternative to Nvidia and AMD's GPUs for some time now. The chip is now expected to make its debut sometime next year, but will primarily be used internally rather than being made available to external customers, sources told FT. Whether this means that the chip will be used for training as opposed to inference or simply that it'll power OpenAI's inference and API servers but not virtual machines based on the accelerator (as Google and AWS have done with their TPUs and Trainium accelerators) remains an open question. While we don't know exactly how OpenAI intends to use its first-gen silicon, Broadcom's involvement offers some clues as to what it may end up looking like. 
Broadcom produces a number of foundational technologies necessary for building AI compute systems at scale. These range from the serializer-deserializers (SerDes) used to move data from one chip to another, to the network switches and co-packaged optical interconnects required to scale from one chip to thousands, to the 3D packaging tech required to build multi-die accelerators. If you're curious, we've explored each in depth here.

OpenAI is likely to use some combination of all of these, with Broadcom's 3.5D eXtreme Dimension System in Package tech (3.5D XDSiP) a likely candidate for the accelerators themselves. The architecture is more reminiscent of AMD's MI300-series accelerators than anything we've seen from Nvidia just yet, and involves stacking advanced compute tiles atop a base die containing the chip's lower-level logic and memory controllers. Meanwhile, package-to-package communication happens via discrete I/O dies. This modular approach means customers can bring as much or as little of their own intellectual property to the design as they like and let Broadcom fill in the gaps.

The largest of Broadcom's 3.5D XDSiP designs will support a pair of 3D stacks, two I/O dies, and up to 12 HBM stacks on a single 6,000 mm² package. The first of these is expected to begin shipping next year, right in line with when OpenAI is said to bring its first chips to market. Alongside Broadcom's XDSiP tech, we wouldn't be surprised to see OpenAI leverage Broadcom's Tomahawk 6 family of switches and co-packaged optical chiplets for scale-up and scale-out networking, a topic we've explored in depth here. However, Broadcom's focus on Ethernet as the protocol of choice for both of these network paradigms means that OpenAI wouldn't have to use Broadcom for everything.

While Broadcom's 3.5D XDSiP seems a likely candidate for OpenAI's first in-house chips, it's not a complete solution on its own.
The AI startup will still need to provide or, at the very least, license a compute architecture equipped with some seriously high-performance matrix multiply-accumulate (MAC) units, sometimes called MMEs or Tensor cores. The compute units will need some other control logic and ideally some vector units as well, but for AI, the main thing is a beefy enough matrix unit with access to an ample supply of high-bandwidth memory.

Because Broadcom would be responsible for providing just about everything else, OpenAI's chip team could focus completely on optimizing its compute architecture for its internal workloads, making the entire process far less daunting than it otherwise might be. This is why cloud providers and hyperscalers have gravitated to licensing large chunks of their accelerator designs from the merchant silicon slinger. There's no sense wasting resources reinventing the wheel when you could reinvest those resources in your core competencies.

With Altman laying out plans to plow hundreds of billions of dollars of what's mostly other people's money into AI infrastructure under his Stargate initiative, the idea that Broadcom's new $10 billion customer would be OpenAI wouldn't be surprising. However, the startup isn't the only company rumored to be working with Broadcom on custom AI accelerators. You may recall that late last year The Information reported that Apple would be Broadcom's next big XPU customer, with a chip due out in 2026 codenamed "Baltra." Since then, Apple has committed to investing $500 billion and hiring 20,000 workers to bolster its domestic manufacturing capacity. Among these investments is a Texas-based manufacturing plant that'll produce AI-accelerated servers based on its own in-house silicon. ®
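The point about pairing a beefy matrix unit with ample high-bandwidth memory comes down to a simple roofline-style balance: the faster the MAC array, the more bytes per second the HBM must supply to keep it busy. A rough sketch, where every number is an illustrative assumption rather than a spec of any real chip:

```python
# Roofline-style check: minimum memory bandwidth (GB/s) needed to keep a
# matrix engine busy, given its throughput and the workload's arithmetic
# intensity (FLOPs performed per byte fetched from memory).

def min_bandwidth_gbps(tflops, flops_per_byte):
    # tflops * 1e12 FLOP/s divided by FLOP-per-byte gives bytes/s;
    # divide by 1e9 to express it in GB/s.
    return tflops * 1e12 / flops_per_byte / 1e9

# e.g. a hypothetical 500 TFLOP/s matrix engine running matmuls that
# reuse each fetched byte 250 times needs about 2 TB/s from HBM:
print(min_bandwidth_gbps(500, 250))  # -> 2000.0
```

This is why accelerator designs budget HBM stacks against compute throughput: a matrix unit whose memory system falls short of this line simply idles.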
[7]
OpenAI set to start mass production of its own AI chips with Broadcom
OpenAI is set to produce its own artificial intelligence chip for the first time next year, as the ChatGPT maker attempts to address insatiable demand for computing power and reduce its reliance on chip giant Nvidia. The chip, co-designed with US semiconductor giant Broadcom, would ship next year, according to multiple people familiar with the partnership. Broadcom's chief executive Hock Tan on Thursday referred to a mystery new customer committing to $10bn in orders.

OpenAI's move follows the strategy of tech giants such as Google, Amazon and Meta, which have designed their own specialised chips to run AI workloads. The industry has seen huge demand for the computing power to train and run AI models. OpenAI planned to put the chip to use internally, according to one person close to the project, rather than make it available to external customers. OpenAI last year began an initial collaboration with Broadcom, according to reports at the time, but the timeline for mass production of a successful chip design had previously been unclear.

On a call with analysts, Tan announced that Broadcom had secured a fourth major customer for its custom AI chip business, as it reported earnings that topped Wall Street estimates. Broadcom does not disclose the names of these customers, but people familiar with the matter confirmed OpenAI was the new client. Broadcom and OpenAI declined to comment. Tan said the deal had lifted the company's growth prospects by bringing "immediate and fairly substantial demand", shipping chips for that customer "pretty strongly" from next year.

The prospect that custom AI chips -- termed "XPUs" to contrast them with the "GPUs" designed by off-the-shelf chip suppliers such as Nvidia and AMD -- will take a growing share of the booming AI infrastructure market has helped propel Broadcom's shares more than 30 per cent higher this year. Its shares rose 4.5 per cent in after-market trading following the positive earnings report.
Broadcom partnered with Google, for example, in developing its custom "TPU" AI chips. Ahead of Broadcom's earnings, HSBC analysts wrote that they expected to see a much higher growth rate from its custom chip business compared with Nvidia's GPU business in 2026.

Nvidia continues to dominate the AI hardware space, with the Big Tech "hyperscalers" still representing a significant share of its customer base. But its growth has slowed relative to the astronomical figures it saw at the start of the AI investment boom. OpenAI chief executive Sam Altman has been vocal about the demand for more computing power to serve the number of businesses and consumers using products such as ChatGPT, as well as to train and run AI models. The company was one of the earliest customers for Nvidia's AI chips and has since proven to be a voracious consumer of its hardware. Last month, Altman said the company was prioritising compute "in light of the increased demand from [OpenAI's latest model] GPT-5" and planned to double its compute fleet "over the next 5 months".
[8]
OpenAI to launch its first AI chip in 2026 with Broadcom, FT reports
Sept 4 (Reuters) - OpenAI is set to produce its first artificial intelligence chip next year in partnership with U.S. semiconductor giant Broadcom (AVGO.O), the Financial Times reported on Thursday, citing people familiar with the matter. OpenAI plans to put the chip to use internally rather than make it available to external customers, the FT report said, citing one person close to the project. Reuters could not immediately verify the report. OpenAI and Broadcom did not immediately respond to Reuters' requests for comment after regular business hours.

OpenAI, which helped commercialize generative AI capable of producing human-like responses to queries, relies on substantial computing power to train and run its systems. Last year, Reuters reported that OpenAI was working with Broadcom and Taiwan Semiconductor Manufacturing Co (2330.TW) to develop its first in-house chip to power its artificial intelligence systems, while also incorporating AMD (AMD.O) chips alongside Nvidia (NVDA.O) chips to meet the surge in infrastructure demands. At the time, OpenAI had examined a range of options to diversify chip supply and reduce costs.

In February, Reuters reported that OpenAI was pushing ahead on its plan to reduce its reliance on Nvidia for its chip supply by developing its first generation of in-house AI silicon. The ChatGPT maker was finalizing the design for its first in-house chip in the next few months and planned to send it for fabrication at TSMC, sources had told Reuters.

Broadcom CEO Hock Tan said on Thursday that the company expects artificial intelligence revenue growth for fiscal 2026 to "improve significantly", after securing more than $10 billion in AI infrastructure orders from a new customer, without naming it. A new prospect placed a firm order last quarter, making it a qualified customer, Tan said on an earnings call.
Tan earlier this year had hinted at four new potential customers who were "deeply engaged" with the company to create their own custom chips, in addition to its three existing large clients. OpenAI's move follows efforts by Google (GOOGL.O), Amazon (AMZN.O) and Meta (META.O), which have built custom chips to handle AI workloads, as demand for computing power to train and operate AI models surges. Reporting by Disha Mishra in Bengaluru; Editing by Alan Barona and Sherry Jacob-Phillips. Our Standards: The Thomson Reuters Trust Principles.
[9]
OpenAI is reportedly producing its own AI chips starting next year
OpenAI is gearing up to start mass production of its own AI chips next year to provide the massive computing power its users need and to lessen its reliance on NVIDIA, according to the Financial Times. The company reportedly designed the custom AI chip with US semiconductor maker Broadcom, whose CEO recently announced a new client that put in a whopping $10 billion in orders. He didn't name the client, but the FT's sources confirmed that it was OpenAI, which apparently doesn't have plans to sell the chips and will only be using them internally.

Reuters reported back in 2023 that OpenAI was already exploring the possibility of making its own AI chips after Sam Altman blamed GPU shortages for the speed and reliability problems of the company's API. The news organization also previously reported that OpenAI was working with both Broadcom and Taiwan Semiconductor Manufacturing Co. (TSMC) to create its own product. The FT didn't say whether OpenAI still has an ongoing partnership with TSMC.

After GPT-5 came out, Altman announced the changes OpenAI is implementing to keep up with "increased demand." In addition to prioritizing paid ChatGPT users, he said that OpenAI was going to double its compute fleet "over the next 5 months." Making its own chips will address any potential GPU shortages the company may encounter in doubling its fleet, and it could also save the company money. The FT says custom AI chips called "XPUs", like the one OpenAI is reportedly developing, will eventually take a big share of the AI market. At the moment, NVIDIA is still the leading name in the industry. It recently revealed that its revenue for the second quarter ending on July 27 rose 56 percent compared to the same period last year, and it didn't even have to ship any H20 chips to China.
[10]
OpenAI Spends $10 Billion to Get Into the Chip Business
OpenAI would like to stop being so reliant on Nvidia to handle its processing needs. To address that, the artificial intelligence startup is reportedly teaming up with Broadcom to develop its own chips, set to be available starting next year, according to the Financial Times. The Wall Street Journal reports that OpenAI's deal with the US-based semiconductor firm will see the two work together to create custom artificial intelligence chips, which will be used internally by OpenAI to train and run its new ChatGPT models and other AI products. The deal will reportedly put $10 billion into the pockets of Broadcom, which had announced a mystery deal on Thursday that apparently didn't stay all that mysterious for long. The deal probably shouldn't be too big a surprise, just given the sheer volume of demand that Nvidia is currently tasked with fulfilling. The company has been the go-to for hyperscalers in the AI space looking to build quickly, producing chips that have become the standard for Amazon Web Services, Google, Microsoft, and Oracle. In fact, Oracle just announced plans to buy more than $40 billion worth of Nvidia chips for use in a new data center that will reportedly be a part of the Stargate Project, a joint effort by AI firms to expand computing infrastructure. Plus, there were hints that OpenAI was working on an in-house chip earlier this year. It appears those plans are now coming to fruition. OpenAI isn't the only company trying to wean itself off of its need for Nvidia's supply of compute. Google has reportedly been calling around to data centers and offering its own custom chips to help handle AI-related processing, according to The Information. Amazon is reportedly working on its own AI chips, and Microsoft has gotten into the chipmaking business, as well. Nvidia likely won't be short on demand even with some of the big players attempting to go their own way. 
Just last week, the company reported that its sales were up 56% in the most recent quarter, suggesting that demand isn't slowing down. There were also reports last month that the Trump administration may be loosening some of its trade tensions with China and other countries in a way that would allow Nvidia to sell its latest chips overseas, opening the company back up to some major international markets that have been complicated by the trade wars initiated by Trump and company.
[11]
OpenAI set to start mass production of its own AI chips with Broadcom in 2026, FT reports
Sept 4 (Reuters) - OpenAI is set to produce its first artificial intelligence chip next year in partnership with U.S. semiconductor giant Broadcom (AVGO.O), the Financial Times reported on Thursday, citing people familiar with the partnership. OpenAI and Broadcom did not immediately respond to Reuters' requests for comment after regular business hours. Reuters could not immediately verify the report. OpenAI planned to put the chip to use internally rather than make it available to external customers, the report added, citing one person close to the project.

Reuters had reported in February about OpenAI pushing ahead on its plan to reduce its reliance on Nvidia (NVDA.O) for its chip supply by developing its first generation of in-house AI silicon. Broadcom CEO Hock Tan said earlier on Thursday the company had secured more than $10 billion in AI infrastructure orders from a new customer, without naming it. OpenAI's move follows efforts by Google, Amazon and Meta, which have built custom chips to handle AI workloads, as demand for computing power to train and operate AI models surges. Reporting by Disha Mishra in Bengaluru; Editing by Alan Barona
[12]
OpenAI set to start mass production of its own AI chips with Broadcom, FT reports
Sept 4 (Reuters) - OpenAI is set to produce its own artificial intelligence chip for the first time next year, as the ChatGPT maker attempts to address insatiable demand for computing power and reduce its reliance on chip giant Nvidia (NVDA.O), the Financial Times reported on Thursday. The chip, co-designed with U.S. semiconductor giant Broadcom (AVGO.O), would ship next year, the report added, citing multiple people familiar with the partnership. Reporting by Disha Mishra in Bengaluru; Editing by Alan Barona
[13]
OpenAI has teamed up with Broadcom to make its own AI processors, according to a report, possibly as part of a long-term plan to move away from Nvidia's GPUs
Broadcom recently announced a $10 billion deal with one unnamed customer. Hmmm. Whether you love it, loathe it, or just have no real feelings on the matter, OpenAI's hungry-hippo-for-dollars ChatGPT isn't going to disappear any time soon. Neither will the need for it to be powered by Nvidia's mega-GPUs. However, that doesn't mean its creators aren't looking to cut costs somewhere, and one report claims a potential solution could be to use its own AI chips.

That's essentially what's being suggested in a report by the Financial Times (via Reuters), which claims that OpenAI has signed a deal with US semiconductor firm Broadcom to design and manufacture its own machine learning processors, for internal use next year. This is backed up by a statement from Broadcom's CEO, Hock Tan, who said that the company had secured $10 billion in AI system orders from a 'new customer', which should result in significant revenue growth in 2026.

As things currently stand, OpenAI uses large-scale computer systems with Nvidia chips for AI model training and inference. It's not the only company to do so, of course, which is why Nvidia's data centre division pulled in a total of $115.2 billion last year -- more than AMD and Intel's entire revenues combined. And it's not just a case of Team Green shifting vast numbers of its megachips, as they're also very expensive. Precisely how expensive isn't accurately known, but OpenAI, Meta, and Microsoft have all forked out billions of dollars for Hopper and Blackwell processors. Such expenditure isn't sustainable unless the end users of their AI systems can be leveraged into paying for it all in some way. A logical alternative is to simply spend less money on AI chips in the first place. The thing is, similar processors from AMD or Intel are just as pricey, which is why the likes of Amazon and Google went down the route of using in-house components.
Both firms have the financial resources to do this directly, but OpenAI doesn't, which is why the claim that Broadcom is doing it all for them is almost certainly true -- after all, it already has such products in its portfolio, such as the snappily-titled 3.5D XDSiP. Even if this all comes to pass, OpenAI is still going to be reliant on Nvidia's GPUs for a good while yet. That's because all of its current software stack is written with that hardware in mind, so it will take many moons to shift it across to Broadcom's platform and even more rotations of our solar system to get it working as smoothly as it currently does. Nvidia stopped being a gaming-first company a long time ago, and until the AI bubble bursts or deflates somewhat, its insatiable appetite for dollars means Team Green will continue to focus almost all of its efforts on machine learning megachips. OpenAI switching to Broadcom-built chips is certainly news, but it's absolutely not a sliver of light at the end of the tunnel for wallet-beaten PC gamers.
[14]
FT: OpenAI to make custom AI chips with Broadcom next year
Sam Altman said that OpenAI would use 'well over 1m GPUs' by the end of 2025. OpenAI has tapped chipmaker Broadcom to mass manufacture its own artificial intelligence chips, set for shipping in 2026, the Financial Times has reported. In July, OpenAI CEO Sam Altman claimed that his company "will cross well over 1m GPUs brought online by the end of this year". For comparison, Nvidia said last year that Elon Musk's xAI was en route to doubling the number of Nvidia Hopper GPUs it uses to 200,000. OpenAI was one of the earliest customers for Nvidia's AI chips and has since been a strong consumer of its hardware. This partnership would reduce the ChatGPT-maker's reliance on Nvidia as it tries to feed its growing chip needs on its own. At yesterday's (4 September) earnings release, Broadcom CEO Hock Tan said that the company had secured a $10bn order for custom AI chips, although he didn't disclose who the client was. "One of these prospects released production orders to Broadcom, and we have accordingly characterised them as a qualified customer for XPUs," Tan said. Later, sources confirmed to the Financial Times that the new order was for OpenAI. Both companies are yet to confirm the news. The publication also reports that OpenAI is planning to use the chip for internal purposes, rather than making it available to external customers. This is great news for Broadcom, which forecasts increased AI revenue next year as a result of the large order. "We will ship pretty strongly beginning 2026," Tan said. The company reported strong Q3 earnings, with AI revenue jumping 63pc during the period to $5.2bn. Tan said that he expects AI revenue to grow by $1bn this coming quarter. The company's chip sales also rose 57pc to more than $9.1bn in Q3, and its stock price went up by nearly 5pc to more than $320 per share following the earnings call. Google has previously partnered with Broadcom for its Tensor Processing Unit AI chips, and the company also helps design Meta's AI chips.
[15]
Broadcom secures $10 billion ASIC contract, with Apple and xAI next in line for new AI chips
TL;DR: Broadcom secured a $10 billion custom ASIC contract from a major non-CSP client, with Apple, xAI, and ByteDance also in line. OpenAI's custom AI chip, produced with Broadcom and TSMC, aims to reduce NVIDIA dependence by 2026, reinforcing Broadcom as a strong competitor in AI semiconductor manufacturing. Broadcom has secured a huge $10 billion custom ASIC contract from a major new customer outside of the core hyperscale cloud service provider (CSP) segment, with Apple and xAI also next in line. In a new report from Digitimes picked up by @Jukanrosleve on X, we're hearing that further orders from TikTok parent company ByteDance, Apple, and Elon Musk's xAI are already in the pipeline, with one analysis seeing this as reinforcing Broadcom's position as a "credible challenger" to NVIDIA. According to industry sources, OpenAI's new custom ASIC will enter mass production in 2026, positioning the AI startup as Broadcom's fourth confirmed large-scale ASIC customer. Neither Broadcom nor OpenAI has commented on the deal yet, but insiders familiar with Broadcom's roadmap have confirmed the order is genuine. We have had rumors and reports of OpenAI's in-house chip ambitions for a couple of years now, with the Sam Altman-led AI startup beginning to develop its own custom silicon to meet its continuously rising AI workloads; the company hit a few roadblocks along the way, but the design reportedly passed specification testing requirements rather quickly. OpenAI had very, very lofty ambitions of building its own semiconductor foundry, but those disappeared as quickly as the rumors started, with the company rumored a while back to fab its first AI chips with Broadcom and TSMC, and now here we are. OpenAI was previously rumored to have its self-developed AI chip fabbed at TSMC on its new A16 process node, but it looks like Broadcom has won that $10B contract.
This move also (ever-so-slightly) reduces OpenAI's dependence on NVIDIA, but I'm sure we're going to see reports with eye-grabbing headlines in the future along the lines of "OpenAI breaks up with NVIDIA, no longer needs their market-dominating AI GPU hardware". That change isn't going to happen overnight; in the years to come, we'll see only a gradual reduction in dependence on NVIDIA. For now, that dependence means Hopper and Blackwell AI GPUs like the H100, H200, B100, and B200... let alone the new Blackwell Ultra GB300, and the next-gen Rubin R100 AI GPU with next-gen HBM4 memory launching in 2026.
[16]
OpenAI to mass produce custom AI chip with Broadcom in 2026
OpenAI is reportedly poised to initiate mass production of its proprietary artificial intelligence chips in 2026. This move marks a significant step for the company as it seeks greater control over its AI infrastructure. The development involves a partnership with Broadcom, a semiconductor manufacturer. The collaboration focuses on creating a custom chip tailored for OpenAI's internal use. Sources familiar with the matter, who preferred to remain unnamed, confirmed that OpenAI and Broadcom have been working together on the chip's design. Broadcom's recent announcement of a $10 billion chip order from an undisclosed client has fueled speculation within the industry. Analysts widely suspect that OpenAI is the client in question. This potential partnership has been the subject of industry discussion for several months. OpenAI intends to use the custom-designed chips internally to power its AI models. The company does not plan to offer these chips for sale to external customers. This strategy mirrors those adopted by other major technology companies, including Google and Amazon, which have also invested in custom chip design to optimize performance, reduce operational costs, and ensure a stable supply chain. These companies are seeking to lessen their reliance on Nvidia, a dominant player in the chip manufacturing sector. The initiative underscores the substantial investments being made by leading technology firms in AI infrastructure. OpenAI and its peers continue to allocate significant resources to secure the necessary hardware, specifically chips, to support the processing demands of their rapidly evolving AI models.
[17]
OpenAI Said to Launch Its First AI Chip in 2026 With Broadcom
The AI firm is reportedly partnering with Broadcom to produce the chip OpenAI is set to produce its first artificial intelligence chip next year in partnership with U.S. semiconductor giant Broadcom, the Financial Times reported on Thursday, citing people familiar with the matter. OpenAI plans to put the chip to use internally rather than make it available to external customers, the FT report said, citing one person close to the project. Reuters could not immediately verify the report. OpenAI and Broadcom did not immediately respond to Reuters' requests for comment after regular business hours. OpenAI, which helped commercialize generative AI capable of producing human-like responses to queries, relies on substantial computing power to train and run its systems. Last year, Reuters reported that OpenAI was working with Broadcom and Taiwan Semiconductor Manufacturing Co to develop its first in-house chip to power its artificial intelligence systems, while also incorporating AMD chips alongside Nvidia chips to meet the surge in infrastructure demands. At the time, OpenAI had examined a range of options to diversify chip supply and reduce costs. In February, Reuters reported that OpenAI was pushing ahead with its plan to reduce its reliance on Nvidia for its chip supply by developing its first generation of in-house AI silicon. The ChatGPT maker was finalizing the design for its first in-house chip in the next few months and planned to send it for fabrication at TSMC, sources had told Reuters. Broadcom CEO Hock Tan said on Thursday that the company expects artificial intelligence revenue growth for fiscal 2026 to "improve significantly", after securing more than $10 billion in AI infrastructure orders from a new customer, without naming it. A new prospect placed a firm order last quarter, making it into a qualified customer, Tan said on an earnings call.
Tan earlier this year had hinted at four new potential customers who were "deeply engaged" with the company to create their own custom chips, in addition to its three existing large clients. OpenAI's move follows efforts by Google, Amazon and Meta, which have built custom chips to handle AI workloads, as demand for computing power to train and operate AI models surges.
[18]
OpenAI to move away from NVIDIA GPUs with new Broadcom partnership
TL;DR: OpenAI has partnered with Broadcom to develop custom AI accelerators, aiming to reduce reliance on costly NVIDIA GPUs for training future models. This strategic move seeks to lower expenses while maintaining AI performance, signaling a shift in the AI hardware market dominated by NVIDIA's expensive GPU solutions. OpenAI has reportedly signed a deal with US semiconductor firm Broadcom in a long-term plan to reduce its reliance on NVIDIA GPUs. OpenAI, the creator of the famed AI chatbot ChatGPT, has reportedly entered into a partnership with Broadcom to make custom AI accelerators slated to go into use next year. The report comes from The Financial Times and Reuters, which state that OpenAI is looking to move away from NVIDIA's expensive AI GPUs for training future models and wants to create its own custom chip to reduce expenditure. NVIDIA has become the global wholesaler for AI GPUs, with the company's data center division generating $115.2 billion last year - more than AMD and Intel's entire company revenues combined. NVIDIA's dominance in the market has been a result of AI companies such as Meta, Microsoft, and OpenAI requiring more GPU horsepower to train more sophisticated AI models. However, AI GPUs are extremely expensive, and more horsepower means larger investments. OpenAI wants to reduce its expenditure in this department, and it reportedly believes that Broadcom can help. Notably, Broadcom CEO Hock Tan recently stated the company secured a $10 billion AI systems order from a "new customer" - presumably OpenAI.
[19]
OpenAI set to start mass production of its own AI chips with Broadcom in 2026: FT - The Economic Times
OpenAI is set to produce its first artificial intelligence chip next year in partnership with U.S. semiconductor giant Broadcom, the Financial Times reported on Thursday, citing people familiar with the partnership. OpenAI and Broadcom did not immediately respond to Reuters' requests for comment after regular business hours. Reuters could not immediately verify the report. OpenAI planned to put the chip to use internally rather than make it available to external customers, the report added, citing one person close to the project. Reuters had reported in February that OpenAI was pushing ahead with its plan to reduce its reliance on Nvidia for its chip supply by developing its first generation of in-house AI silicon. Broadcom CEO Hock Tan said earlier on Thursday the company had secured more than $10 billion in AI infrastructure orders from a new customer, without naming it. OpenAI's move follows efforts by Google, Amazon and Meta, which have built custom chips to handle AI workloads, as demand for computing power to train and operate AI models surges.
[20]
OpenAI To Mass-Produce 1st Proprietary AI Chip With Broadcom, Secures $10 Billion Deal: Report - Broadcom (NASDAQ:AVGO)
OpenAI is set to begin mass production of its first proprietary artificial intelligence chip next year through a partnership with Broadcom Inc. (NASDAQ: AVGO), marking a strategic shift away from its reliance on Nvidia Corp. (NASDAQ: NVDA) amid surging demand for artificial intelligence computing power.
Broadcom Secures $10 Billion Customer Deal
Broadcom CEO Hock Tan confirmed during Thursday's earnings call that the semiconductor giant secured a fourth major customer committing to $10 billion in orders. The Financial Times' sources identified OpenAI as the new client. The deal delivered "immediate and fairly substantial demand" that will boost chip shipments "pretty strongly" starting next year, according to Tan. Broadcom's shares jumped 4.58% to $306.10 in after-market trading following the positive earnings report. OpenAI and Broadcom did not immediately respond to Benzinga's request for comment.
Strategic Move Toward Computing Independence
OpenAI's custom chip initiative follows the playbook of tech giants including Alphabet Inc. (GOOGL, GOOG), Amazon.com Inc. (AMZN) and Meta Platforms Inc. (META), which have developed specialized processors to handle AI workloads more efficiently than standard graphics processing units. The partnership began last year, but the timeline for mass production remained unclear until this week's announcement. OpenAI plans to use the chips internally rather than making them available to external customers, according to the FT report, addressing its massive compute requirements for ChatGPT's 700 million weekly users.
Market Impact on AI Infrastructure
The custom chip development signals a potential shift in the AI hardware landscape, where Nvidia has maintained dominance. According to the FT, HSBC analysts expect custom chip businesses to achieve higher growth rates than Nvidia's GPU business by 2026.
OpenAI CEO Sam Altman recently emphasized the company's compute priorities, stating plans to double its compute fleet over the next five months to meet increased demand from the GPT-5 model. The development comes as OpenAI reportedly moves forward with a $10.3 billion secondary share sale at a $500 billion valuation, up from its previous $300 billion valuation established in April.
[21]
OpenAI to Launch First In-House AI Chip with Broadcom in 2026
The chip is expected to be made by Taiwan Semiconductor Manufacturing Company (TSMC), the world's largest contract chipmaker. According to earlier reports, OpenAI has been finalizing the design and preparing it for production. Alongside this effort, OpenAI continues to use AMD and NVIDIA chips while building a more diverse supply chain. High-end processors are in growing demand, thanks to generative AI powering applications such as ChatGPT. While NVIDIA still maintains supremacy with its GPUs in that space, OpenAI's bold step marks a new dawn for a hardware-independent model. Building its own AI chip puts it in a position to guard its performance, pricing, and supply stability. Still, making chips is not easy. It requires high investment, long timelines, and technical risks. Other companies have faced challenges in meeting speed or efficiency goals when designing their own silicon. Regardless, this project demonstrates OpenAI's intent to prepare for the future with tools it can fully control. By 2026, when its chip enters production, the AI giant could have a real impact on how its systems run, improving cost efficiency and letting it engage more directly with competitors in the tech race.
[22]
OpenAI set to start mass production of its own AI chips with Broadcom in 2026, FT reports
(Reuters) -OpenAI is set to produce its first artificial intelligence chip next year in partnership with U.S. semiconductor giant Broadcom, the Financial Times reported on Thursday, citing people familiar with the partnership. OpenAI and Broadcom did not immediately respond to Reuters' requests for comment after regular business hours. Reuters could not immediately verify the report. OpenAI planned to put the chip to use internally rather than make it available to external customers, the report added, citing one person close to the project. Reuters had reported in February about OpenAI pushing ahead on its plan to reduce its reliance on Nvidia, opening a new tab for its chip supply by developing its first generation of in-house AI silicon. Broadcom CEO Hock Tan said earlier on Thursday the company had secured more than $10 billion in AI infrastructure orders from a new customer, without naming it. OpenAI's move follows efforts by Google, Amazon and Meta, which have built custom chips to handle AI workloads, as demand for computing power to train and operate AI models surges. (Reporting by Disha Mishra in Bengaluru; Editing by Alan Barona)
[23]
OpenAI to launch its first in-house AI chip in 2026 with Broadcom: Here's what we know
Broadcom hints at a $10B AI chip deal, speculated to be with OpenAI. OpenAI is reportedly gearing up to produce its first artificial intelligence chip next year, in collaboration with US semiconductor giant Broadcom, according to a report by the Financial Times. Sources familiar with the matter suggest that the chip will be deployed exclusively for OpenAI's internal operations, rather than being offered commercially. This will mark a significant step for the ChatGPT-maker to reduce reliance on Nvidia, the dominant player in the AI chip market. Training and running large language models requires massive computing power, and OpenAI has long been exploring ways to secure a steady and cost-effective supply of chips. Last year, reports suggested that OpenAI was working with both Broadcom and Taiwan Semiconductor Manufacturing Co. (TSMC) to design its first in-house silicon. The company was also integrating AMD hardware alongside Nvidia GPUs to meet its surging infrastructure demands. Earlier this year, insiders revealed that OpenAI was finalizing the design of its first-generation chip, with plans to send it to TSMC for fabrication. Meanwhile, Broadcom's CEO Hock Tan disclosed that the company has secured over $10 billion in AI infrastructure orders from a new, unnamed customer, widely speculated to be OpenAI. Tan added that the new client had placed a firm order last quarter, officially becoming a qualified customer. Earlier this year, he had hinted that Broadcom was working closely with several new partners to develop custom silicon solutions, in addition to its existing major clients. If true, OpenAI will join Google, Amazon, and Meta, which have already developed their own AI chips to handle increasing computational demands.
Building custom hardware will not only help the AI platform reduce its dependency on external suppliers, but also offer performance optimisations tailored to specific workloads. However, neither OpenAI nor Broadcom has shared any official details so far.
OpenAI is set to produce its own AI chips in collaboration with Broadcom, aiming to address the growing demand for computing power and reduce reliance on Nvidia. The custom-designed chips are expected to ship in 2026, marking a significant shift in OpenAI's hardware strategy.
OpenAI, the company behind ChatGPT, is making a significant leap into custom chip production. In a partnership with US semiconductor giant Broadcom, OpenAI is set to produce its own artificial intelligence chips for the first time, with shipments expected to begin in 2026 [1]. This move marks a strategic shift for OpenAI, as it aims to address the insatiable demand for computing power and reduce its reliance on chip giant Nvidia.
Broadcom's CEO, Hock Tan, recently announced a $10 billion order from a new, unnamed customer for custom AI chips [2]. Industry analysts widely believe this customer to be OpenAI. The partnership has already boosted Broadcom's growth prospects, with its shares rallying almost 9 percent in pre-market trading following the announcement [1].
While specific details remain undisclosed, OpenAI's custom AI processor is expected to utilize a systolic array architecture, optimized for matrix and vector operations. The chip will likely feature HBM memory and be manufactured using TSMC's 3nm-class process technology [4]. The $10 billion deal could translate to between 1 and 2 million AI processors, potentially rivaling or exceeding many of today's largest AI inference clusters.
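A systolic array, as mentioned above, is a grid of multiply-accumulate units through which operands flow in lockstep, one step per clock cycle. The details of OpenAI's design are not public, so the following is a generic, minimal simulation of an output-stationary systolic array schedule for matrix multiplication; the `systolic_matmul` name and the cycle-level loop are illustrative assumptions, not anything from OpenAI's or Broadcom's actual hardware.

```python
def systolic_matmul(A, B):
    """Simulate an output-stationary systolic array computing C = A @ B.

    Each processing element (PE) at grid position (i, j) accumulates one
    element of C. A's rows flow in from the left and B's columns from the
    top, each skewed by one cycle per row/column, so that the matching
    operand pair (A[i][s], B[s][j]) arrives at PE (i, j) at cycle
    t = s + i + j.
    """
    n, k = len(A), len(A[0])
    m = len(B[0])
    assert len(B) == k, "inner dimensions must match"
    C = [[0.0] * m for _ in range(n)]
    # Enough cycles for the last skewed operands to reach the far corner.
    for t in range(n + m + k - 2):
        for i in range(n):
            for j in range(m):
                s = t - i - j  # operand index reaching PE (i, j) at cycle t
                if 0 <= s < k:
                    C[i][j] += A[i][s] * B[s][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert systolic_matmul(A, B) == [[19.0, 22.0], [43.0, 50.0]]
```

Real hardware updates every PE in a given cycle in parallel; the triple loop here merely replays that schedule sequentially. The appeal for AI workloads is that operands are reused as they flow between neighboring PEs, so off-chip memory traffic stays low while every unit does useful work each cycle.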
OpenAI's move mirrors the strategy of tech giants like Google, Amazon, and Meta, which have designed their own specialized chips to run AI workloads [3]. This trend is driven by the need to cut costs, shore up supply, and reduce dependence on third-party semiconductor companies, particularly Nvidia, which has dominated the AI hardware space.
The development of custom chips is expected to give OpenAI more control over its AI infrastructure and potentially lead to cost savings and performance improvements. OpenAI CEO Sam Altman has been vocal about the need for more computing power to serve the increasing demand for products like ChatGPT and to train and run advanced AI models like GPT-5 [1].
As OpenAI transitions from using primarily AMD or Nvidia GPUs on Microsoft Azure to its own custom Broadcom silicon, it positions itself as a major player in AI infrastructure. This move could potentially give OpenAI leverage in future negotiations with other chip suppliers and accelerate the company's AI development capabilities [4]. The success of this venture could reshape the competitive landscape of the AI chip market and influence the strategies of other AI companies and chip manufacturers in the coming years.