19 Sources
[1]
Five months later, Nvidia's $100 billion OpenAI investment plan has fizzled out
In September 2025, Nvidia and OpenAI announced a letter of intent for Nvidia to invest up to $100 billion in OpenAI's AI infrastructure. At the time, the companies said they expected to finalize details "in the coming weeks." Five months later, no deal has closed, Nvidia's CEO now says the $100 billion figure was "never a commitment," and Reuters reports that OpenAI has been quietly seeking alternatives to Nvidia chips since last year.

Reuters also wrote that OpenAI is unsatisfied with the speed of some Nvidia chips for inference tasks, citing eight sources familiar with the matter. Inference is the process by which a trained AI model generates responses to user queries. According to the report, the issue became apparent in OpenAI's Codex, an AI code-generation tool. OpenAI staff reportedly attributed some of Codex's performance limitations to Nvidia's GPU-based hardware.

After the Reuters story was published and Nvidia's stock price took a dive, Nvidia and OpenAI tried to smooth things over publicly. OpenAI CEO Sam Altman posted on X: "We love working with NVIDIA and they make the best AI chips in the world. We hope to be a gigantic customer for a very long time. I don't get where all this insanity is coming from."

What happened to the $100 billion?

The September announcement described a wildly ambitious plan: 10 gigawatts of Nvidia systems for OpenAI, requiring power output roughly equal to that of 10 nuclear reactors. Nvidia CEO Jensen Huang told CNBC at the time that the project would match Nvidia's total GPU shipments for the year. "This is a giant project," Huang said.

But the deal was always a letter of intent, not a binding contract. And in recent weeks, Huang has been walking back the number. On Saturday, he told reporters in Taiwan that the $100 billion was "never a commitment." He said OpenAI had invited Nvidia to invest "up to" that amount and that Nvidia would "invest one step at a time." "We are going to make a huge investment in OpenAI," Huang said.
"Sam is closing the round, and we will absolutely be involved. We will invest a great deal of money, probably the largest investment we've ever made." But when asked if it would be $100 billion, Huang replied, "No, no, nothing like that."

A Wall Street Journal report on Friday said Nvidia insiders had expressed doubts about the transaction and that Huang had privately criticized what he described as a lack of discipline in OpenAI's business approach. The Journal also reported that Huang had expressed concern about the competition OpenAI faces from Google and Anthropic. Huang called those claims "nonsense." Nvidia shares fell about 1.1 percent on Monday following the reports.

Sarah Kunst, managing director at Cleo Capital, told CNBC that the back-and-forth was unusual. "One of the things I did notice about Jensen Huang is that there wasn't a strong 'It will be $100 billion.' It was, 'It will be big. It will be our biggest investment ever.' And so I do think there are some question marks there."

In September, Bryn Talkington, managing partner at Requisite Capital Management, noted the circular nature of such investments to CNBC. "Nvidia invests $100 billion in OpenAI, which then OpenAI turns back and gives it back to Nvidia," Talkington said. "I feel like this is going to be very virtuous for Jensen."

Tech critic Ed Zitron has long criticized Nvidia's circular investments, which touch dozens of tech companies, including major players and startups, all of which are also Nvidia customers. "NVIDIA seeds companies and gives them the guaranteed contracts necessary to raise debt to buy GPUs from NVIDIA," Zitron wrote on Bluesky last September, "even though these companies are horribly unprofitable and will eventually die from a lack of any real demand."

Chips from other places

Outside of sourcing GPUs from Nvidia, OpenAI has reportedly discussed working with startups Cerebras and Groq, both of which build chips designed to reduce inference latency.
But in December, Nvidia struck a $20 billion licensing deal with Groq, which Reuters sources say ended OpenAI's talks with Groq. Nvidia hired Groq's founder and CEO Jonathan Ross along with other senior leaders as part of the arrangement.

In January, OpenAI announced a $10 billion deal with Cerebras instead, adding 750 megawatts of computing capacity for faster inference through 2028. Sachin Katti, who joined OpenAI from Intel in November to lead compute infrastructure, said the partnership adds "a dedicated low-latency inference solution" to OpenAI's platform.

But OpenAI has clearly been hedging its bets. Beyond the Cerebras deal, the company struck an agreement with AMD in October for six gigawatts of GPUs and announced plans with Broadcom to develop a custom AI chip to wean itself off Nvidia. When those chips will be ready, however, is currently unknown.
[2]
Nvidia nears deal to invest $20 billion in OpenAI funding round, source says
Feb 3 (Reuters) - Nvidia (NVDA.O) is nearing a deal to invest roughly $20 billion in OpenAI as part of its latest funding round, a person familiar with the matter told Reuters on Tuesday. ChatGPT maker OpenAI is looking to raise up to $100 billion in its latest funding round, valuing it at about $830 billion, Reuters had reported last week. Companies including Amazon (AMZN.O) and SoftBank Group Corp (9984.T) are racing to forge partnerships with OpenAI, betting that closer ties with the artificial-intelligence startup would give them a competitive edge in the AI race. The Nvidia-OpenAI deal is not finalised yet, the source said. Bloomberg News reported earlier in the day that Nvidia was nearing a deal with OpenAI. The news comes days after the Wall Street Journal reported that Nvidia's September plan to invest $100 billion in OpenAI and supply it with data center chips had stalled after the chipmaker expressed doubts about the deal. The deal had been expected to close within weeks but negotiations have dragged on for months. Nvidia CEO Jensen Huang has denied claims he was unhappy with the ChatGPT maker and said on Saturday that the company plans to make a "huge" investment in OpenAI, probably its largest ever. Huang also told CNBC earlier on Tuesday that Nvidia would consider investing in OpenAI's next fundraising round and the startup's eventual initial public offering. Reuters reported on Monday that OpenAI is unsatisfied with some of Nvidia's latest AI chips, and it has sought alternatives since last year, potentially complicating their relationship. OpenAI Chief Executive Sam Altman said after the Reuters report that Nvidia makes "the best AI chips in the world" and that the company hopes to remain a "gigantic customer for a very long time".
Reporting by Kritika Lamba in Bengaluru, Carlos Méndez in Mexico City, Deepa Seetharaman in San Francisco; Editing by Rashmi Aich and Subhranshu Sahu
[3]
Nvidia, OpenAI appear stalled on their mega deal. But the AI giants still need each other
Five months later, no contract has been signed and no money has changed hands. More concerning to investors, the two companies are seemingly at odds. The Wall Street Journal on Friday reported that the negotiations between the companies were "on ice" after some within Nvidia expressed doubts about OpenAI's business model.

It's been a major topic of conversation in AI since November, when Nvidia warned in the risk factors of its quarterly filing that, "There is no assurance that we will enter into definitive agreements with respect to the OpenAI opportunity or other potential investments."

Despite the reported friction, Nvidia and OpenAI still need each other. Altman has said OpenAI requires a massive number of Nvidia's AI chips to hit its growth targets for revenue, while Huang relies on customers like OpenAI to create services that wow customers and continue driving sales of its costly systems.

Soaring demand and industry hype drove Nvidia's market cap past $5 trillion at its peak in October, though the stock is down 15% from its high, pushing the valuation to $4.4 trillion. OpenAI, meanwhile, was valued on the private market at $500 billion late last year and is reportedly eyeing a valuation of over $800 billion as it pursues another round of cash.

"We are looking forward to Sam closing it and he's doing terrifically," Huang told CNBC's Jim Cramer on Tuesday. "And we will invest in the next round. There is no question about that." Nvidia first invested in OpenAI in October 2024, as part of a $6.6 billion funding round.
[4]
All the Messy Drama Between OpenAI and Nvidia, Explained
OpenAI and Nvidia, the two darlings of AI hype and long-term partners, seem to be having a bit of a falling out. At the center of this rift is a $100 billion Nvidia investment in OpenAI announced in September 2025. As part of the deal, Nvidia would build 10 gigawatts of AI data centers for OpenAI and invest $100 billion in the company in 10 installments, as each gigawatt comes online. In turn, OpenAI is reportedly planning on using the billions of dollars of investment from Nvidia to lease Nvidia chips.

At the time, the investment sparked worries of circular dealmaking in the AI industry and an intricately woven web of financial dependencies that could be a sign of potential instability, echoing that of the dotcom bubble. That is, if even one cog is faulty and demand doesn't pan out as expected, it could create a domino effect that takes the whole system down.

In the September announcement, the companies said that the first gigawatt of computing power would come online in the second half of 2026 and that any other details would be finalized in the coming weeks. But in an Nvidia SEC filing from November, the OpenAI investment was still characterized as just "a letter of intent with an opportunity to invest."

Flash forward a couple of months, and a Wall Street Journal report from last week claims that the talks have still not progressed beyond the early stages and that Nvidia CEO Jensen Huang has been privately criticizing a so-called lack of discipline in OpenAI's business approach. Huang has reportedly spent the last few months privately emphasizing to industry associates that the $100 billion agreement was nonbinding and not finalized. Following that report, Huang tried to reassure reporters in Taipei, Taiwan, by praising OpenAI and saying that Nvidia will "absolutely be involved" in the company's latest funding round ahead of a rumored IPO later this year.
Huang described the planned investment as "probably the largest investment we've ever made," but when asked if it would be over $100 billion, he said, "No, no, nothing like that."

But that was not enough to quell investor fears, because another anonymously sourced report dropped a few days later. Turns out, OpenAI is not happy with the speed at which Nvidia chips can compute inference for some ChatGPT requests, and has been looking for alternative chip providers (such as startups Cerebras and Groq) to take on 10% of its inference needs, according to a Reuters report on Tuesday. The report also claims that OpenAI has blamed some of its AI coding assistant Codex's weaknesses on the Nvidia hardware.

In response, it was now OpenAI executives' turn to praise Nvidia. CEO Sam Altman took to X to say that Nvidia makes "the best AI chips in the world," and infrastructure executive Sachin Katti said that Nvidia is OpenAI's "most important partner for both training and inference."

But it seems that inference and its hefty memory requirements have been weighing heavily on Nvidia lately as well. The importance of inference has been outgrowing that of training as models mature. The agentic AI hype has also increased the amount of data managed by an AI system during the inference stage, further pushing the importance of memory. To account for this, Nvidia bought Groq (no, not Grok), the AI chips startup reportedly eyed by OpenAI, in its largest purchase ever. Then, last month, Nvidia unveiled its new Rubin platform, with a presentation that boasted inference and memory bandwidth wins.

Reportedly, at the center of both Nvidia and OpenAI's fears about each other is increasing competition, posed particularly by Google. Late last year, Google became an even fiercer competitor to both leading AI developer OpenAI and top hardware infrastructure giant Nvidia.
First came tensor processing units (TPUs), Google's custom AI chips designed for inference, which for some tasks are deemed better than the GPUs that dominate the market, most of them Nvidia's. Google's TPUs are not only used by its own AI models, but are also deployed by OpenAI competitor Anthropic and potentially Meta. According to the Wall Street Journal report from last week, Huang is also worried about the competition both Google and Anthropic pose to OpenAI's market dominance. Huang reportedly fears that if OpenAI falls behind, it could impact Nvidia's sales because the company is one of the chipmaker's largest customers. OpenAI had to declare "code red" in December, just a few weeks after Google's latest release, Gemini 3, was considered to outperform ChatGPT. Meanwhile, the company has also been making significant efforts to scale Codex to beat competitor Anthropic's highly popular coding agent Claude Code.

If investor fears are indeed realized, the deal doesn't go through as planned, and OpenAI is unable to pay for its towering financial commitments, then the implications would go far beyond just OpenAI and Nvidia. That's because both companies sit at the center of an intricate, tangled web of AI dealmaking, with numerous multibillion-dollar deals among a handful of companies, including a $300 billion OpenAI-Oracle cloud deal even bigger than the Nvidia commitment. These deals have been a considerable boon for the American economy, and if one deal goes down, it could take everything else with it.
[5]
Exclusive: OpenAI is unsatisfied with some Nvidia chips and looking for alternatives, sources say
SAN FRANCISCO, Feb 2 (Reuters) - OpenAI is unsatisfied with some of Nvidia's latest artificial intelligence chips, and it has sought alternatives since last year, eight sources familiar with the matter said, potentially complicating the relationship between the two highest-profile players in the AI boom.

The ChatGPT-maker's shift in strategy, the details of which are first reported here, is over an increasing emphasis on chips used to perform specific elements of AI inference, the process in which an AI model such as the one that powers the ChatGPT app responds to customer queries and requests. Nvidia remains dominant in chips for training large AI models, while inference has become a new front in the competition.

The decision by OpenAI and others to seek out alternatives in the inference chip market marks a significant test of Nvidia's AI dominance and comes as the two companies are in investment talks. In September, Nvidia said it intended to pour as much as $100 billion into OpenAI as part of a deal that would give the chipmaker a stake in the startup and give OpenAI the cash it needed to buy the advanced chips. The deal had been expected to close within weeks, Reuters reported. Instead, negotiations have dragged on for months.

During that time, OpenAI has struck deals with AMD (AMD.O) and others for GPUs built to rival Nvidia's. But its shifting product road map also has changed the kind of computational resources it requires and bogged down talks with Nvidia, a person familiar with the matter said.

On Saturday, Nvidia CEO Jensen Huang brushed off a report of tension with OpenAI, saying the idea was "nonsense" and that Nvidia planned a huge investment in OpenAI. "Customers continue to choose NVIDIA for inference because we deliver the best performance and total cost of ownership at scale," Nvidia said in a statement.
A spokesperson for OpenAI said in a separate statement that the company relies on Nvidia to power the vast majority of its inference fleet and that Nvidia delivers the best performance per dollar for inference.

Seven sources said that OpenAI is not satisfied with the speed at which Nvidia's hardware can spit out answers to ChatGPT users for specific types of problems such as software development and AI communicating with other software. It needs new hardware that would eventually provide about 10% of OpenAI's inference computing needs, one of the sources told Reuters.

The ChatGPT maker has discussed working with startups including Cerebras and Groq to provide chips for faster inference, two sources said. But Nvidia struck a $20-billion licensing deal with Groq that shut down OpenAI's talks, one of the sources told Reuters. Nvidia's decision to snap up Groq looked like an effort to shore up a portfolio of technology to better compete in a rapidly changing AI industry, chip industry executives said. Nvidia, in a statement, said that Groq's intellectual property was highly complementary to Nvidia's product roadmap.

NVIDIA ALTERNATIVES

Nvidia's graphics processing chips are well-suited for the massive data crunching necessary to train large AI models like ChatGPT that have underpinned the explosive growth of AI globally to date. But AI advancements increasingly focus on using trained models for inference and reasoning, which could be a new, bigger stage of AI, inspiring OpenAI's efforts.

The ChatGPT maker's search for GPU alternatives since last year has focused on companies building chips with large amounts of memory, called SRAM, embedded in the same piece of silicon as the rest of the chip. Squeezing as much costly SRAM as possible onto each chip can offer speed advantages for chatbots and other AI systems as they crunch requests from millions of users.
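The speed advantage of on-chip memory comes down to simple bandwidth arithmetic: when a model generates text for a single user, each new token requires streaming the model's weights from memory, so memory bandwidth, not raw compute, caps the token rate. A minimal back-of-the-envelope sketch of that bound follows; the model size and bandwidth figures are illustrative assumptions, not vendor specifications.

```python
# Rough upper bound on single-user decode speed when generation is
# memory-bandwidth-bound: every token streams the full weight set,
# so tokens/sec <= bandwidth / model size. Numbers are assumptions.

def max_tokens_per_sec(model_bytes: float, bandwidth_bytes_per_sec: float) -> float:
    """Bandwidth-limited ceiling on tokens generated per second."""
    return bandwidth_bytes_per_sec / model_bytes

model_bytes = 70e9   # hypothetical 70B-parameter model at 1 byte/parameter
hbm_bw = 3e12        # assumed ~3 TB/s, the ballpark of HBM on a modern GPU
sram_bw = 50e12      # assumed aggregate on-chip SRAM bandwidth, far higher

print(f"HBM-bound ceiling:  ~{max_tokens_per_sec(model_bytes, hbm_bw):.0f} tokens/s")
print(f"SRAM-bound ceiling: ~{max_tokens_per_sec(model_bytes, sram_bw):.0f} tokens/s")
```

Real systems batch many users together and cache weights, so actual throughput differs, but the ratio illustrates why SRAM-heavy designs can answer an individual coding query faster.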
Inference requires more memory than training because the chip needs to spend relatively more time fetching data from memory than performing mathematical operations. Nvidia and AMD GPU technology relies on external memory, which adds processing time and slows how quickly users can interact with a chatbot.

Inside OpenAI, the issue became particularly visible in Codex, its product for creating computer code, which the company has been aggressively marketing, one of the sources added. OpenAI staff attributed some of Codex's weakness to Nvidia's GPU-based hardware, one source said. In a January 30 call with reporters, CEO Sam Altman said that customers using OpenAI's coding models will "put a big premium on speed for coding work." One way OpenAI will meet that demand is through its recent deal with Cerebras, Altman said, adding that speed is less of an imperative for casual ChatGPT users.

Competing products such as Anthropic's Claude and Google's Gemini benefit from deployments that rely more heavily on the chips Google made in-house, called tensor processing units, or TPUs, which are designed for the sort of calculations required for inference and can offer performance advantages over general-purpose AI chips like the Nvidia-designed GPUs.

NVIDIA ON THE MOVE

As OpenAI made clear its reservations about Nvidia technology, Nvidia approached companies working on SRAM-heavy chips, including Cerebras and Groq, about a potential acquisition, the people said. Cerebras declined and struck a commercial deal with OpenAI announced last month. Cerebras declined to comment. Groq held talks with OpenAI for a deal to provide computing power and received investor interest to fund the company at a valuation of roughly $14 billion, according to people familiar with the discussions. Groq declined to comment. But by December, Nvidia moved to license Groq's tech in a non-exclusive all-cash deal, the sources said.
Although the deal would allow other companies to license Groq's technology, the company is now focusing on selling cloud-based software, as Nvidia hired away Groq's chip designers.

Reporting by Max A. Cherney, Krystal Hu and Deepa Seetharaman in San Francisco; editing by Kenneth Li, Peter Henderson and Nick Zieminski
[6]
What does the disappearance of a $100bn deal mean for the AI economy?
Apparent collapse of Nvidia-OpenAI tie-up raises questions about circular funding and who will bear the cost of AI's expansion

Did the circular AI economy just wobble? Last week it was reported that a much-discussed $100bn deal - announced last September - between Nvidia and OpenAI might not be happening at all. This was a circular arrangement through which the chipmaker would supply the ChatGPT developer with huge sums of money that would largely go towards the purchase of its own chips. It is this type of deal that has alarmed some market watchers, who detect a whiff of the 1999-2000 dotcom bubble in these transactions.

Now it seems that Nvidia was not as solid on this investment as had been widely believed, according to the Wall Street Journal. Negotiations had not progressed, with Jensen Huang, Nvidia's chief executive, privately emphasising that the deal was "non-binding" and "not finalised". Huang appeared to confirm this in Taipei on Saturday, telling reporters that Nvidia would make a "huge" investment into OpenAI's next funding round, but "nothing like" $100bn. A report from Reuters soon suggested that the feeling was mutual: OpenAI was "unsatisfied" with Nvidia's advanced AI chips, it said, and seeking alternatives.

Nvidia's stock has taken a 10% hit so far this week, a flurry of headlines has ensued and both companies have stepped into damage control. "We love working with Nvidia and they make the best AI chips in the world," wrote Sam Altman, OpenAI's CEO, on X. "We hope to be a gigantic customer for a very long time."

Even Oracle appears to be shaken: the software company, which is counting on a $300bn cloud computing deal with OpenAI, said it still expects the startup to be good for its commitment even if it does not receive the full amount from Nvidia. In total, OpenAI has committed to compute deals - the infrastructure for building and powering its AI tools - worth more than $1tn.
"The Nvidia-OpenAI deal has zero impact on our financial relationship with OpenAI," Oracle posted on X. "We remain highly confident in OpenAI's ability to raise funds and meet its commitments."

That a $100bn deal between two of the most crucial players in AI appears to have evaporated over a weekend is unsettling. But there are solid business reasons behind the apparent shake-up, said Alvin Nguyen, analyst at research firm Forrester. OpenAI's ambitious growth trajectory means it will be difficult for the company to stick with a single vendor, especially as it plans new, computationally demanding AI models, he said. "They need chips. They need as many as possible."

As for Nvidia, its commitment to the $100bn may have been loose in the first place, even as it was widely reported. "They will not discourage people from overhyping. Why say something and immediately sucker punch your own share price?"

For a giant startup like OpenAI, manoeuvring in and out of deals - for example, with chipmakers - may just be business as usual, said Nguyen: "You know [Altman's] background as a startup person, and you know the manoeuvres he's doing make sense from a startup perspective." For Nvidia, meanwhile, AI hype is part of selling chips. "You don't know what's going to happen," said Nguyen. "And so you let other people put numbers out there for you and let that drive the hype."

The issue is, of course, that investors and other companies like Oracle may have taken widely reported $100bn commitments seriously. In response to a query from the Guardian, an OpenAI spokesperson referred to Altman's X post, and to remarks Huang made to CNBC on Tuesday, including: "There is no drama." The spokesperson added: "Our teams are actively working through details of our partnership. Nvidia technology has underpinned our breakthroughs from the start, powers our systems today, and will remain central as we scale what comes next." Nvidia and Oracle did not respond to requests for comment.
This is all taking place against the backdrop of a changing investment landscape for AI, where hype is giving way to realities about which aspects of the technology are actually going to earn money. While investors ponder whether OpenAI is going to be able to pay for a $1.4tn compute deal, reality is biting further down the AI food chain.

This week has seen a massive sell-off in certain software stocks, prompted in part by the launch of a new Anthropic AI tool that can carry out a number of professional services, which has led to fears that business models exposed to competition from AI products will be disrupted. This is the flip-side of "jagged AI", the term for advanced AI tools having uneven talents, such as being good at sifting through documents but less good at solving complex maths problems. If advanced systems are good at automating legal work, then legacy companies in service industries will suffer. The losers are beginning to emerge and are being picked up by investors.

At the top of the AI pyramid the competitive effects are also biting. OpenAI's chatbot, ChatGPT, is losing ground to competitors. Data released on Tuesday show its market share has eroded from 69% to 45% owing to the rise of Google's Gemini, xAI's Grok and Anthropic's Claude. OpenAI appears to have retreated from soaring talk of super-intelligence in the past months, focusing instead on profitable mundanities such as adverts and adult content. The apparent evaporation of a $100bn deal may be of a piece with last year's sci-fi rhetoric meeting this year's practicalities.

The question is, who might be left holding the bill? "I think there will be knock-on effects," said Nguyen. "I mean, it's that statement: the markets can stay irrational longer than you can stay solvent."
[7]
Nvidia describes OpenAI as one of most consequential companies ever
Nvidia will invest heavily in OpenAI despite not specifying exact figures

Nvidia chief executive Jensen Huang rejected claims that the company was pulling back from OpenAI. "It's nonsense," he said, speaking to reporters during a recent visit to Taipei, describing OpenAI as "one of the most consequential companies of our time" and confirming that Nvidia would "definitely participate" in the next funding round. Huang also stated Nvidia "will invest a great deal of money" because OpenAI is "such a good investment," emphasizing continued support for the partnership.

In a statement to The Wall Street Journal, an OpenAI spokesperson backed this idea, stating that the companies are "actively working through the details of our partnership." Nvidia "has underpinned our breakthroughs from the start, powers our systems today, and will remain central as we scale what comes next."

Despite the strong language, Huang declined to provide any figures. "Let Sam announce how much he's going to raise... It's for him to decide," he said, referring to OpenAI CEO Sam Altman. That reluctance is notable because earlier reporting set expectations very high - back in December 2025, reports said OpenAI was exploring a $100 billion funding round. More recently, The New York Times said Nvidia, Microsoft, Amazon, and SoftBank were all discussing potential investments.

At the same time, reports said Huang has begun emphasizing that Nvidia's earlier commitment of up to $100 billion was nonbinding. Recent discussions within the company have focused on scaling the investment, with some conversations centering on an equity stake measured in tens of billions of dollars. Financial investment is only one part of the relationship between Nvidia and OpenAI. Beyond the pledged funding, the two companies planned to build massive computing capacity, including tens of thousands of servers.
OpenAI's systems rely heavily on Nvidia chips, and that dependence extends into cloud hosting environments where much of its AI work takes place. These operational links also support the development and deployment of AI tools that require continuous access to high-performance computing resources.

Some reports say Huang has privately criticized aspects of OpenAI's business strategy, expressing concerns about competition from companies such as Anthropic and Google. None of these claims has independent confirmation, but a contrast remains between confident public quotes and reports that describe caution in private discussions.

Via TechCrunch
[8]
Uh Oh... Nvidia's $100 Billion Deal With OpenAI Has Fallen Apart
AI chipmaker Nvidia has been at the center of the enormous AI hype wave that has gripped global markets, ascending to become the most valuable company in the world. Yet despite Nvidia's dominating presence on Wall Street, OpenAI is getting cold feet about the company's offerings.

After announcing a blockbuster $100 billion deal in September -- which escalated concerns of AI companies passing the same money around in circular dealmaking -- the ChatGPT maker may have changed its mind, as the Wall Street Journal reported last week. And sources told Reuters this week that the Sam Altman-led outfit has deemed Nvidia's latest chips not up to snuff, especially when it comes to AI inference, the process of using a machine learning model to generate new data, which has become a major focus for OpenAI.

The deal with Nvidia had been expected to close within weeks, but negotiations have instead dragged on for months. In the meantime, OpenAI has signed major deals with competing chipmaker AMD, among others. Then, on Tuesday, Bloomberg reported that Nvidia was nearing a deal to invest $20 billion in OpenAI instead -- a mere fifth of what was originally on the table.

That the larger deal fell apart highlights ongoing tensions as US software companies continue to grapple with investors getting cold feet over the AI industry's astronomical spending plans. Despite trillions of dollars of commitments to scale up AI infrastructure, companies aren't expected to make any profit for many years to come.

Nvidia's dustup with OpenAI appeared to have hit a nerve, causing the former's stock price to continue its weeks-long plunge, dropping almost nine percent over the last five days. The company's stock has slid over seven percent over the last month.

Both Nvidia CEO Jensen Huang and Altman have since publicly denied that there's been any strain on the relationship between the two companies. "We love working with NVIDIA and they make the best AI chips in the world.
We hope to be a gigantic customer for a very long time," Altman tweeted after Reuters published its story on Monday. "I don't get where all this insanity is coming from."

"We will definitely participate in the next round of financing because it's such a good investment," Huang told reporters over the weekend.

As Ars Technica points out, the original $100 billion deal for ten gigawatts of compute, something that would require the equivalent of ten nuclear reactors to sustain, was never set in stone, as it was just a letter of intent. It's certainly possible the original figure was simply pulled out of thin air. As Huang told reporters, the sum was "never a commitment."

"We are going to make a huge investment in OpenAI," he added. "Sam [Altman] is closing the round, and we will absolutely be involved." "We will invest a great deal of money," he added, arguing it would be the "largest investment we've ever made."
[9]
OpenAI reportedly isn't happy with Nvidia's GPUs while Nvidia's $100 billion investment plan in OpenAI is said to have 'stalled': Is the AI honeymoon over?
OpenAI reportedly isn't happy with the performance of Nvidia's GPUs. Meanwhile, Nvidia is having second thoughts about pumping $100 billion into OpenAI. These are the latest rumours around the two biggest players in AI. So, could their unholy alliance be faltering? Last week, the Wall Street Journal claimed that Nvidia is rethinking its previously announced plans to invest $100 billion in OpenAI over concerns regarding its ability to compete with the likes of Google and Anthropic. Then yesterday, Reuters posted a story detailing the reported dissatisfaction of OpenAI with Nvidia's GPUs, specifically for the task of inferencing AI models. If the latter story looks a lot like somebody at OpenAI hitting back at the original Wall Street Journal claims, the two narratives combined feel like just the sort of tit-for-tat off-the-record briefing that occurs when an alliance is beginning to falter. For now, none of this is official. It's all rumour. However, it is true that Nvidia's intention to invest $100 billion in OpenAI was announced in September and has yet to be finalised. The Wall Street Journal claims that Nvidia CEO Jensen Huang has "privately criticized what he has described as a lack of discipline in OpenAI's business approach and expressed concern about the competition it faces from the likes of Google and Anthropic." In public, Huang has defended Nvidia's intentions when it comes to investments in OpenAI, but has stopped short of explicitly reconfirming the $100 billion deal. "We will invest a great deal of money, probably the largest investment we've ever made," he said. But he also retorted, "no, no, nothing like that," when queried whether that investment would top $100 billion. As for OpenAI, Reuters says that it is, "unsatisfied with some of Nvidia's latest artificial intelligence chips, and it has sought alternatives since last year." 
It's claimed that OpenAI is shifting its emphasis away from training AI in favour of inference, or running AI models as services for customers. It's for that latter task, inference, that OpenAI is said to have found Nvidia's GPUs wanting. "Seven sources said that OpenAI is not satisfied with the speed at which Nvidia's hardware can spit out answers to ChatGPT users for specific types of problems such as software development and AI communicating with other software," Reuters claims. It's certainly a somewhat plausible narrative. You could argue that Nvidia's GPUs are big, complex, relatively general-purpose hardware that's suboptimal for the specific task of inference. By way of example, Microsoft has recently announced its latest ASIC, or Application Specific Integrated Circuit, specifically for inferencing. ASICs are chips designed to do a single, narrowly defined task very efficiently. And it's probably fair to say that, in the long run, most industry observers think that AI inferencing, at the very least, will be run on ASICs rather than GPUs. A handy parallel case study of the power of ASICs is cryptocurrency mining. That too used to be done on GPUs. But ASICs are now far, far more effective. Anywho, it's perhaps inevitable that the OpenAI-Nvidia love-in would falter to some degree. Both companies have a whiff of "world domination" about them and, in the end, their interests are never going to align perfectly. As per the Wall Street Journal report, it's very likely Nvidia will still invest billions in OpenAI. And for now, no doubt OpenAI has little choice but to keep buying billions of dollars' worth of Nvidia GPUs. But if these stories have any truth in them, the honeymoon is probably over.
[10]
Nvidia nears deal to invest $20 billion in OpenAI funding round: Report
Nvidia CEO Jensen Huang told CNBC earlier in the day that the company would consider investing in OpenAI's next fundraising round and the startup's eventual IPO, following recent reports that the deal had stalled. Nvidia is nearing a deal to invest roughly $20 billion in OpenAI as part of its latest funding round, a person familiar with the matter told Reuters on Tuesday. ChatGPT maker OpenAI is looking to raise up to $100 billion in its latest funding round, valuing it at about $830 billion, Reuters had reported last week. Companies including Amazon and SoftBank Group Corp are racing to forge partnerships with OpenAI, betting that closer ties with the artificial-intelligence startup would give them a competitive edge in the AI race. The Nvidia-OpenAI deal is not finalised yet, the source said. Bloomberg News reported earlier in the day that Nvidia was nearing a deal with OpenAI. The news comes days after the Wall Street Journal reported that Nvidia's September plan to invest $100 billion in OpenAI and supply it with data center chips had stalled after the chipmaker expressed doubts about the deal. The deal had been expected to close within weeks but negotiations have dragged on for months. Nvidia CEO Jensen Huang has denied claims he was unhappy with the ChatGPT maker and said on Saturday that the company plans to make a "huge" investment in OpenAI, probably its largest ever. Huang also told CNBC earlier on Tuesday that Nvidia would consider investing in OpenAI's next fundraising round and the startup's eventual initial public offering. Reuters reported on Monday that OpenAI is unsatisfied with some of Nvidia's latest AI chips, and it has sought alternatives since last year, potentially complicating their relationship. OpenAI Chief Executive Sam Altman said after the Reuters report that Nvidia makes "the best AI chips in the world" and that the company hopes to remain a "gigantic customer for a very long time".
[11]
Nvidia Reportedly Nears Record $20 Billion OpenAI Investment As Jensen Huang And Sam Altman Deny Reports Of Strained Partnership - Amazon.com (NASDAQ:AMZN), NVIDIA (NASDAQ:NVDA)
Jensen Huang-led Nvidia Corp (NASDAQ:NVDA) is reportedly planning to invest $20 billion in ChatGPT-maker OpenAI. Nvidia's Biggest Bet Yet On The AI Boom As part of OpenAI's latest funding round, Nvidia is close to finalizing a $20 billion investment in the AI startup, reported Bloomberg on Tuesday, citing people familiar with the matter. This investment, if completed, would mark Nvidia's single largest investment in OpenAI to date. As per the report, the deal is not final and the terms could change. OpenAI and Nvidia did not immediately respond to Benzinga's request for comment. OpenAI Targets Up To $100 Billion In New Funding Previously, it was reported that OpenAI intends to raise up to $100 billion in its new funding round. Partnership Under Scrutiny Amid Chip Reports The latest report comes amid scrutiny over the partnership between OpenAI and Nvidia. Last week, it was reported that Nvidia's proposed plan to invest up to $100 billion in OpenAI, which was announced in September 2025, has stalled over internal concerns. On Monday, another report emerged stating that OpenAI is not happy with some of Nvidia's latest AI chips and has been seeking alternatives. However, both Huang and OpenAI CEO Sam Altman have pushed back on these reports. Price Action: During Tuesday's regular session, Nvidia closed down 2.84% at $180.34 and slipped another 0.58% to $179.30 in after-hours trading, according to Benzinga Pro. Nvidia shows a strong price trend across the short, medium and long terms, though it carries a weak value ranking, according to Benzinga's Edge Stock Rankings. Photo Courtesy: Mehaniq on Shutterstock.com. Market News and Data brought to you by Benzinga APIs
[12]
The NVIDIA-OpenAI Fiasco Isn't About Compute, It's About Control; Here's How One of the World's Biggest AI Partnerships Is Playing Out
NVIDIA and OpenAI are all that's being talked about in the AI world, not because there have been changes in their commitments, but because the scale of the partnership is so immense that it captures all the market spotlight. Before we dive into the ongoing NVIDIA-OpenAI fiasco, it's important to note the fundamentals that underpin the partnership. Team Green is currently the world's largest AI infrastructure provider, and almost all hyperscalers are dependent on the company, not just for hardware, but also for financial commitments in the form of "collaborations" or whatever you call it. At the same time, NVIDIA has ramped up its external investments in frontier labs, such as Anthropic and OpenAI, mainly because Jensen claims their work is "revolutionary enough" to warrant investment. When you are as big as NVIDIA, it's important to keep key entities close, and in the case of OpenAI, Sam Altman has enjoyed an exclusive relationship with Jensen, not just in finance but also in compute access. This relationship reached a decisive point when NVIDIA decided to invest up to $100 billion into a "non-binding", "inconclusive", "not final" arrangement, and it is really important to keep a focus on the words that I have highlighted previously. OpenAI's successful GPT-5 release drove NVIDIA's investment, but in recent days, market speculation and industry chatter suggest that internal sentiment towards OpenAI has changed. Now, there are two major aspects to this story that we'll cover. The first and more important is, of course, the compute factor, while the other is whether both sides are getting a "worthy" investment/collaboration. The second reason coexists with the first, but by highlighting it separately, we can discuss industry dynamics on a much broader scale, helping our readers realize that the actual situation is far bigger than what's being discussed. Let me define 'compute factor' more extensively.
Since it's all about the infrastructure race, companies are racing to secure the best TCOs by either pursuing NVIDIA on attractive deals or exploring the ASIC route, hoping to lower operating costs or at least convince NVIDIA to get into an agreement. One of the major highlights of the NVIDIA-OpenAI arrangement was the supply of Vera Rubin clusters, in a deal worth $100 billion, which would bring on 10GW of capacity to power "OpenAI's next-generation AI infrastructure". On a surface level, the arrangement sounds optimal, since, as OpenAI, you are essentially getting exclusive access to and commitment from the world's largest GPU company, that too as you head into the pre-IPO phase. For NVIDIA, well, their next-gen hardware is validated by one of the world's largest frontier labs, allowing it to drive interest from hyperscalers and other segments. But here's where things take a twist, and I'll justify this. With Vera Rubin, the price works out to around $10 billion per GW of capacity, based on what we have seen in official PRs. Today's Reuters report suggests that OpenAI found NVIDIA's chips not 'worthwhile' enough, and that the company even had plans to explore deals with manufacturers like Groq and Cerebras, even though those firms are hardly involved in the large-scale AI infrastructure race at all. While Sam Altman himself has denied such claims, there is no doubt that within the company's ranks, there is skepticism towards whether NVIDIA's partnership yields the optimal outcome, in terms of the $/GW capacity coming onboard. When you see OpenAI eyeing Groq or Cerebras, the idea, of course, is to gain inference speed and latency advantages over NVIDIA's tech stack by finding a middle ground. Reuters also suggested that OpenAI feels NVIDIA lags in inference, and that the AI lab would need "hardware that would eventually provide about 10% of OpenAI's inference computing needs".
Cerebras is supplying OpenAI with 750MW of capacity at around $10 billion, which, yet again, isn't optimal when you look at per-GW figures versus NVIDIA. But the race here is definitely towards who gets the better deal on the compute front, as seen in today's Reuters report. Yet again, neither party has discussed this at all, and when both Jensen and Altman were asked about their commitments to each other, both said they are on track with the initial plan. The recent NVIDIA-OpenAI talks, especially regarding the commitments being switched up, are part of a "narrative" that NVIDIA has already discussed. We double-checked NVIDIA's PR, 10-Q filing, and CFO Colette Kress's statements, and realized that NVIDIA never actually decided to invest $100 billion in OpenAI directly; instead, it was a multi-GW plan divided into multiple milestones. With each milestone, NVIDIA would ramp up its investments, and the total would reach $100 billion; hence, there wasn't a one-time payment commitment. To support the partnership, NVIDIA intends to invest up to $100 billion in OpenAI progressively as each gigawatt is deployed. (NVIDIA's PR) There is no assurance that we will enter into definitive agreements with respect to the OpenAI opportunity or other potential investments, or that any investment will be completed on expected terms (10-Q filing) A reporter asked NVIDIA's CEO Jensen Huang about the status of the OpenAI deal, and many viewers over the internet felt that Huang was 'agitated' by the questions, claiming that the reporter was "putting words in his mouth", expressing his frustration towards recent market rumors. Huang also stated that it wouldn't be wise to commit the full amount to OpenAI upfront, and that the company will still make its largest-ever investment into the AI lab. We never said we would invest $100B in one round. There was never a commitment. They invited us to invest up to $100B. We will invest one step at a time. I told you just now. You keep putting words in my mouth.
- NVIDIA's Jensen Huang On NVIDIA's front, the idea that the OpenAI deal was a 'non-binding' agreement seems solidified, so let's look at what's up in Sam Altman's camp. Well, first off, the company is losing the race in the agentic AI era right now, as Anthropic's Claude takes the lead, credited to its robust 'applications layer' with Claude Code, Claude Cowork, and many wrappers built around Opus 4.5. Given that OpenAI has held a lead in the AI market for several years now, the sudden competition has sparked speculation about the AI lab's future. More importantly, OpenAI is racing towards an IPO this year, aiming to raise immense capital to become the first AI lab to go public and potentially cross the $500 billion market capitalization threshold. The pre-IPO phase is proving difficult for OpenAI at the moment, as revenue projections are falling, raising concerns about whether the company's $1.4 trillion in commitments over the next decade can be fulfilled. Combine all of the above, and you'll realize that the NVIDIA-OpenAI story is all speculation for now. There are many industry elements and strategies to keep in mind when you look at the current AI landscape, and when you factor in politics within businesses, you'll realize that the OpenAI-NVIDIA discussions carry a lot of weight. For now, both parties are fully committed, but it would be interesting to see how the future unfolds.
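The per-gigawatt figures quoted in the article above can be sanity-checked with simple arithmetic. This is a rough sketch using only the publicly reported totals (approximate, as cited in press reports):

```python
# Rough cost-per-gigawatt comparison using the figures reported above.
# All numbers are approximate, as cited in press reports.

nvidia_total, nvidia_gw = 100e9, 10.0      # $100B letter of intent for 10 GW
cerebras_total, cerebras_gw = 10e9, 0.75   # ~$10B deal for 750 MW

nvidia_per_gw = nvidia_total / nvidia_gw
cerebras_per_gw = cerebras_total / cerebras_gw

print(f"NVIDIA:   ${nvidia_per_gw / 1e9:.1f}B per GW")    # ~$10.0B per GW
print(f"Cerebras: ${cerebras_per_gw / 1e9:.1f}B per GW")  # ~$13.3B per GW
```

On raw $/GW, the Cerebras deal comes out roughly a third more expensive, which underlines the article's point: per-GW cost alone doesn't explain the Cerebras deal, so inference speed per user must be the differentiator.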
[13]
OpenAI seeks alternatives to Nvidia for AI inference, testing chipmaker's dominance
OpenAI is unsatisfied with some of Nvidia's latest artificial intelligence chips, and it has sought alternatives since last year, eight sources familiar with the matter said, potentially complicating the relationship between the two highest-profile players in the AI boom. The ChatGPT-maker's shift in strategy, the details of which are first reported here, is over an increasing emphasis on chips used to perform specific elements of AI inference, the process by which an AI model, such as the one that powers the ChatGPT app, responds to customer queries and requests. Nvidia remains dominant in chips for training large AI models, while inference has become a new front in the competition. This decision by OpenAI and others to seek out alternatives in the inference chip market marks a significant test of Nvidia's AI dominance and comes as the two companies are in investment talks. In September, Nvidia said it intended to pour as much as $100 billion into OpenAI as part of a deal that gave the chipmaker a stake in the startup and gave OpenAI the cash it needed to buy the advanced chips. The deal had been expected to close within weeks, Reuters reported. Instead, negotiations have dragged on for months. During that time, OpenAI has struck deals with AMD and others for GPUs built to rival Nvidia's. But its shifting product road map also has changed the kind of computational resources it requires and bogged down talks with Nvidia, a person familiar with the matter said. On Saturday, Nvidia CEO Jensen Huang brushed off a report of tension with OpenAI, saying the idea was "nonsense" and that Nvidia planned a huge investment in OpenAI. "Customers continue to choose NVIDIA for inference because we deliver the best performance and total cost of ownership at scale," Nvidia said in a statement.
A spokesperson for OpenAI in a separate statement said the company relies on Nvidia to power the vast majority of its inference fleet and that Nvidia delivers the best performance per dollar for inference. After the Reuters story was published, OpenAI Chief Executive Sam Altman wrote in a post on X that Nvidia makes "the best AI chips in the world" and that OpenAI hoped to remain a "gigantic customer for a very long time." Seven sources said that OpenAI is not satisfied with the speed at which Nvidia's hardware can spit out answers to ChatGPT users for specific types of problems such as software development and AI communicating with other software. It needs new hardware that would eventually provide about 10% of OpenAI's inference computing needs in the future, one of the sources told Reuters. The ChatGPT maker has discussed working with startups including Cerebras and Groq to provide chips for faster inference, two sources said. But Nvidia struck a $20-billion licensing deal with Groq that shut down OpenAI's talks, one of the sources told Reuters. Nvidia's decision to snap up key talent at Groq looked like an effort to shore up a portfolio of technology to better compete in a rapidly changing AI industry, chip industry executives said. Nvidia, in a statement, said that Groq's intellectual property was highly complementary to Nvidia's product roadmap. Nvidia alternatives Nvidia's graphics processing chips are well-suited for massive data crunching necessary to train large AI models like ChatGPT that have underpinned the explosive growth of AI globally to date. But AI advancements increasingly focus on using trained models for inference and reasoning, which could be a new, bigger stage of AI, inspiring OpenAI's efforts. The ChatGPT-maker's search for GPU alternatives since last year focused on companies building chips with large amounts of memory embedded in the same piece of silicon as the rest of the chip, called SRAM. 
Squishing as much costly SRAM as possible onto each chip can offer speed advantages for chatbots and other AI systems as they crunch requests from millions of users. Inference requires more memory than training because the chip needs to spend relatively more time fetching data from memory than performing mathematical operations. Nvidia and AMD GPU technology relies on external memory, which adds processing time and slows how quickly users can interact with a chatbot. Inside OpenAI, the issue became particularly visible in Codex, its product for creating computer code, which the company has been aggressively marketing, one of the sources added. OpenAI staff attributed some of Codex's weakness to Nvidia's GPU-based hardware, one source said. In a January 30 call with reporters, Altman said that customers using OpenAI's coding models will "put a big premium on speed for coding work." One way OpenAI will meet that demand is through its recent deal with Cerebras, Altman said, adding that speed is less of an imperative for casual ChatGPT users. Competing products such as Anthropic's Claude and Google's Gemini benefit from deployments that rely more heavily on the chips Google made in-house, called tensor processing units, or TPUs, which are designed for the sort of calculations required for inference and can offer performance advantages over general-purpose AI chips like the Nvidia-designed GPUs. Nvidia on the move As OpenAI made clear its reservations about Nvidia technology, Nvidia approached companies working on SRAM-heavy chips, including Cerebras and Groq, about a potential acquisition, the people said. Cerebras declined and struck a commercial deal with OpenAI, announced last month. Cerebras declined to comment. Groq held talks with OpenAI for a deal to provide computing power and received investor interest to fund the company at a valuation of roughly $14 billion, according to people familiar with the discussions. Groq declined to comment. 
But by December, Nvidia moved to license Groq's tech in a non-exclusive all-cash deal, the sources said. Although the deal would allow other companies to license Groq's technology, the company is now focusing on selling cloud-based software, as Nvidia hired away Groq's chip designers.
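The claim above, that inference is bandwidth-bound because the chip spends more time fetching weights than doing math, can be illustrated with a back-of-the-envelope roofline estimate. The model size and bandwidth figures below are illustrative assumptions, not specs of any particular product:

```python
def decode_tokens_per_sec(param_count: float, bytes_per_param: float,
                          bandwidth_bytes_per_sec: float) -> float:
    """Upper bound on single-stream decode speed when generation is
    memory-bandwidth-bound: every new token requires streaming the
    model's weights from memory at least once."""
    bytes_per_token = param_count * bytes_per_param
    return bandwidth_bytes_per_sec / bytes_per_token

# Hypothetical 70B-parameter model served at 1 byte per weight (assumed).
params = 70e9

# Illustrative bandwidths: off-chip HBM vs. much faster on-chip SRAM.
hbm_bw = 3.3e12    # ~3.3 TB/s (assumed)
sram_bw = 25e12    # ~25 TB/s aggregate (assumed)

print(f"HBM-bound:  ~{decode_tokens_per_sec(params, 1, hbm_bw):.0f} tokens/s")
print(f"SRAM-bound: ~{decode_tokens_per_sec(params, 1, sram_bw):.0f} tokens/s")
```

Under these assumed numbers, the SRAM-fed design generates tokens several times faster for a single user, which is why an SRAM-heavy chip can feel dramatically snappier for latency-sensitive workloads like coding agents even when a GPU wins on total batched throughput.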
[14]
OpenAI is unsatisfied with some Nvidia chips and looking for alternatives, sources say
The ChatGPT-maker's shift in strategy, the details of which are first reported here, is over an increasing emphasis on chips used to perform specific elements of AI inference, the process when an AI model such as the one that powers the ChatGPT app responds to customer queries and requests. OpenAI is unsatisfied with some of Nvidia's latest artificial intelligence chips, and it has sought alternatives since last year, eight sources familiar with the matter said, potentially complicating the relationship between the two highest-profile players in the AI boom. Nvidia remains dominant in chips for training large AI models, while inference has become a new front in the competition. This decision by OpenAI and others to seek out alternatives in the inference chip market marks a significant test of Nvidia's AI dominance and comes as the two companies are in investment talks. In September, Nvidia said it intended to pour as much as $100 billion into OpenAI as part of a deal that gave the chipmaker a stake in the startup and gave OpenAI the cash it needed to buy the advanced chips. The deal had been expected to close within weeks, Reuters reported. Instead, negotiations have dragged on for months. During that time, OpenAI has struck deals with AMD and others for GPUs built to rival Nvidia's. But its shifting product road map also has changed the kind of computational resources it requires and bogged down talks with Nvidia, a person familiar with the matter said.
On Saturday, Nvidia CEO Jensen Huang brushed off a report of tension with OpenAI, saying the idea was "nonsense" and that Nvidia planned a huge investment in OpenAI. "Customers continue to choose NVIDIA for inference because we deliver the best performance and total cost of ownership at scale," Nvidia said in a statement. A spokesperson for OpenAI in a separate statement said the company relies on Nvidia to power the vast majority of its inference fleet and that Nvidia delivers the best performance per dollar for inference. After the Reuters story was published, OpenAI Chief Executive Sam Altman wrote in a post on X that Nvidia makes "the best AI chips in the world" and that OpenAI hoped to remain a "gigantic customer for a very long time". Seven sources said that OpenAI is not satisfied with the speed at which Nvidia's hardware can spit out answers to ChatGPT users for specific types of problems such as software development and AI communicating with other software. It needs new hardware that would eventually provide about 10% of OpenAI's inference computing needs in the future, one of the sources told Reuters. The ChatGPT maker has discussed working with startups including Cerebras and Groq to provide chips for faster inference, two sources said. But Nvidia struck a $20-billion licensing deal with Groq that shut down OpenAI's talks, one of the sources told Reuters. Nvidia's decision to snap up key talent at Groq looked like an effort to shore up a portfolio of technology to better compete in a rapidly changing AI industry, chip industry executives said. Nvidia, in a statement, said that Groq's intellectual property was highly complementary to Nvidia's product roadmap. Nvidia alternatives Nvidia's graphics processing chips are well-suited for massive data crunching necessary to train large AI models like ChatGPT that have underpinned the explosive growth of AI globally to date. 
But AI advancements increasingly focus on using trained models for inference and reasoning, which could be a new, bigger stage of AI, inspiring OpenAI's efforts. The ChatGPT-maker's search for GPU alternatives since last year focused on companies building chips with large amounts of memory embedded in the same piece of silicon as the rest of the chip, called SRAM. Squishing as much costly SRAM as possible onto each chip can offer speed advantages for chatbots and other AI systems as they crunch requests from millions of users. Inference requires more memory than training because the chip needs to spend relatively more time fetching data from memory than performing mathematical operations. Nvidia and AMD GPU technology relies on external memory, which adds processing time and slows how quickly users can interact with a chatbot. Inside OpenAI, the issue became particularly visible in Codex, its product for creating computer code, which the company has been aggressively marketing, one of the sources added. OpenAI staff attributed some of Codex's weakness to Nvidia's GPU-based hardware, one source said. In a January 30 call with reporters, Altman said that customers using OpenAI's coding models will "put a big premium on speed for coding work." One way OpenAI will meet that demand is through its recent deal with Cerebras, Altman said, adding that speed is less of an imperative for casual ChatGPT users. Competing products such as Anthropic's Claude and Google's Gemini benefit from deployments that rely more heavily on the chips Google made in-house, called tensor processing units, or TPUs, which are designed for the sort of calculations required for inference and can offer performance advantages over general-purpose AI chips like the Nvidia-designed GPUs. Nvidia on the move As OpenAI made clear its reservations about Nvidia technology, Nvidia approached companies working on SRAM-heavy chips, including Cerebras and Groq, about a potential acquisition, the people said. 
Cerebras declined and struck a commercial deal with OpenAI announced last month. Cerebras declined to comment. Groq held talks with OpenAI for a deal to provide computing power and received investor interest to fund the company at a valuation of roughly $14 billion, according to people familiar with the discussions. Groq declined to comment. But by December, Nvidia moved to license Groq's tech in a non-exclusive all-cash deal, the sources said. Although the deal would allow other companies to license Groq's technology, the company is now focusing on selling cloud-based software, as Nvidia hired away Groq's chip designers.
[15]
Why OpenAI is Unhappy with Some Nvidia Chips and Searching for Alternatives - NVIDIA (NASDAQ:NVDA)
OpenAI is exploring alternatives to some of NVIDIA Corp's (NASDAQ:NVDA) latest AI chips, potentially altering the dynamics between two key players in the AI sector. This strategic move by OpenAI highlights the company's focus on improving AI inference performance, essential for applications like ChatGPT. OpenAI's decision arises from dissatisfaction with the speed of Nvidia's hardware in handling specific tasks. OpenAI is considering partnerships with companies such as Cerebras and Groq to enhance its inference capabilities, Reuters reports. OpenAI's Bold Move Against Nvidia's Dominance The shift in OpenAI's strategy comes amid prolonged negotiations with Nvidia over a potential $100 billion investment. While Nvidia remains a leader in training AI models, OpenAI's pursuit of alternatives in the inference chip market could test Nvidia's dominance. OpenAI Chief Executive Sam Altman expressed a desire to remain a significant customer of Nvidia, despite seeking alternatives. Nvidia makes "the best AI chips in the world," Altman stated, as quoted by the outlet, emphasizing the company's reliance on Nvidia for most of its inference needs. Are Alternative AI Chips The Future? OpenAI's pursuit of alternative chips focuses on SRAM-heavy designs, which could offer speed advantages for AI applications. Nvidia's reliance on external memory in its GPUs adds processing time, a concern for OpenAI's coding product, Codex. According to Reuters, OpenAI's collaboration with Cerebras aims to meet the demand for faster performance in coding models. Customers using OpenAI's coding models will "put a big premium on speed for coding work," Altman noted during a recent call. Nvidia has also shown interest in acquiring companies like Cerebras and Groq to bolster its technology portfolio. However, Cerebras opted for a commercial deal with OpenAI, while Nvidia secured a licensing agreement with Groq.
Photo: Prathmesh T on Shutterstock.com. This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.
[16]
What's Up With OpenAI and Nvidia? Conflicting Signals About the $100 Billion Deal Emerge
On the one hand, Jensen Huang has committed to investing more in OpenAI; on the other, Sam Altman seems to be looking elsewhere for AI chips. Are we witnessing the first signs of discord in the AI universe built on a bubble of circular investments and mutual admiration? If media reports are to be believed, all's not well in this Garden of Eden where the first made-for-each-other couple, OpenAI and Nvidia, are sending conflicting signals of a possible parting of ways. At the crux of the lovers' tiff is OpenAI's reported dissatisfaction with Nvidia's chips when it comes to inference tasks, which is forcing it to look elsewhere for partners. However, Nvidia says all's well and has even committed to participating in its partner's fundraising round as well as in their hyped-up initial public offering (IPO). "There's no drama involved. Everything's on track," Nvidia boss Jensen Huang told CNBC's Jim Cramer yesterday. He further revealed that Nvidia will invest in the next round as well as park funds in any future OpenAI round, leading up to the eventual IPO. Now that sounds like a solid commitment from one partner to the other, in whose capabilities it has full faith. A day before this announcement, OpenAI boss Sam Altman took to X with a statement that only heightened the confusion. "We love working with NVIDIA and they make the best AI chips in the world," he wrote. "We hope to be a gigantic customer for a very long time. I don't get where all this insanity is coming from," he wrote. Just so that readers remember, Nvidia first invested in OpenAI back in October 2024, as part of the latter's early $6.6 billion fundraise efforts. So, where exactly is it coming from? Who is fuelling the fire, if at all there is one? To be honest, neither party can claim ignorance here.
In September last year, Huang and Altman announced a letter of intent that would involve a $100 billion investment in the AI lab by Nvidia, with OpenAI building its AI infrastructure around its partner's tech, requiring up to 10 GW of power. Everyone wished them well and a happy future together. However, in an SEC filing two months later, Nvidia said the deal had not been finalised. In its quarterly financial report, the chipmaker told investors that the announcement wasn't a contract. "There is no assurance that we will enter into definitive agreements with respect to the OpenAI opportunity or other potential investments, or that any investment will be completed on expected terms," Nvidia said. And then there was absolute silence on this point ever since. Nvidia continued its investment spree, which included funding Intel, following the Trump administration's investment, and Anthropic, a competitor of OpenAI. However, there were timely reminders from Nvidia about its commitment, with Huang even noting that "Everything that OpenAI does runs on Nvidia today." Then came the Wall Street Journal report that the Nvidia-OpenAI deal was "on ice". The issues between the two revolved around Nvidia's discomfiture over its partner's business model, while OpenAI appeared to hit back suggesting that the AI chips aren't as good as it hoped they would be. In fact, a Reuters report claims Sam Altman was looking for options as far back as a year ago. Really? Then why get into bed with the same partner last September? Or was it just a show of support that seems to be driving up the AI bubble in recent times? A deep-dive into OpenAI's reported issues seems to boil down to just one word - inference. A key parameter around measuring AI success. AI models such as ChatGPT require two phases, one of which is training, where the model is fed massive loads of data to learn patterns and associations.
Nvidia's GPUs have powered this process so efficiently that they're considered the masters of all they survey in this field. However, when it comes to inference, the actual test of a good AI foundation model and its ability to fulfil the tasks asked of it, OpenAI feels the GPUs aren't cutting it. They aren't fast enough for OpenAI's liking, especially in the specific areas of responding to queries around software development and AI-to-AI communication. Altman reportedly believes his company needs new chips that would eventually provide about 10% of its future inference computing power. And those in the know claim that OpenAI believes the Nvidia chips' reliance on external memory is the root cause of the problem. Imagine having to leave the room to fetch a tool each time a new situation arises, says an AI expert we spoke with. Simply put, OpenAI wants chips with massive embedded memory, which it believes is the only way ChatGPT can perform the tasks that competitors like Claude are performing while gaining market share on the enterprise side of the business. The Reuters report says Altman's team approached other chipmakers like Cerebras and Groq while also striking deals with AMD and Broadcom. Till this point, everything appeared kosher; commitment to one partner does not mean one cannot look elsewhere for an alliance for a specific purpose. However, Nvidia did not take kindly to these developments. When OpenAI and Groq seemed close to a deal, the chipmaker came calling and signed a $20 billion licensing deal that effectively put paid to any arrangement between the other two. Jensen Huang swooped down, locked up Groq's technology and even took away its chip designers. However, Altman and team did manage to strike a deal with Cerebras last month. Commenting on the deal, Sachin Katti of OpenAI said it adds "a dedicated low-latency inference solution" to the company's platform.
"That means faster responses, more natural interactions, and a stronger foundation to scale real-time AI to many more people," he said. Even Altman publicly claimed that the deal would help him meet the speed demands of coding tasks. Which brings us to what exactly is happening to the AI universe as a whole. The industry is opening its eyes to the real challenge: training AI models isn't the same as developing AI smarts. Gathering mountains of data to train foundation models is just the first part of the equation, which OpenAI committed to execute with Nvidia's help. When it comes to powering the inference part of the AI puzzle, however, Huang's team appears to be facing a whole new challenge from its oldest competitor, Google. And it is quite obvious out there, as not just OpenAI but even competitors like Anthropic have shifted focus from Nvidia to Google's custom tensor processing units. Incidentally, Google uses its own chips to fire up Gemini, and users have already perceived the difference in the latter's inference capabilities, as has been the case with Claude's coding prowess. Given these complexities, it is easy to infer that staying loyal to one partner may not help in the frenetic AI race all the Big Tech companies are running. For now, Nvidia and OpenAI might say the right things, but each has its own business interests. Huang would like his chips to dominate the market; OpenAI has no option but to seek alternatives to GPUs. Neither wants to be stuck with a partner that proves a bottleneck to its own security. For the AI war is just getting started, and those stuck with the wrong partners may face a quick and easy annihilation.
[17]
Sam Altman Pushes Back On Report Claiming OpenAI Unhappy With Jensen Huang-Led Company's AI Chips Alternatives: 'Love' Working With Nvidia - Advanced Micro Devices (NASDAQ:AMD), Broadcom (NASDAQ:AVGO)
On Monday, OpenAI CEO Sam Altman reaffirmed his company's close ties with Nvidia Corp (NASDAQ:NVDA) after a report suggested the AI startup was dissatisfied with some of the chipmaker's latest offerings and exploring alternatives. Altman Rejects Report, Reaffirms Nvidia Ties Altman addressed the speculation in a post on X, calling Nvidia the gold standard in artificial intelligence hardware and signaling that OpenAI has no plans to walk away from the partnership. "We love working with NVIDIA and they make the best AI chips in the world," Altman wrote. "We hope to be a gigantic customer for a very long time." Altman added that he did not understand "where all this insanity is coming from," an apparent reference to reports questioning the strength of the relationship. Report Points To Growing Pains, Not A Breakup Altman's comments followed a Reuters report that said OpenAI has been dissatisfied with some of Nvidia's newest AI chips and has been looking at alternatives since at least 2025. The report framed the issue as part of OpenAI's broader effort to meet rapidly rising compute needs rather than a wholesale shift away from Nvidia. The scrutiny intensified after The Wall Street Journal reported last week that talks around a proposed Nvidia investment of up to $100 billion in OpenAI had stalled. Nvidia Says $100 Billion Deal Was 'Never A Commitment' Nvidia CEO Jensen Huang addressed the report over the weekend, saying the massive investment was "never a commitment." However, he said that Nvidia still plans to invest "a great deal of money" in OpenAI. The proposed investment was first disclosed in September 2025 and raised questions about circular investing, given that Nvidia is OpenAI's largest supplier of AI processors. Separately, CNBC reporter Kristina Partsinevelos on Monday said that Nvidia is participating in OpenAI's latest funding round, separate from the earlier, much larger investment proposal. 
OpenAI: Nvidia Remains Core To Its Compute Stack Sachin Katti, a senior OpenAI executive, also took to X to underscore the depth of the partnership, calling Nvidia "our most important partner for both training and inference." "Our entire compute fleet runs on NVIDIA GPUs," Katti said, adding that OpenAI scaled available compute from 0.2 gigawatts in 2023 to roughly 1.9 gigawatts in 2025 as demand surged. Price Action: Nvidia shares closed down 2.89% at $185.61 on Monday and edged up 0.34% to $186.25 in after-hours trading, according to Benzinga Pro.
[18]
Nvidia nears $20 bln OpenAI investment in latest funding round - Bloomberg By Investing.com
Investing.com-- Nvidia (NASDAQ:NVDA) is close to finalising a roughly $20 billion investment in OpenAI as part of the ChatGPT maker's latest funding round, Bloomberg News reported on Tuesday, citing people familiar with the matter. The deal, which would mark Nvidia's largest-ever single investment, is nearing completion but is not yet final, and terms could still change, the report said. OpenAI is seeking to raise as much as $100 billion in fresh funding, with a significant portion expected to come from major technology companies, Bloomberg has reported. Amazon.com Inc (NASDAQ:AMZN) has discussed investing up to $50 billion, while SoftBank Group Corp. (TYO:9984) has held talks to invest as much as $30 billion. The Financial Times previously reported that Nvidia could invest up to $20 billion. The relationship between Nvidia and OpenAI, central players in the artificial intelligence boom, has drawn scrutiny amid reports of internal debate at the chipmaker over the scale of its commitment, the report added. Nvidia CEO Jensen Huang said over the weekend the company plans to participate in OpenAI's next financing round, calling it potentially the largest investment Nvidia has ever made.
[19]
OpenAI is unsatisfied with some Nvidia chips and looking for alternatives, sources say
SAN FRANCISCO, Feb 2 (Reuters) - OpenAI is unsatisfied with some of Nvidia's latest artificial intelligence chips, and it has sought alternatives since last year, eight sources familiar with the matter said, potentially complicating the relationship between the two highest-profile players in the AI boom. The ChatGPT-maker's shift in strategy, the details of which are first reported here, is over an increasing emphasis on chips used to perform specific elements of AI inference, the process when an AI model such as the one that powers the ChatGPT app responds to customer queries and requests. Nvidia remains dominant in chips for training large AI models, while inference has become a new front in the competition. This decision by OpenAI and others to seek out alternatives in the inference chip market marks a significant test of Nvidia's AI dominance and comes as the two companies are in investment talks. In September, Nvidia said it intended to pour as much as $100 billion into OpenAI as part of a deal that gave the chipmaker a stake in the startup and gave OpenAI the cash it needed to buy the advanced chips. The deal had been expected to close within weeks, Reuters reported. Instead, negotiations have dragged on for months. During that time, OpenAI has struck deals with AMD and others for GPUs built to rival Nvidia's. But its shifting product road map also has changed the kind of computational resources it requires and bogged down talks with Nvidia, a person familiar with the matter said. On Saturday, Nvidia CEO Jensen Huang brushed off a report of tension with OpenAI, saying the idea was "nonsense" and that Nvidia planned a huge investment in OpenAI. "Customers continue to choose NVIDIA for inference because we deliver the best performance and total cost of ownership at scale," Nvidia said in a statement. 
A spokesperson for OpenAI in a separate statement said the company relies on Nvidia to power the vast majority of its inference fleet and that Nvidia delivers the best performance per dollar for inference. Seven sources said that OpenAI is not satisfied with the speed at which Nvidia's hardware can spit out answers to ChatGPT users for specific types of problems such as software development and AI communicating with other software. It needs new hardware that would eventually provide about 10% of OpenAI's inference computing needs in the future, one of the sources told Reuters. The ChatGPT maker has discussed working with startups including Cerebras and Groq to provide chips for faster inference, two sources said. But Nvidia struck a $20-billion licensing deal with Groq that shut down OpenAI's talks, one of the sources told Reuters. Nvidia's decision to snap up key talent at Groq looked like an effort to shore up a portfolio of technology to better compete in a rapidly changing AI industry, chip industry executives said. Nvidia, in a statement, said that Groq's intellectual property was highly complementary to Nvidia's product roadmap. NVIDIA ALTERNATIVES Nvidia's graphics processing chips are well-suited for massive data crunching necessary to train large AI models like ChatGPT that have underpinned the explosive growth of AI globally to date. But AI advancements increasingly focus on using trained models for inference and reasoning, which could be a new, bigger stage of AI, inspiring OpenAI's efforts. The ChatGPT maker's search for GPU alternatives since last year focused on companies building chips with large amounts of memory embedded in the same piece of silicon as the rest of the chip, called SRAM. Squishing as much costly SRAM as possible onto each chip can offer speed advantages for chatbots and other AI systems as they crunch requests from millions of users. 
Inference requires more memory than training because the chip needs to spend relatively more time fetching data from memory than performing mathematical operations. Nvidia and AMD GPU technology relies on external memory, which adds processing time and slows how quickly users can interact with a chatbot. Inside OpenAI, the issue became particularly visible in Codex, its product for creating computer code, which the company has been aggressively marketing, one of the sources added. OpenAI staff attributed some of Codex's weakness to Nvidia's GPU-based hardware, one source said. In a January 30 call with reporters, CEO Sam Altman said that customers using OpenAI's coding models will "put a big premium on speed for coding work." One way OpenAI will meet that demand is through its recent deal with Cerebras, Altman said, adding that speed is less of an imperative for casual ChatGPT users. Competing products such as Anthropic's Claude and Google's Gemini benefit from deployments that rely more heavily on the chips Google made in-house, called tensor processing units, or TPUs, which are designed for the sort of calculations required for inference and can offer performance advantages over general-purpose AI chips like the Nvidia-designed GPUs. NVIDIA ON THE MOVE As OpenAI made clear its reservations about Nvidia technology, Nvidia approached companies working on SRAM-heavy chips, including Cerebras and Groq, about a potential acquisition, the people said. Cerebras declined and struck a commercial deal with OpenAI announced last month. Cerebras declined to comment. Groq held talks with OpenAI for a deal to provide computing power and received investor interest to fund the company at a valuation of roughly $14 billion, according to people familiar with the discussions. Groq declined to comment. But by December, Nvidia moved to license Groq's tech in a non-exclusive all-cash deal, the sources said. 
Although the deal would allow other companies to license Groq's technology, the company is now focusing on selling cloud-based software, as Nvidia hired away Groq's chip designers. (Reporting by Max A. Cherney, Krystal Hu and Deepa Seetharaman in San Francisco; editing by Kenneth Li, Peter Henderson and Nick Zieminski)
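The memory-bandwidth point in the reporting above — that during inference a chip spends relatively more time fetching data from memory than doing math, so on-chip SRAM can beat external memory on responsiveness — can be illustrated with a back-of-the-envelope sketch. All numbers below are illustrative assumptions for the sake of the arithmetic, not published specifications for any Nvidia, Cerebras, or Groq product:

```python
# Toy model of why autoregressive inference is often memory-bandwidth-bound:
# generating each token requires streaming (roughly) the full set of model
# weights through the chip, so single-stream decode speed is bounded by
# memory_bandwidth / bytes_of_weights rather than by raw FLOPs.

def decode_tokens_per_sec(weight_bytes: float, bandwidth_bytes_per_sec: float) -> float:
    """Upper bound on single-stream decode speed for a bandwidth-bound workload."""
    return bandwidth_bytes_per_sec / weight_bytes

# Hypothetical 70B-parameter model at 8-bit precision: ~70 GB of weights.
weights = 70e9

hbm_bw = 3.3e12   # assumed off-chip HBM bandwidth, ~3.3 TB/s (illustrative)
sram_bw = 25e12   # assumed aggregate on-chip SRAM bandwidth, ~25 TB/s (illustrative)

print(f"HBM-bound decode:  {decode_tokens_per_sec(weights, hbm_bw):.0f} tokens/s")
print(f"SRAM-bound decode: {decode_tokens_per_sec(weights, sram_bw):.0f} tokens/s")
```

Under these assumed figures the SRAM-heavy design is several times faster per user stream, which is the kind of gap that matters for latency-sensitive work like code generation; in practice, batching, KV-cache traffic, and quantization all shift the numbers.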
Five months after announcing plans for a $100 billion investment, Nvidia and OpenAI's mega-deal has stalled. The chipmaker now plans a $20 billion investment instead, while OpenAI quietly pursues alternative chip providers. The tension centers on inference performance issues, with OpenAI reportedly dissatisfied with Nvidia's GPU speed for coding tasks, prompting deals with Cerebras and AMD to reduce dependency.

What was supposed to be one of the AI industry's most ambitious partnerships has hit significant turbulence. In September 2025, Nvidia and OpenAI announced a letter of intent for Nvidia to invest up to $100 billion in OpenAI's AI infrastructure, promising 10 gigawatts of computing capacity requiring power output equal to roughly 10 nuclear reactors [1]. Five months later, the stalled investment deal has been dramatically scaled back, with Nvidia now nearing a deal to invest roughly $20 billion in OpenAI's latest funding round, which values the ChatGPT maker at about $830 billion [2].

The collapse of the original plan represents a significant shift in the Nvidia-OpenAI relationship, which has been central to the AI race among tech giants. Jensen Huang, Nvidia's CEO, now says the $100 billion figure was "never a commitment" and told reporters in Taiwan that Nvidia would "invest one step at a time" [1]. Despite the scaled-back figure, Huang told CNBC that Nvidia would "make a huge investment in OpenAI" and described it as "probably the largest investment we've ever made," though he clarified it would be "nothing like" $100 billion [3].

Behind the stalled negotiations lies a more fundamental issue: OpenAI's dissatisfaction with Nvidia's inference performance. According to eight sources familiar with the matter, OpenAI is unsatisfied with the speed of some Nvidia AI chips for inference tasks, the process by which a trained AI model generates responses to user queries [5]. The issue became particularly visible in OpenAI's Codex, an AI code-generation tool, where OpenAI staff attributed some of the product's performance limitations to Nvidia's GPU-based hardware [1].

This technical friction has prompted OpenAI to actively reduce its reliance on Nvidia by pursuing alternative chip providers. The company has discussed working with startups Cerebras and Groq, both of which build AI chips designed to reduce inference latency [1]. In January, OpenAI announced a $10 billion deal with Cerebras, adding 750 megawatts of computing capacity for faster inference through 2028 [1]. Sachin Katti, who joined OpenAI from Intel in November to lead compute infrastructure, said the partnership adds "a dedicated low-latency inference solution" to OpenAI's platform. OpenAI also struck an agreement with AMD in October for six gigawatts of GPUs and announced plans with Broadcom to develop custom AI chips to further diversify its hardware supply [1]. Seven sources told Reuters that OpenAI needs new hardware that would eventually provide about 10% of the company's inference computing needs [5].

The tension between these AI giants signals a broader shift in the AI chip market, particularly as inference becomes increasingly critical. Nvidia's graphics processing chips excel at the massive data crunching necessary to train large AI models, but AI advancements increasingly focus on using trained models for inference and reasoning [5]. Inference requires more memory than training because chips need to spend relatively more time fetching data from memory than performing mathematical operations, and Nvidia and AMD GPU technology relies on external memory, which adds processing time [5].

Competing products such as Anthropic's Claude and Google's Gemini benefit from deployments that rely more heavily on Google's in-house chips, called tensor processing units (TPUs), which are designed for the sort of calculations required for inference [5]. The Wall Street Journal reported that Jensen Huang has been privately criticizing what he described as a lack of discipline in OpenAI's business approach and has expressed concern about the competition OpenAI faces from Google and Anthropic [1].

Despite the friction, both companies have attempted to smooth things over publicly. Sam Altman, OpenAI's CEO, posted on X: "We love working with NVIDIA and they make the best AI chips in the world. We hope to be a gigantic customer for a very long time" [1]. Yet the underlying dynamics reveal deeper concerns about the AI industry's financial structure and competitive landscape.

The original deal sparked concerns about circular dealmaking: Nvidia invests $100 billion in OpenAI, which then uses those funds to lease Nvidia chips. Tech critic Ed Zitron has been critical of Nvidia's circular investments, which touch dozens of tech companies that are also all Nvidia customers [1]. The scaled-back investment and OpenAI's pursuit of alternative suppliers suggest both companies are hedging their bets in an increasingly competitive market.

Nvidia shares fell about 1.1 percent on Monday following reports of the stalled deal [1]. Companies including Amazon and SoftBank Group Corp are racing to forge partnerships with OpenAI, betting that closer ties with the artificial-intelligence startup would give them a competitive edge in the AI race [2]. As OpenAI pursues a valuation of over $800 billion ahead of a rumored IPO later this year, the company's ability to secure diverse hardware partnerships while maintaining its relationship with Nvidia will be critical to watch [4].