Nvidia's $100 billion OpenAI investment shrinks to $20 billion as chip tensions surface

Reviewed by Nidhi Govil


Five months after announcing plans for a $100 billion investment, Nvidia and OpenAI's mega-deal has stalled. The chipmaker now plans a $20 billion investment instead, while OpenAI quietly pursues alternative chip providers. The tension centers on inference performance issues, with OpenAI reportedly dissatisfied with Nvidia's GPU speed for coding tasks, prompting deals with Cerebras and AMD to reduce dependency.

Nvidia OpenAI Deal Shrinks From $100 Billion to $20 Billion

What was supposed to be one of the AI industry's most ambitious partnerships has hit significant turbulence. In September 2025, Nvidia and OpenAI announced a letter of intent for Nvidia to invest up to $100 billion in OpenAI's AI infrastructure, promising 10 gigawatts of computing capacity with power demands roughly equal to 10 nuclear reactors [1]. Five months later, the deal has been dramatically scaled back: Nvidia is now nearing an agreement to invest roughly $20 billion in OpenAI's latest funding round, which values the ChatGPT maker at about $830 billion [2].

The collapse of the original plan represents a significant shift in the relationship between Nvidia and OpenAI, which has been central to the AI race among tech giants. Jensen Huang, Nvidia's CEO, now says the $100 billion figure was "never a commitment" and told reporters in Taiwan that Nvidia would "invest one step at a time" [1]. Despite the scaled-back figure, Huang told CNBC that Nvidia would "make a huge investment in OpenAI" and described it as "probably the largest investment we've ever made," though he clarified it would be "nothing like" $100 billion [3].

OpenAI Seeks Alternative Chip Providers Over Performance Concerns

Behind the stalled negotiations lies a more fundamental issue: OpenAI's dissatisfaction with Nvidia's inference performance. According to eight sources familiar with the matter, OpenAI is dissatisfied with the speed of some Nvidia AI chips at inference tasks, the process by which a trained AI model generates responses to user queries [5]. The issue became particularly visible in Codex, OpenAI's AI code-generation tool, where OpenAI staff attributed some of the product's performance limitations to Nvidia's GPU-based hardware [1].

This technical friction has prompted OpenAI to actively reduce its reliance on Nvidia by pursuing alternative chip providers. The company has discussed working with the startups Cerebras and Groq, both of which build AI chips designed to reduce inference latency [1]. In January, OpenAI announced a $10 billion deal with Cerebras that adds 750 megawatts of computing capacity for faster inference through 2028 [1]. Sachin Katti, who joined OpenAI from Intel in November to lead compute infrastructure, said the partnership adds "a dedicated low-latency inference solution" to OpenAI's platform.

OpenAI also struck an agreement with AMD in October for six gigawatts of GPUs and announced plans with Broadcom to develop custom AI chips, further diversifying its hardware supply [1]. Seven sources told Reuters that OpenAI is seeking new hardware that would eventually cover about 10% of the company's inference computing needs [5].

AI Chip Market Dominance Faces New Competition

The tension between these AI giants signals a broader shift in the AI chip market, particularly as inference becomes increasingly critical. Nvidia's graphics processing chips excel at the massive data crunching needed to train large AI models, but AI advancement increasingly focuses on using trained models for inference and reasoning [5]. Inference is more memory-intensive than training: chips spend relatively more time fetching data from memory than performing mathematical operations, and Nvidia and AMD GPU designs rely on external memory, which adds processing time [5].

Competing products such as Anthropic's Claude and Google's Gemini benefit from deployments that lean more heavily on Google's in-house chips, called tensor processing units (TPUs), which are designed for the kinds of calculations inference requires [5]. The Wall Street Journal reported that Jensen Huang has privately criticized what he described as a lack of discipline in OpenAI's business approach and has expressed concern about the competition OpenAI faces from Google and Anthropic [1].

Why This Matters Despite Public Reassurances

Despite the friction, both companies have attempted to smooth things over publicly. Sam Altman, OpenAI's CEO, posted on X: "We love working with NVIDIA and they make the best AI chips in the world. We hope to be a gigantic customer for a very long time" [1]. Yet the underlying dynamics reveal deeper concerns about the AI industry's financial structure and competitive landscape.

The original deal sparked concerns about circular dealmaking: Nvidia invests $100 billion in OpenAI, which then uses those funds to lease Nvidia chips. Tech critic Ed Zitron has criticized Nvidia's circular investments, which touch dozens of tech companies that are also Nvidia customers [1]. The scaled-back investment and OpenAI's pursuit of alternative suppliers suggest both companies are hedging their bets in an increasingly competitive market.

Nvidia shares fell about 1.1 percent on Monday following reports of the stalled deal [1]. Companies including Amazon and SoftBank Group Corp are racing to forge partnerships with OpenAI, betting that closer ties with the artificial-intelligence startup will give them a competitive edge in the AI race [2]. As OpenAI pursues a valuation of more than $800 billion ahead of a rumored IPO later this year, the company's ability to secure diverse hardware partnerships while maintaining its relationship with Nvidia will be critical to watch [4].

TheOutpost.ai

© 2026 Triveous Technologies Private Limited