2 Sources
[1]
Microsoft CEO says the company doesn't have enough electricity to install all the AI GPUs in its inventory - 'you may actually have a bunch of chips sitting in inventory that I can't plug in'
Microsoft CEO Satya Nadella said during an interview alongside OpenAI CEO Sam Altman that the AI industry's problem is not an excess supply of compute but a lack of power to accommodate all those GPUs. He made the remarks on the Bg2 Pod in response to host Brad Gerstner, who asked whether Nadella and Altman agreed with Nvidia CEO Jensen Huang's claim that there is no chance of a compute glut in the next two to three years.

"Well, I mean, I think the cycles of demand and supply in this particular case, you can't really predict, right? The point is: what's the secular trend? The secular trend is what Sam (the OpenAI CEO) said, which is, at the end of the day, because quite frankly, the biggest issue we are now having is not a compute glut, but it's power -- it's sort of the ability to get the builds done fast enough close to power," Nadella said on the podcast. "So, if you can't do that, you may actually have a bunch of chips sitting in inventory that I can't plug in. In fact, that is my problem today. It's not a supply issue of chips; it's actually the fact that I don't have warm shells to plug into."

AI's power consumption has been a topic of expert discussion since last year. It came to the forefront as soon as Nvidia resolved the GPU shortage, and many tech companies are now investing in research into small modular nuclear reactors to help scale their power sources as they build increasingly large data centers. The buildout has already caused consumer energy bills to skyrocket, showing how AI infrastructure expansion is negatively affecting the average American. OpenAI has even called on the federal government to build 100 gigawatts of power generation annually, calling it a strategic asset in the U.S.'s push for supremacy in its AI race with China. This comes after some experts said Beijing is miles ahead in electricity supply due to its massive investments in hydropower and nuclear power.
Aside from the lack of power, the two also discussed the possibility of more advanced consumer hardware hitting the market. "Someday, we will make a[n] incredible consumer device that can run a GPT-5 or GPT-6-capable model completely locally at a low power draw -- and this is like so hard to wrap my head around," Altman said. Gerstner then commented, "That will be incredible, and that's the type of thing that scares some of the people who are building, obviously, these large, centralized compute stacks."

This highlights another risk that companies must bear as they bet billions of dollars on massive AI data centers. While the infrastructure would still be needed to train new models, the data center demand that many expect from widespread AI use might not materialize if semiconductor advances enable models to run locally. That could hasten the popping of the AI bubble, which some experts, such as Pat Gelsinger, say is still several years away. If and when it does pop, even non-tech companies would be hit by the collapse, exposing nearly $20 trillion in market cap.
[2]
Microsoft CEO Doesn't Want to Buy NVIDIA's AI GPUs "Beyond One Generation," Hints at a Compute Glut Driven by Energy Constraints
Microsoft's Satya Nadella has revealed the state of the firm's AI GPU arsenal, saying there isn't enough space or energy available to bring additional compute online. A thesis has recently emerged that NVIDIA and the AI industry will eventually reach a point of excess compute, or that the per-unit compute gains from deploying AI chips won't be sustainable for Big Tech. Commenting on NVIDIA CEO Jensen Huang's claim that excess compute is 'non-existent' for the next two to three years, Nadella said (via the BG2 podcast) that the industry is currently facing a power shortage that leaves AI chips sitting in inventory unable to be "plugged in" -- effectively another form of compute glut.

"I mean, even the point is, what's the secular trend? The secular trend is what Sam said, which is at the end of the day, because quite frankly, the biggest issue we are now having is not a compute glut, but it's a power. So if you can't do that, you may actually have a bunch of chips sitting in inventory that I cannot plug in. In fact, that is my problem today, right? It's not a supply issue of chips. It's actually the fact that I don't have warm shells to plug into."

It's clear that the race to build out compute has reached a point where companies like Microsoft cannot accommodate additional chips in their configurations. The underlying reason Nadella cites power is that each generation of NVIDIA's rack-scale configurations has brought massive increases in power requirements, to the extent that rack TDPs are expected to rise by 100 times from Ampere to the next-gen Kyber rack design. Given scaling laws and NVIDIA's push to advance architectural capabilities, the industry looks set to hit a roadblock where energy infrastructure cannot support further data center expansion.
By Nadella's own account, the energy-compute constraint is already being felt. Several experts have raised this concern, but efforts to scale up energy infrastructure remain insufficient. Would the lack of energy leave NVIDIA's chips unsold? Nadella addressed this too, saying that short-term demand is difficult to predict and depends on how the supply chain progresses.
Microsoft CEO Satya Nadella disclosed that the company has AI GPUs in inventory that cannot be deployed due to insufficient power infrastructure, highlighting a critical bottleneck in AI expansion beyond chip supply.
Microsoft CEO Satya Nadella revealed a surprising bottleneck in the company's AI expansion efforts during a recent podcast interview alongside OpenAI CEO Sam Altman. Speaking on the BG2 Pod, Nadella disclosed that Microsoft currently has AI GPUs sitting unused in inventory due to insufficient power infrastructure rather than chip supply constraints [1].
"The biggest issue we are now having is not a compute glut, but it's power -- it's sort of the ability to get the builds done fast enough close to power," Nadella explained. "So, if you can't do that, you may actually have a bunch of chips sitting in inventory that I can't plug in. In fact, that is my problem today. It's not a supply issue of chips; it's actually the fact that I don't have warm shells to plug into."
The power consumption challenge has intensified as AI hardware becomes increasingly demanding. According to industry analysis, NVIDIA's rack-scale configurations have brought massive power requirements, with rack TDPs expected to increase by 100 times from the Ampere architecture to the next-generation Kyber rack design [2]. This exponential increase in power demands has created the bottleneck Nadella describes.

The energy infrastructure constraints are already impacting consumers, with the AI infrastructure buildout causing consumer energy bills to skyrocket across America. The situation has prompted tech companies to invest heavily in research into small modular nuclear reactors to help scale their power sources as they construct increasingly large data centers.
OpenAI has called on the federal government to build 100 gigawatts of power generation annually, positioning this as a strategic asset in the U.S.'s competition with China for AI supremacy. Some experts suggest Beijing maintains an advantage in electricity supply due to massive investments in hydropower and nuclear power infrastructure.
The power constraints also raise questions about future hardware procurement strategies. Nadella's comments suggest Microsoft may be hesitant to purchase NVIDIA's AI GPUs "beyond one generation," indicating a more cautious approach to hardware investments given current deployment limitations [2].
During the discussion, OpenAI's Altman highlighted potential future developments that could reshape the industry landscape. "Someday, we will make an incredible consumer device that can run a GPT-5 or GPT-6-capable model completely locally at a low power draw," Altman stated.
This possibility presents additional risks for companies investing billions in massive AI data centers. If semiconductor advancements enable local model execution, the anticipated data center demand from widespread AI adoption might not materialize, potentially accelerating what some experts predict could be an AI bubble burst. Industry analysts suggest such a collapse could impact nearly $20 trillion in market capitalization, affecting both tech and non-tech companies.