2 Sources
[1]
Sam Altman teases 100 million GPU scale for OpenAI that could cost $3 trillion -- ChatGPT maker to cross 'well over 1 million' GPUs by end of year
OpenAI CEO Sam Altman isn't exactly known for thinking small, but his latest comments push the boundaries of even his usual brand of audacious tech talk. In a new post on X, Altman revealed that OpenAI is on track to bring "well over 1 million GPUs online" by the end of this year. That alone is an astonishing number -- consider that Elon Musk's xAI, which made waves earlier this year with its Grok 4 model, runs on about 200,000 Nvidia H100 GPUs. OpenAI will have five times that compute, and even that isn't enough for Altman.

"Very proud of the team..." he wrote, "but now they better get to work figuring out how to 100x that lol." The "lol" might make it sound like he's joking, but Altman's track record suggests otherwise. Back in February, he admitted that OpenAI had to slow the rollout of GPT‑4.5 because it was literally "out of GPUs." That wasn't just a minor hiccup; it was a wake-up call, especially with Nvidia's premier AI hardware sold out through next year. Altman has since made compute scaling a top priority, pursuing partnerships and infrastructure projects that look more like national-scale operations than corporate IT upgrades. When OpenAI hits its 1 million GPU milestone later this year, it won't just be a social media flex -- it will cement the company as the single largest consumer of AI compute on the planet.

Now, about that 100x goal, because it's exactly as wild as it sounds. At current market prices, 100 million GPUs would cost around $3 trillion -- almost the GDP of the UK -- and that's before factoring in the power requirements or the data centers needed to house them. Nvidia could not produce that many chips in the near term, let alone could anyone supply the energy to run them all. Yet that's the kind of moonshot thinking that drives Altman.
It's less about a literal target and more about laying the foundation for AGI (artificial general intelligence), whether that means custom silicon, exotic new architectures, or something we haven't even seen yet. OpenAI clearly wants to find out. The clearest proof is OpenAI's Texas data center, now the world's largest single facility, which consumes around 300 MW -- enough to power a mid-sized city -- and is set to hit 1 gigawatt by mid-2026. Such massive, unpredictable energy demands are already drawing scrutiny from Texas grid operators, who warn that stabilizing voltage and frequency for a site of this scale requires costly, rapid infrastructure upgrades that even state utilities struggle to match. Regardless, the bet is that innovation will prevail and the bubble won't burst.

The company isn't just hoarding Nvidia hardware, either. While Microsoft's Azure remains its primary cloud backbone, OpenAI has partnered with Oracle to build its own data centers and is rumored to be exploring Google's TPU accelerators to diversify its compute stack. It's part of a larger arms race, in which everyone from Meta to Amazon is building in-house AI chips and betting big on high-bandwidth memory (HBM) to keep these monster models fed. Altman, for his part, has hinted at OpenAI's own custom chip plans, which would make sense given the company's growing scale.

Altman's comments also double as a not-so-subtle reminder of how quickly this field moves. A year ago, a company boasting 10,000 GPUs sounded like a heavyweight contender. Now even 1 million feels like just another stepping stone toward something much bigger. OpenAI's infrastructure push isn't just about faster training or smoother model rollouts; it's about securing a long-term advantage in an industry where compute is the ultimate bottleneck. And, of course, Nvidia would be more than happy to provide the building blocks.

Is 100 million GPUs realistic?
Not today, and not without breakthroughs in manufacturing, energy efficiency, and cost. But that's the point. Altman's vision isn't bound by what's available now; it's aimed at what's possible next. The 1 million GPUs coming online by year's end mark a new baseline for AI infrastructure, one that seems to be diversifying by the day. Everything beyond that is ambition, and if Altman's history is any guide, it might be foolish to dismiss it as mere hype.
[2]
Thought the AI Hype Was Fading? Think Again; OpenAI's Sam Altman Signals to Buy Up to 100 Million AI Chips; A Move Potentially Worth Trillions
The demand for AI computing power isn't slowing down at all, as Sam Altman reveals audacious plans to acquire up to 100 million AI chips in the future. There's no stopping the AI train right now; it is racing to new levels with each passing day. Companies like Microsoft, Google, and Meta are building large-scale AI clusters, yet organizations still report a shortage of computing capacity. OpenAI faces a similar situation, as Sam Altman says he has massive plans ahead, part of which includes bringing a whopping one million AI chips online. While the statement does seem a bit far-fetched, nothing seems impossible for OpenAI and its elite AI team.

OpenAI's CEO is known for floating eye-watering investment figures. Just a few months ago, Altman was traveling the world trying to raise trillions of dollars to build a network of chip facilities, but that project is nowhere to be seen for now. Even if, for some reason, OpenAI needs the power of 100 million AI chips, the firm would need trillions of dollars in capital, roughly what NVIDIA is currently valued at. Acquiring that many AI chips seems impossible for now, but considering that GW-level AI clusters are becoming a lot more common, we cannot rule it out entirely.

Interestingly, I decided to calculate the energy needed to power 100 million AI GPUs. If each chip is rated to run at 750W, that adds up to a 75 GW cluster, roughly 75% of the UK's entire grid capacity. Even if OpenAI somehow got its hands on nuclear power, it would need the output of about 75 reactors, so good luck to Altman in scaling up to a 100 million AI chip count. It seems the industry believes that racking up ever-larger AI clusters will lead it to AGI, which is why the focus is on amassing a massive count of AI chips.
There's no doubt that companies are locked in a race for AI infrastructure, and they are ready to spend hundreds of billions on it. Big Tech's AI CapEx keeps growing, and it's safe to say that companies like NVIDIA are in for a treat.
OpenAI CEO Sam Altman reveals plans to scale up to over 1 million GPUs by year-end, with an ambitious goal of 100 million GPUs in the future, raising questions about feasibility, cost, and energy requirements.
OpenAI CEO Sam Altman has unveiled an audacious plan to dramatically scale up the company's GPU infrastructure. In a recent post on X, Altman announced that OpenAI is on track to bring "well over 1 million GPUs online" by the end of this year [1].
However, Altman didn't stop there. He hinted at an even more ambitious goal, challenging his team to figure out how to scale this number by 100 times, potentially reaching 100 million GPUs [1]. While the casual "lol" in his statement might suggest jest, Altman's track record indicates otherwise.
The scale of this vision is difficult to comprehend. At current market prices, 100 million GPUs would cost around $3 trillion, almost equivalent to the GDP of the UK [1]. This doesn't even account for the power requirements or the data centers needed to house such an enormous number of GPUs.
Energy consumption presents another significant challenge. Calculations suggest that powering 100 million AI GPUs, each rated at 750W, would require about 75 GW of power [2]. This is equivalent to roughly 75% of the UK's entire grid capacity, requiring the output of approximately 75 nuclear reactors.
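The power and cost figures above are simple arithmetic, and can be reproduced in a few lines. Note that the per-GPU price and per-reactor output below are illustrative assumptions (roughly consistent with the article's $3 trillion total and 75-reactor count), not figures stated directly in the sources:

```python
# Back-of-the-envelope check of the 100-million-GPU figures.
NUM_GPUS = 100_000_000
WATTS_PER_GPU = 750          # per-chip draw assumed in source [2]
REACTOR_OUTPUT_GW = 1.0      # assumption: ~1 GW per large nuclear reactor
PRICE_PER_GPU_USD = 30_000   # assumption: rough data-center GPU price implied by the $3T total

total_gw = NUM_GPUS * WATTS_PER_GPU / 1e9          # watts -> gigawatts
reactors_needed = total_gw / REACTOR_OUTPUT_GW
total_cost_usd = NUM_GPUS * PRICE_PER_GPU_USD

print(f"Power draw: {total_gw:.0f} GW")                       # 75 GW
print(f"Reactors needed: {reactors_needed:.0f}")              # 75
print(f"Hardware cost: ${total_cost_usd / 1e12:.0f} trillion")  # $3 trillion
```

Under these assumptions the totals land exactly on the article's numbers: 75 GW of draw, about 75 reactor-equivalents, and $3 trillion in hardware alone, before power delivery, cooling, or buildings.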
OpenAI is already making significant strides in expanding its infrastructure. The company's Texas data center, set to be the world's largest single facility, currently consumes around 300 MW and is projected to reach 1 gigawatt by mid-2026 [1]. This massive energy demand is already drawing scrutiny from Texas grid operators due to the challenges of stabilizing voltage and frequency at such a scale.
To diversify its compute stack, OpenAI is not solely relying on NVIDIA hardware. While Microsoft's Azure remains its primary cloud backbone, the company has partnered with Oracle to build its own data centers and is rumored to be exploring Google's TPU accelerators [1].
Altman's announcement is a clear indicator of the ongoing AI arms race. Companies like Meta, Amazon, and others are investing heavily in AI infrastructure, including developing in-house AI chips and focusing on high-bandwidth memory (HBM) technologies [1].
This push for massive computing power is driven by the belief that increased computational capacity is key to achieving Artificial General Intelligence (AGI). However, it also raises questions about the sustainability and practicality of such rapid scaling.
While the goal of 100 million GPUs seems unrealistic with current technology, it sets a new benchmark for AI infrastructure ambitions. The industry will need to overcome significant hurdles in manufacturing, energy efficiency, and cost to make such a vision feasible [1].
Altman's comments serve as a reminder of the breakneck pace of advancement in AI. What seemed like a formidable computational resource a year ago now appears as just another stepping stone. As companies like OpenAI continue to push the boundaries of what's possible, the industry will need to grapple with the implications of such massive scaling, both in terms of technological advancement and resource management.
Summarized by Navi