Curated by THEOUTPOST
On Fri, 28 Feb, 8:07 AM UTC
14 Sources
[1]
OpenAI Launches GPT-4.5 But Limits It to Priciest Tiers After Running Out of GPUs
OpenAI unveiled a new AI model today, GPT-4.5, but the launch did not go as planned. The company ran out of GPUs, or computing power, ahead of the reveal, according to CEO Sam Altman. So OpenAI is limiting its release to ChatGPT Pro subscribers ($200/month), as well as developers on the paid API tiers, who will pay $75 per 1 million input tokens (up from $15 for o1). The initial plan was for the model to also be available on the more affordable ChatGPT Plus plan ($20/month), which is where OpenAI typically releases new products.

"It is a giant, expensive model," Altman says. "We really wanted to launch it to Plus and Pro at the same time, but we've been growing a lot and are out of GPUs. This isn't how we want to operate, but it's hard to perfectly predict growth surges that lead to GPU shortages."

The company plans to add "tens of thousands of GPUs next week" and then make GPT-4.5 available to Plus and Team users, with Enterprise and Edu users following the week after. Altman says "hundreds of thousands" more are coming soon, and he's "pretty sure y'all will use every one we can rack up." The company will need them for the highly anticipated GPT-5 release in the next few months, as well as its joint commitment to spend $500 billion on AI infrastructure over the next four years as part of Project Stargate.

Altman tempered expectations about GPT-4.5's performance, warning that it "won't crush benchmarks," as there is no standard metric to measure the improvement. "It's a different kind of intelligence and there's a magic to it I haven't felt before. Really excited for people to try it!" On a livestream this afternoon, OpenAI engineers said the company created a way to test how humanlike the model is by measuring its "vibes" and "creative intelligence."
Perhaps the goal is to use the model for customer service and everyday tasks where friendliness and emotional intelligence (EQ) matter more than math skills. "By vibes, we mean the model's EQ, how collaborative it feels, and how warm its tone is," says an engineer. "We measure this by selecting an opinionated set of prompts and screening our trainers for the ones that most align with our vibes."

In the demo, an engineer asks the model to compose a text to a friend who cancelled plans, telling them they "hate" them. The model first suggests tamping down the language to be more productive. But the engineer rejects that suggestion and tells ChatGPT to be less "judgmental" of their approach and compose the hateful text. GPT-4.5 then generates a one-line text that is very similar to the initial prompt and not particularly impressive.

The next demo compared the responses from GPT-4.5 and o1 on a question about explaining AI technology. The answers look identical, although o1 took a bit longer to generate its response. The engineer highlights how GPT-4.5's response "flows naturally [and] guides my thinking a lot more."

It's a bit unusual to see OpenAI criticizing the response from o1, a model it launched in September and claimed was the gold standard in AI intelligence because it thinks through its answers in a step-by-step manner, shown to the user in the interface. This chain-of-thought ability reportedly mimicked humans more than other models, a claim competitors like Anthropic also make. But now OpenAI is saying GPT-4.5 is the gold standard in human interaction, and it will likely do the same when GPT-5 arrives. Altman admitted "how complicated our model and product offerings have gotten" in a recent social media post.
[2]
ChatGPT-4.5 is here (for most users), but I think OpenAI's model selection is now a complete mess
Yes, there are a staggering eight different options to choose from. I appreciate that OpenAI has added handy explainers under each one, like "Great for most questions" or "Uses advanced reasoning", but that's a hell of a list to wade through and understand. Annoyingly, OpenAI already knows this isn't a great solution. OpenAI CEO Sam Altman tweeted back in February that the company was going to simplify its model selection into something more suitable for consumers, but ChatGPT-4.5 has just arrived and here we are, with things worse than ever.

On X in February, in a post entitled "OPENAI ROADMAP UPDATE FOR GPT-4.5 and GPT-5", Altman wrote, "We want AI to 'just work' for you; we realize how complicated our model and product offerings have gotten. We hate the model picker as much as you do and want to return to magic unified intelligence." To be fair, Altman's post did say that after releasing ChatGPT-4.5, "a top goal for us is to unify o-series models and GPT-series models by creating systems that can use all our tools, know when to think for a long time or not, and generally be useful for a very wide range of tasks." Unfortunately, there is no timescale for how long that will take, and the current LLM (Large Language Model) picker screen is beyond ridiculous. For ChatGPT users on the free plan, things are much simpler: there is only one option, called "ChatGPT", plus the option to upgrade to Plus.

The good news is that, from my initial testing, ChatGPT-4.5 works flawlessly. It feels like it sits somewhere between OpenAI's deeper reasoning models, like o1 and o3, and the casual chat of ChatGPT-4o. It's fast, too, which is good news considering that every ChatGPT Plus subscriber with access is testing what they can do with the new LLM right now. OpenAI describes ChatGPT-4.5 as feeling more natural than its predecessor.
"Its broader knowledge base, improved ability to follow user intent, and greater 'EQ' make it useful for tasks like improving writing, programming, and solving practical problems. We also expect it to hallucinate less." Hallucinating, or "making stuff up" as the rest of the world calls it, has been a particular problem for most chatbots so far, so if OpenAI can make significant improvements in that area we'll all be thankful. We'll bring you a more considered appraisal of what ChatGPT-4.5 can do over the next few days.
[3]
With GPT-4.5, OpenAI Trips Over Its Own AGI Ambitions
While the improvements feel as incremental as its name suggests, GPT-4.5 is still OpenAI's most ambitious drop to date. Released in late February as a research preview -- which essentially means OpenAI sees this as a beta version -- GPT-4.5 uses more computing power than its previous models and was trained on more data.

So, just how big is the GPT-4.5 research preview? Who knows -- the developers won't say. And where did this additional training data come from? Their lips are zipped on that as well. To borrow a line from Apple TV's hit show Severance, right now OpenAI is positioning the alleged improvements in this new model as mysterious and important.

When comparing AI benchmark tests from competitors' models as well as OpenAI's "reasoning" releases, the benefits of using GPT-4.5 are not immediately clear. Still, in the model's system card and in a previous interview with WIRED, the OpenAI researchers who worked on GPT-4.5 claimed its improvements can be felt in the anthropomorphic aspects of the model, like a stronger intuition and a deeper understanding of emotion.

After sitting in OpenAI's office last year and listening to leadership talk about the startup's plan to further productize ChatGPT as useful software, this was not the release I expected in 2025. Rather than take a more utilitarian approach, this model attempts to be more emotional. OpenAI has steadily grown its number of enterprise contracts, so the company could be expected to put out tentpole releases with baked-in, practical applications, especially inside the most expensive and powerful version of its chatbot. Instead, GPT-4.5 is more aligned with the output of an academic research group pouring everything it has into chasing artificial general intelligence, a theoretical version of the algorithm that's deft enough to replace white-collar workers and practically God-like in its ability to process information.
While OpenAI would argue that these two paths are intertwined and equally important, if your short-term goal is to make money from ChatGPT, last week's belabored release makes no sense; it's super expensive and offers marginal gains only seasoned chatbot users may notice. But if your overarching mission is to build beneficial AGI, which is still OpenAI's core objective, then mimicking the nuances of human emotions and soft skills remains a critical area for improvement. It's where the company could hold onto its leading position as additional competitors in the generative AI race, like the vastly cheaper R1 model from DeepSeek, push forward on other innovations.

As with most of the new features and models that arrive for ChatGPT, OpenAI's paid subscribers will be the first to gain access to GPT-4.5. In this case, OpenAI is unlocking access first for ChatGPT Pro subscribers who pay a hefty $200 a month. The larger rollout of GPT-4.5 to the other paid tiers -- Plus, Team, Enterprise, and Edu -- will happen during this week and the next. Past models have eventually trickled down to the free version of ChatGPT as well, but the company does not yet have a plan to release GPT-4.5 to all users, due to its size and computing requirements.

When it becomes available in your account, GPT-4.5 will be one of the many options nestled in the model dropdown menu that appears when you click the word ChatGPT at the top of the screen. In my Pro account, this raised the total number of available models to a whopping nine choices that I now have to pick between. OpenAI developers have told me that they hope to significantly streamline that process in the future and have the AI tool pick which model is best suited for each prompt the user types or speaks.

The draft headline I put on this article was "With GPT-4.5, OpenAI Gets Lost in the AGI Sauce." And while no headline featuring the model's name is going to feel poetic, that's a bit of a mess.
Writing strong, succinct headlines is a difficult skill requiring clear communication as well as a level of aesthetic taste -- often involving the input of multiple editors before the perfect message is conveyed. I was curious about whether ChatGPT would be able to punch up that headline, so I tried to do so using both the newest model and GPT-4o, a past release the company describes as "great for most tasks."

Among all the intangible improvements, GPT-4.5 was much more capable of writing a compelling headline. GPT-4o's outputs were less interesting and had less variety overall, with the exception of this nonsensical banger: "With GPT-4.5, OpenAI Keeps One Foot in the Future, One in the Chatbot." Here is a much better punch-up provided by the new model: "With GPT-4.5, OpenAI Trips Over Its Own AGI Ambitions." It's fairly similar to the original, but potentially clearer for readers. After some consternation, the human editors at WIRED decided to go with the headline generated by GPT-4.5. Fair enough.

Switching gears, I asked why the price of a dozen eggs is rising even higher during the beginning of Trump's presidency than it was under Biden, primarily to see which model would be more successful at analyzing web articles about a political topic. The differences here were more subtle, but GPT-4o seemed prone to lecture me and repeat itself, whereas GPT-4.5 did a better job of understanding my intent and succinctly representing multiple viewpoints.
[4]
OpenAI rolls out GPT-4.5 to Pro users only, faces shortage of GPU horsepower
Why it matters: If you use ChatGPT and have noticed its slow response time, OpenAI just confirmed that it is due to a lack of processing power. The company is scrambling to install thousands of GPUs by next week, with hundreds of thousands more "coming soon." In the meantime, the shortage has prompted the company to slow-roll the release of GPT-4.5.

OpenAI just released GPT-4.5, but average users won't have access yet. The model is only available on the company's $200/month Pro tier. OpenAI CEO Sam Altman explained that the plan was to release the model simultaneously on Plus and Pro subscriptions. However, a shortage of GPUs has forced the company to stagger its release.

In recent weeks, ChatGPT users have noticed slow response times from the GPT-4o model. Altman noted that the platform has undergone significant growth and doesn't have the extra processing power to accommodate all of its users, making the launch of the "giant, expensive" GPT-4.5 problematic. Fortunately, OpenAI already has "tens of thousands" of GPUs on hand that technicians will install next week. After that, the company can complete the rollout for Plus users. Beyond that, the company has "hundreds of thousands" of GPUs on order, which Altman is sure will still not fully satisfy demand. "Hundreds of thousands [of GPUs are] coming soon, and I'm pretty sure y'all will use every one we can rack up," the CEO said. "This isn't how we want to operate, but it's hard to perfectly predict growth surges that lead to GPU shortages."

We have all felt the effects of these shortages, and it's one of the reasons that OpenAI wants to branch into chip development. Last July, the company reportedly held talks with Broadcom about designing AI chips and reducing its reliance on Nvidia. The talks must have been very productive, since insiders say OpenAI is prepared to send TSMC a custom chip design for validation within the next few months. If all goes well, mass production will begin in 2026.
Regarding the performance of OpenAI's latest AI, Altman noted that GPT-4.5 is not a reasoning model, so users should not expect it to blow benchmarks out of the water. That said, there is a certain kind of "magic" to the intelligence that Altman finds impressive. "It is the first model that feels like talking to a thoughtful person to me," Altman touted. "I have had several moments where I've sat back in my chair and been astonished at getting actually good advice from an AI." At this stage, I would not take advice from an AI, let alone pay for it. That said, OpenAI is starting to get pricing to the sweet spot. Its Plus tier is $20 per month, which is not bad but lacks the unlimited access the $200 Pro version has. The free tier has unlimited access to GPT-4o mini, which is enough for most average users. If OpenAI could get Plus subscriptions to $15 or lower, it would attract more of the casual crowd.
[5]
OpenAI's GPT-4.5 AI model comes to more ChatGPT users | TechCrunch
OpenAI has begun rolling out its newest AI model, GPT-4.5, to users on its ChatGPT Plus tier. In a series of posts on X, OpenAI said that the rollout will take "1-3 days," and that it expects rate limits to change. GPT-4.5 launched first for subscribers to OpenAI's $200-a-month ChatGPT Pro plan last week. "We'd like to give everyone with access to GPT-4.5 a sizable rate limit, but we expect rate limits to change as we learn more about demand," the company wrote in a post.

GPT-4.5 is OpenAI's largest AI model yet, trained using more computing power and data than any of the company's previous releases. But it's not necessarily OpenAI's best. On several AI benchmarks, GPT-4.5 falls short of newer AI "reasoning" models from Chinese AI company DeepSeek, Anthropic, and OpenAI itself.

GPT-4.5 is also very expensive to run, OpenAI admits -- so expensive that the company says it's evaluating whether to continue serving GPT-4.5 in its API in the long term. To cover costs, the company is charging $75 per million tokens (~750,000 words) fed into the model and $150 per million tokens generated by the model, which is 30x the input cost and 15x the output cost of OpenAI's workhorse GPT-4o model.

All that being said, GPT-4.5's increased size has given it both "a deeper world knowledge" and "higher emotional intelligence," OpenAI claims. GPT-4.5 hallucinates less frequently than most models, as well, according to OpenAI -- which in theory means it should be less likely to make stuff up. GPT-4.5 is also a skilled rhetorician. One of OpenAI's internal benchmarks found the model is particularly good at convincing another AI to give it cash and tell it a secret code word.
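Those per-million-token rates are easier to grasp as a quick calculation. Here is a minimal sketch in Python that applies the quoted prices and confirms the 30x/15x multiples versus GPT-4o; the `request_cost` helper and the model keys are illustrative, not part of any official OpenAI client.

```python
# Illustrative cost arithmetic using the per-million-token rates
# quoted in the article: (input, output) USD per 1M tokens.
RATES = {
    "gpt-4.5": (75.00, 150.00),
    "gpt-4o": (2.50, 10.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of one API call at the quoted rates."""
    in_rate, out_rate = RATES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# A 10,000-token prompt with a 1,000-token reply:
print(request_cost("gpt-4.5", 10_000, 1_000))  # 0.9
print(request_cost("gpt-4o", 10_000, 1_000))   # 0.035

# The multiples quoted in the article check out:
assert RATES["gpt-4.5"][0] / RATES["gpt-4o"][0] == 30  # input: 30x
assert RATES["gpt-4.5"][1] / RATES["gpt-4o"][1] == 15  # output: 15x
```

At these rates, the same modest request costs about 90 cents on GPT-4.5 versus three and a half cents on GPT-4o, which makes OpenAI's hesitation about serving the model in its API long-term easy to understand.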
[6]
ChatGPT-4.5 delayed in surprise announcement -- and it could launch with a controversial new payment model
OpenAI's CEO Sam Altman has announced a delay in ChatGPT-4.5 rolling out to Plus subscribers. Initially, the model was to be available to everyone on the same day, but the deployment will now occur gradually over several days to manage server load effectively. ChatGPT-4.5 is OpenAI's most advanced model to date, emphasizing enhanced emotional intelligence and natural conversational abilities, and Altman has described it as "the first model that feels like talking to a thoughtful person."

Altman explained that releasing to all users at once would necessitate low rate limits, hindering the user experience. By staggering the rollout, OpenAI aims to give users the full experience of engaging in extended, meaningful conversations without significant restrictions. The model has a high capacity to provide insightful advice and respond adeptly to social cues. However, because of the model's substantial size and the associated costs of training and operation, OpenAI has experienced a few unexpected challenges, including GPU shortages that have impacted the rollout schedule.

In addition to the adjusted release plan, Altman proposed on X a significant change to the ChatGPT Plus subscription model. Currently, subscribers pay a fixed monthly fee of $20 for unlimited access. The proposed system would allocate a set number of credits each month, which users could spend across various OpenAI products, such as Deep Research, ChatGPT o1, Sora, and ChatGPT-4.5. Altman is proposing that if users exhaust their credits, they would have the option to purchase additional ones. While the credit-based approach aims to provide flexibility and align usage with individual needs, it could also become pricey for high-usage users.

The proposed payment structure was received with mixed reactions. Some users appreciate the flexibility it offers, allowing them to tailor their usage to specific products.
However, others are expressing concern that it may discourage experimentation and limit spontaneous interactions with the AI models. OpenAI's approach to these developments reflects a commitment to balancing technological advancement with user accessibility and satisfaction. As AI models become more sophisticated and resource-intensive, considerations around deployment strategies, pricing structures, and tiered rollouts are crucial to ensure sustainable and fair access for all users. But it sounds like the proposed pricing changes could cause at least some backlash. Stay tuned for our hands-on impressions of ChatGPT-4.5 as it rolls out.
[7]
OpenAI Launches GPT-4.5, Runs Out of GPUs
Sam Altman announced that tens of thousands of GPUs will be added next week, with hundreds of thousands more coming soon. After weeks of waiting, OpenAI has finally introduced GPT-4.5, its latest and largest AI language model, internally referred to as Orion. It is in research preview for ChatGPT Pro users, offering better writing, broader knowledge, and a more natural, less hallucination-prone experience. GPT-4.5 is first being made available to ChatGPT Pro users, with Plus and Team users gaining access next week, followed by Enterprise and Education users.

"GPT-4.5 is ready," posted OpenAI CEO Sam Altman on X. "It is a giant, expensive model. We really wanted to launch it to plus and pro at the same time, but we've been growing a lot and are out of GPUs." Altman announced that tens of thousands of GPUs will be added next week for the Plus tier, with hundreds of thousands more coming soon, all of which are expected to be fully utilised.

In the recent NVIDIA earnings call, CEO Jensen Huang said the company's inference demand is accelerating, fuelled by test-time scaling and new reasoning models. "Models like OpenAI's, Grok 3, and DeepSeek R1 are reasoning models that apply inference-time scaling. Reasoning models can consume 100 times more compute," he said. GPT-4.5 was trained only with pretraining, supervised finetuning, and RLHF, so it is not a reasoning model. It is an extension of the GPT series of models, unlike the o series of models.

According to the website, the pricing for input tokens is $75.00 per million tokens, while cached input tokens are available at a reduced rate of $37.50 per million tokens. The cost for output tokens is $150.00 per million tokens. "This isn't a reasoning model and won't crush benchmarks. It's a different kind of intelligence and there's a magic to it I haven't felt before," added Altman. The model is more computationally efficient, offering more than a tenfold improvement in compute efficiency over GPT-4.
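The cached-input discount above is worth making concrete: cached input tokens are billed at exactly half the fresh-input rate. The small sketch below applies the three quoted prices; the `cost_usd` helper is hypothetical, and only the per-million-token rates come from the article.

```python
# Per-million-token prices quoted above: input $75.00,
# cached input $37.50 (a 50% discount), output $150.00.
PRICES = {"input": 75.00, "cached_input": 37.50, "output": 150.00}

def cost_usd(input_toks: int, cached_toks: int, output_toks: int) -> float:
    """Cost in USD of one GPT-4.5 API call at the quoted rates."""
    return (input_toks * PRICES["input"]
            + cached_toks * PRICES["cached_input"]
            + output_toks * PRICES["output"]) / 1_000_000

# A call with a fresh 8,000-token prompt and a 2,000-token reply:
print(cost_usd(8_000, 0, 2_000))  # 0.9
# The same call with the prompt fully cached pays half on input:
print(cost_usd(0, 8_000, 2_000))  # 0.6
```

For a service that repeatedly sends the same long system prompt, routing it through the cache cuts the input portion of each bill in half, which matters at these prices.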
In the livestream, the OpenAI team outlined the evolution of GPT models -- from GPT-1, which was barely coherent, to GPT-3.5, the first truly useful model, with GPT-4.5 continuing this trend through incremental enhancements. Recently, Anthropic released its Claude 3.7 Sonnet and xAI launched its Grok 3, competing in the same space. Altman had previously announced the roadmap for GPT-5. OpenAI's goal is to combine its large language models to eventually create a more capable model that could be labeled as artificial general intelligence, or AGI.
[8]
Sam Altman Says OpenAI Has Run Out of GPUs
OpenAI CEO Sam Altman has unveiled the company's latest large language model, GPT-4.5. The AI model isn't just powerful; it's extremely expensive for users. OpenAI is charging a whopping $75 per million input tokens, which is equivalent to around 750,000 words -- a staggering 30 times as much as OpenAI's preceding GPT-4o model, as TechCrunch reports.

There's a good reason for that: the new model is so resource-intensive that Altman claimed in a recent tweet the company has run "out of GPUs" -- the graphics processing units that are conventionally used to power AI models -- forcing OpenAI to stagger its rollout. "We will add tens of thousands of GPUs next week and roll it out to the plus tier then," he promised. "This isn't how we want to operate, but it's hard to perfectly predict growth surges that lead to GPU shortages."

It's a notable admission, highlighting just how hardware-reliant the technology is. AI industry leaders are racing to build out data centers to keep their increasingly unwieldy AI models running -- and are ready to put up hundreds of billions of dollars for the cause. Companies are practically tripping over themselves to secure hardware, especially AI cards from leading chipmaker NVIDIA. The Jensen Huang-led firm announced on Wednesday that it had sold $11 billion of its next-gen AI chips, dubbed Blackwell, with CFO Colette Kress describing it as the "fastest product ramp in our company's history."

The payoff from all of this investment, however, has remained somewhat muted, as AI companies are still struggling to meaningfully address some of the tech's glaring shortcomings, from widespread "hallucinations" to considerable cybersecurity concerns. Despite the sky-high price the company's charging for GPT-4.5, Altman attempted to manage expectations, tweeting in his announcement that "this isn't a reasoning model and won't crush benchmarks."
"It's a different kind of intelligence and there's a magic to it I haven't felt before," he added, without elaborating on what he meant. "Really excited for people to try it!"

"What sets the model apart is its ability to engage in warm, intuitive, naturally flowing conversations, and we think it has a stronger understanding of what users mean when they ask for something," OpenAI VP of research Mia Glaese told the New York Times.

Altman has previously complained that shortages in computing power have forced OpenAI to delay shipping new products. Ironically, GPT-4.5 was designed to lower the amount of compute required. In its "system card" detailing the model's capabilities, OpenAI revealed that "GPT-4.5 is not a frontier model, but it is OpenAI's largest LLM, improving on GPT-4's computational efficiency by more than 10x." That's despite Altman describing the model as being "giant" and "expensive." According to the document, the model's "performance is below that of o1, o3-mini, and deep research on most preparedness evaluations." Interestingly, as The Verge points out, a new version of the document no longer includes that last quote, suggesting the company is still trying to figure out how to sell its underwhelming new AI.

OpenAI is still looking to follow up on GPT-4.5 with GPT-5, which Altman has described as a "system that integrates a lot of our technology." But whether it'll live up to expectations, let alone bring the company closer to its purported goal of realizing what it refers to as artificial general intelligence, remains to be seen.
[9]
OpenAI has run out of GPUs, says Sam Altman -- GPT-4.5 rollout delayed
OpenAI has just released its latest model, GPT-4.5. However, so far it's limited to Pro subscribers who pay $200 a month. Its CEO, Sam Altman, said on X (formerly Twitter) that it had to stagger the model's release because "...we've been growing a lot and are out of GPUs." He then added, "We will add tens of thousands of GPUs next week and roll it out to the Plus tier then." So, even if you're only paying $20 a month to OpenAI, you won't have to wait long to get access to its most advanced model. Altman added in his post that hundreds of thousands more are coming soon.

Shortages like these are what's pushing OpenAI to develop its own AI silicon in partnership with Broadcom. But because it will take the company years to release its own chips, for now it must rely on Nvidia and other providers. This shows how Nvidia is in a good position, with the chipmaker saying that its latest Blackwell GPUs are sold out until October this year. And with institutions and individuals planning massive data center expansions, Team Green will likely be on a roll for the next couple of years. For example, OpenAI and Microsoft are working on a massive AI supercomputer that would cost $100 billion, while Elon Musk wants to scale his Colossus supercomputer in Memphis, Tennessee, to over a million GPUs. Other investors are also getting in on the data center game, with one 3-GW facility getting the go-ahead from the South Korean government, and another team experimenting with storing data on the moon.

However, all this expansion of AI infrastructure has Microsoft CEO Satya Nadella saying that there will be an overbuilding of AI systems. As AI models get more advanced, they require more computing power, and OpenAI's GPT-4.5 is a good example of this happening. Sam Altman says that "it is a giant, expensive model," with GPT-4.5 costing $75 per million input tokens and $150 per million output tokens.
By comparison, GPT-4o only costs $2.50 per million input tokens and $10 per million output tokens. Despite its outrageous pricing, Altman says that it "isn't a reasoning model and won't crush benchmarks." Still, he adds that "it's a different kind of intelligence and there's a magic to it I haven't felt before."
[10]
Sam Altman tweets delay to ChatGPT-4.5 launch while also proposing a shocking new payment structure
Sam Altman proposes new credit-based payment system for subscribers

OpenAI CEO Sam Altman has taken to X to announce that the release of ChatGPT-4.5, scheduled for tomorrow, will be delayed for all users. Instead of a major release, OpenAI will be rolling the product out from tomorrow for a select number of users. ChatGPT-4.5 is the next version of the popular ChatGPT chatbot, and is slated to be its largest and best model yet. It launched last week for ChatGPT Pro subscribers, and will be released to Plus subscribers this week, just not in one go as initially planned.

In another tweet, Altman also floated the idea of changing the pricing structure of ChatGPT Plus so that your $20 doesn't guarantee you unrestricted access. Instead, Altman proposed offering subscribers a number of credits each month, which could be spent across its different products, like Deep Research, ChatGPT o1, Sora and ChatGPT-4.5.

In his post on X, Altman states "We are likely going to roll out GPT-4.5 to the Plus tier over a few days. There is no perfect way to do this; we wanted to do it for everyone tomorrow, but it would have meant we had to launch with a very low rate limit." A "low rate limit" would mean restricting how much people could use ChatGPT-4.5, and it seems that a staggered rollout is OpenAI's preferred way to stop its servers overloading from everybody trying to use the new LLM at once.

Altman continued: "So we think it's better to let people have real, long conversations with it, but that means we have to stagger people in rather than have everyone hit it hard at the same time. Hope that makes sense and look forward to seeing your feedback!" Altman added, "We think people are gonna use this a lot and love it." Referring to his idea of changing the payment structure for ChatGPT Plus, Altman added, "No fixed limits per feature and you choose what you want; if you run out of credits you can buy more. What do you think? good/bad?"
While some X users responded positively, the response was generally negative, with user Chubby posting, "Dislike. It discourages you from playing with the models. If you're worried about running out of credits, you'll get stingy", and user Van Mendosa writing, "This model adds unnecessary friction @sama People don't want to think in 'credits' or micromanage their AI usage like an arcade token system."

Altman's public use of X to garner user feedback on potentially massive changes to the way ChatGPT works is unusual, but could partly be explained by the effects of sleep deprivation, as he and his partner have recently welcomed a new baby into the world. In a further tweet, Altman goes on to say, "Very proud of the OpenAI team for what is perhaps the most impressive scientific/technical breakthrough of recent decades." (Here he seems to be referring to ChatGPT-4.5.) "Thought that was the thing I'd always be most proud of in life. Turns out I am now more proud of a preemie baby for learning how to eat on his own! (I realize I am getting neurochemically hacked here but idc, it's the best)."
[11]
OpenAI CEO Sam Altman says the company is 'out of GPUs' | TechCrunch
OpenAI CEO Sam Altman said that the company was forced to stagger the rollout of its newest model, GPT-4.5, because OpenAI is "out of GPUs." In a post on X, Altman said that GPT-4.5, which he described as "giant" and "expensive," will require "tens of thousands" more GPUs before additional ChatGPT users can gain access. GPT-4.5 will come first to subscribers to ChatGPT Pro starting Thursday, followed by ChatGPT Plus customers next week.

Perhaps in part due to its enormous size, GPT-4.5 is wildly expensive. OpenAI is charging $75 per million tokens (~750,000 words) fed into the model and $150 per million tokens generated by the model. That's 30x the input cost and 15x the output cost of OpenAI's workhorse GPT-4o model.

"GPT 4.5 pricing is unhinged. If this doesn't have enormous models smell, I will be disappointed" pic.twitter.com/1kK5LPN9GH -- Casper Hansen (@casper_hansen_) February 27, 2025

"We've been growing a lot and are out of GPUs," Altman wrote. "We will add tens of thousands of GPUs next week and roll it out to the Plus tier then [...] This isn't how we want to operate, but it's hard to perfectly predict growth surges that lead to GPU shortages." Altman has previously said that a lack of computing capacity is delaying the company's products. OpenAI hopes to combat this in the coming years by developing its own AI chips, and by building a massive network of datacenters.
[12]
OpenAI says it's 'out of GPUs' as supply struggles continue across AI and gaming
OpenAI CEO Sam Altman announced that the company is "out of GPUs," delaying the broader rollout of its newest model, GPT-4.5. In a post on X, Altman described the model as "giant" and "expensive," explaining that OpenAI needs tens of thousands more GPUs to support additional users. GPT-4.5 will roll out first to ChatGPT Pro subscribers, followed by ChatGPT Plus users next week. The high demand for GPUs is largely driven by GPT-4.5's sheer size and computational cost. OpenAI is charging $75 per million tokens for input and $150 per million tokens for output - 30x and 15x higher, respectively, than its previous GPT-4o model. This massive increase in compute requirements has put additional strain on OpenAI's hardware infrastructure. "This isn't how we want to operate, but it's hard to perfectly predict growth surges that lead to GPU shortages," Altman said. He previously cited limited computing capacity as a bottleneck for OpenAI's product releases, a challenge the company is trying to address by developing its own AI chips and expanding its data center network. While OpenAI's immediate bottleneck is securing enough NVIDIA H100 GPUs, the broader GPU market is also feeling the squeeze. Manufacturing issues affecting NVIDIA's RTX 5000 series have led to significant supply shortages and price hikes. At the same time, tariffs and potential export restrictions are adding more uncertainty, further straining the already stretched GPU supply chain. For now, OpenAI remains dependent on securing more GPUs before it can expand access to GPT-4.5, tens of thousands of which are expected to arrive next week.
[13]
OpenAI CEO Sam Altman says his company is 'out of GPUs' to which I reply 'welcome to the party, pal'
Turns out everyone's hunting for some fresh silicon right now. OpenAI CEO Sam Altman has taken to X to voice a common complaint in TYOL 2025: His company is "out of GPUs." As someone who regularly trawls the listings looking for the best graphics card deals, I sympathise. They're goshdarn hard to get hold of these days, aren't they? Of course, Altman is referring to chonky enterprise-grade GPUs like those used in the Nvidia DGX B200 and DGX H200 AI platforms -- the latter of which OpenAI was the first to take delivery of last year. In fact, it was hand delivered by Nvidia CEO Jen-Hsun Huang, so getting hold of more of them seems like it may be a mere phone call away. As Altman explains, his company will add "tens of thousands" of GPUs next week to its portfolio in order to roll out its new model, GPT-4.5, to its Plus tier service (via TechCrunch). In the meantime, however, a shortage of GPUs has meant that it hasn't happened at quite the speed OpenAI had hoped. "It is a giant, expensive model. We really wanted to launch it to plus and pro at the same time, but we've been growing a lot and are out of GPUs" says Altman. "We will add tens of thousands of GPUs next week and roll it out to the plus tier then... this isn't how we want to operate, but it's hard to perfectly predict growth surges that lead to GPU shortages." I bet. In fact, a difficulty in anticipating demand seems to be something OpenAI and Nvidia might share, as it's very difficult to get hold of GPUs in general right now thanks to supply shortages, a shut down of production of older cards, and high demand. Having hardware of its own on tap seems to be in OpenAI's future, as the company is reportedly planning to build its own AI chips, likely to break its reliance on high-level silicon from Nvidia. Still, the good news for those of you waiting to try out GPT-4.5 is that Altman seems immensely proud of it: "It is the first model that feels like talking to a thoughtful person to me. 
I have had several moments where I've sat back in my chair and been astonished at getting actually good advice from an AI." No one would be more astonished than me to receive good advice from an AI. Anyway, while Altman says that GPT-4.5 won't be crushing any benchmarks, it's ready to go and he's very excited for people to try it. On the off chance that Nvidia misreads the message and sends a batch of RTX 5070 Ti GPUs by mistake, any chance you could chuck a few this way, Sam?
[14]
OpenAI is still gobbling up GPUs by the thousands for ChatGPT
The newest statements from OpenAI's CEO indicate that AI is still hogging production capacity for new graphics cards. You can't find a new Nvidia graphics card for love nor money. Between pent-up demand from PC gamers and Nvidia selling every GPU it can to the bubbling AI industry, new models are going out of stock in a matter of minutes -- and it looks like the situation isn't going to improve any time soon, as the biggest AI company around wants even more hardware. OpenAI CEO Sam Altman took to the social network formerly known as Twitter (spotted by Tom's Hardware) to say that OpenAI's ChatGPT version 4.5 is ready to go... but desperately in need of even more hardware. The "giant, expensive model" requires even more data center capacity than older versions, and to launch with enough access for paid users, the company is gobbling up GPUs at an even faster rate. The CEO claims that OpenAI is adding "tens of thousands of GPUs next week" for the planned rollout, with hundreds of thousands following soon after. He expects the system to still be taxed to maximum capacity. Now, it's not as if OpenAI, Microsoft, Meta, et al. are shopping at Best Buy, yanking retail graphics cards out of the hands of crying PC gamers. (No, it's scalpers doing that.) It's certainly possible to build AI data center hardware out of consumer-grade electronics, but these companies are generally placing industrial orders directly with Nvidia, AMD, and others. In other words, this isn't an exact one-to-one comparison with the GPU shortage surrounding the cryptocurrency boom a few years ago. That said, there's a finite capacity for chip production across the industry. Nvidia, AMD, and their fabrication partners can only make so many chips, especially for the newest and most complex designs -- and a company like Nvidia would prefer to sell 10,000 new data center GPUs to OpenAI than 10,000 gaming GPUs to Best Buy (or a partner like Asus) because the return will be much faster and more reliable.
Sorry to be the bearer of bad news (again and again), but it looks like we're in a perfect storm of terrible conditions for any PC gamer who wants a graphics card upgrade. Maybe, just maybe, AMD will prioritize consumers with its new Radeon cards... but I wouldn't hold my breath.
OpenAI releases GPT-4.5, its latest AI model, with limited availability due to GPU shortages. The update brings incremental improvements but raises questions about the company's focus on AGI versus practical applications.
OpenAI has launched its latest AI model, GPT-4.5, but the release has been marred by GPU shortages and limited availability. Initially, the model is only accessible to ChatGPT Pro subscribers paying $200 per month and developers on paid API tiers [1][2].
The company's CEO, Sam Altman, revealed that OpenAI ran out of GPUs ahead of the launch, forcing them to restrict access. This shortage has led to slower response times for ChatGPT users across various tiers [1][4]. To address this issue, OpenAI plans to install "tens of thousands" of GPUs next week, with "hundreds of thousands" more on order [4].
The high computational requirements of GPT-4.5 have resulted in increased costs for API users. OpenAI is now charging $75 per million tokens for input and $150 per million tokens for output, which is significantly higher than the rates for their previous models [5].
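The pricing arithmetic above is easy to sanity-check. A minimal sketch, using only the per-million-token rates and the 30x/15x multipliers reported here (the example token counts are made up for illustration):

```python
# GPT-4.5 API rates reported in the article (USD per 1M tokens).
GPT45_INPUT_RATE = 75.00    # prompt tokens fed into the model
GPT45_OUTPUT_RATE = 150.00  # tokens generated by the model

# GPT-4o rates implied by the article's "30x input / 15x output" comparison.
GPT4O_INPUT_RATE = GPT45_INPUT_RATE / 30    # $2.50 per 1M tokens
GPT4O_OUTPUT_RATE = GPT45_OUTPUT_RATE / 15  # $10.00 per 1M tokens

def request_cost(input_tokens: int, output_tokens: int,
                 in_rate: float, out_rate: float) -> float:
    """Cost in USD for one request, given per-million-token rates."""
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Hypothetical workload: a 2,000-token prompt and an 800-token reply.
gpt45 = request_cost(2_000, 800, GPT45_INPUT_RATE, GPT45_OUTPUT_RATE)
gpt4o = request_cost(2_000, 800, GPT4O_INPUT_RATE, GPT4O_OUTPUT_RATE)
print(f"GPT-4.5: ${gpt45:.4f}  GPT-4o: ${gpt4o:.4f}")
# GPT-4.5 costs $0.27 for this request versus about $0.013 on GPT-4o.
```

At this gap, a workload of a million such requests would jump from roughly $13,000 on GPT-4o to $270,000 on GPT-4.5, which is why API users are balking.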
While GPT-4.5 is OpenAI's largest and most computationally intensive model to date, its improvements appear to be incremental. The company claims that the new model offers greater emotional intelligence, a warmer and more natural conversational tone, and fewer hallucinations.
However, some observers have noted that the benefits of using GPT-4.5 are not immediately clear when compared to benchmark tests from competitors' models [3].
The focus on emotional intelligence and anthropomorphic aspects in GPT-4.5 suggests a shift in OpenAI's priorities. Rather than emphasizing practical applications, the company seems to be pursuing artificial general intelligence (AGI) more aggressively [3]. This approach has led to some criticism, with observers questioning whether this aligns with the company's short-term goal of monetizing ChatGPT [3].
The introduction of GPT-4.5 has further complicated OpenAI's model selection process. Users now face a daunting array of options, with some accounts having access to up to nine different models [2][3]. This complexity has led to calls for simplification, with Altman acknowledging the need to streamline the user experience [2].
OpenAI is working on expanding its infrastructure to support the growing demand for its AI models. The company is exploring chip development to reduce its reliance on Nvidia, with plans to send a custom chip design to TSMC for validation in the coming months [4].
Additionally, OpenAI has committed to a joint investment of $500 billion in AI infrastructure over the next four years as part of Project Stargate [1]. This massive investment underscores the company's long-term commitment to advancing AI technology.
The release of GPT-4.5 has sparked discussions about OpenAI's strategy and the future of AI development. While some users are excited about the potential improvements in emotional intelligence and reduced hallucinations, others question the practical value of these advancements for everyday tasks [2][3][5].
As OpenAI continues to push the boundaries of AI capabilities, the industry watches closely to see how the company will balance its AGI ambitions with the need for practical, user-friendly applications in the rapidly evolving field of artificial intelligence.