13 Sources
[1]
OpenAI shows clear compute and revenue scaling to soothe investor worries as company preps for IPO -- expenditure continues to outweigh income as 10GW buildout continues
The financial certainty of the AI market is inversely proportional to the confidence of a bot offering a wrong answer. This week, a blog post went up on the OpenAI website, broadly discussing the company's financial outlook. The write-up, authored by Sarah Friar, OpenAI's Chief Financial Officer, claims that there's a direct relationship between available computing power and revenue generation, and thus that adding more AI accelerators equals more money coming into the firm's coffers. Friar's figures say that OpenAI's computing power roughly tripled every year between 2023 and 2025, from 0.2 to 0.6 to 1.9 GW. Meanwhile, the firm's revenue purportedly followed the same pattern, at $2 billion, $6 billion, and over $20 billion at the end of last year. Not only does Friar conclude that this growth pattern has never been witnessed before at this scale, but she also goes on to state that if OpenAI had had even more computing power available, it would have "led to faster customer adoption and monetization," a strong claim in the face of the company's current finances. In the famous words of Jensen Huang: "the more you buy, the more you save."

Compute becomes currency

Friar goes on to make the point that in this scenario, compute ultimately transforms from a "fixed constraint" into a portfolio, or de facto currency. That assessment aligns with similar assertions from other executives heavily invested in the field, such as Nvidia's Jensen Huang, former Stability AI CEO Emad Mostaque, Microsoft's Satya Nadella, and, of course, Sam Altman himself. The missive proceeds to make predictions, remarking on the evolution of AI usage patterns over time and describing how agent-based workflows "move from novelty to habit." For Friar, this shows a pattern of predictability that "strengthens the economics of the platform and supports long-term investment." The firm also expects that "new economic models will emerge" as AI becomes entrenched in research-heavy fields.
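Friar's compute-equals-revenue argument can be sanity-checked with quick arithmetic: at the figures she cites, revenue per gigawatt of installed compute stays roughly flat across all three years, which is exactly the pattern her claim relies on. A minimal sketch (the GW and revenue numbers are the article's; the per-GW metric is our own illustration):

```python
# Revenue per gigawatt of compute, using the figures Friar cites.
compute_gw = [0.2, 0.6, 1.9]   # 2023, 2024, 2025
revenue_bn = [2, 6, 20]        # $ billions, same years

for year, gw, rev in zip((2023, 2024, 2025), compute_gw, revenue_bn):
    print(year, round(rev / gw, 1))
# 2023 and 2024 work out to $10B of revenue per GW; 2025 to roughly $10.5B per GW
```

Whether that ratio would actually hold at 10 GW of capacity is, of course, the trillion-dollar question.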
However, over half of CEOs currently report seeing little benefit from AI deployed in the workforce. OpenAI's words may sound reassuring at face value, but there's no shortage of virtual ink spilled on how the AI market is, at least for now, something of a circular economy. One company's investment feeds the next one in a cycle that tends to have comparatively little external revenue entering the loop. Many analysts single out OpenAI as a particularly high spender, as its current burn rate runs to billions per month. The company's blog post, on the contrary, makes it sound like OpenAI is actually playing it safe. In its own words, "capital is committed in tranches against real demand signals," letting the company "lean forward when growth is there without locking in more of the future than the market has earned." Those are strong statements, but ripe for analysis.

OpenAI and expenditure

According to investor data shared with the Financial Times last November, OpenAI's estimated 2025 expenditure was $22 billion, or an average of $1.83 billion every 30 days. That wouldn't be of concern if revenue were in the same ballpark, except that sales for the period were reportedly $9 billion, meaning OpenAI actually lost roughly $1.44 for every incoming dollar. Altman's planned IPO cannot come soon enough, then. A cynical reader could argue that OpenAI's post exists to assuage investor fears: a form of damage control, or at the very least, a reframing of the situation as a controlled burn for future crops rather than a raging wildfire. The current AI economy is so tightly woven together that a loss of confidence in its largest player could bring down the house altogether. Profitability for the firm is expected in 2030, but many analysts believe the money well will run dry well before that happens.
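As a back-of-the-envelope check on the Financial Times figures (a sketch assuming the reported $22 billion of spend against $9 billion of revenue; both numbers are from the article):

```python
# Burn-rate arithmetic from the FT-reported 2025 figures.
expenditure_bn = 22.0
revenue_bn = 9.0

monthly_burn_bn = expenditure_bn / 12            # average spend per ~30 days
loss_bn = expenditure_bn - revenue_bn            # annual shortfall
loss_per_revenue_dollar = loss_bn / revenue_bn   # loss per incoming dollar

print(round(monthly_burn_bn, 2))          # 1.83
print(round(loss_per_revenue_dollar, 2))  # 1.44
```

In other words, at those reported figures the company spends about $2.44 for every dollar it takes in.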
Not only do expenditures still far outweigh income, but OpenAI's consumer market share dropped from around 90% in 2024 to 60-70% in 2025, a chunk seized by Google Gemini and Perplexity. It's worth keeping in mind that none of the pure-play AI firms are known to be profitable yet, and that, unlike traditional companies, they have no other ventures to fall back on. Should one go bust, it will usually have only data centers and IP for creditors to repossess. In November, JP Morgan called out climbing spend on the ongoing AI buildout, which Jensen Huang says might take up to 50 years. Elon Musk's xAI is burning close to $1 billion a month, with only around $500 million in 2025 income to show for it. Anthropic (makers of Claude) seems to be playing the game more conservatively, expecting profitability in 2028. This displays a stark difference in financial approaches, with OpenAI and xAI clearly betting the proverbial server farm on future gains, while Anthropic moves at a comparatively steady pace.

To the AI firms' credit, though, nothing quite like this has happened in history, or at least not in this manner. Estimates put the amount of money already earmarked for AI datacenters through 2030 at $7 trillion, a figure that's difficult to picture but can be put into words: enough to run the entire U.S. government for one full year, the combined valuation of Microsoft and Amazon, or 1.5 times the entire GDP of Germany. With the number of players in the game, it's hard to argue that absolutely everyone is wrong about the viability of their investments. While it can be argued that OpenAI might be flying too close to the sun, there's also no precedent for such explosive growth; what appear to be extremely long-term bets now may well prove to be the highest-risk, highest-reward scenario in the tech industry.
[2]
OpenAI is still figuring out how to make money
This week, OpenAI CFO Sarah Friar took to the internet to make a bold pitch for the company's future, which she claims is bright, despite what the current numbers say. If you buy her argument, you must accept some things as true, despite the poorly connected thread of logic that appears to hold them together. In what we can only assume is a pitch to soften up the market for more investment and possibly an IPO, Friar argues that one of the things people should believe about OpenAI is that the more money it spends, the more money it makes.

Running through the AI poster child's achievements over the last couple of years, Friar said that the business's compute grew 9.5x from 2023 to 2025, from 0.2 GW to around 1.9 GW. Meanwhile, "revenue followed the same curve" by growing 10x in the same period, from $2 billion to more than $20 billion in 2025. "We firmly believe that more compute in these periods would have led to faster customer adoption and monetization," she said. The more you spend, the more you make. It can be true, of course, but it doesn't always follow.

What OpenAI needs is for people to go from using its AI tools the way they use them now to using them more in the future. Rest assured that is on the to-do list for 2026. "The priority is closing the gap between what AI now makes possible and how people, companies, and countries are using it day to day. The opportunity is large and immediate, especially in health, science, and enterprise, where better intelligence translates directly into better outcomes," she said. The American aphorism "if you're so smart, why ain't you rich?" springs to mind. Nonetheless, Friar sees monetization naturally following the kind of investment in computing power necessary to stress the electricity resources of the world's largest economy. "As intelligence moves into scientific research, drug discovery, energy systems, and financial modeling, new economic models will emerge.
Licensing, IP-based agreements, and outcome-based pricing will share in the value created. That is how the internet evolved. Intelligence will follow the same path," she asserts. So, we have the old "something will emerge" argument, which is guaranteed to win over any bank manager who is asked for an unsecured loan for magic beans. Speaking of banks, HSBC last year glanced at OpenAI's plans to balance the books and was not entirely convinced that everything added up. It predicted OpenAI's ChatGPT consumer products would attract 3 billion regular users by 2030, up from 800 million last year, increase subscription rates (10 percent versus 8 percent), and increase corporate demand for APIs and licensing, plus a larger share of digital advertising revenue for AI companies. Nonetheless, OpenAI "would need $207 billion of new financing by 2030," the bank said. Separate analysis underscores how much would need to change to make the LLM builder viable: right now, 95 percent of the 800 million people using ChatGPT, which generates roughly 70 percent of the company's recurring revenue, aren't paying. Out of the mix of paying customers and potential business models, OpenAI is clearly hoping something will emerge. So do we, and that is not just El Reg's famous sense of goodwill and generosity toward the tech industry talking. Nvidia, OpenAI, Microsoft, Oracle, AMD, CoreWeave, xAI, and a few others are all signed up to mutually dependent deals, some of which involve exchanges of stock. How this might unravel is yet unclear. Some commentators have noted the importance to the world as a whole. Financial Times contributing editor Ruchir Sharma pointed out last year that AI accounted for 40 percent of US GDP growth and 80 percent of the gains in US stocks in 2025. More recently, the IMF predicted US growth would strongly outpace the rest of the G7 this year, forecasting an expansion of 2.4 percent in 2026 and 2 percent in 2027. 
Tech investment had surged to its highest share of US economic output since 2001, helping drive growth, the IMF found. However, Pierre-Olivier Gourinchas, IMF chief economist, said there were "reasons to be somewhat concerned" about the "risk of a market correction, if expectations about AI gains in productivity and profitability are not realized." Tiptoeing the fine line between wishful thinking and begging, Friar's missive reminds us how important OpenAI's plans might be to us all. Six years ago, Sundar Pichai, CEO of Google's parent company, Alphabet, told the world that AI would be as profound, in terms of human evolution, as the harnessing of fire. One thing is for sure - the industry certainly has a big enough flame to make sure the world's economy burns down. ®
[3]
This year could be 'make or break' for OpenAI as investors turn their eyes to profit
"The key question is whether enterprise monetization, pricing power, and inference cost declines can outpace rising compute intensity," a PitchBook analyst told CNBC. It's set to be a critical year for privately held AI companies -- especially OpenAI -- as investors turn their attention to returns, analysts say. It will be "make or break" for companies whose sole business is selling their AI models, Deutsche Bank wrote in a note on Jan. 20. "OpenAI is particularly extended and may be most at risk as it seems not yet to have found a workable business model to cover its reported cash burn of $9bn last year and likely $17bn this year," Adrian Cox and Stefan Abrudan, analysts at the investment bank, said. They say that of an estimated 800 million weekly users, "only a fraction" are paying. At the same time, the AI bellwether has committed to data center projects worth an eye-watering $1.4 trillion. OpenAI's revenue was more than $20 billion last year, up from $6 billion in 2024, according to a blog post by its financial chief Sarah Friar. It is widely expected that the company will go public late this year, or early 2027. The company has inked deals with Nvidia and Microsoft, among others, and raised billions of dollars in the process, giving it a possible valuation of $500 billion. It secured $22.5 billion from SoftBank at the end of last year, on top of $40 billion already committed by the investment company. While it partners with many hyperscalers, OpenAI's moat is "relatively shallow" compared with larger competitors whose AI playbooks are subsidized by sound business fundamentals elsewhere, Cox and Abrudan wrote, adding "Its path to success appears to be looking narrower and narrower." "The pressure will only increase as it gets nearer to an IPO, mooted for early 2027 and forecast to potentially top $1trn," they said. In a blow to OpenAI, on Jan. 12, Apple opted to power its AI products with Google's technology. On Jan.
16, OpenAI announced it would soon test advertising in ChatGPT -- a move founder Sam Altman said in 2024 was "a last resort" as a business model. It represents a new phase for foundation model developers, according to Dimitri Zabelin, a senior investment research analyst covering AI and cybersecurity at PitchBook, as "investor scrutiny shifts from scale to returns, or at minimum to credible improvement in unit economics." "The key question is whether enterprise monetization, pricing power, and inference cost declines can outpace rising compute intensity," he said, but added that "OpenAI's access to strategic compute and capital partners remains unusually deep" due to its multi-year capacity agreements that signal support for its scaling roadmap. Competitor Anthropic, which was founded by a group of former OpenAI employees, is also rumored to be targeting a public listing -- potentially as soon as this year. The companies benefit from regulatory tailwinds, Zabelin said, "especially as they continue to embed themselves in government operations more domestically and overseas through sovereign AI initiatives." Market watchers expect the U.S. Federal Reserve to take a more dovish stance on rates, though concerns of interference have wobbled the market, which could accelerate generative AI funding further despite fears of a bubble, according to S&P Global. Deutsche Bank analysts are unconvinced, however. "It will prove almost impossible for smaller independent companies to afford the accelerating compute costs for models," they said. "It cannot be ruled out that Perplexity and others end up in the arms of the hyperscalers by the end of the year. Anthropic may be the exception, with a slower cash burn than OpenAI, a product that is particularly popular with coders and -- paying -- enterprises, and a more dynamic pricing model."
[4]
Sam Altman's make-or-break year: can the OpenAI CEO cash in his bet on the future?
Altman's campaigning for his company coincides with its use of enormous present resources to serve an imagined future

Sam Altman has claimed over the years that the advancement of AI could solve climate change, cure cancer, create a benevolent superintelligence beyond human comprehension, provide a tutor for every student, take over nearly half of the tasks in the economy and create what he calls "universal extreme wealth". In order to bring about his utopian future, Altman is demanding enormous resources from the present. As CEO of OpenAI, the world's most valuable privately owned company, he has in recent months announced plans for $1tn of investment into datacenters and struck multibillion-dollar deals with several chipmakers. If completed, the datacenters are expected to use more power than entire European nations. OpenAI is pushing an aggressive expansion - encroaching on industries like e-commerce, healthcare and entertainment - while increasingly integrating its products into government, universities, and the US military and making a play to turn ChatGPT into the new default homepage for millions. Altman's long bid to make himself the power broker of a new, AI-powered society has started to look closer to reality as a large portion of the US economy now rides on the success of his vision. There are reports the company is preparing to go public towards the end of 2026 with up to a $1tn valuation in one of the biggest initial public offerings in history.

OpenAI and Altman's burgeoning empire is not going unchallenged. Google's rival Gemini AI chatbot is advancing fast enough that Altman last month issued a company-wide "code red" to refocus on ChatGPT. Many analysts have become concerned that OpenAI may be becoming too big to fail as it expands its infrastructure and computing spending, while circular funding deals with partners have not assuaged those fears.
Then there is the fact that OpenAI is burning through tens of billions of dollars, with even its own optimistic forecasts showing the startup is years from becoming profitable. In an effort to maintain investor confidence in this profit-free spending spree and avoid regulatory crackdowns, Altman has intensified a charm offensive over the past year, deepened his political connections and promised even more from OpenAI - with the company claiming that AI will soon be a utility "on par with electricity, clean water, or food". The CEO has also expanded his personal portfolio, backing new energy and neuroscience technologies just as other tech moguls have. This year will test whether he can maintain the relentless balancing act.

Altman and OpenAI get political

As OpenAI sought to shift its main business to a for-profit corporation in 2025 and faced down attempts from states across the US to pass AI regulation, it greatly expanded its efforts to influence lawmakers. The company spent $2.99m on lobbying efforts in 2025, according to Lobbying Disclosure Act filings, with its spending ramping up in the latter half of the year. That number is up from $1.76m in lobbying the previous year and just $260,000 in 2023. The push for friendly policies continues after Trump handed the industry a victory with an executive order that preempts and precludes any state-level regulation of AI. Other AI companies have followed suit, with Anthropic spending a little over $3.1m during the same period. OpenAI also hired consultants and lobbyists from across the political spectrum, including those who have worked for California governor Gavin Newsom and former New York City mayor Bill de Blasio. They have also brought on the former deputy chief of staff for Republican senator Lindsey Graham and a former staffer from the Senate foreign relations committee. The company's most effective campaigner does not hail from K Street, however.
It has been Altman himself, who has joined other tech moguls in ingratiating themselves with Donald Trump and allying themselves with the current administration. He has dined with Trump at Mar-a-Lago and appeared at the White House for an event touting the administration's ties with the tech industry. In September, Altman was one of several tech CEOs who joined Trump and King Charles for a state dinner in the UK.

'If we get it wrong, that's on us'

As investors increasingly warn the AI industry is a bubble, with excessive investment in tech infrastructure for unclear payoff, Altman has conceded that some parts of the industry "are kind of bubbly right now". Yet he has continued to emphasize that OpenAI remains on steady ground and that its $1.4tn commitment to build out datacenters and computational power is necessary. His message is part of a longstanding emphasis on scaling computational power and pursuing growth that has been a fixture of OpenAI since its early breakthroughs in the mid-2010s. It is also in keeping with a sentiment that Altman has expressed for years - that technological progress is inevitable despite the harms that it may create. "We should understand that as a consequence of technology and an economy of ideas, the gap between the rich and the poor will likely increase from its already high-seeming levels," Altman wrote in 2013, in a blog post that thanked his friend Peter Thiel for ideological input. "There is good and bad to this, but we should be careful not to legislate against it, which will hurt growth." More than a decade later, Altman's views on how technological change may affect society seem consistent. "There will be very hard parts like whole classes of jobs going away," Altman wrote in a July blog post. "But on the other hand the world will be getting so much richer so quickly that we'll be able to seriously entertain new policy ideas we never could before."
In a lengthy post on X in November, Altman defended OpenAI against accusations it was becoming "too big to fail" or seeking government backstops for its investments. Altman went on the defensive after the company's CFO Sarah Friar spooked analysts with a suggestion that OpenAI could seek government aid, which she then hastily reversed amid widespread pushback. Altman predicted that OpenAI would generate hundreds of billions in revenue by 2030, potentially branching into consumer devices and robotics amid "massive demand" for AI. He also downplayed the fallout if the company were to fail, which analysts fear would have a devastating effect on the economy. "That's how capitalism works, and the ecosystem and economy would be fine," Altman posted. "We plan to be a wildly successful company, but if we get it wrong, that's on us."

Turning toward Trump

Altman's appearances and apparent friendliness are a stark turnaround from his earlier view of Trump, which he expressed in a mid-2016 blog post that likened Trump's rise to Hitler's. "To anyone familiar with the history of Germany in the 1930s, it's chilling to watch Trump in action," Altman wrote in 2016 prior to Trump's election win. "He is not merely irresponsible. He is irresponsible in the way dictators are." But as Trump's second term began, Altman posted a message lauding the president and recanting his previous opposition. "Watching @potus more carefully recently has really changed my perspective on him (i wish i had done more of my own thinking and definitely fell in the npc trap)," Altman posted on X days after Trump's inauguration. "I'm not going to agree with him on everything, but i think he will be incredible for the country in many ways!" Altman and OpenAI have found a warm welcome in the Trump administration.
The White House's attempts to show off economic growth and win a technology race with China have led it to embrace OpenAI, while the Department of Defense awarded a $200m contract to the company in June for it to develop AI solutions for "warfighting and enterprise domains". "The AI future is not going to be won by hand-wringing about safety," JD Vance said in a speech at a summit on artificial intelligence in February. The White House has not lent a friendly ear to lawmakers calling for the regulation of AI.

[Photo: OpenAI CEO Sam Altman, accompanied by US President Donald Trump, speaks during a news conference in the Roosevelt Room of the White House on 21 January 2025 in Washington, DC. Photograph: Andrew Harnik/Getty Images]

Altman's shift on Trump has come not only as tech has lurched to the political right but also as OpenAI faces growing pushback, including lawsuits from families alleging that its ChatGPT service encouraged users to commit suicide, numerous copyright infringement lawsuits and concern over data centers' rising energy costs. While Altman has long made broad calls for some form of regulation on AI, he has eased off calls for oversight over the past year and said that requiring government sign-off to release AI models would be "disastrous". In friendlier environments, the CEO has framed mitigating AI's harms as a burden for regulators and users. Shortly after the launch of OpenAI's video app Sora, which drew sizable backlash over generating deepfakes of historical figures such as Martin Luther King Jr and its potential for creating misinformation at scale, Altman seemed unconcerned with AI video's implications. "Very soon the world is going to have to contend with incredible video models that can deepfake anyone or kind of show anything you want. And that will mostly be great," Altman said on the venture capital firm Andreessen Horowitz's podcast in October. "There will be some adjustment that society has to go through."
He did not speak much about regulation, which he said "probably has a lot of downside". "I expect some really bad stuff to happen because of the technology, which also has happened with previous technologies," Altman said. "All the way back to fire," Ben Horowitz, the firm's co-founder, replied.

Nuclear energy, longevity pills, brain scans

While Altman has been traveling the world as a frontman for OpenAI and generative AI writ large, he has also been building out his own portfolio of investments, which offers a sense of where he thinks the world is heading. Altman is a big proponent of nuclear technology and has called past bans on nuclear energy over safety concerns "incredibly dumb". He has financially backed the nuclear energy startup Helion, which in July announced it had begun construction on a nuclear fusion power plant intended to feed Microsoft's enormous data centers - despite the fact that the fusion technology that will presumably fuel the plant is still unproven. Altman also served as chair of the board for nuclear energy startup Oklo until April of last year, with the firm's COO announcing that his departure meant the company could potentially partner with OpenAI in the future. Biotech, a common interest among Silicon Valley elites, has also attracted Altman. One of his investments is in Retro Biosciences, a longevity startup that he backed with $180m that is aiming to launch clinical trials this year on a pill intended to have antiageing effects on the brain. He has co-founded Merge Labs, a rival to Musk's Neuralink brain startup, which raised $252m in funding this month and announced it would collaborate with OpenAI. "I think neural interfaces are cool ideas to explore. I would like to be able to think something and have ChatGPT respond to it," Altman told the Verge last year.
Meanwhile, one of Altman's longstanding ventures, Tools for Humanity, has been on a years-long effort to scan a billion people's eyeballs using a biometric collection device called the "orb". The data would then be used to verify human identity online. It has so far scanned roughly 17.5 million people, according to Business Insider. Altman's other interests have tended to fall into some of the tech elite's usual preoccupations, including at one point saying he was dabbling in doomsday prepping, as well as meditating with Silicon Valley's favorite Buddhist monk and mindfulness author Jack Kornfield. In 2023, Kornfield said during an interview alongside Altman that the tech mogul once asked him how it would be possible to know when AI became conscious. Kornfield suggested the two "lay down a mat between some servers and take a good dose of psilocybin and see if it answers us".
[5]
Asset Manager Warns That OpenAI Is Likely Headed for Financial Disaster
"I've watched companies implode for decades. This one has all the warning signs." Just over three years ago, OpenAI opened the floodgates with the launch of ChatGPT. The frantic industry-wide race that followed has resulted in soaring valuations for AI companies, tens of billions of dollars invested in data center infrastructure -- and plenty of skepticism as well. For one, experts have pointed out that OpenAI's business fundamentals are inherently different from those of its competitors like Google. These legacy businesses can tap existing revenue sources to bankroll their major AI capital expenditures. The Sam Altman-led OpenAI, however, has raised record amounts of cash and has vowed to spend well over $1 trillion before the end of the decade without the advantage of an existing business that generates ongoing revenue. (The company's recent announcement that it's stuffing ads into ChatGPT is likely a bid to shift that reality.) The gap between the AI industry's promises of a human-level AI-driven future and reality, in other words, has never been wider, much like the enormous gulf between AI company valuations and their lagging revenues. To many onlookers, that kind of hubris could end in disaster. As former Fidelity manager George Noble, who has spent decades in asset management, notes in a lengthy tweet, the company may already be "FALLING APART IN REAL TIME." "I've watched companies implode for decades," he wrote. "This one has all the warning signs." Besides stalling subscriber growth, Noble pointed out that OpenAI is reportedly losing a staggering $12 billion per quarter, as well as "burning $15 million per day on [text-to-video generator app] Sora alone." Noble also cast doubt on the AI industry's promises of scaling up operations to meet demand, an immensely costly enterprise that's bound to become even more expensive as AI models demand even more power.
Whether their utility will increase at the same rate remains a major point of contention, with some warning that we may have hit a point of diminishing returns in which each new iteration of the same AI model provides smaller and smaller benefits. "Here's the big math problem nobody wants to discuss," Noble said. "It's going to cost 5x the energy and money to make these models 2x better." "The low-hanging fruit is gone," he added. "Every incremental improvement now requires exponentially more compute, more data centers, more power." As a result, the former asset management boss predicted that the "AI hype cycle is peaking" and that "diminishing returns are becoming impossible to hide" while "competitors are catching up." Noble advised investors to stay away from OpenAI, arguing, "I'm not touching OpenAI-adjacent plays at these valuations," since the "risk profile is astronomical." In a separate tweet, Noble compared Altman losing his cool during a podcast appearance last year when asked about the company's eyebrow-raising financials to Enron's former CEO Jeffrey Skilling, who called an analyst an "asshole" during a now-infamous 2001 conference call after being questioned about not releasing a balance sheet. Skilling was at the epicenter of the Enron scandal and was eventually found guilty of conspiracy, insider trading, and securities fraud following the company's collapse. Noble's comments come a week after Sebastian Mallaby, senior fellow at the nonpartisan think tank Council on Foreign Relations, predicted in an essay for the New York Times that OpenAI could run out of money within the "next 18 months." "OpenAI failure wouldn't be an indictment of AI. It would be merely the end of the most hype-driven builder of it," he wrote. Noble, on the other hand, has clearly taken a far more bearish position, arguing that Altman declaring a "code red" late last year was a blinking warning sign of a tough road ahead.
As the Wall Street Journal reported at the time, the CEO urged staffers to focus on improving ChatGPT, even at the cost of delaying other projects, as Google continued to play a successful game of catch-up. "OpenAI is a cash incinerator," he added. "The product is losses for investors."
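Noble's "5x the energy and money to make these models 2x better" claim, taken at face value, compounds quickly: cost per unit of capability rises 2.5x with every model generation. A minimal sketch of that compounding (the per-generation ratio is Noble's; the three-generation horizon is an arbitrary illustration of ours):

```python
# Compounding Noble's claimed scaling ratio: each model generation
# costs 5x more and delivers only 2x more capability.
cost, capability = 1.0, 1.0
for generation in (1, 2, 3):
    cost *= 5
    capability *= 2
    print(generation, cost / capability)
# cost per unit of capability: 2.5, 6.25, 15.625 -> up 15.6x after 3 generations
```

If the ratio held, each leap would have to unlock proportionally larger markets just to keep unit economics flat, which is the crux of the bear case.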
[6]
OpenAI's internal documents predict $14 billion loss in 2026 according to report
Internal OpenAI documents predict the AI specialist is set to bleed fully $14 billion in losses for 2026, according to a new report. It's also claimed that OpenAI will continue to make huge losses totalling $44 billion until 2029, when it won't just turn a profit, but will by then be generating Nvidia-style revenues. A new report from The Information claims to have seen internal OpenAI documents setting out various financial performance projections. That $14 billion loss for 2026 is said to be roughly three times worse than early estimates for 2025. Over the period from 2023 through the end of 2028, the report claims OpenAI expects to lose $44 billion, before turning a profit of $14 billion in 2029. Somewhat incongruously, The Information also says that OpenAI's cash burn is not as bad as previously thought, with the company tearing through a mere $340 million in the first half of the most recent financial year. How that squares with overall losses counted in multiple billions isn't explained. The report further claims that OpenAI plans to spend an astonishing $200 billion through the end of the decade, 60% to 80% of which will be spent on training and running AI models. So, how does OpenAI eventually make money? The report says internal forecasts predict the for-profit part of OpenAI will hit $100 billion in annual revenues in 2029, up from an estimated $4 billion in 2025. At this point, the numbers are getting silly. So, let's put that $100 billion in revenue into context. In 2025, Nvidia had revenues of around $130 billion as a consequence of holding a near-total monopoly over perhaps the largest tech hardware boom in human history. And OpenAI is expecting to more or less match that in about four years. Uh huh.
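To gauge just how aggressive that forecast is, the implied compound annual growth rate from $4 billion in 2025 to $100 billion in 2029 can be computed directly (both endpoint figures are from the report; the CAGR is our own derivation):

```python
# Implied compound annual growth rate of the reported revenue forecast.
start_bn, end_bn, years = 4.0, 100.0, 4   # 2025 -> 2029

cagr = (end_bn / start_bn) ** (1 / years) - 1
print(round(cagr * 100))  # 124 -> revenue would have to more than double every year
```

Sustaining roughly 124% annual growth for four straight years at that scale would be, to put it mildly, unprecedented.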
The revenue split for that $100 billion is said to be just over 50% from ChatGPT, roughly 20% from sales of AI models to developers through APIs and another 20% or so from "other products", which include video generation, search and mooted new services including AI research assistants. It's also thought that the cost of inference, which is running AI models as opposed to training them, is coming down fast. Intriguingly, OpenAI expects to spend less on acquiring training data, too. That's forecast to cost $500 million this year, but taper down to $200 million annually towards the end of the decade. Exactly what that says about how OpenAI trains its new models and what data it uses isn't clear. But it could suggest a move to more recursive training on AI-generated data. Anywho, all one can say for sure is that a huge amount of money is involved. Whether OpenAI will come good financially -- or for the human race, generally -- well, that's a totally different matter.
[7]
OpenAI reveals its data center capacity tripled to 1.9GW in 2025 - SiliconANGLE
OpenAI Group PBC on Sunday shared new information about its financial performance and data center construction efforts. Chief Financial Officer Sarah Friar disclosed in a blog post that its annualized recurring revenue topped $20 billion last year. That's up from $6 billion in 2024 and $2 billion the year before. In the same time frame, OpenAI grew its data center footprint from 200 megawatts to about 1.9 gigawatts. That the company's annualized revenue and computing capacity both grew about tenfold from 2023 to 2025 is no coincidence. In today's blog post, Friar disclosed that OpenAI ties data center investments to growth milestones. "Capital is committed in tranches against real demand signals," she explained. "That lets us lean forward when growth is there without locking in more of the future than the market has earned." According to the executive, OpenAI is working to not only expand its data center infrastructure but also make it more cost-efficient. Friar divulged that the company has brought down its inference expenses to under $1 per million tokens. OpenAI achieved that partly by mixing and matching different types of data center hardware. "We train frontier models on premium hardware when capability matters most," Friar wrote. "We serve high-volume workloads on lower-cost infrastructure when efficiency matters more than raw scale." Friar didn't specify what chips power OpenAI's lower-cost infrastructure. It's possible the hardware uses the same pricey, top-of-the-line graphics processing units as the company's most advanced training clusters. The most advanced GPUs are often the most cost-efficient. Nvidia's flagship Rubin graphics card, for example, can run some inference workloads at 1/10th the cost per token of its predecessor. Finding ways to lower hardware expenses is likely to become an even bigger priority for OpenAI going forward.
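Friar's "under $1 per million tokens" figure is a unit cost, so serving expenses scale linearly with token volume. A minimal Python sketch of that arithmetic; the trillion-token workload and the $0.90 rate below are hypothetical numbers chosen for illustration, not OpenAI figures:

```python
def serving_cost_usd(tokens: int, usd_per_million_tokens: float) -> float:
    """Cost of serving `tokens` at a given per-million-token rate."""
    return tokens / 1_000_000 * usd_per_million_tokens

# Hypothetical workload: one trillion tokens served in a day at $0.90/Mtok
daily_cost = serving_cost_usd(1_000_000_000_000, 0.90)
print(f"${daily_cost:,.0f} per day")  # $900,000 per day
```

At this kind of scale, even a few cents shaved off the per-million-token rate moves daily spend by tens of thousands of dollars, which is why hardware mixing matters.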
In September, sources told The Information that the company was on track to end 2025 with an $8 billion loss, $1.5 billion more than originally expected. OpenAI's loss is reportedly set to more than double to $17 billion this year. The sources stated that the company's efforts to develop custom chips and data centers are part of its effort to cut infrastructure costs. Last year, OpenAI inked a $10 billion partnership with Broadcom Inc. to co-design AI accelerators. Separately, it's working with SoftBank Group Corp.'s SB Energy business to build Stargate data centers based on a custom design. Friar's blog post contained hints about OpenAI's long-term revenue growth plans. The executive detailed that she expects new monetization models to emerge in the artificial intelligence market. "Licensing, IP-based agreements and outcome-based pricing will share in the value created," Friar wrote. Ads are another component of OpenAI's growth strategy. On Friday, the AI provider announced plans to display paid promotions below ChatGPT prompt responses. The company will test its advertising system with a limited number of users in the U.S. before rolling it out more broadly. Friar indicated that OpenAI's development roadmap also prioritizes AI agents and other workflow automation tools. According to the executive, the company is focused on helping users automate tasks that span multiple applications. Another priority is equipping models with the ability to use context for extended periods of time.
[8]
OpenAI Hits $20 Bn ARR Mark as Compute Capacity Triples: CFO Sarah Friar | AIM
OpenAI is also shaping its commercial strategy by extending ChatGPT use to advertising and commerce. OpenAI's annualised revenue has surged past $20 billion in 2025, up from $2 billion in 2023, as the company rapidly expands its compute capacity, according to a new statement by Sarah Friar, chief financial officer of OpenAI. In a company blog post, Friar said OpenAI has structured its business model so that revenue growth increases in step with the practical value its AI systems generate, tying financial performance directly to the amount of real-world work carried out using its technology. Compute capacity has grown roughly threefold year over year, reaching about 1.9 gigawatts in 2025, compared with 0.2 GW in 2023, while revenue expanded at a similar pace to exceed $20 billion ARR. "Our ability to serve customers -- as measured by revenue -- directly tracks available compute," Friar wrote, adding that greater access to compute in earlier years would likely have driven even faster adoption and monetisation. OpenAI said both daily and weekly active users are at all-time highs, driven by ChatGPT's transition from a consumer curiosity to what Friar described as "infrastructure that helps people create more, decide faster, and operate at a higher level." Initially launched as a research preview, ChatGPT is now embedded in everyday personal and professional workflows, from education and writing to software development, marketing, and finance. That usage shift shaped OpenAI's commercial strategy, starting with consumer subscriptions, expanding to team and enterprise plans, and adding usage-based pricing for developers through its API platform. "As AI moved into teams and workflows, we created workplace subscriptions and added usage-based pricing so costs scale with real work getting done," Friar said. More recently, OpenAI has extended its model to advertising and commerce, positioning ChatGPT as a decision-making platform where users move from exploration to action. 
Friar stressed that ads and commercial options are only introduced when they are "clearly labelled and genuinely useful," arguing that monetisation must feel native to the product experience. At the core of OpenAI's financial strategy is compute management. Friar called compute "the scarcest resource in AI," noting that OpenAI has moved from reliance on a single provider to a diversified ecosystem of partners. In January, OpenAI signed a $10-billion deal with chipmaker Cerebras Systems, turning its focus to inference infrastructure. Looking ahead to 2026, Friar said OpenAI's financial focus will be on practical adoption, particularly in health, science, and enterprise use cases where improved intelligence can directly translate into measurable outcomes. She also signalled future revenue models beyond subscriptions and APIs, including licensing, IP-based agreements, and outcome-based pricing, as AI expands into areas such as drug discovery, energy systems, and financial modelling.
[9]
OpenAI's Revenue Soars Past $20 Billion After 233% Jump -- But Explosive Growth Comes With Massive Compute Costs And A $17 Billion Burn Rate - SoftBank Group (OTC:SFTBF), SoftBank Group (OTC:SFTBY)
OpenAI's annualized revenue surged past $20 billion in 2025, marking extraordinary growth that underscores booming demand for AI -- and the immense financial strain required to sustain it.

OpenAI Posts Historic Revenue Growth In 2025

On Sunday, OpenAI said that its annualized revenue run rate exceeded $20 billion in 2025, a 233% increase from 2024, accelerating sharply from the prior year's growth when revenue rose from $2 billion in 2023 to $6 billion in 2024. "This is never-before-seen growth at such scale," CFO Sarah Friar said in a blog post. "We firmly believe that more compute in these periods would have led to faster customer adoption and monetization."

Revenue Closely Tracks Explosive Compute Expansion

The company acknowledged that revenue growth has moved almost in lockstep with its expansion in computing power, highlighting the infrastructure-heavy nature of generative AI. OpenAI said it increased compute capacity from 0.2 gigawatts in 2023 to 0.6 gigawatts in 2024, reaching roughly 1.9 gigawatts in 2025 -- nearly a tenfold increase in two years. Revenue followed a similar trajectory, climbing from $2 billion to more than $20 billion over the same period. "Compute grew 3X year over year or 9.5X from 2023 to 2025," the company said, adding, "While revenue followed the same curve growing 3X year over year, or 10X from 2023 to 2025."

Massive Infrastructure Comes With Enormous Costs

The company is reportedly burning more than $17 billion annually, and revenue from subscriptions alone may fall short of supporting its highly compute-intensive AI operations. In December, it was reported that OpenAI was seeking to raise $100 billion at a valuation of $830 billion, largely to fund further compute expansion. Around the same time, SoftBank Group (OTC:SFTBF) (OTC:SFTBY) was reported to have completed a $40 billion investment.
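The growth arithmetic quoted across these reports is easy to sanity-check. A minimal Python sketch using only the figures cited in the blog post (the variable names are mine):

```python
# Figures as reported: compute capacity (GW) and annualized revenue ($B)
compute_gw = {2023: 0.2, 2024: 0.6, 2025: 1.9}
revenue_bn = {2023: 2.0, 2024: 6.0, 2025: 20.0}

# Full-period growth multiples, 2023 -> 2025
compute_multiple = compute_gw[2025] / compute_gw[2023]  # ~9.5x
revenue_multiple = revenue_bn[2025] / revenue_bn[2023]  # 10x

# Year-over-year percent increase in revenue, 2024 -> 2025
pct_increase = (revenue_bn[2025] - revenue_bn[2024]) / revenue_bn[2024] * 100

print(round(compute_multiple, 1), round(revenue_multiple, 1), round(pct_increase))
# 9.5 10.0 233
```

The numbers line up: 0.2 GW to 1.9 GW is a 9.5x expansion, $2B to $20B is 10x, and the 2024-to-2025 jump from $6B to $20B is the 233% increase the article cites.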
Ads Mark A Shift In Monetization Strategy

Facing mounting costs, OpenAI on Friday announced plans to test advertisements in ChatGPT's Free and Go tiers, while keeping paid plans ad-free. The company said ads will be clearly labeled, separate from AI responses and will not use conversation data for targeting. Although CEO Sam Altman has previously described ads as a "last resort," the move reflects growing pressure to monetize a vast base of nonpaying users. As of mid-2025, reportedly only about 35 million users, roughly 5% of weekly active users, subscribed to paid plans.

Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.
[10]
OpenAI's Annual Recurring Revenue Tripled to $20 Billion in 2025 | PYMNTS.com
Over the same period, OpenAI's compute grew from 0.2 gigawatt (GW) in 2023, to 0.6 GW in 2024 and about 1.9 GW in 2025, Friar said in a Sunday (Jan. 18) blog post. "And we firmly believe that more compute in these periods would have led to faster customer adoption and monetization," Friar said in the post. Saying that compute is "the scarcest resource in AI," Friar said that OpenAI has shifted from having one compute provider to "a diversified ecosystem" and that the company manages this portfolio in a way that makes AI viable for users' everyday workflows. "As these systems move from novelty to habit, usage becomes deeper and more persistent," Friar said. "That predictability strengthens the economics of the platform and supports long-term investment." Noting that OpenAI added a free ad-supported tier to its business model that also includes consumer and team subscriptions and usage-based application programming interfaces (APIs) tied to workloads, Friar said this model "closes the loop." "Where this goes next will extend beyond what we already sell," Friar said. "As intelligence moves into scientific research, drug discovery, energy systems and financial modeling, new economic models will emerge. Licensing, IP-based agreements and outcome-based pricing will share in the value created." OpenAI announced Friday (Jan. 16) that it was bringing its $8-a-month ChatGPT Go subscription tier to the United States and everywhere ChatGPT is available after launching the tier in 171 countries since August. The tier joined the existing Plus and Pro plans that cost $20 and $200 per month, respectively. The company also said it plans to begin testing ads in the U.S. for its Free and Go tiers within weeks. It said it will not include ads in the Plus, Pro, Business and Enterprise subscription plans. On Wednesday (Jan. 14), OpenAI said it will integrate 750 megawatts of ultra-low latency compute from chipmaker Cerebras in several stages, beginning this year and continuing through 2028. OpenAI said the compute will accelerate the response time of its AI models.
[11]
OpenAI Reveals Growth in Revenues and Compute Capacity
Given that the company is preparing for an IPO sometime in 2026, this sudden reveal is far from surprising from OpenAI

Barely a week after signing yet another deal with Nvidia for computing power, OpenAI has now shared more details linking its growth with computing capacity. While the latter grew by 9.5 times in the two years up to 2025, the annualised recurring revenues expanded by ten times in the same period to $20 billion last year. More than the numbers, what is significant is that OpenAI is becoming more transparent with their numbers, having confused most of us with their math last year when the world had to rely on some leaked documents to make sense of their finances. The numbers for 2025 have come from the horse's mouth. So no need to speculate. And it is none other than OpenAI CFO Sarah Friar who has been forthcoming in sharing the correlation between revenue growth and compute capacity. The latter grew from 0.2GW in 2023 to about 1.9GW in 2025. Of course, Sam Altman expects this to be around 30GW or more, and this is where his $1 trillion expense comes into play. In a blog post, Friar says the year would be one of "practical adoption" of AI with the priority being on "closing the gap between what AI now makes possible and how people, companies, and countries are using it day to day." She believes that had there been more compute available, the customer adoption and monetisation could've been faster. "Compute is the scarcest resource in AI. Three years ago, we relied on a single compute provider. Today, we are working with providers across a diversified ecosystem. That shift gives us resilience and, critically, compute certainty. We can plan, finance, and deploy capacity with confidence in a market where access to compute defines who can scale," she notes in the post. She notes that, over the past three years, OpenAI's ability to serve customers (revenues) was directly linked to available compute.
"Compute grew 3X year over year or 9.5X from 2023 to 2025: 0.2 GW in 2023, 0.6 GW in 2024, and ~1.9 GW in 2025. While revenue followed the same curve growing 3X year over year, or 10X from 2023 to 2025: $2B ARR in 2023, $6B in 2024, and $20B+ in 2025. This is never-before-seen growth at such scale," she says. And then Friar makes a none-too-surprising inference that "more compute in these periods would have led to faster customer adoption and monetization." She seemed to suggest that the company's data centre investments are tied inextricably to its growth milestones. "Capital is committed in tranches against real demand signals. That lets us lean forward when growth is there without locking in more of the future than the market has earned," she says.

A thumbs-up for OpenAI's data centre investments

Casting an eye on the future, OpenAI's CFO says that the company is working now not just on expanding its data centre infrastructure but also making it cost efficient. In the blog post, Friar says the company reduced its inference expenses to under $1 per million tokens, partly by mixing and matching data centre hardware. "We train frontier models on premium hardware when capability matters most. We serve high-volume workloads on lower-cost infrastructure when efficiency matters more than raw scale," she said without divulging details of the microchips powering the latter. For now, we can only speculate that Nvidia's Rubin graphics cards could be the preferred option. The official also provides indications of long-term revenue growth plans of OpenAI, which isn't as surprising as it looks, given that the company would have to make its financials public if the planned IPO is to hit home some time in 2026. How does OpenAI plan to monetise its services? Friar believes that new monetisation plans are emerging in the AI space that include "licensing, IP-based agreements and outcome-based pricing" with advertisements being another component in the mix.
Last week, OpenAI shared plans to display paid promotions below the ChatGPT prompt responses, starting with the United States. She continued that the next phase would be agents and workflow automation "that run continuously, carry context over time, and take action across tools. For individuals, that means AI that manages projects, coordinates plans, and executes tasks. For organizations, it becomes an operating layer for knowledge work."

With the IPO scheduled, this reveal isn't surprising

Friar echoes her boss Sam Altman's sentiments that global compute requires financial commitments made years in advance and that growth cannot ever move in a smooth line. "At times, capacity leads usage. At other times, usage leads capacity. We manage that by keeping the balance sheet light, partnering rather than owning, and structuring contracts with flexibility across providers and hardware types," she adds, in what is an obvious reference to concerns about AI investments being circular. Though information is still sketchy from the CFO, it is definitely better than what it was in the past. Looks like OpenAI has taken the first step towards transparency when it comes to its numbers and we must welcome the move wholeheartedly. What remains now is for the company to follow through with such regular updates. Sarah Friar concludes by stating that "Infrastructure expands what we can deliver. Innovation expands what intelligence can do. Adoption expands who can use it. Revenue funds the next leap. This is how intelligence scales and becomes a foundation for the global economy." As customers and prospective investors, we only wish that once in a while, they tell us what they are up to. The more "Open" you are, the less we need to "Speculate".
[12]
OpenAI Projects $20B Estimated Revenue As AI Adoption Gains
The annualised revenue of OpenAI climbed past $20 billion in 2025, according to a blog by Chief Financial Officer (CFO) Sarah Friar about how the company's business model scales with intelligence. For context, annualised revenue is an estimate of a company's earnings based on revenue from a shorter period. In the post, she framed the milestone as part of a broader transition from early experimentation to deeply embedded adoption of ChatGPT across workflows. Friar reported the year-by-year progression from $2 billion in Annual Recurring Revenue (ARR) in 2023, $6 billion in 2024, and over $20 billion in 2025, with available compute growing in proportion, roughly tripling each year from 0.2 Gigawatt (GW) in 2023 to 1.9 GW in 2025. These numbers reflect the electrical power consumption required to run the compute infrastructure behind OpenAI. This parallel with compute, she said, reflects how investment in computational capacity enables frontier research, stronger models, broader adoption, and ultimately higher revenue. However, as of November 2025, it is estimated that OpenAI will pay $1.4 trillion across various contracts with Amazon, Microsoft, Oracle, and others for compute infrastructure. Furthermore, she writes that OpenAI applied a principle where "our business model should scale with the value intelligence delivers", building monetisation paths from subscriptions and usage-based APIs to a commerce-supported free tier and advertising as adoption deepened. Friar said this approach reflects how intelligence is moving from a specialised tool to a mass-market utility used across everyday digital services. Looking ahead to 2026, Friar said the company's priority is "practical adoption", with a focus on "closing the gap" between what AI now enables and how individuals, companies, and countries actually use it day to day. She pointed to health, science, and enterprise as key areas where intelligence can translate into measurable productivity and economic outcomes.
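Since the piece pauses to define annualised revenue, the arithmetic is worth making concrete: take revenue from a shorter period and scale it to twelve months. A sketch in Python with made-up figures (not OpenAI's actuals):

```python
def annualized_run_rate(period_revenue: float, period_months: int) -> float:
    """Extrapolate revenue from `period_months` months to a 12-month run rate."""
    return period_revenue * 12 / period_months

# Made-up example: $5.1B booked over a quarter implies a ~$20.4B annual run rate
print(annualized_run_rate(5.1, 3))
```

The caveat, which is why outlets report it as an "estimate," is that the extrapolation assumes the short-period pace holds for a full year; for a business growing 3x annually, the run rate and the eventual full-year figure can differ substantially.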
OpenAI is preparing for a potential initial public offering (IPO) that could value the company at up to $1 trillion, setting the stage for what would be one of the largest listings in corporate history according to a Reuters report. The company is considering filing with securities regulators in the second half of 2026, with an eventual public debut targeted in 2027. Preliminary discussions have explored raising around $60 billion or more in the offering. OpenAI's private valuation sits near $500 billion following recent secondary share sales, and the proposed $1 trillion IPO valuation would more than double that figure. CFO Sarah Friar has indicated the company is aiming for a 2027 listing, although internal planning could see a late 2026 debut if market conditions allow. However, an OpenAI spokesperson stressed that an IPO is not the company's current focus, emphasising that the business is building a durable enterprise while advancing its mission to ensure broad benefits from artificial general intelligence (AGI). OpenAI completed a major recapitalisation in October 2025, restructuring its corporate setup and strengthening its partnership with Microsoft ahead of a potential future public listing. The move positions the OpenAI Foundation, the organisation's nonprofit arm, to hold roughly $130 billion in equity within the company's for-profit business, known as OpenAI Group PBC, while retaining overall control. According to the recapitalisation announcement, the change simplified OpenAI's structure, giving the nonprofit direct access to substantial resources as the company prepares for the eventual arrival of AGI. The update followed nearly a year of engagement with regulators in California and Delaware on compliance and governance. Alongside this, OpenAI signed a separate definitive agreement with Microsoft, which now holds about 27% ownership in the for-profit entity valued at roughly $135 billion on an as-converted diluted basis. 
Under the new terms, Microsoft retains exclusive intellectual property rights to OpenAI's models until AGI is declared, extended through 2032, and the companies can co-develop products and provide wider API access. In January 2026, OpenAI also announced it would begin testing advertisements in ChatGPT's free and lower-tier plans as part of broader revenue diversification efforts, with ads clearly labelled and separated from the chatbot's responses.
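The stake figures in the passage above can be cross-checked with one line of arithmetic: a 27% holding valued at roughly $135 billion implies a total equity value of about $500 billion, consistent with the private valuation cited earlier. In Python:

```python
stake_fraction = 0.27   # Microsoft's reported ownership share
stake_value_bn = 135.0  # reported value of that stake, in $B

# Implied total equity value = stake value / ownership fraction
implied_total_bn = stake_value_bn / stake_fraction
print(round(implied_total_bn))  # 500
```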
[13]
OpenAI CFO says annualized revenue crosses $20 billion in 2025
Jan 19 (Reuters) - OpenAI Chief Financial Officer Sarah Friar said in a blog post on Sunday the company's annualized revenue has surpassed $20 billion in 2025, up from $6 billion in 2024, with growth closely tracking an expansion in computing capacity. OpenAI's computing capacity rose to 1.9 gigawatts (GW) in 2025 from 0.6 GW in 2024, Friar said in the blog, adding that Microsoft-backed OpenAI's weekly and daily active user figures continue to hit all-time highs. OpenAI last week said it would start showing ads in ChatGPT to some U.S. users, ramping up efforts to generate revenue from the AI chatbot to fund the high costs of developing the technology. Separately, Axios reported on Monday that OpenAI's policy chief Chris Lehane said that the company is "on track" to unveil its first device in the second half of 2026. Friar said OpenAI's platform spans text, images, voice, code and APIs, and the next phase will focus on agents and workflow automation that run continuously, carry context over time, and take action across tools. For 2026, the company will prioritize "practical adoption," particularly in health, science and enterprise, she said. Friar said the company is keeping a "light" balance sheet by partnering rather than owning and structuring contracts with flexibility across providers and hardware types. (Reporting by Katha Kalia in Bengaluru; Editing by Andrea Ricci)
OpenAI CFO Sarah Friar claims compute power directly drives revenue, but the company burned $9 billion in 2025 while generating $20 billion in sales. With profitability not expected until 2030 and market share dropping from 90% to 60-70%, analysts warn this could be a make-or-break year for the AI leader as it prepares for a potential IPO.
OpenAI CFO Sarah Friar published a blog post this week attempting to reassure investors about the company's financial trajectory, claiming a direct correlation between compute power and revenue generation [1]. According to Friar, OpenAI's computing capacity grew threefold annually between 2023 and 2025, expanding from 0.2 GW to 0.6 GW and finally 1.9 GW [1]. Revenue growth followed a similar pattern, jumping from $2 billion to $6 billion and exceeding $20 billion by the end of 2025 [3]. Friar argued that additional computing power would have accelerated customer adoption and monetization, transforming compute from a fixed constraint into a form of currency [1].
Source: MediaNama
Despite the revenue narrative, OpenAI's financial fundamentals reveal a troubling picture. According to investor data shared with the Financial Times, the company's estimated 2025 expenditure reached $22 billion, averaging $1.83 billion monthly [1]. With sales of $9 billion during that period, OpenAI lost $0.69 on every incoming dollar [1]. Deutsche Bank analysts warned that OpenAI "may be most at risk as it seems not yet to have found a workable business model to cover its reported cash burn of $9bn last year and likely $17bn this year" [3]. Former Fidelity manager George Noble characterized the company as a "cash incinerator," noting it's "burning $15 million per day on Sora alone" [5].
Source: Tom's Hardware
The sustainability of OpenAI's approach faces mounting skepticism. Of the estimated 800 million weekly users, only a fraction are paying customers [3]. Separate analysis indicates that 95 percent of ChatGPT's 800 million users aren't paying, while the platform generates roughly 70 percent of the company's recurring revenue [2]. HSBC predicted that even with 3 billion regular users by 2030, increased subscription rates, and greater corporate API demand, "OpenAI would need $207 billion of new financing by 2030" [2]. The recent announcement that OpenAI will test advertising in ChatGPT, a move Sam Altman previously called "a last resort," signals desperation in monetizing AI services [3].
Source: The Register
The AI infrastructure buildout presents exponential cost challenges. OpenAI has committed to datacenter projects worth $1.4 trillion [3][4]. Noble warned that "it's going to cost 5x the energy and money to make these models 2x better," arguing that "every incremental improvement now requires exponentially more compute, more datacenters, more power" [5]. PitchBook analyst Dimitri Zabelin identified the critical question as "whether enterprise monetization, pricing power, and inference cost declines can outpace rising compute intensity" [3]. The company's market share has also declined from approximately 90% in 2024 to 60-70% in 2025, with Google Gemini and Perplexity capturing significant ground [1].
OpenAI profitability remains distant, with expectations pushed to 2030 [1]. The company is widely expected to pursue an IPO late this year or early 2027, potentially targeting a $1 trillion valuation [3][4]. However, Deutsche Bank analysts noted that OpenAI's "moat is relatively shallow" compared with larger competitors like Microsoft, whose AI initiatives are subsidized by established business fundamentals [3]. Sebastian Mallaby of the Council on Foreign Relations predicted OpenAI could run out of money within the "next 18 months" [5]. The company secured $22.5 billion from SoftBank at the end of last year, adding to $40 billion already committed, pushing its possible valuation to $500 billion [3].

The broader implications extend beyond OpenAI. Financial Times contributing editor Ruchir Sharma noted that AI accounted for 40 percent of US GDP growth and 80 percent of stock gains in 2025 [2]. IMF chief economist Pierre-Olivier Gourinchas expressed concern about "the risk of a market correction, if expectations about AI gains in productivity and profitability are not realized" [2]. Noble argued that the AI hype cycle is peaking with "diminishing returns becoming impossible to hide" [5]. Competitor Anthropic appears to take a more conservative approach, expecting profitability in 2028, while xAI burns close to $1 billion monthly with only $500 million in 2025 income [1]. Analysts suggest 2026 will be "make or break" for AI model developers as investor confidence shifts from scale to sustainable financial models and actual returns [3].