14 Sources
[1]
AMD CEO Lisa Su Says Concerns About an AI Bubble Are Overblown
Lisa Su leads Nvidia's biggest rival in the AI chip market. When asked at WIRED's Big Interview event if AI is a bubble, she said, "Emphatically, from my perspective, no."

Earlier this year, WIRED said that AMD CEO Lisa Su was "out for Nvidia's blood." The American chipmaker is still small compared to the juggernaut that is Nvidia -- their market caps are $353 billion and $4.4 trillion, respectively -- but Su's company is gaining steam. Today, when Su took the stage at WIRED's Big Interview conference in San Francisco, she had something else in her sights: the AI bubble. When asked by WIRED senior writer Lauren Goode if the tech industry is in an AI bubble, her response was "emphatically, from my perspective, no." The AI industry is going to need scores of chips from companies like AMD, and fears of such a bubble, Su said, are "somewhat overstated."

That might sound bold, but boldness is Su's whole deal. Since she became AMD's CEO in 2014, she has increased the company's market cap from $2 billion to $300 billion. Now, Su is betting big on the need for much more computing power for AI, and on the data centers needed to provide it. Still, there are plenty of hurdles ahead for AMD. One is all of those data centers getting built; another is getting its chips into the hands of as many customers as possible.

During the discussion, Goode asked the AMD CEO about selling chips to China. She confirmed that AMD will pay a 15 percent tax instituted by the Trump administration on MI308 chips it plans to resume shipping to China. The US government previously halted sales of the chips to China, but then began reviewing applications again over the summer. AMD said earlier this year that US export restrictions on the MI308 chips would cost the company roughly $800 million. Earlier this year, AMD made a huge deal with OpenAI, under which the AI company will deploy 6 gigawatts of AMD's Instinct GPUs over the course of several years.
As part of the deal, AMD agreed to allow OpenAI to buy 160 million shares of the company's stock for a penny per share, effectively giving it a 10 percent stake in the company. The first gigawatt deployment is set to roll out in the second half of next year. It's one of several big bets AMD is making on AI data centers to power artificial intelligence. What Su said she's not worried about is competition from Nvidia, or even Google or Amazon, both of which have their own chip-making plans. "When I look at the landscape, what keeps me up at night is 'How do we move faster when it comes to innovation?'" Su said. Su believes that AI is still in its infancy and her company needs to be ready to provide chips for the future. "As good as the models are today," she said, "the next one will be better." There's huge potential in AI, and "there's not a reason not to keep pushing that technology" into the future.
[2]
AMD CEO Lisa Su 'emphatically' rejects talk of an AI bubble -- says claims are 'somewhat overstated'
AMD CEO says long-term demand for compute will justify today's rapid data-center buildout. AMD CEO Lisa Su used her appearance at WIRED's Big Interview conference in San Francisco to push back against growing speculation that the AI sector is overheating. Asked whether the industry is in a bubble, Su replied "emphatically" no, arguing that concerns are "somewhat overstated" and that AI is still in its infancy. According to Su, AMD needs to be ready to provide chips for the future -- "there's not a reason not to keep pushing that technology." Her remarks come as AMD prepares for several of its largest data-center commitments to date, including a multi-gigawatt accelerator deployment with OpenAI and the resumption of MI308 shipments to China under a new export-control framework. OpenAI plans to deploy six gigawatts of Instinct GPUs over the next several years under a joint announcement the companies made earlier this year. The first one-gigawatt block is scheduled for the second half of next year. As part of that arrangement, OpenAI secured the option to buy up to 160 million AMD shares at a penny each once deployment milestones are met. AMD presented the structure as a way to align long-term incentives around infrastructure delivery rather than a short window of product availability. Meanwhile, the company's operations in China have been shaped by a different kind of uncertainty. AMD has confirmed that it will pay a 15% export tax on MI308 shipments under revised export rules, and that it is ready to do so. Washington halted sales of the part in April before reopening a licensing process that allowed vendors to apply for restricted shipments. AMD has told investors that the original controls would create up to $800 million in inventory and purchase-commitment charges, which makes re-entering the market on known terms a positive step, even with the additional fee. 
China will not be the main driver of AMD's data-center revenue in the near term, but it remains one of the few regions with customers capable of absorbing large accelerator batches at short notice. Su's comments also addressed pressure from hyperscalers that are expanding their in-house silicon portfolios. She argued that AMD's challenge is not matching any single rival but advancing its own roadmap quickly enough to capture the next wave of deployments. In her view, each generation of AI models raises performance expectations, and the industry's underlying trajectory supports sustained investment in training and inference clusters. For a company that has spent much of the past decade rebuilding its position in high-performance computing, the coming cycle will test how well that confidence translates into delivered hardware and long-term customer commitments.
[3]
AI datacenter boom could end badly, Goldman Sachs warns
Bank sketches four scenarios in which monetization falters or demand swamps supply by 2030

Goldman Sachs warns that datacenter investments may fail to pay off if the industry is unable to monetize AI models, but hedges its bets by saying that demand could also overwhelm available capacity by 2030. Investors have been pouring cash into datacenters, buoyed by the AI-driven demand for compute resources that has led to a boom in construction of new facilities in the US and elsewhere. Analyst firm Omdia said this week that capital expenditure on bit barns is forecast to reach $1.6 trillion by 2030 thanks to AI, growing 17 percent annually between now and then. But doubts have been raised about just how much return on investment (ROI) will come from the current AI frenzy, with IT firm Lenovo finding earlier this year that many business leaders were unconvinced that AI is worth the expense, while another report claimed that forecasts of future demand are based on little more than guesswork. "A lot of investors have struggled with the hype and quantifying what this all means," Goldman Sachs senior equity analyst Jim Schneider says in the report, which provides four possible scenarios for how the datacenter game may play out between now and 2030, leaving investors to take their pick. Half of these scenarios forecast that demand for new bit barn capacity will drop off between now and the end of the decade, while one predicts it will come close to using up the entire available capacity, and the remaining scenario sees demand overwhelm the supply. The base case, which we assume means Goldman Sachs Research thinks it is the most likely outcome, is that the balance between supply and demand will narrow significantly over the next 18 months, with datacenter occupancy peaking at around 93 percent sometime next year. After 2027, supply constraints are expected to ease up.
Its second scenario notes that many AI applications are currently free to use, and there is no guarantee that users will be willing to switch to a paid model. Microsoft has reportedly been finding it difficult to convince customers of the benefits of paying $30 per seat for Copilot, for example. If users balk at paying for AI tools, plans for monetization of these offerings will fall flat and this will feed through into slower demand for datacenters. This would result in excess supply, Goldman Sachs says, potentially forcing operators to lower their lease rates. The third scenario is based on the observation that much of the capacity in datacenters is still used for cloud services and traditional workloads. But companies have been seeking to rationalize or even reduce their use of cloud resources because of growing costs, which look set to increase even further in the near future. In this scenario, corporate spending on cloud services declines, leading to occupancy in datacenters falling even though AI demand remains steady. The remaining scenario sees a surge in AI use overwhelm the available supply of datacenter capacity. Applications such as AI-generated videos could see a jump in the compute infrastructure required, the report says, while bit barns typically take several years to construct and developers may be unable to keep up. Goldman Sachs, which reported record revenue for Q3 2025 of $15.2 billion, up 20 percent year-on-year, cautions that its report is for educational purposes only, and does not constitute an investment recommendation. In other words, make up your own mind what you think will happen. On the one hand, it says, if occupancy rates fall due to an inability to monetize AI models, it will get harder for datacenter operators to generate the expected returns on their capital investments, while on the other hand, high demand could see a shortage where owners will be leasing or utilizing capacity as fast as they can build it. 
The latter would be reflected in their ROI ratios. Some investors have already started to express concerns, with French multinational Axa telling Bloomberg this week that it is "exercising greater caution on the artificial intelligence build-out" when backing financing for the sector, amid growing awareness of the emerging risks. The managers of Norway's $2 trillion sovereign wealth fund also said recently that they are cautious about investing directly in datacenters due to the sector's high volatility. However, Macquarie Asset Management told a different tale earlier this year, when its managing director explained to the audience at the Datacloud Global Congress in Cannes, France, that it liked the datacenter market because it offered a captive customer base who find it difficult to move elsewhere.
[4]
AI Business Deals Are Getting a Bit 2008-ish
The complex financial ties binding the sector together could become everyone's collective downfall. A company that most people have never heard of is among the year's best-performing technology firms -- and a symbol of the complex, interconnected, and potentially catastrophic ways in which AI companies do business these days. CoreWeave's IPO in March was the largest of any tech start-up since 2021, and the company's share price has subsequently more than doubled, outperforming even the "Magnificent Seven" tech stocks. On Wall Street, CoreWeave is regularly referred to as one of the most important companies powering the AI revolution. In the past few months, it has announced a $22 billion partnership with OpenAI, a $14 billion deal with Meta, and a $6 billion arrangement with Nvidia. Not bad for a former crypto-mining firm turned data-center operator with zero profits and billions of dollars in debt on its books. CoreWeave's business model consists of buying up lots of high-end computer chips, and building or leasing data centers to house those chips. It then rents out those assets to AI companies that need computing power but prefer not to take on the huge up-front costs themselves. If this is straightforward enough, CoreWeave's financial situation is anything but. The company expects to bring in $5 billion in revenue this year while spending roughly $20 billion. To cover that gap, the company has taken on $14 billion in debt, more than half of which comes due in the next year. Many of these loans were issued by private-equity firms at high interest rates, and several use complex forms of financial engineering, such as giving the money to newly formed legal entities created for the explicit purpose of borrowing on CoreWeave's behalf (more on that later). CoreWeave also faces $34 billion in scheduled lease payments that will start kicking in between now and 2028. 
The money that CoreWeave is making, meanwhile, comes from just a few intimately connected sources. A single customer, Microsoft, is responsible for as much as 70 percent of its revenue; its next biggest customers, Nvidia and OpenAI, might make up another 20 percent, though exact numbers are hard to find. Nvidia is also CoreWeave's exclusive supplier of chips and one of its major investors, meaning CoreWeave is using Nvidia's money to buy Nvidia's chips and then renting them right back to Nvidia. OpenAI is also a major CoreWeave investor and has close financial partnerships with both Nvidia and Microsoft. All of this might make CoreWeave the purest distillation of a trend sweeping through the AI sector. In recent months, tech giants including Amazon, Google, Meta, Microsoft, and Oracle have been making gargantuan investments in new data centers, tying together their fortunes through circular financing deals, and borrowing huge piles of debt from lightly regulated lenders. The companies and their most ardent backers argue that these deals will set them up to capture the limitless profits of the coming AI revolution. But the last time the economy saw so much wealth tied up in such obscure overlapping arrangements was just before the 2008 financial crisis. If the AI revolution fails to materialize on the scale or the timeline that the industry expects, the economic consequences could be very ugly indeed. The extreme financialization of the AI sector reflects a simple reality: The infrastructure required to train and run AI systems is so expensive that not even the largest companies have enough cash to pay for it all. Spending on data centers is conservatively projected to exceed $400 billion this year, roughly the size of the economy of Denmark; McKinsey estimates that it will reach nearly $7 trillion by 2030. Creative measures are necessary to pay for all of this investment.
At the center of the action is Nvidia, the world's most valuable company. Companies that train and run AI systems, such as Anthropic and OpenAI, need Nvidia's chips but don't have the cash on hand to pay for them. Nvidia, meanwhile, has plenty of cash but needs customers to keep buying its chips. So the parties have made a series of deals in which the AI companies are effectively paying Nvidia by handing over a share of their future profits in the form of equity. The chipmaker has struck more than 50 deals this year, including a $100 billion investment in OpenAI and (with Microsoft) a $15 billion investment in Anthropic. Formally, these transactions don't obligate the AI companies to spend money on Nvidia's chips -- an Nvidia spokesperson told Bloomberg that the company "does not require any of the companies we invest in to use Nvidia technology" -- but in practice, that's where the money goes. OpenAI has made its own series of deals, including agreements to purchase $300 billion of computing power from Oracle, $38 billion from Amazon, and $22 billion from CoreWeave. Those cloud providers, in turn, are an important market for Nvidia's chips. OpenAI has also invested in several smaller AI start-ups, which in exchange have agreed to pay for ChatGPT enterprise accounts. Even when represented visually, the resulting web of interlocking relationships is almost impossible to track. Together, these arrangements amount to an entire industry making a double-or-nothing bet on a product that is nowhere near profitable. A single company, OpenAI, is simultaneously a major source of revenue and investment for several cloud companies and chipmakers; a close financial partner to Microsoft, Oracle, and Amazon; a significant customer for Nvidia; and a leading investor in AI start-ups. And yet the company is projected to generate only $10 billion this year in revenue -- less than a fifth of what it needs annually just to fund its deal with Oracle. 
It is on track to lose at least $15 billion this year, and doesn't expect to be profitable until at least 2029. By one estimate, AI companies collectively will generate $60 billion in revenue against $400 billion in spending this year. The one company that is making a lot of money from the AI boom, Nvidia, is doing so only because everyone else is buying its chips in the hopes of obtaining future profits. The AI companies and their boosters see this as a gamble worth taking. Demand for AI services, they point out, is growing at an exponential rate. According to calculations by Azeem Azhar, a widely cited AI-industry analyst, the direct revenues from AI services have increased nearly ninefold over the past two years. If that pace continues, then it's only a matter of time before AI companies will begin making record-shattering profits. "I think people who fixate on exactly how these investments are being financed are stuck in an outdated way of thinking," Azhar told me. "Everyone is assuming that this technology will improve at a linear pace. But AI is an exponential technology. It's a whole different paradigm." If, however, AI does not produce the short-term profits its proponents envision -- if its technical advances slow down and its productivity-enhancing effects underwhelm, as a mounting body of evidence suggests may be the case -- then the financial ties that bind the sector together could become everyone's collective downfall. The extreme concentration of stock-market wealth in a handful of tech companies with deep financial links to one another could make an AI crash even more severe than the dot-com crash of the 2000s. And a stock-market correction might be the least of America's worries. When equity investments go bad, investors might lose their shirts, but the damage to the real economy is typically contained. (The dot-com crash, for example, didn't cause mass unemployment.) 
But the AI build-out is so expensive that it can't be funded by equity investments alone. To finance their investments, AI companies have taken on hundreds of billions of dollars in debt, a number that Morgan Stanley expects to rise to $1.5 trillion by 2028. When a bunch of highly leveraged loans go bad at the same time, the fallout can spread throughout the financial system and trigger a major recession. The AI sector's debt is, of course, not guaranteed to go bad. But the complex way in which it is arranged and packaged isn't reassuring. For instance, earlier this year, Meta decided to build a new data center in Louisiana that will cost $27 billion. Instead of applying for a loan from a traditional lender, the company partnered with Blue Owl Capital, a private-equity firm, to set up a separate legal entity, known as a special-purpose vehicle, or SPV, that will borrow the money on Meta's behalf, build the data center according to Meta's instructions, and then lease it back to Meta. Because Blue Owl is technically the majority owner of the project, this setup keeps the debt off of Meta's balance sheet, enabling the company to keep borrowing at low interest rates without worrying about a hit to its credit rating. Other companies, including xAI, CoreWeave, and Google, have borrowed or plan to borrow huge sums through similar kinds of arrangements. Meta has described its arrangement with Blue Owl as an "innovative partnership" that is "designed to support the speed and flexibility required for Meta's data center projects." But the reason the credit-rating system exists is to give lenders and investors a clear sense of the risk they are taking on when they issue a loan. A long history exists of companies trying to circumvent that system. In the run-up to the 2008 financial crisis, several major financial institutions used SPVs to keep billions of dollars in household debt off of their balance sheets. 
Enron, the energy corporation that famously collapsed in 2001 after a massive accounting scandal, used SPVs to mask its shady accounting practices. "When I see arrangements like this, it's a huge red flag," Paul Kedrosky, a managing partner at SK Ventures and research fellow at MIT who has written extensively about financial-engineering techniques, told me. "It sends the signal that these companies really don't want the credit-rating agencies to look too closely at their spending." SPVs aren't the only 2008-era financing tool making a comeback. Data-center debt totaling billions of dollars is being sliced up into "asset-backed securities," which are then bundled and sold to investors. This is not an inherently problematic way for companies to fund their borrowing. But Kedrosky argues that during periods of heightened speculation, these vehicles turn debt into a financial product whose worth is disconnected from the value of the underlying asset it represents -- which can encourage reckless behavior. "Investors see these complex financial products and they say, I don't care what's happening inside -- I just care that it's highly rated and promises a big return," Kedrosky said. "That's what happened in '08. And once that kind of thinking takes off, it becomes really dangerous." Then there are the so-called GPU-backed loans. Several data-center builders and cloud providers, including CoreWeave, have obtained multibillion-dollar loans to purchase chips by posting their existing chips as collateral, just as many homeowners used their homes as collateral to take out loans for second and third homes in the 2000s. But, as Advait Arun, an analyst at the Center for Public Enterprise, notes in a recent report on the AI sector's finances, whether that collateral will hold its value is far from clear. When new chip models are released, the value of older models tends to fall.
According to Arun, if the collapse in chip prices were steep enough, a vicious cycle could ensue. As older chips fall in value, any loan using those chips as collateral suddenly becomes at risk of default. Lenders might respond by calling in their loans early, before companies have the revenue to pay them back. At that point, the lender might try to sell the chips to recoup their investment, but that will only flood the market with even more chips, driving down the values of existing chips even further, causing other lenders to call in their loans and so on. "A few months ago I would have told you that this was building toward a repeat of the dot-com crash," Mark Zandi, the chief economist at Moody's Analytics, told me. "But all of this debt and financial engineering is making me increasingly worried about a 2008-like scenario." The federal government responded to the 2008 crisis by limiting the ability of traditional banks to take on big, risky loans. Since then, however, private-equity firms, which aren't subject to the same regulatory scrutiny as banks, have gotten more heavily into the lending business. As of early this year, these firms had lent about $450 billion in so-called private credit to the tech sector, including financing several of the deals discussed above. And, according to one estimate, they will lend it another $800 billion over the next two years. "If the AI bubble goes bust, they are the ones that will be left holding the bag," Arun told me. A private-credit bust is almost certainly preferable to a banking bust. Unlike banks, private-equity firms don't have ordinary depositors. In theory, if their loans fail, the groups that will be hurt the most are institutional investors, such as pension funds, university endowments, and hedge funds, limiting the damage to the broader economy. The problem is that nobody knows for certain that this is the case. Private credit is functionally a black box. 
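The feedback loop Arun describes can be made concrete with a toy simulation. This is purely illustrative: the loan-to-value limit, fire-sale discount, and loan figures below are invented for the sketch, not drawn from his report.

```python
# Toy model of a GPU-collateral spiral (illustrative only; all numbers
# are invented assumptions, not figures from Arun's report).

def collateral_spiral(price, loans, ltv_limit=0.8, fire_sale_hit=0.15, rounds=10):
    """Each round, any loan whose pledged chips are now worth less than
    balance / ltv_limit is called; the forced sale of those chips pushes
    the market price down further, which can trip more loans.

    `loans` is a list of (balance, chips_pledged) tuples.
    Returns (final_price, number_of_loans_called)."""
    called = 0
    active = list(loans)
    for _ in range(rounds):
        tripped = [loan for loan in active
                   if loan[1] * price < loan[0] / ltv_limit]
        if not tripped:
            break  # no loan is underwater; the spiral stops
        called += len(tripped)
        active = [loan for loan in active if loan not in tripped]
        # Each forced sale floods the market and depresses the price
        # the remaining lenders mark their collateral against.
        price *= (1 - fire_sale_hit) ** len(tripped)
    return price, called
```

Starting from a modest 15 percent drop in chip prices, the model shows how one margin call can trigger the next: a loan that was safe at the initial price becomes underwater only because an earlier lender's fire sale moved the market.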
Unlike banks, these entities don't have to disclose who they are getting their money from, how much they're lending, how much capital they're holding, and how their loans are performing. This makes it impossible for regulators to know what risks exist in the system or how tied they are to the real economy. Evidence is growing that the links between private credit and the rest of the financial system are stronger than once believed. Careful studies from the Federal Reserve estimate that up to a quarter of bank loans to nonbank financial institutions are now made to private-credit firms (up from just 1 percent in 2013) and that major life-insurance companies have nearly $1 trillion tied up in private credit. These connections raise the prospect that a big AI crash could lead to a wave of private-credit failures, which could in turn bring down major banks and insurers, Natasha Sarin, a Yale Law School professor who specializes in financial regulation, told me. "Unfortunately, it usually isn't until after a crisis that we realize just how interconnected the different parts of the financial system were all along," she said. An AI-induced financial disaster is far from inevitable. Still, given the warning signs, one would hope for the federal government to be doing what it can to reduce the risk of a crisis. Instead, the Trump administration is doing the opposite. In August, the president signed an executive order that instructs federal agencies to loosen regulations so that ordinary 401(k) holders can invest directly in "alternative assets" such as, yes, private credit, a change that could expose a far broader swath of the public to the fallout if AI loans go bad. Perhaps that is the key difference between 2008 and 2025. Back then, the federal government was caught off guard by the crash; this time, it appears to be courting one.
[5]
AMD's and IBM's CEOs don't see an AI bubble, just $8 trillion in data centers
Editor's take: Normal people are now shelling out hundreds of dollars for modest RAM upgrades, while the companies powering the AI boom are looking forward to even more market hysteria. And despite record spending, spiraling hardware costs, and mounting skepticism from analysts, the CEOs of IBM and AMD insist there is no AI bubble - or at least not one that threatens their business. IBM CEO Arvind Krishna argues that AI is not going through a financial bubble, despite many analysts, banks, and investors claiming otherwise. Krishna, who has led the former PC giant since 2020, is more concerned with the unsustainable infrastructure investments planned by OpenAI and other AI ventures seeking AGI paradise. Krishna recently appeared on a Verge podcast, where he discussed quantum computing, the Jeopardy-winning supercomputer Watson, AI, and more. He doesn't believe there is an AI bubble, but warns that many AI-focused companies will never see a return on investment if they keep spending as recklessly as they are now. The Indian American executive estimates that a single one-gigawatt AI data center requires around $80 billion. If a company wants to commit massive amounts of GPUs, RAM, and power to build 20 to 30 gigawatts of capacity, that cost rises to $1.5 trillion. The entire AI industry has already talked up about 100 gigawatts of capacity in its grand, earth-shaking announcements, which would require $8 trillion to actually build the data-crunching facilities behind them. No one is going to recoup that kind of money anytime soon. Krishna said AI ventures would need to generate $800 billion in profit just to pay interest on an $8 trillion infrastructure buildout - and no company in the industry is anywhere near those numbers. HSBC Holdings recently estimated that OpenAI will likely burn hundreds of billions of dollars for years.
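Krishna's back-of-the-envelope numbers can be sanity-checked in a few lines. Note that the 10 percent interest rate below is back-solved from his $800 billion-on-$8 trillion figure; it is an inference for the sketch, not a rate he stated.

```python
# Sanity check of the buildout arithmetic reported above.
# Assumption: the ~10% rate is inferred from $800B interest on $8T.

COST_PER_GW = 80e9  # Krishna's estimate: ~$80 billion per gigawatt

def buildout_cost(gigawatts):
    """Total capital cost of an AI data-center buildout of the given size."""
    return gigawatts * COST_PER_GW

def annual_interest(principal, rate=0.10):
    """Yearly interest bill at the (inferred) rate."""
    return principal * rate

total = buildout_cost(100)        # the ~100 GW the industry has announced
interest = annual_interest(total) # the profit needed just to service it
```

Running this gives $8 trillion for the 100-gigawatt buildout and an $800 billion annual interest bill, matching the figures attributed to Krishna.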
The pursuit of artificial general intelligence (AGI) is the driving bet behind this sort of unrealistic financial commitment, Krishna said. But AGI will require much more than a few hallucinating large language models, with the IBM CEO estimating a zero to one percent chance of successfully building the first real AGI in history. IBM is mostly focused on the infrastructure and cloud side of the IT business these days, so the company has little interest in pushing GPUs or AI accelerators as the next big thing. AMD, however, is trying to do exactly that. CEO Lisa Su says the industry's growing concerns about an AI bubble are overblown. In a recent interview with Wired, the head of the x86 chipmaker said she doesn't see any bubble from her perspective. There's an industry that needs massive amounts of chips to build new computing capacity, and AMD is more than ready to supply at least part of that demand. The Santa Clara-based company recently struck a major deal with OpenAI, committing to deliver 6 gigawatts of Instinct GPUs over several years. Su isn't worried about an AI bubble so much as she is about AMD's ability to keep up with chip demand from AI ventures. She said AI has massive potential, and sees no reason not to continue pushing this planet-straining, job-reshaping technology for the foreseeable future. As Rockstar's co-founder recently suggested, the executives promoting AI everywhere aren't necessarily the most human or creative people around.
[6]
Tech leaders fill $1T AI bubble, insist it doesn't exist
Even as enterprises defer spending and analysts spot dotcom-era warning signs

Tech execs are adamant the AI craze is not a bubble, despite the vast sums of money being invested, overinflated valuations given to AI startups, and reports that many projects fail to make it past the pilot stage. HPE is one of the giants tapping into booming demand for high-performance hardware to drive AI development, and the president and general manager of its Networking division, Rami Rahim, told us he doesn't see that ending anytime soon. "It wouldn't be the first time in the history of this industry that there would be a correction, and that's fine, you know. So we would adjust, and we would be just fine in the end. But I don't see it at this point in time. Right now, I don't see any signs of a slowdown based on the projects that are in the market right now and the conversations and the plans that we're talking about with our customers," he said. Rahim was speaking to The Register at HPE's Discover event in Barcelona this week, where it showcased various new and upcoming technologies, most with an AI twist. Asked about many AI projects not making it into production, he said: "There are pilots, but I also think there's a lot of actual, real value being created with production products. I mean, I can tell you from the standpoint of what we do in engineering inside of HPE Networking, more and more of our developers are getting far more efficient by leveraging copilots to write software and to verify software." But aren't there concerns about the quality of the code produced by AI assistants? "In the early days, there was a lot of concern, but I've seen an inflection point, the technology and trust based on more and more experience with it has actually improved dramatically. So it takes time. These things never happen overnight, but I sincerely do see the value," Rahim said.
Many people have drawn parallels between the current situation and the dotcom market bust at the turn of the millennium, but Rahim said what is happening with AI right now is not completely analogous to what happened then. "I don't think it's a good idea to look at the past as an indicator of what might happen in the future," he told us. "It's just different technologies right now. I mean, right now, the appetite for consumption of AI products, GPU cycles, is enormous," Rahim said. Whether that changes in a year or two is "difficult to say," he admitted. AMD chief exec Lisa Su also maintains that AI is definitely not a bubble. Speaking at the UBS Global Technology and AI Conference 2025 this week, Su said: "I spend most of my time talking to the largest customers, the largest AI users out there. And there's not a concept of a bubble." Instead, she believes the tech industry is a couple of years into a "ten-year super cycle," one where "computing allows you to unlock more and more levels of capability, more and more levels of intelligence." The cycle started with model training as the primary use case, but is now shifting to inference, Su noted, and with no single model fitting all situations and use cases, customers are having to fine-tune to meet their requirements, and this continues to drive demand for infrastructure. "The one thing that is constant as we talk to customers is we need more compute. That at this point, if there was more compute installed, more compute capability, we would get to the answer faster," Su claimed. As the head of a company that makes CPUs, GPUs, ASICs, and FPGAs, she would say that, of course. But what of companies like the industry darling OpenAI, which is valued at $500 billion even though it doesn't expect to make money until 2030 and may have to raise hundreds of billions to cover losses and its investments in AI datacenters?
"I think all of the capex forecasts that have increased over the last three to six months have certainly shown that there is confidence that those investments are going to lead to better capabilities going forward," Su said. "And so, from the standpoint of do we see a bubble, we don't see a bubble. What we do see is very well-capitalized companies, companies that have significant resources, using those resources at this point in time because it's such a special point in time in terms of AI learning and AI capabilities," she added. This is despite OpenAI CEO Sam Altman admitting earlier this year that he thinks the industry is in the midst of a bubble. When asked about complaints from many early adopters that there is little or no return on the investments they have made in AI, Su claimed this has not been AMD's experience. "What started as, let's call it, let's try AI for our internal use cases, has now turned into significant clear productivity wins going forward. So there's no question that there is a return on investment for investment in AI," she said. Su did concede that AI has not lived up to all the hype being broadcast about it. "If you look at today's AI, as much progress as we've made over the last couple of years, we're still not at the point where we're fully exploiting the potential of AI," she said. "And I still say that we are in the very, very early innings of seeing that payoff. So as we talk to the largest enterprise customers, I think every conversation is, 'Lisa, how can you help us, how can we learn faster so that we can take advantage of the technology?' So I think the return on investment certainly will be there." Meanwhile, Microsoft was forced to deny reports this week which claimed several of its divisions had lowered growth targets for products using AI after sales staff missed goals for the fiscal year that ended in June. 
Elsewhere, the head of South Korean conglomerate SK Group, which owns memory chipmaker SK hynix, opined that AI stocks might be due a haircut after rising too fast and too high. "I don't see a bubble in the AI industry," SK Group chairman Chey Tae-won said at a forum in Seoul, as reported by Reuters. "But when you look at the stock markets, they rose too fast and too much, and I think it is natural that there could be some period of corrections," he added. This could come soon, according to research firm Forrester, which recently found that large organizations are set to defer a large chunk of planned AI spending until 2027 because of the current gap between vendor promises and reality. Even the Bank of England's Financial Policy Committee has warned of the dangers of a sudden correction in the financial markets because of AI stocks, comparing the risks to the dotcom bubble.
[7]
AI data center boom sparks fears of glut amid lending frenzy | Fortune
For the skeptics, those are some of the examples of why the artificial intelligence data center boom is getting out of hand. There's a frenzy of development going on to support the AI revolution, and with it an insatiable demand for debt to fund it. Some estimate the overall infrastructure roll-out cost could reach $10 trillion, and with so many lenders lining up to throw cash at the assets, the fear is a bubble is building that could eventually leave equity and credit players facing substantial pain. "One key risk to consider is the possibility that the boom in data center construction will result in a glut. Some data centers may be rendered uneconomic, and some owners may go bankrupt," Oaktree Capital Management LP co-founder Howard Marks wrote in a note this week. "We'll see which lenders maintain discipline in today's heady environment." Given the flood of money going in, another danger is that there will be less credit available when facilities being constructed now using loans are in need of refinancing in three to five years' time. There's also growing concern about the level of leverage, particularly given the technology may underperform its high expectations. In such a scenario, lenders may be even more reluctant to refinance, and companies would have to find additional equity or pay more to borrow. "Momentum is strong, but if this is irrational exuberance, investors will lose when the music stops," said Sadek Wahba, chairman and managing partner at infrastructure investor I Squared Capital. He said his firm is trying to be careful, cautioning that "every deal has nuance, and the fine print matters." The broader AI universe has also been caught up in the worries, with circular deals and soaring valuations taking a toll on the bullish sentiment that once dominated. At Brookfield, Chief Executive Officer Bruce Flatt sees $5-$10 trillion of spending to finance the roll out of AI across everything from data centers to power infrastructure. McKinsey & Co. 
estimates almost $7 trillion is needed by 2030 just on data centers, including those for AI. "These are sums that have never been invested before," Flatt said. OpenAI, for example, has plans to spend $1.4 trillion on AI infrastructure - and would spend more if it could. Chief Financial Officer Sarah Friar has repeatedly said that the company's only constraint was finding more computing capacity. If the scale of the deals is one worry, another surrounds how they are being packaged and structured. Lenders are slicing and dicing debt and selling it on to other investors, meaning it becomes more and more opaque, according to Vinay Nair, chief executive officer at fintech platform TIFIN and a teacher in executive education programs at The Wharton School. "You're spreading this risk through the system," he said. If there's a decline, "I don't think we totally understand all the ripple effects of this through that credit channel." Some borrowers have been shifting the risk from AI data centers off their balance sheets using the securitization markets, where the debt is tranched into slices with varying risks and returns and bought up by the likes of insurers and pension funds. A similar story is emerging in the graphics processing units that process the data. With the lending environment so positive, some borrowers are even asking for more than 100% of the build cost for projects, according to two private credit lenders, who asked not to be identified as the details are private. In one case, the request was for 150%, with the property developer justifying the request on the basis of the uplift in valuation of the facility when rents start flowing, one of the people said. Meanwhile, there's also a risk of hype at play. Nuclear startup Fermi Inc. has yet to develop any data centers, but its valuation briefly jumped to more than $19 billion when it listed this year. 
That's made billionaires of founders Toby Neugebauer and Griffin Perry, son of former US energy secretary Rick Perry. But there are also increasing market jitters about the borrowing and spending. Fermi has slipped back below the level at which it went public. Concern about Facebook parent Meta Platforms Inc.'s spending hit its stock in late October, and Oracle Corp.'s slumped this week after the company reported a jump in investment in data centers and other equipment. For years, landlords financed data centers with a combination of equity and debt and leased out the space. Hyperscalers, large cloud computing providers like Microsoft Corp. and Alphabet Inc.'s Google, also developed sites themselves as cloud services took off. Now, companies want to keep adding capacity and maintain control of it, but are increasingly structuring deals to reduce the impact on financial statements, which helps limit the risk they'll be seen as overexposed. The hyperscalers are starting to use so-called synthetic leases, which limit the liabilities that appear on their balance sheet but still allow them to benefit from tax relief on depreciation, according to Jeffrey Shell, a vice chairman of corporate capital markets at CBRE. Tech giants would previously just write their own checks "because they need to move quickly for first mover advantage," said Shell. "At some point, even for the biggest companies, financing at these levels has a meaningful impact on the balance sheet." As borrowing soars, credit markets are having to adapt to cope with the demand. "The size has now outstripped what you're going to realistically place into CMBS, ABS, and the private placement project bond market," said Scott Wilcoxen, JPMorgan Chase & Co.'s global head of digital infrastructure investment banking. "It's going to take all of them." At least $175 billion of data-center related US credit deals have been struck this year so far, according to figures compiled by Bloomberg News.
Oaktree's Marks questions the yields on the debt that's been sold by hyperscalers to finance the AI investments. The spread is sometimes only about 100 basis points higher than US Treasuries, leaving the investing veteran wondering whether it's "prudent to accept 30 years of technological uncertainty to make a fixed-income investment that yields little more than riskless debt?" And not everyone is a fan of the design of some of the vehicles that investors are being asked to put money into. "We've seen master trust structures where the assets can be rotated every few years," said Michelle Russell-Dowe, co-head of private debt and credit alternatives at Schroders Capital. "It's hard to underwrite so we don't like those." Mentions of bubbles have seen regulators take an interest. The Bank of England is reviewing lending to data centers after growing concerned at the level of spending and financing. According to JPMorgan's Wilcoxen, one phrase that keeps popping up in the market to describe the vast expanse of financing being tapped is "everything everywhere all at once," a riff on the recent Oscar-winning movie. "The amount of money that is chasing all this is extraordinary," he said.
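To see what a spread that thin means in dollar terms, here is a minimal sketch with hypothetical inputs: the 4.5% Treasury yield and the $1 billion face value are illustrative placeholders, and only the roughly 100 basis point spread comes from the reporting above.

```python
# Illustrative only: the ~100 bp spread is from the article;
# the benchmark yield and deal size are hypothetical.
def all_in_yield(treasury_yield: float, spread_bps: float) -> float:
    """Coupon implied by a fixed spread over the Treasury benchmark."""
    return treasury_yield + spread_bps / 10_000

base = 0.045                         # hypothetical long Treasury yield (4.5%)
coupon = all_in_yield(base, 100)     # ~100 bp spread cited for some deals
extra_income = (coupon - base) * 1_000_000_000  # per $1bn of face value

print(f"all-in yield: {coupon:.2%}")                       # 5.50%
print(f"extra income vs Treasuries: ~${extra_income:,.0f}/yr")  # ~$10,000,000
```

The arithmetic is Marks' point in miniature: on a billion dollars of 30-year paper, the compensation for taking technological risk instead of riskless Treasury risk is on the order of $10 million a year.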
[8]
The AI boom is a loop-de-loop economy. Here's how
To some on Wall Street, the AI boom looks less like a broad-based revolution and more like a roller-coaster that keeps adding speed and very few riders. Chip giants send money into GPU clouds and model labs that already live on their hardware. Those clouds borrow against racks of GPUs and sign multibillion-dollar contracts to host AI workloads. AI companies drag other power and chip suppliers onto the train with compute deals and equity-for-chips arrangements that stretch well into the next decade. Each move adds another turn to the ride and pushes the cars a little faster. But some look at the ride and see an AI economy that's screaming in circles on one track. On earnings calls and in press releases, these moves sound like strategy: "partnerships," "ecosystems," "assured demand." From above, the pattern is harder to ignore. Instead of a broad base of independent customers buying compute as they need it, a few giants finance one another's buildouts, pre-sell years of infrastructure to each other, and then point to those contracts as proof the coaster can't possibly (possibly!) derail. Nvidia crushed its most recent earnings -- and its stock fell the next day as worried whispers of a "circular economy" and "AI bubble" concerns started traveling faster than the guidance. The mechanics sound simple and, to some, uncomfortable. "New" AI dollars are leaving a corporate budget as a multiyear cloud commitment, landing in a contract that's already earmarked for certain labs, jumping again into a GPU cloud's lease -- and ending up supporting someone's credit line or equity stake. The same promised workloads now prop up: a cloud backlog, a GPU-backed loan, and a chipmaker's growth slide. Across the internet and social media, this is being called a "three-companies-in-a-trench-coat" economy, an "ouroboros," a "Jenga tower," and a "crazy" round of Monopoly, where the same stack of pretend money keeps getting counted as fresh wealth every time it passes "go." 
Goldman Sachs wrote this fall that "AI bubble concerns are back, and arguably more intense than ever," citing "the increasing circularity of the AI ecosystem" as part of the problem. Morgan Stanley's Todd Castagno has warned that the AI ecosystem is becoming increasingly "circular" as today's loop-de-loops can inflate demand and valuations without creating economic value. "We are increasingly going to be customers of each other," Microsoft CEO Satya Nadella said in November, laying out his company's latest round of AI alliances. Microsoft and Nvidia bankroll Anthropic, which runs on Microsoft's cloud and chews through Nvidia GPUs; AMD hands OpenAI six gigawatts of future supply and, potentially, an up-to-10% stake in the company; Saudi-backed Humain wires in "exclusive technology" deals with AMD and Cisco, while Nvidia and Elon Musk's xAI are lining up a separate 500-megawatt data center in the kingdom. And everyone gets to hold up the relationships as proof that their side of the stack is indispensable. Veritas Investment Research's Anthony Scilipoti says his team has identified "another 80-100" circular deals involving Nvidia, on top of the headline partnerships investors already know. And the Bank of England, in its latest Financial Stability Report, has warned that valuations for AI-focused tech stocks look "materially stretched." Investors are increasingly worried that AI money isn't just flowing through a broad market anymore. It's looping, corkscrewing, and doubling back on itself, faster every quarter, while the companies at the heart of the AI boom insist the ride can only go farther -- and faster. Oracle is staking its future on OpenAI's appetite with a compute commitment worth up to $300 billion over five years starting in 2027, a number that probably would have sounded outrageous in any other era. Today, it sounds almost -- almost -- normal. 
Oracle has been racing to build the data centers to serve that work, leaning on bond markets and private credit, but since the deal was announced, the company's shares have lost more market value than the entire face value of the contract because the deal is already underwater. Meanwhile, Nvidia has taken stakes in GPU clouds, which borrow billions against towers of Nvidia hardware and then sell that capacity back to AI labs, many of which Nvidia also backs or courts. In September, the chipmaker and OpenAI announced a letter of intent to deploy at least 10 gigawatts of Nvidia systems for OpenAI's next-generation infrastructure -- a project Nvidia CEO Jensen Huang has called "the biggest AI infrastructure project in history." Nvidia said it "intends to invest up to $100 billion" in OpenAI as that hardware rolls out. Still, chief financial officer Colette Kress has reminded investors that, despite the headlines, there is still "no definitive agreement" and "no assurance" the deal will be completed on the expected terms. And with a recent $2 billion investment in Synopsys, Nvidia has now moved into the software that helps design the next generation of chips and systems, embedding itself even further down the stack. Nvidia isn't just selling tickets; it's designing the track, leasing the land, and deciding which riders get a seat. Threaded between OpenAI and Nvidia is CoreWeave, the GPU cloud startup that turned rack after rack of Nvidia chips into structured finance. In 2024, CoreWeave secured a $7.5 billion debt facility led by Blackstone and a consortium of private-credit players, using data centers and GPUs as collateral. This year, CoreWeave expanded its agreement with OpenAI again: a deal worth up to $6.5 billion brought the total contract value to about $22.4 billion. 
Nvidia owns a more-than-5% stake in CoreWeave and has agreed to buy capacity from it (over $6 billion of it), acting as a kind of backstop customer for the same compute that CoreWeave is selling to OpenAI and others. Then, there's AMD, which has its own orbit. As part of the OpenAI-Oracle megaproject (aka the broader Stargate buildout with Oracle and SoftBank), AMD has committed to supply up to 6 gigawatts of Instinct GPUs by 2030, and OpenAI has been granted warrants that could, if milestones are met, give it up to a 10% stake in AMD. In the Gulf, AMD and Cisco have teamed up with Saudi-backed startup Humain in a joint venture that plans to deliver up to 1 gigawatt of AI infrastructure over the next several years, starting with a 100-megawatt deployment in Saudi Arabia; AMD and Cisco will be minority equity holders, with the Public Investment Fund-backed Humain in the driver's seat. There's also the problem of exit velocity. Once a company has pledged 10 gigawatts of AI power or promised investors hundreds of billions in AI infrastructure, backing down carries a political and reputational cost that doesn't show up in spreadsheets. Utilities have rewritten long-term plans. Local officials have posed with shovels in fields that now double as collateral. Sovereign funds have stamped their names on AI parks meant to prove they are on the right side of the future. Industrial policy, grid planning, and corporate capex now converge on the same bet: that this small cast of riders will keep screaming excitedly around every turn. Still, there's a very loud crowd arguing that the loop is the least interesting thing about the AI economy. Nvidia has started pushing back at Michael Burry's "circular" talk, telling analysts that its cross-deals are tiny next to its revenue and that the startups it backs predominantly earn money from outside customers, not from Nvidia itself. 
In its latest earnings, Nvidia said demand for its newest Blackwell chips was "off the charts," with cloud GPU capacity effectively sold out. AMD CEO Lisa Su makes a similar case, arguing that the bubble debate "misses the bigger picture" because AI is a structural shift in how workloads are run, not a passing theme. David Wagner, the head of equities at Aptus Capital Advisors, told Quartz that the companies at the center of the AI boom know exactly how much leverage they have: balance sheets that can handle fresh debt, cash flow that can underwrite long-lived projects, and what he calls plenty of "runway" and "dry powder" to keep spending if the cycle wobbles. He sees AI already turning into real revenue in cloud and software, and some degree of overbuild is the cost of making sure the lights stay on when the next wave of workloads shows up. The circular deals look strange from the outside; from his seat, they look like big platforms using every tool they have to lock in a market they already dominate. When critics point out that Microsoft is both a shareholder and a major customer of CoreWeave, CoreWeave's CEO, Michael Intrator, points back to actual usage: Copilot, Office 365, and Meta's AI experiments are chewing through capacity, he says, describing demand as "overwhelming," stretching from hyperscalers to sovereign AI projects. He says debt and vendor financing are just ways to keep up with an order book that keeps overshooting the forecasts. Other strategists sit somewhere in the middle of the circular economy debate. Mark Jamison opened a recent piece in Barron's by conceding that "AI looks like a circular money machine" -- or "so the alarmists say" -- before arguing that "the evidence suggests something different, a powerful technological transformation that remains grounded in fundamentals." The big-bank house view leans the same way.
Morgan Stanley's tech team has described AI spending as part of a longer-term profit cycle, modeling roughly $1.1 trillion in AI software revenue by 2028 (up from $45 billion in 2024) and arguing that AI capex has "considerable potential for return," while J.P. Morgan's outlook says tech-led gains don't yet resemble a bubble -- so long as the incremental capex produces durable cash flow. Charles Schwab's Liz Ann Sonders has argued that this isn't dot-com 2.0 because today's AI leaders are enormous, cash-rich incumbents, not cash-burning startups, but she also warns that disappointment relative to sky-high expectations could still roil markets. Still, the market wants to know: How much of this motion reflects real demand from the rest of the economy? Oracle can point to an OpenAI contract that runs into the hundreds of billions. CoreWeave can point to more than $20 billion in obligations from OpenAI and another long-dated deal with Meta. Nvidia can point to the 10 gigawatts of planned OpenAI capacity plus AMD's six-gigawatt pledge. Each of those numbers supports everyone's individual growth story. None of them cleanly separates end users paying for AI from a handful of companies buying one another's capacity and calling it "momentum." The circular economy has become a main attraction, a key operating system of the AI boom. The chipmakers, clouds, and labs that dominate the story are financing each other's expansions, locking in each other's demand, and building power-hungry infrastructure around promises that all stem from the same circle of names. Still, the money is real. So the result of all this dealmaking looks to some like unstoppable momentum. To others, this looks like a very elaborate way of keeping the roller coaster ride moving -- farther and faster in tighter and tighter loops.
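The Morgan Stanley projection implies an extraordinary growth assumption, which a quick back-of-envelope check makes explicit (the helper function is mine; the dollar figures are the ones quoted above):

```python
def implied_cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate connecting two revenue figures."""
    return (end / start) ** (1 / years) - 1

# $45bn in 2024 growing to ~$1.1tn by 2028: four years of compounding
rate = implied_cagr(45, 1100, 2028 - 2024)
print(f"implied growth: {rate:.0%} per year")  # roughly 122% per year
```

In other words, the forecast assumes AI software revenue more than doubles every year for four straight years, which is the scale of bet the "profit cycle" framing rests on.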
[9]
AI's $400 bn problem: Are chips getting old too fast?
New York (AFP) - In pursuit of the AI dream, the tech industry this year has plunked down about $400 billion on specialized chips and data centers, but questions are mounting about the wisdom of such unprecedented levels of investment. At the heart of the doubts: overly optimistic estimates about how long these specialized chips will last before becoming obsolete. With persistent worries of an AI bubble and so much of the US economy now riding on the boom in artificial intelligence, analysts warn that the wake-up call could be brutal and costly. "Fraud" is how renowned investor Michael Burry, made famous by the movie "The Big Short," described the situation on X in early November. Before the AI wave unleashed by ChatGPT, cloud computing giants typically assumed that their chips and servers would last about six years. But Mihir Kshirsagar of Princeton University's Center for Information Technology Policy says the "combination of wear and tear along with technological obsolescence makes the six-year assumption hard to sustain." One problem: chip makers -- with Nvidia the unquestioned leader -- are releasing new, more powerful processors much faster than before. Less than a year after launching its flagship Blackwell chip, Nvidia announced that Rubin would arrive in 2026 with performance 7.5 times greater. At this pace, chips lose 85 to 90 percent of their market value within three to four years, warned Gil Luria of financial advisory firm D.A. Davidson. Nvidia CEO Jensen Huang made the point himself in March, explaining that when Blackwell was released, nobody wanted the previous generation of chip anymore. "There are circumstances where Hopper is fine," he added, referring to the older chip. "Not many." AI processors are also failing more often than in the past, Luria noted. "They run so hot that sometimes the equipment just burns out," he said. A recent Meta study on its Llama AI model found an annual failure rate of 9 percent. 
Profit risk

For Kshirsagar and Burry alike, the realistic lifespan of these AI chips is just two or three years. Nvidia pushed back in an unusual November statement, defending the industry's four-to-six-year estimate as based on real-world evidence and usage trends. But Kshirsagar believes these optimistic assumptions mean the AI boom rests on "artificially low" costs -- and consequences are inevitable. If companies were forced to shorten their depreciation timelines, "it would immediately impact the bottom line" and slash profits, warned Jon Peddie of Jon Peddie Research. "This is where companies get in trouble with creative bookkeeping." The fallout could ripple through an economy increasingly dependent on AI, analysts warn. Luria isn't worried about giants like Amazon, Google, or Microsoft, which have diverse revenue streams. His concern focuses on AI specialists like Oracle and CoreWeave. Both companies are already heavily indebted while racing to buy more chips to compete for cloud customers. Building data centers requires raising significant capital, Luria points out. "If they look like they're a lot less profitable" because equipment must be replaced more frequently, "it will become more expensive for them to raise the capital." The situation is especially precarious because some loans use the chips themselves as collateral. Some companies hope to soften the blow by reselling older chips or using them for less demanding tasks than cutting-edge AI. A chip from 2023, "if economically viable, can be used for second-tier problems and as a backup," Peddie said.
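The accounting stakes in that dispute are easy to make concrete. A minimal straight-line sketch, assuming a hypothetical $10 billion GPU fleet (the fleet cost is an illustrative placeholder, not a figure from the article): halving the assumed useful life from six years to three doubles the annual expense hitting the income statement.

```python
def annual_depreciation(cost: float, useful_life_years: int) -> float:
    """Straight-line depreciation expense per year."""
    return cost / useful_life_years

fleet_cost = 10_000_000_000  # hypothetical $10bn GPU fleet

six_year = annual_depreciation(fleet_cost, 6)    # industry's assumption
three_year = annual_depreciation(fleet_cost, 3)  # Burry/Kshirsagar's view

print(f"6-year schedule: ~${six_year:,.0f}/yr")    # ~$1.67bn/yr
print(f"3-year schedule: ~${three_year:,.0f}/yr")  # ~$3.33bn/yr
print(f"extra annual expense: ~${three_year - six_year:,.0f}")
```

That extra expense is non-cash, but it is exactly the "immediate impact on the bottom line" Peddie describes: reported profit falls by the difference every year the shorter schedule is in force.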
[10]
AMD's Lisa Su doesn't believe there's an AI bubble: 'Emphatically, from my perspective, no'
As AI continues to balloon and pull in even more investment, one of the fiercest debates (other than copyright, ethics, and the environment) is about whether or not it's a bubble. AMD's Lisa Su has weighed in on the debate. Recently, in an interview with Wired, the AMD chief was asked if she thought AI was a bubble. Her response: "emphatically, from my perspective, no." Su claims that fears around a potential bubble are "somewhat overstated". Unfortunately, as this is part of the Wired Big Interview series and the larger interview hasn't been published yet, it's hard to fully gauge where Su's stance lies beyond this answer. In its barest form, a bubble is where the asset price in some industry far exceeds actual value, due in part to speculation. The 2008 financial crisis and the dot-com bust didn't mean people would stop buying houses or using the internet, but that the value tied up in those assets exceeded what they were worth, and those betting billions on them could no longer see a return on investment. Investment is certainly there when it comes to AI. Nvidia is the first company to have been valued at $5 trillion. Nvidia, in turn, announced plans to invest up to $100 billion in OpenAI back in September. In October, AMD signed a multi-year deal with OpenAI, so there's some justification for wishful thinking here. Talking to Wired, Su says, "When I look at the landscape, what keeps me up at night is 'How do we move faster when it comes to innovation?'" Thoughts are divided on whether or not there's an AI bubble. A common argument, regardless of which side people take, is that many companies will be disrupted due to the gold rush that has happened with the technology. As the IBM CEO puts it, "maybe two or three" of every ten companies might achieve what they want to with AI, and the rest will have to stomach losses made in the process.
Nvidia is one of the companies that has benefited the most from the AI boom, with stock prices rising from around $3 in 2019 to over $180 at the time of writing. However, AMD has also had a boom, with stock prices rising from $30-40 in 2019 to around $215. But a bubble isn't just when companies have major booms: it's when companies have major booms and the asset value doesn't align with legitimate value. Whether or not AI is a bubble will be partially dependent on whether AI companies can actually match the potential they're selling consumers. Another factor is whether wider adoption is possible long-term. AI is such an all-encompassing force that even the US and UK governments have come out in favour of it, though Sam Altman claims he's not looking for a government bailout if things go bad. While the whole idea of an AI bubble is still up for debate, most seem to agree that only a few can win, and one just has to hope major governments are on the winners' podium.
[13]
View: AI bubble is real and it will birth giants - The Economic Times
The AI boom is creating a bubble, but it is a necessary one. Similar to past industrial bubbles, it is building future infrastructure. Companies are investing heavily in data centers and technology. While a correction is expected, strong AI companies will emerge to redefine intelligence. This investment fuels future progress.

"If we delivered a bad quarter, it is evidence there's an AI bubble. If we delivered a great quarter, we are fuelling the AI bubble," a dejected Jensen Huang told his employees recently, even as his company, Nvidia, delivered its astonishing, "bubble-bursting" quarterly results. Many watching this AI boom likely would have felt a similar mix of doubt, vertigo and exhilaration. Nvidia's quarterly profit jumped 245% in two years, pushing it to an unheard-of valuation of nearly $5 trillion. OpenAI is valued at $500 billion and is promising a $1 trillion IPO, with no profits in sight until 2030. "AI-washing", or slapping an AI label on every piece of software, has become a shortcut to funding. The US stock market's gains are disproportionately powered by a narrow AI cohort with valuations detached from gravity. This combination of as-yet-unseen valuations, planet-shaping capex and a narrative about remaking the world has one word following it around like a faint smell of ozone: bubble. Let us not mince words: We are in one.

Bubbles are not random occurrences, and neither is this one. Its recipe has a familiar mix of ingredients:

A real technological discontinuity. Bubbles happen around things that are genuinely new and potentially huge, like railroads, electricity, or the internet. AI is another generational and fundamental technology: a general-purpose tool used by hundreds of millions, with enterprises experimenting and adopting at scale.

A compelling story about inevitability. Investors don't just buy earnings; they buy the future. The AI story of intelligence becoming freely available on tap for every company and human being, solving problems that we could not solve otherwise, is transformational. Even if today's LLMs and models are not in their final form, the direction is locked. The winner seems to be the one who executes at speed along it.

FOMO + herd behaviour. Once a few winners appear, capital floods in, good judgment loosens and people start extrapolating straight lines into infinity. That is happening now.

Easy capital at scale. Here, the AI story is different. Rather than only the stock market, debt, or VC/PE players, it is Big Tech that is funding much of the build-out from their enormous cash flows. Yes, some players are borrowing heavily, and there are "circular" financing loops that deserve closer scrutiny, but this is not 2008-style debt stacked on debt across the whole system.

WE NEED THIS BUBBLE

The valuations are eye-watering, the hype is deafening and there is a fake-it-until-you-make-it energy radiating from parts of Silicon Valley that feels uncomfortably reminiscent of 1999 and the dotcom bust. But here is a contrarian take: We need this bubble. To understand why, let's distinguish between two types of economic frenzies that Amazon founder Jeff Bezos recently described: Financial bubbles destroy everything. Industrial bubbles create the future.

The 2008 crash was a financial bubble, built on complex derivatives, subprime mortgages and leverage upon leverage. When it popped, it left nothing behind but foreclosed homes and shattered retirement accounts. It was wealth destruction with no redeeming legacy. The AI boom, however, is reminiscent of a classic industrial bubble, like the Railway Mania of the 1840s or the dotcom boom of the late 1990s. When the dotcom bubble burst, trillions of dollars in paper wealth evaporated. But what remained were thousands of miles of dark fibre, massive server farms and a generation of engineers who knew how to build the internet.
That infrastructure became the bedrock for the modern digital economy and created companies like Amazon, Facebook and Google. Right now, companies like Microsoft, Google and Amazon are pouring nearly half a trillion dollars into data centres. OpenAI's $500 billion Stargate project will be the largest infrastructure investment ever, more than twice the Manhattan Project. A rising cohort of critics finds this irrational, what with the "circular deals" where tech giants like Nvidia invest in startups like OpenAI and Anthropic, which then pay that money right back to them for GPUs: a house of cards that will tumble any moment.

But there is another way to think of it. This is the messy, chaotic process of infrastructure building: the digital equivalent of China building "ghost cities" and massive high-speed rail networks years before the population fully needed them, or the US's interstate highways after World War II. They built for future capacity, not present demand; they were "too much, too early" by strict ROI logic, until they were not, and it made both of them superpowers.

BUILDING FOR THE FUTURE

In one sense, the tech industry is doing something similar. Silicon Valley's DNA is not to cater to the present but to build the future. It knows that the Age of AI is inevitable, and closing its ears to the worrywarts of Wall Street, it is rushing headlong into it. They sense in their bones that they are building the "railroads" of the 21st century: GPU clusters, gigawatt-scale energy grids and foundational models. That is why Huang said through gritted teeth, "There's been a lot of talk about an AI bubble. From our vantage point, we see something very different." OpenAI's Sam Altman echoes him, stating that progress in generational advancements like electricity always comes from "bold investment and long-term conviction".

This optimism is something in our DNA as human beings. As a species, we are evolutionarily wired for taking risks and punching above our weight.
That is how this puny creature defeated mammoths and sabre-toothed tigers to become the "pole species". We were the apex risk-takers. We did not leave the caves by playing it safe; we left by betting on fire, on tools, on the unknown. Bubbles are a feature of human innovation, not a bug. They are the mechanism by which society collectively decides to allocate massive, irrational amounts of resources to a new frontier. The same impulse that sent us across oceans in leaky boats is now sending trillions into silicon brains. Sometimes we overdo it, as with the famous Tulip Mania or the South Sea Bubble. But without the overdoing, breakthroughs do not happen, and we as a species do not move forward.

WHEN IT POPS

So, if we are in a bubble, will it pop? Almost certainly, there will be a correction, and the startups adding .ai to their names just to juice their valuations will be wiped out. The weak, the useless and the hyped-up will be destroyed. I would bet on more than one pop: a series rather than a single dotcom-sized implosion. There are many reasons: the demand is already real (consumer subscriptions, enterprise pilots, automation in code, design, service, science); the biggest spenders are cash-rich platforms that can survive a winter or two; and this boom is not primarily financed by system-wide leverage. Out of the detritus will emerge stronger AI companies. Just as Amazon survived the dotcom crash to redefine retail, the true AI giants will survive this correction to redefine intelligence.
[14]
A bursting bubble would be great for AI
AI's rapid expansion is fuelled by massive infrastructure spending, but real progress often comes from scarcity, not abundance. When resources tighten, innovation tends to accelerate, as seen in past energy and agricultural crises. A cooling AI investment bubble could push the industry toward creating more efficient, smarter systems.

"Bubbles are great. May the bubbles continue," Eric Schmidt, Google's former chief executive, recently said. For artificial intelligence to advance, companies must continue to pour record-breaking investments into AI infrastructure -- or so the thinking goes. Build more data centers, and AI will find a cure for cancer, reach artificial general intelligence and beat China.

But progress usually happens under pressure. When energy gets expensive, people invent energy-saving methods. When there's a worker shortage, they invent labor-saving machines. A deflating AI bubble may be just what the tech industry needs: As funding dries up, companies will have to build models that do more with fewer chips and less power.

Economists have a name for innovation in times of scarcity: directed technical change. In 1977, as Americans stood on long, winding gas lines, President Jimmy Carter likened the energy crisis to a war, and businesses responded accordingly, developing technologies we now take for granted: more-efficient engines, better-insulated homes, a wave of electric and hybrid vehicle technologies and early forms of renewable energy.

Similar circumstances transformed agriculture. In the early 20th century, abundant, low-wage labor dulled the incentive to mechanize. Then, in the spring of 1927, the Mississippi River burst through the levees, turning cotton country into an inland sea. Many residents took refuge in Red Cross camps; in some counties, up to four-fifths of families left.
With fewer hands for planting and harvesting, landlords turned to machines: Tractors replaced teams, and mechanical tools spread faster there than in neighboring counties.

Generative AI needs its own course correction -- both for the sake of energy efficiency and for its own advancement. Large language models, for all their wonders, can only predict the next thing a human would say. Train one on texts from the late 1800s and it won't invent airplanes or rockets. It will channel ideas from that period, when leading scientists thought human flight was impossible. If we only scale up our current approach, wasting money on fast-obsolete chips and energy-guzzling data centers, we won't progress beyond our current technology, which still yields limited, mediocre results. Better AI would remember what it learns, just as humans do, and squeeze more work from each watt. Tech companies spend billions of dollars running large language models that don't learn while they run. A tool that does both simultaneously would come closer to approximating the human brain, allowing it to innovate more readily.

The boom-and-bust pattern has been central to AI's advancement. In the 1980s, the blossoming AI industry tried to replicate human reasoning by feeding computer systems thousands of "if-then" rules written by programmers. The approach proved expensive and limited. But the resulting shock pushed researchers toward models that learned from examples and dealt better with uncertainty, while neural networks, then unfashionable, kept getting better. Progress became easier to measure, and the field stopped betting everything on a single big approach. Jobs were lost, and labs closed, but that slowdown taught scientists and developers better habits -- more empirical, flexible and results-focused -- that set the stage for modern AI.

Scarcity is still pushing AI forward, as companies with fewer resources learn to do more with less. In 2018, Europe's data regulations imposed strict rules and heavy fines on how personal data could be collected and stored. In response, tech companies adapted tactics to fine-tune existing models and use artificially generated data instead of real records. More recently, DeepSeek, a Chinese company, has worked around the U.S. export constraints. Its models -- trained with a small fraction of the computing power of Western rivals yet comparable on many performance benchmarks -- show how scarcity breeds ingenuity.

When there's no incentive for energy-efficient innovation, technology risks settling on the wrong track. Around 1900, electric vehicles had promise; New York and London even ran electric taxi fleets. But underinvestment in the electricity grid, coupled with cheap oil, led to a system that favored internal combustion for generations. It isn't hard to imagine a different century had we priced carbon early and kept building the grid. Without changing course, AI could be bound for the same fate -- a technology that had immense promise but that is trapped in an outdated paradigm that saps our resources.

Humans are astonishingly energy-efficient. A child can pick up cause and effect, how the physical world behaves and basic social norms with a brain that runs on only about 20 watts of power. Today's AI models burn through mountains of data and electricity to approximate that same performance, yet still misfire the moment they must handle unfamiliar problems. A course-corrected AI would help us tackle new challenges -- making scientific discoveries, enabling medical breakthroughs -- rather than merely refining what we already know.

Bubbles are noisy while they inflate. When they burst, the froth clears and you can see which ideas hold up without subsidy. If the AI boom cools, what survives will be the systems that do more with less.
AMD CEO Lisa Su emphatically rejects concerns about an AI bubble, calling them overblown as her company secures a 6-gigawatt GPU deal with OpenAI. But Goldman Sachs warns that datacenter investments could fail if the AI industry can't monetize its models, while IBM's CEO estimates the sector has committed to $8 trillion in infrastructure that may never generate adequate returns.
AMD CEO Lisa Su used her appearance at WIRED's Big Interview conference in San Francisco to emphatically push back against growing speculation about an AI bubble. When asked directly whether the tech industry is experiencing a bubble, Su responded with a firm no, arguing that such concerns are "somewhat overstated" and that AI is still in its infancy [1].

Source: Tom's Hardware
Her confidence comes as AMD prepares for one of its largest commitments to date: a deal with OpenAI to deploy 6 gigawatts of Instinct GPUs over several years, with the first gigawatt scheduled for the second half of next year [2].

Since becoming CEO in 2014, Lisa Su has transformed AMD from a struggling chipmaker with a $2 billion market cap into a $353 billion company positioned as Nvidia's primary rival in the AI chip market [1]. The OpenAI partnership includes an unusual equity arrangement where the AI company secured the option to buy up to 160 million AMD shares at a penny each once deployment milestones are met, effectively giving OpenAI a 10 percent stake in AMD [1]. Su framed this structure as a way to align long-term incentives around infrastructure delivery rather than short-term product availability [2].
While AMD bets big on sustained demand for computing power, Goldman Sachs has issued a starkly different assessment. The investment bank warns that datacenter investments may fail to pay off if the AI industry proves unable to monetize its models effectively [3]. Analyst firm Omdia forecasts that capital expenditure on data centers will reach $1.6 trillion by 2030, growing 17 percent annually [3]. Yet doubts persist about return on investment, with many business leaders unconvinced that AI justifies the expense.

Goldman Sachs sketched four scenarios for how the AI datacenter boom might unfold by 2030. In its base case, datacenter occupancy peaks at around 93 percent sometime next year before supply constraints ease after 2027 [3]. A more pessimistic scenario suggests that if users refuse to pay for AI tools (Microsoft has reportedly struggled to convince customers to pay $30 per seat for Copilot), monetization plans will collapse, leading to excess capacity and forcing operators to lower lease rates [3]. Another scenario sees corporate spending on cloud services decline as companies seek to reduce costs, causing datacenter occupancy to fall even as AI demand remains steady [3].
IBM CEO Arvind Krishna offers perhaps the most sobering perspective on infrastructure costs. Krishna estimates that a single one-gigawatt AI datacenter requires around $80 billion to build [5]. The AI industry has collectively announced plans for approximately 100 gigawatts of capacity, which would require $8 trillion to actually construct [5].

Source: Japan Times

To recoup that investment, AI ventures would need to generate $800 billion in profit annually just to cover interest payments, a figure no company in the sector approaches [5].
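Krishna's figures are a back-of-envelope calculation that is easy to reproduce. A minimal sketch; note that the roughly 10 percent annual cost of capital is inferred from the quoted numbers, not stated in the source:

```python
# Back-of-envelope check on IBM CEO Arvind Krishna's estimate.
# Assumption: the quoted $800B/year figure implies roughly a 10% annual
# cost of capital on the total build-out; that rate is inferred, not sourced.

cost_per_gigawatt = 80e9       # ~$80B per one-gigawatt AI datacenter
announced_capacity_gw = 100    # ~100 GW of announced industry capacity

total_buildout = cost_per_gigawatt * announced_capacity_gw
print(f"Total build-out: ${total_buildout / 1e12:.1f} trillion")
# → Total build-out: $8.0 trillion

implied_rate = 800e9 / total_buildout
print(f"Implied annual cost of capital: {implied_rate:.0%}")
# → Implied annual cost of capital: 10%
```

The arithmetic shows why the $800 billion figure follows directly from the $8 trillion total: it is simply a ten-percent annual carrying cost on the announced build-out.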
The financial complexity extends beyond raw infrastructure costs. CoreWeave, a former crypto-mining firm turned datacenter operator, exemplifies the circular financing arrangements now common in the AI industry [4]. The company expects $5 billion in revenue this year while spending roughly $20 billion, covering the gap with $14 billion in debt, much of it from private-equity firms at high interest rates [4]. CoreWeave uses Nvidia's money to buy Nvidia's chips and then rents them back to Nvidia, while Microsoft accounts for as much as 70 percent of its revenue [4].
AMD faces additional complexity navigating export restrictions. Su confirmed that AMD will pay a 15 percent tax on MI308 chips it plans to resume shipping to China under revised export rules [1]. The US government halted sales in April before reopening a licensing process over the summer [2]. AMD has told investors that the original export restrictions would create up to $800 million in inventory and purchase-commitment charges, making re-entry on known terms a positive step despite the additional fee [2].

Su addressed pressure from hyperscalers like Google and Amazon that are expanding their in-house silicon portfolios. "When I look at the landscape, what keeps me up at night is 'How do we move faster when it comes to innovation?'" Su said [1]. She argued that AMD's challenge isn't matching any single rival but advancing its own roadmap quickly enough to capture the next wave of deployments [2]. Her view is that each generation of AI models raises performance expectations, supporting sustained investment in training and inference clusters.

Some investors have begun exercising caution. French multinational Axa told Bloomberg it is "exercising greater caution on the artificial intelligence build-out" when backing financing for the sector [3]. Norway's $2 trillion sovereign wealth fund also expressed caution about investing directly in data centers due to the sector's high volatility [3]. These financial risks echo patterns from the 2008 financial crisis, when wealth tied up in obscure overlapping arrangements led to economic catastrophe [4].
Source: The Register
Summarized by Navi