6 Sources
[1]
Davos Crowd Focuses on AI Returns After Year of Heavy Investments
If a key focus at last year's World Economic Forum in Davos was the need for massive private and public investments to support artificial intelligence development, this year's event was more about proving the payoff. Packed into the crowded AI House, one of the conference's endless corporate spaces, Rasmus Rothe, from the house's co-host Merantix, declared 2026 the "year of AI ROI." Slogans plastered across the Davos promenade promised that companies like Cisco and IBM had found the formula for returns on AI investment.

OpenAI executives, meanwhile, debuted new education, health and cybersecurity initiatives, framing the moves as part of a push to ensure all markets -- not just the US -- see gains from the technology. At a press huddle, Brad Lightcap, the company's chief operating officer, quoted the sci-fi author William Gibson: "The future is already here -- it's just not evenly distributed." Lightcap added: "That very much rings true on this."

The rhetoric hints at the anxiety of this moment for AI. Many investors are getting antsy to see significant commercial growth that justifies the sector's enormous expenditures and lofty valuations. OpenAI, an unprofitable startup, has committed to spending more than $1.4 trillion on data centers and chips for AI in the coming years, including striking several deals with cloud providers and chipmakers that have been criticized as circular. (At Davos, OpenAI Chief Financial Officer Sarah Friar said she wants to "completely refute" that label.)

But among attendees at Davos, there was also plenty of optimism, with AI leaders highlighting their business traction. Anthropic Chief Executive Officer Dario Amodei, for example, touted the benefits of his company's focus on enterprise customers. "It's a business that's more stable than consumer," he said in an interview with Bloomberg Editor-in-Chief John Micklethwait. "We can just very directly create value."
Anthropic earned the most buzz at the conference, thanks to Claude Cowork, a new tool that's gone viral in tech circles for being intuitive and tackling a wider range of work tasks on the user's behalf. Though it's still a "research preview" and limited to certain paid users, the Cowork product offers a glimpse of how advances in AI could translate into greater productivity for a broad mix of professionals.

Top AI developers, including Anthropic and OpenAI, have been working to prove the value of their services for industries ranging from health care to financial services. Some of the clearest, early traction has come in software engineering, where AI tools are speeding up the process of writing and debugging code. OpenAI said at Davos that sales from its software business, where developers pay to plug into its application programming interface, added about $1 billion "in the past few weeks," growing 19% on a weekly basis. Anthropic previously said Claude Code reached a $1 billion revenue run rate in just six months.

"It's not vibe coding anymore," said Tariq Shaukat, CEO of Sonar, a startup that evaluates code quality for companies. "It's real production." Shaukat previously ran sales at Google Cloud, where he recalled spending a year to close deals with big financial firms. Now, he said, banks are using AI products released only months ago. He estimates that nearly a third of the code at banks will be AI generated this year.

Timothy Young, CEO of Jasper, another AI startup networking at the Swiss village, cited a proverb from his old company, VMware, to explain how coding tools will lead to broader industry sales: "Value follows what developers do."

The value can't come fast enough for AI. In addition to general uneasiness with the pace of AI spending, US and European tech companies are also confronting an uncertain geopolitical landscape that could complicate their global strategies.
Looming over every conversation, even those about enterprise AI, was Donald Trump's arrival at Davos and his tense standoff with Europe over Greenland. Privately, those who flew in from Silicon Valley worried that Europe might counterpunch Trump's tariff threats by dropping US tech. "I have nothing other than sympathy for my former colleagues who are navigating a very complicated day," George Osborne, the former British politician now running OpenAI for Countries, said the morning before Trump's speech.

There was also a debate among tech executives over the extent of the threat posed by China. Google's AI honcho Demis Hassabis told Bloomberg's Emily Chang that there was a "massive overreaction" to Chinese upstart DeepSeek a year ago, and said the country's tech firms remain about six months behind the frontier AI of the leading western labs. In a separate conversation, Mistral CEO Arthur Mensch described that line of thinking as a "fairy tale." China, he said, "is not behind the West."

Shaukat said open-source models from China are "everywhere." Young from Jasper AI, which serves the marketing industry, said a growing number of clients are requesting Chinese models because they're cheaper. And Christian Klein, CEO of software firm SAP, said he's seeing increased interest in integration with Alibaba's AI models across Asia and even within Europe. "Some companies are saying, 'You know, you're going to get hit by tariffs. Okay, let's look into certain alternatives,'" Klein said. If that continues, it may force other tech firms to rethink their pricing for AI models, making it that much harder to recoup their investments.
[2]
Google DeepMind chief warns AI investment looks 'bubble-like'
Demis Hassabis says the level of investment in some parts of the tech industry has become detached from commercial realities.

Demis, Google launched its most powerful model, Gemini 3, just a few months ago. It was received with a lot of excitement. Where do you think Google is right now in the AI race?

Well, we're very happy, as you say, with the last model we released, Gemini 3. It's topping pretty much all the leaderboards. So it's a great model. Feedback's been great from our users and enterprise customers. But I think, overall, we had a really good year last year when we look back on it. I think the trajectory of progress we've been making is the fastest of anyone in the industry. If you look at Gemini 2.5, the previous version that we released in April/May last year, that was already becoming very competitive, I think, at the frontier. And then I think we cemented that with Gemini 3. But of course, it's a ferocious, intense competition, as you know. And everyone's pushing as hard as they can. And we've got to make sure we deliver this year too.

Is that why Sam Altman declared a code red?

Well, apparently that's what was being reported. And...

How do you feel about it?

It's fine. We just focus on ourselves. And I think that's what we've got to do: block out the noise, and just execute, focus on the quality of our research, and then make sure we're shipping that quality fast enough into our product services. And I think that's what you've seen with our share of the chatbot space with the Gemini app, which has gone up to 650mn monthly users now. And then things like AI Overviews, two billion users. It's the most used AI product in the world. So we're really proud and pleased with how that's going. But I think we're just scratching the surface of what we can really do when we're fully in our groove. And we're going to get to that.

But when you look at the industry, what do you think rivals are doing best?
What do you think is really interesting right now?

Well, I think what Anthropic's doing with code is very interesting with their Claude Code. There's a lot of excitement around that in the developer market. We're pleased with the performance of Gemini 3, but they've done something special there, I think. Other than that, I'm very excited about the stuff that we're doing on multimodal. I feel like...

Do you want to explain?

Yes, multimodal, being... Gemini, from the beginning, has been multimodal. And by that, I mean being able to deal with more than just language and text, but actually image, video, audio as a native input and output. And we're bringing that all together. That's always been our strength. And the reason we want to do that, and I think that's what I'm excited about this year, is that's what you would need for a kind of assistant that travels around with you in the real world, maybe on your glasses or your phone. It needs to understand the world, the context around you, the physical world. And of course, for robotics, that's critical too. And I've been spending quite a lot of time on that last year. And I think that's going to have some big moments in the next couple of years.

Can you talk a little bit about these big moments? Is it a question of trying to create devices that would... the new iPhone, or the glasses?

There's actually so many simultaneous things one has to do, which is why it's very exciting, but also quite daunting at the moment. At least from our perspective as Google DeepMind, we like to think of ourselves, we describe ourselves internally, as the kind of engine room of Google. And we're providing the engine, which is these models, like Gemini, and Veo, and Nano Banana, all these state-of-the-art models. And then we've got to figure out how we want to incorporate them into features in products that are really useful to the end user.
So there's that whole aspect of work, which is enhancing what already exists, from email, to your Chrome browser, to search. But then there are also all these very exciting new greenfield areas of digital assistants like the Gemini app. But what does that become over time, including new devices? And we're working on, and we've announced recently, partnerships with Warby Parker and Gentle Monster on new types of smart glasses. Obviously, Google has a long history with smart glasses. But I think maybe we were a bit too ahead of our time when we first started this 10-plus years ago at Google with the devices. What was missing then was a killer app. And I think a universal digital assistant that helps you in your everyday life could well be that killer app for things like smart glasses connected to your phone.

This is an area where a lot of your competitors are also working. Why do you think you will be able to compete very effectively?

Well, I think it starts with the quality of your research and models. I think we have, by far, the deepest and broadest research bench. I think we have the most talent in the industry. And I think that will then translate to the quality of our breakthroughs and research innovations. And that underpins what you can do with these new products.

Speaking of talent, there's a real talent war in the industry. Some researchers are getting offers for $100mn. How are you holding on to your researchers? Are you having to pay more than that?

Look, I mean, of course, another part of the ferociousness of the competition is the talent wars. But I think that most top researchers, they're, of course, fabulously well paid. But beyond that, it's the mission. What are you trying to do with your skills? These are phenomenally smart people. They could do anything with their skills. Are you doing good in the world?
Are you building products, or applying, in our case as well, AI for science, for scientific ends that you'd actually be proud of, that your friends and family would be proud of, and that overall benefit society? And I think we're very lucky at Google that we have those product services that people love and use every day, from maps to email, that we are enhancing with our AI work. So it's very motivating that when you make a research breakthrough, you can ship it, and then immediately a billion users can take advantage of that.

So my expectation is that this year we're going to hear a lot more about a techlash, because there are growing concerns in society, but there are also safety and misuse issues. And we've seen several examples of that. How concerned are you? And how do you protect against it?

Look, I think society is right to be worried about these things. And of course, as you know, I spent my whole career working on AI because I really believe in all the benefits that are going to come from science and medicine advances, things like AlphaFold that we've done. But we need to also worry about these harmful use cases. We've tried to get ahead of that with things like SynthID, our watermarking technology for things like deepfakes, and getting the right guardrails around the usage of Gemini. And we take that responsibility very seriously for all the users that we have. So there's what we can control. And then beyond that, we try to be role models for what responsible deployment of these technologies looks like. And then as far as society and the average person go, we need to show, as an industry, as a scientific field, what the unequivocal benefits are more clearly, more quickly. And I think for us, that's doubling down on our AI for science and AI for medicine work, things that are kind of unequivocal goods in the world.
I'm going to get to that with Isomorphic, but just staying with the misuse and safety issue: whatever happens in this industry, if there is a growing techlash it will affect all the companies. So is that something... I mean, are you all not getting together to discuss this? Is there any effort under way to address it as an industry?

There are some industry groups. And of course, most of the lab heads know each other quite well. But I think you're seeing different frontier labs do different things. And we'll have to see how that works out. What we can control is what we do at Google DeepMind. And we try to broadcast that at places like this and show a way forward that, I think, gets most of the benefits but mitigates the risks. And we hope others will follow that path. But it would probably need something governmental to get the whole of the industry to do that. And then there's also the international co-operation question too.

The other big risk this year is the bubble bursting. Are we in an AI bubble?

Well, yeah, look, for me, it's not a binary question, yes or no. The AI industry is very big now, as you know. And it's sort of multifactorial. From our point of view, we're seeing more usage than ever, incredible demand for our models and AI features. We can barely satisfy it. There aren't enough chips to go around. And overall, this is going to be the most transformative technology probably ever invented. So from that perspective, there can't really be a bubble. But on the other hand, I think there are parts of the industry that do look bubble-like. For example, multi-billion dollar seed rounds in new start-ups that don't have a product, or technology, or anything yet do seem a little bit unsustainable. So there may be some corrections in some parts of the market. And then we have to see.
I don't worry too much about that from our day to day. I focus on our technology and delivering that. And my job as head of Google DeepMind is to make sure we're well positioned no matter what happens. If the bubble bursts, we'll be fine. We've got an amazing business that we can add AI features to and get more productivity out of. And if the bull case continues, then we've also got these amazing AI-first, AI-native products like the Gemini app.

You've also spoken about the AI race and the competition with China. From what I can see, in China there is no AI race. It's very different from what you hear. There's no sort of race to reach AGI [artificial general intelligence]. There is a lot more focus on applications and finding efficiencies. Is that, perhaps, the more realistic approach?

Look, it's perhaps the more... I don't know about realistic, but the less risky approach, perhaps. And by the way, I think the Chinese market, from what I understand, is just as intensely competitive as the western companies are with each other. It's just that, I think you're right, they're more focused on the near-term applications, what you can concretely do right now, rather than maybe these more research-heavy frontier capabilities that would get you to AGI. I think that's fine. I started DeepMind, and our job now at Google DeepMind and Alphabet as a whole is to build AGI. We think that's the ultimate goal. And that will unlock so many opportunities and possibilities in the world that we've talked about many times. So that's really the North Star. And on the way, we'll create lots of useful technologies. But I think you've got to have that as a North Star if you want to progress the research in as innovative a way as possible. And I think that's why, in my opinion, the western companies are still in the lead on that.

How many months are you ahead? Is it a matter of months?

I think probably it's only a matter of months now, would be my guess.
Although interestingly, some of the Chinese leaders and entrepreneurs I talked to feel like they're further behind than that. I'm not sure that's the case. Maybe it's only a matter of six months or so now. But I think it's important, because even with things like DeepSeek, which I think prompted a bit of an overreaction in the West, it's a bit overblown: the Chinese labs haven't proven they can innovate beyond the frontier yet. They're getting faster and faster at catching up to the frontier, to what the frontier labs are doing. But they haven't innovated beyond that, the next transformers or something like that. They haven't proven they have that capability yet.

Do you think they are as focused on it, though?

They're probably not. And that might be one reason why.

There was, in the last few months, a debate about AGI. And you disagreed with Yann LeCun, who said that there is no such thing as general intelligence. You're a real expert on the brain. So explain to me why you disagreed with him.

Yes, yeah, we have many fun debates, Yann and I, at conferences and things. But this was an online one. I just think his argument on that is kind of ridiculous. I think he's confusing two things: general intelligence, which I think clearly we have as humans, and our brain has that, and universal intelligence, something that can understand anything that could be possible. And the thing is, it's obvious our brains are very general, because look at the modern civilisation we've built. We're basically tool-making creatures. That's what separates us from other animals: we build tools, from all the modern things around us - vehicles, 747s, but also computers. And I'd include AI in that as well, as the ultimate expression of the computational tool. So if you include all of that, and the science that we do, it's unbelievably general. It's not everything that could possibly happen to your retina, and all these arguments he makes.
But it's clearly general. And then the other argument I make is more from Alan Turing, who's one of my all-time scientific heroes. He proved that Turing machines could compute anything that is computable. So that's a super general class of machine. And all modern computers are based around that. But also, I think most neuroscientists would agree that our brains are approximately a Turing machine, or approximately Turing powerful, which means that we can, in theory, understand almost anything that's computable. And so the idea is that our brains are a general system that can, in theory, learn almost anything. Not that we already know everything. So it's a question of whether you have the learning capability, versus the actual full knowledge. Obviously, our brains are limited; a single human can't know everything. But in totality, our brains are very, very powerful.

And very flexible.

And extremely flexible.

So what is it going to take to get to AGI? Recursive self-improvement, where essentially AI models can teach themselves? We're not there yet. How far are we from it?

Yeah so...

And is that the main breakthrough?

Well, that's one. I think there are quite a few capabilities missing from today's systems that will be needed for something that could probably pass as AGI. And continual learning is one of those things: online learning after it's been trained. Can it learn new things from the user or from experience? So it's sometimes called continual learning or online learning.

And for that, you need the personalisation, right?

Yes, that would be part of it. If you want it to personalise, then that would be a form of online learning. But self-learning and self-improvement can also be part of that. So that's a closed-loop version of experiencing something in the world and then updating your knowledge base directly and automatically.
And we actually pioneered a lot of that work back 10-plus years ago now with AlphaGo and AlphaZero, our games-playing programmes. But of course, in games, that's much simpler. The real world is much messier, much more complex. So the question is, can you translate some of those techniques to the messy real world?

Let's get to Isomorphic, because originally, I think, the company had said that you'd be going to clinical trials in Q4 of 2025, but then it was preclinical. So what happened? And when will the drugs go to...

So nothing happened. I misspoke. I think it was one interview I gave a couple of years ago. The plan was that we were entering preclinical last year. So I misspoke then. And basically, we're in preclinical trials with a few of our drug programmes. It's going very well. We're advancing very well. And as soon as that's ready, we'll move into clinical.

So when will we have the first AI-designed drug?

Well, I hope in the next few years, but it depends on how the preclinical trials go, and the clinical trials.

Has it been harder than you had expected?

Not at all. We're actually doing phenomenally well. And we just announced a new partnership with J&J yesterday. So we now work with J&J, Eli Lilly, and Novartis, three of the best pharmas in the world. And we also have our own internal programmes. So we have about 17 programmes in total. And we're going to talk a lot more about that. You'll see a lot more news from us in the first half of this year on our progress, which is going very well.

You also announced that you were building a materials science lab in the UK. Can you give me some more details about that?

A little bit more. I mean, we're still at quite early stages with that. But I think materials science, and AI designing new materials - semiconductors, superconductors, batteries, these kinds of things - is going to be a huge part of the benefits AI will bring to the world.
And I think we're at maybe an AlphaFold 1 level: some promising research prototypes. But we need to go further. And part of that is we need to be able to test the materials that our AIs are designing, quickly. And so we're thinking about creating a kind of automated lab in the UK to test these theoretical compounds that the AI systems are coming up with.

Every time I see you, I ask you: what's the timeline for AGI? And I've noticed that lately all of you are not talking so much about timelines. In fact, Sam Altman even says we are almost at that stage. So I am going to ask you, what is your timeline?

Well, mine's been very consistent. I think we're about 5 to 10 years away. So maybe it's now about four to nine years... so now it's like four to eight years. So I think 2030 is probably the earliest it could be. Maybe a 50 per cent chance over that kind of time horizon. So I'm still sticking with my timeline. I think others who've had more aggressive timelines maybe are updating to be a little bit longer and a little bit more realistic. But for me, things always take a little bit longer than one assumes, even at the pace that we're all going at. I mean, that's still phenomenally... that's still extremely soon. I just think it's not going to be like next year.

And your job has evolved a lot from DeepMind to actually handling all of Google's AI. I'm just wondering where you see your future. Do you want to be CEO?

Look, I love the... I'm very happy with what I'm doing. I love being close to the science and the research. So I still try and carve out time to do that, even though I'm running a lot of things, including some products now. But look, I can get pretty excited about... I'm very general in my interests. And I can get very excited about anything that's cutting edge, especially if it has a leaderboard attached to it.
Yeah, I think there's only so much one can do in the day and still leave enough time for serious thinking, which I do at night time. And I quite like that routine. So I hope to be able to stick with that.

You didn't say no.

No.

OK, there we go. Thank you. Thank you, Demis.
[3]
These prophets of economic doom are worried about another collapse
Dean Baker has earned a reputation for predicting economic catastrophe, and he tries to follow his own advice. After the economist warned of a stock bubble in the late 1990s, he rebalanced his investments to reduce exposure to the market. Several years later, he became concerned that soaring home values would fall to earth, so he and his wife sold their condo in Washington. He was right both times: The dot-com bubble burst in March 2000, and D.C.-area home prices crested in 2006 before slumping toward the depths of the Great Recession in 2009.

Now Baker, who's a distinguished senior fellow at the Center for Economic and Policy Research, has that foreboding feeling again. Investment in artificial intelligence has propelled the stock market to record highs, but he's shifting his investments to be less exposed to what he considers to be an AI bubble edging closer to popping. "I don't make a point of coming up with a negative forecast," he said. "I just try to have open eyes on the economy, and sometimes I see something that other people don't."

Baker is among a select group of people with track records of foreseeing major economic train wrecks. These proven prophets of doom are winning attention in online posts and media interviews, as more people begin to wonder if the AI boom is too good to be true. That's giving economic groundhogs like Baker a chance to spread their market wisdom more widely or actively cultivate big new audiences.

Michael Burry, whose mid-2000s bet against the housing market inspired Michael Lewis's 2010 book, "The Big Short," triggered headlines across financial news outlets in November when his hedge fund Scion Asset Management disclosed it was betting that the stock prices of AI darlings Nvidia and Palantir will fall significantly over the next few years.
The same month, Burry, who didn't respond to a request for an interview, started a Substack newsletter that often predicts an AI-catalyzed market implosion. It has more than 195,000 subscribers and is called Cassandra Unchained, after the princess of Greek myth cursed to foresee the future but to always be ignored. "OpenAI is the next Netscape, doomed and hemorrhaging cash," Burry wrote in a post on X last month that was viewed more than 2 million times, likening the maker of ChatGPT to a casualty of the dot-com bubble. (The Washington Post has a content partnership with OpenAI.)

Although voices of caution are having a moment, that doesn't mean they're winning the argument. James Chanos, the founder and managing partner of Kynikos Associates, who bet on the fall of energy giant Enron, said in an interview that market contrarians are often disregarded. Short-sellers like himself are often viewed "as the village idiots or Dr. Evil," he said, either wrongheaded or trying to manipulate the market. "There's kind of no in-between," said Chanos, who prefers to see himself and others as "financial detectives" hunting for bad actors, fraud or froth that should be cleared away.

A 2025 Harvard and Copenhagen Business School study of the beliefs of market experts during periods of boom and bust suggests that questioning market optimism is a good idea. "Optimism portends crashes: the most bullish forecasts predict the highest crash risk," the authors found. In most cases, the authors said, "optimism remained unchecked until well after the crash."

Other economists have identified key factors that indicate a crisis could be around the corner. A 2020 study of postwar financial crashes around the world by economists at Harvard, the National Bureau of Economic Research and the Copenhagen Business School found that "crises are substantially predictable."
When credit and asset prices grow rapidly in the same sector -- conditions the researchers term a "red zone" -- there was a probability of about 40 percent of a financial crisis starting in the next three years, they concluded. A tech-fueled surge in share prices over recent years has driven the total value of the stock market to far outweigh U.S. economic output, an imbalance that has come before previous downturns.

But a report issued Jan. 9 by Goldman Sachs Research said many features of past bubbles are absent. Corporate debt is relatively low in historical terms, and most of the S&P 500's 18 percent returns last year came from increased profits, not investors marking up valuations, the report said. Double-digit earnings growth is "providing the fundamental base for a continued bull market," wrote Ben Snider, chief U.S. equity strategist. The report forecast that U.S. stocks would continue to grow in value this year.

When Andrew Odlyzko -- an emeritus professor of mathematics at the University of Minnesota who has studied economic bubbles and has a history of recognizing warning signs before a crash -- started getting calls from journalists asking about a potential AI bubble in 2024, he dismissed the idea. At the time he reasoned it wouldn't be systemically devastating if a big company like Google, Microsoft or Meta made an expensive technological bet that flopped.

But things have changed in the past year and a half, Odlyzko said. "Now the investments are exceeding the capacity of these platform companies to finance them out of their cash flow, and they are drawing in other sectors of the economy," he said. He pointed to Meta's recent deal to develop a $30 billion data center project in Louisiana, in which the project's debt is held in a separate entity off Meta's books. Such deals remind Odlyzko of the creative financing that led to the Great Recession in 2007.
"If -- or more precisely, I'm pretty confident when -- things collapse, the spillover effects will be much more substantial, much more deadly," he said. Today's rush to build AI data centers also reminds Odlyzko of the 19th-century railway mania in Britain, a bubble of speculation on new railroad infrastructure. Both frenzies are creating "big infrastructure ... that's actually drawing on other parts of the economy," he said. Chanos makes comparisons between today's AI fever and the 1990s tech boom, as both bull markets have centered on big ideas: AI today and the internet's beginnings decades ago. In the short term, many early internet businesses cratered, even though the technology worked out in the longer term. Artificial intelligence technology "is real and probably will be very important, but lots and lots of companies that claim they're a great business ... are probably not going to be great businesses," Chanos said. What's different is that it's now much easier for retail investors to jump into the stock market with the rise of stock-trading apps like Robinhood. Chanos said he's "seeing more and more speculation in terms of retail investors who only know markets that generally go up, and if they go down, they go down for just a short period of time." Baker is one of those retail investors who's preparing for the worst, as he has before -- although he hasn't always had perfect timing. He pulled back his portfolio a couple of years before the dot-com bubble burst in March 2000 and sold his D.C. condo in 2004, about two years before home prices started falling in the region. Although discussion about predicting market slumps often frames the events as bad, Baker thinks an AI crash could do the U.S. some good. A slump could lead to a reallocation of resources in the economy, perhaps toward other sectors like manufacturing or health care, he said. 
"There's all sorts of things you could better use those resources for if the AI really doesn't make sense," Baker said.
[4]
In Davos, the AI bubble is always someone else's problem
Why it matters: Businesses are investing almost unfathomable sums in AI, but there is a growing sense that at least some companies won't be able to recoup their massive investments. The big picture: AI remains the talk of Davos as CEOs and political leaders remain convinced that the technology is leading to a massive societal shift. The other side: Even if you agree AI is real and significant, the internet was real too and that didn't prevent the dot-com bust. * While mood and sentiment certainly play a role in the timing of when boom turns to bust, the driving factor is the economics, namely whether too much was invested too soon to produce a reasonable return. Zoom in: So who is safe and who is vulnerable if the tide turns? If you ask the big guys, they will be fine. * Google, for example, points to the fact that it can spread the cost of its massive investments across its consumer and enterprise businesses, whether it's via ads on YouTube, subscriptions to Gemini or big businesses renting compute from Google Cloud. * "We maximize the return on investment we're getting and that allows us to have real strength financially, to continue to invest going forward," Google Cloud CEO Thomas Kurian said during an interview at Axios House Davos. Others claimed they would be insulated because they aren't building the infrastructure and can pay as they go instead. * "SAP is very cap-ex lean," SAP executive board member Thomas Saueressig told Axios Friday on the sidelines of the DLD conference in Munich. "We benefit from the R&D efforts and the investments our partners are doing." Writer CEO May Habib pointed to the biggest AI labs -- Anthropic and OpenAI -- as the ones most at risk. * "When you look at either of the big labs right now, they're both basically valued at more than Salesforce plus Adobe plus Databricks plus Snowflake," Habib said on stage at Axios House on Monday.
"That's hard to kind of wrap your head around, because those other companies are working really, really hard to bring AI to their customer base." Snowflake CEO Sridhar Ramaswamy warned, however, that a bubble burst would have sweeping impacts. * "If there is a correction, then the entire stock market, including the valuation of our company, including Snowflake is going to go down," he said on stage at Axios House on Monday. * "On the other hand, in terms of good bubbles and bad bubbles, if this bubble results in a reinvigoration of the power sector, which as you know, is not something that's been attractive for investments, that's actually a net positive for all of us." Zoom out: In a few decades' time, all of the investment will have been worth it, many execs agreed. * "We're talking about a technology that can augment and maybe even replace human intelligence," Emerald AI CEO Varun Sivaram told Axios. "In the long run, my bet is that we have built less than 1% of all the compute that we will need as humanity, and we will almost never stop needing more compute." * "We don't regret railroads, telecom fiber ... all the build-ups of this kind that we've done in history, we have ended up feeling great about," Meta CTO Andrew Bosworth said during an interview at Axios House Davos. Yes, but: Plenty of companies won't survive long enough to reap the rewards, Bosworth acknowledged. Between the lines: Bosworth points out that consumers will win regardless, given all the compute power that is being unleashed. * "I think consumers and societies are ultimately the beneficiaries of this tremendous land grab of power, data centers and GPU capacity and whatnot," he said. What we're watching: Obviously everyone is waiting for signs of a bubble popping. But also important is just how long any downturn lasts before the next upswing.
* NTT Data CEO Abhijit Dubey says his company is rare in that it sees both the demand and supply side of the AI industry because it both builds data centers and offers significant technology services to large companies that would use AI. * "We see where enterprises are on their adoption journeys and where it is at low levels," Dubey said, noting that the buildout of data centers is outpacing enterprises adopting AI. * However, he said he expects any correction to be short lived, with companies adopting AI far faster than past tech shifts, such as the 10-year move to the cloud. Axios' Amy Harder contributed to this report.
[5]
'We could hit a wall': why trillions of dollars of risk is no guarantee of AI reward
Progress of artificial general intelligence could stall, which may lead to a financial crash, says Yoshua Bengio, one of the 'godfathers' of modern AI Will the race to artificial general intelligence (AGI) lead us to a land of financial plenty - or will it end in a 2008-style bust? Trillions of dollars rest on the answer. The figures are staggering: an estimated $2.9tn (£2.2tn) being spent on datacentres, the central nervous systems of AI tools; the more than $4tn stock market capitalisation of Nvidia, the company that makes the chips powering cutting-edge AI systems; and the $100m signing-on bonuses offered by Mark Zuckerberg's Meta to top engineers at OpenAI, the company behind ChatGPT. These sky-high numbers are all propped up by investors who expect a return on their trillions. AGI, a theoretical state of AI where systems gain human levels of intelligence across an array of tasks and are able to replace humans in white-collar jobs such as accountancy and law, is a keystone of this financial promise. It offers the prospect of computer systems carrying out profitable work without the associated cost of human labour - a hugely lucrative scenario for companies developing the technology and the customers who deploy it. There will be consequences if AI companies fall short: US stock markets, boosted heavily by the performance of tech stocks, could fall and cause damage to people's personal wealth; debt markets wrapped up in the datacentre boom could suffer a jolt that ripples elsewhere; GDP growth in the US, which has benefited from the AI infrastructure, could falter, which would have knock-on effects for interlinked economies. David Cahn, a partner at one leading Silicon Valley investment firm, Sequoia Capital, says tech companies now have to deliver on AGI. "Nothing short of AGI will be enough to justify the investments now being proposed for the coming decade," he wrote in a blog published in October. 
It means there is a lot hanging on progress towards advanced AI, and the trillions being poured into infrastructure and R&D to achieve it. One of the "godfathers" of modern AI, Yoshua Bengio, says the progress of AGI could stall and the outcome would be bad for investors. "There is a clear possibility that we will hit a wall, that there's some difficulty that we don't foresee right now, and we don't find any solution quickly," he says. "And that could be a real [financial] crash. A lot of the people who are putting trillions right now into AI are also expecting the advances to continue fairly regularly at the current pace." But Bengio, a prominent voice on the safety implications of AGI, is clear that continued progress towards a highly advanced state of AI is the more likely endgame. "Advances stalling is a minority scenario, like it's an unlikely scenario. The more likely scenario is we continue to move forward," he says. The pessimistic view is that investors are backing an unrealistic outcome - that AGI will not happen without further breakthroughs. David Bader, the director of the institute for data science at the New Jersey Institute of Technology, says trillions of dollars are being spent on scaling up - tech jargon for growing something quickly - the underlying technology for chatbots, known as transformers, in the expectation that increasing the amount of computing power behind current AI systems, by building more datacentres, will suffice. "If AGI requires a fundamentally different approach, perhaps something we haven't yet conceived, then we're optimising an architecture that can't get us there no matter how large we make it. It's like trying to reach the moon by building taller ladders," he says. 
Nonetheless, big US tech companies such as Google's parent Alphabet, Amazon and Microsoft are ploughing ahead with datacentre plans with the financial cushion of being able to fund their AGI ambitions through the cash generated by their hugely profitable day-to-day businesses. This at least gives them some protection if the wall outlined by Bengio and Bader comes into view. But there are other more worrying aspects to the boom. Analysts at Morgan Stanley, the US investment bank, estimate that $2.9tn will be spent on datacentres between now and 2028, with half of that covered by the cashflow from "hyperscalers" such as Alphabet and Microsoft. The rest will have to be covered by alternative sources such as private credit, a corner of the shadow banking sector that is activating alarm bells at the Bank of England and elsewhere. Meta, the owner of Facebook and Instagram, has borrowed $29bn from the private credit market to finance a datacentre in Louisiana. AI-related sectors account for approximately 15% of investment grade debt in the US, which is even bigger than the banking sector, according to the investment bank JP Morgan. Oracle, which has signed a $300bn datacentre deal with OpenAI, has had an increase in credit default swaps, which are a form of insurance on a company defaulting on its debts. High-yield, or "junk debt", which represents the higher-risk end of the borrowing market, is also appearing in the AI sector via datacentre operators CoreWeave and TeraWulf. Growth is also being funded by asset-backed securities - a form of debt underpinned by assets such as loans or credit card debt, but in this case rent paid by tech companies to datacentre owners - in a form of financing that has risen sharply in recent years. It is no wonder that JP Morgan says the AI infrastructure boom will require a contribution from all corners of the credit market. 
Bader says: "If AGI doesn't materialise on expected timelines, we could see contagion across multiple debt markets simultaneously - investment-grade bonds, high-yield junk debt, private credit and securitised products - all of which are being tapped to fund this buildout." Share prices linked to AI and tech are also playing an outsized role in US stock markets. The so-called "magnificent 7" of US tech stocks - Alphabet, Amazon, Apple, Tesla, Meta, Microsoft, and Nvidia - account for more than a third of the value of the S&P 500 index, the biggest stock market index in the US, compared with 20% at the start of the decade. In October the Bank of England warned of "the risk of a sharp correction" in US and UK markets due to giddy valuations of AI-linked tech companies. Central bankers are concerned stock markets could slump if AI fails to reach the transformative heights investors are hoping for. At the same time the International Monetary Fund said valuations were heading towards dotcom bubble-levels. Even tech execs whose companies are benefiting from the boom are acknowledging the speculative nature of the frenzy. In November Sundar Pichai, the chief executive of Alphabet, said there are "elements of irrationality" in the boom and that "no company is going to be immune" if the bubble bursts, while Amazon's founder, Jeff Bezos, has said the AI industry is in a "kind of industrial bubble", and OpenAI's chief executive, Sam Altman, has said "there are many parts of AI that I think are kind of bubbly right now." All three, to be clear, are AI optimists and expect the technology to keep improving and benefit society. But when the numbers get this big there are obvious risks in a bubble bursting, as Pichai admits. Pension funds and anyone invested in the stock market will be affected by a share price collapse, while the debt markets will also take a hit. 
There is also a web of "circular" deals, such as OpenAI paying Nvidia in cash for chips while Nvidia invests in OpenAI in exchange for non-controlling shares. If these transactions unravel because of a lack of take-up of AI, or because that wall is hit, it could get messy. There are also optimists who argue that generative AI, the catch-all term for tools such as chatbots and video generators, will transform whole industries and justify the expenditure. Benedict Evans, a technology analyst, says the expenditure numbers are not outrageous in the context of other industries, such as oil and gas extraction, which runs at $600bn a year. "These AI capex figures are a lot of money but it's not an impossible amount of money," he says. Evans adds: "You don't have to believe in AGI to believe that generative AI is a big thing. And most of what is happening here is not, 'oh wow they're going to create God'. It's 'this is going to completely change how advertising, search, software and social networks - and everything else our business is based on - is going to work'. It's going to be a huge opportunity." Nonetheless, there is a multitrillion dollar expectation that AGI will be achieved. For many experts, the consequences of getting there are alarming. The cost of not getting there could also be significant.
[6]
Experts Concerned That AI Progress Could Be Speeding Toward a Sudden Wall
Is the AI industry racing towards a point where the tech's progress stalls out or slows to a crawl? Many experts, including one of the field's foundational figures, seem to think it's possible. "There is a clear possibility that we will hit a wall, that there's some difficulty that we don't foresee right now, and we don't find any solution quickly," Yoshua Bengio, one of the "godfathers" of AI, told The Guardian. "And that could be a real [financial] crash," he added. "A lot of the people who are putting trillions right now into AI are also expecting the advances to continue fairly regularly at the current pace." Part of the problem is that the AI industry has hyped itself into an impossibly high-stakes situation. AI finding some niche uses in some industries is not the premise that has raked in trillions of dollars in investment; instead, the end game is creating a so-called artificial general intelligence, or AGI, a hypothetical AI system that matches or surpasses human cognition. David Cahn at the powerful Silicon Valley investment firm Sequoia Capital said as much in an October blog post quoted by The Guardian: "Nothing short of AGI will be enough to justify the investments now being proposed for the coming decade." Betraying the vagueness of the AGI mission, the specifics of what constitutes AGI are hotly debated, and some tech leaders, including OpenAI CEO Sam Altman, have begun distancing themselves from the terminology. Mark Zuckerberg's Meta, for example, favors calling it -- whatever "it" is -- an AI "superintelligence" instead. Concerns over an AI "wall" or "winter" popping an AI "bubble" have been raised since the boom kicked off three years ago, and were rekindled last summer with the disappointing launch of OpenAI's GPT-5 model, which saw only marginal benchmark gains over its predecessor, and which many fans felt was subjectively worse to talk to.
In November, some faith in the industry was restored, however, with the launch of Google's Gemini 3 models, as well as Google's new video-generating models capable of producing stunningly lifelike footage. For the time being at least, doubts over the industry's future were redirected into doubts about OpenAI's ability to lead it, with Google taking up the banner. Regardless of who's at the helm, the stakes are enormously high, as the rapid buildout of AI data centers is projected by Morgan Stanley to soar to $2.9 trillion by 2028, with Meta alone saying it will spend $600 billion on US infrastructure. Much attention has turned to the "circular" nature of the deals being struck among major AI players, such as AI chipmaker Nvidia pledging to invest up to $100 billion in OpenAI while OpenAI agrees to buy billions of dollars worth of Nvidia's AI chips. The fear is that these deals are helping prop up a multi-trillion-dollar house of cards that could catastrophically collapse if investors get spooked by an AI wall. In some cases, the calls are coming from inside the house. Meta's recently ousted chief AI scientist and fellow "godfather" of AI Yann LeCun is a vocal skeptic of the large language model architecture used to power the industry's leading chatbots, believing an entirely new form of AI "world" model, which is trained on physical data instead of just language, is the pathway to building truly advanced AIs. But Bengio, the other AI godfather, ultimately remains optimistic about the industry's future. "Advances stalling is a minority scenario, like it's an unlikely scenario. The more likely scenario is we continue to move forward," he told The Guardian. On the economic side, technology analyst Benedict Evans argues, with similar optimism, that the AI spending isn't as outrageous as it sounds when compared to other industries like oil and gas, which spend some $600 billion every year.
"These AI capex figures are a lot of money but it's not an impossible amount of money," Evans told The Guardian. "You don't have to believe in AGI to believe that generative AI is a big thing. And most of what is happening here is not, 'oh wow they're going to create God'. It's 'this is going to completely change how advertising, search, software and social networks -- and everything else our business is based on -- is going to work,'" Evans said. "It's going to be a huge opportunity." On the other hand, oil and gas companies can justify that spending because their goods and services underpin modern society as we know it. Can AI chatbots come even close to doing the same?
At Davos 2026, the conversation shifted from AI investment excitement to proving returns, with industry leaders declaring it the "year of AI ROI." But growing investor anxiety about trillions of dollars in risk and comparisons to the dot-com bust have economists and market prophets warning of a potential bubble. While companies tout enterprise traction, questions loom about whether massive AI spending can deliver promised productivity gains.
The World Economic Forum in Davos marked a pivotal shift in the Artificial Intelligence (AI) conversation. If 2025 focused on securing massive AI investment, 2026 became about proving the payoff. Rasmus Rothe from Merantix declared 2026 the "year of AI ROI," while corporate giants like Cisco and IBM plastered slogans across the Davos promenade guaranteeing they had cracked the formula for AI returns on investment [1]. The rhetoric reflects mounting investor anxiety about AI spending, as companies face pressure to justify enormous expenditures and lofty valuations that have propelled stock market valuations to record highs.
Source: Bloomberg
The scale of AI infrastructure buildout has reached staggering proportions. OpenAI, an unprofitable startup, has committed to spend more than $1.4 trillion on data centers and chips for AI in coming years [1]. Analysts at Morgan Stanley estimate $2.9 trillion will be spent on data centers between now and 2028 [5]. This unprecedented capital expenditure has triggered warnings from economists with proven track records of predicting crashes. Dean Baker, who correctly forecast both the dot-com bubble burst and the housing market collapse, is now repositioning his investments to reduce exposure to what he considers an AI bubble edging closer to popping [3]. Michael Burry, whose bet against the housing market inspired "The Big Short," disclosed his hedge fund is betting against AI darlings Nvidia and Palantir [3].
Source: Washington Post
Demis Hassabis, CEO of Google DeepMind, acknowledged that investment levels in some parts of the tech industry had become detached from commercial realities, describing conditions as "bubble-like" [2]. The warning carries weight given Google's position in the frontier AI model race, with Gemini 3 topping leaderboards and the Gemini app reaching 650 million monthly users. Yet even as companies tout progress, the comparison to the dot-com bust grows louder. Burry wrote that "OpenAI is the next Netscape, doomed and hemorrhaging cash," likening the ChatGPT maker to a casualty of the late 1990s crash [3].
Source: FT
Despite bubble fears, AI leaders highlighted business momentum at Davos. Anthropic CEO Dario Amodei touted his company's focus on enterprise customers as creating more stable value than consumer markets [1]. Anthropic earned significant buzz with Claude Cowork, a viral tool tackling a wider range of work tasks, while Claude Code reached a $1 billion revenue run rate in just six months. OpenAI reported its software business added about $1 billion "in the past few weeks," growing 19% weekly [1]. The clearest productivity gains have emerged in software engineering, where AI tools accelerate code writing and debugging. Sonar CEO Tariq Shaukat estimates nearly a third of code at banks will be AI-generated this year [1].
When asked about vulnerability to a downturn, industry leaders pointed fingers elsewhere. Google emphasized spreading costs across consumer and enterprise businesses through YouTube ads, Gemini subscriptions, and Google Cloud rentals [4]. SAP and others claimed capital-expenditure-lean models would insulate them by paying cloud providers as they go. Writer CEO May Habib identified Anthropic and OpenAI as most at risk, noting both are valued at more than Salesforce, Adobe, Databricks, and Snowflake combined [4]. However, Snowflake CEO Sridhar Ramaswamy warned a correction would have sweeping impacts across the entire stock market.
The path to Artificial General Intelligence (AGI) underpins the financial promise. David Cahn from Sequoia Capital wrote that "nothing short of AGI will be enough to justify the investments now being proposed for the coming decade" [5]. Yoshua Bengio, one of the "godfathers" of modern AI, acknowledged a clear possibility of hitting a wall in AGI progress, which "could be a real crash" [5]. Meanwhile, geopolitical challenges complicate global strategies. Donald Trump's Davos appearance and tensions over Greenland raised concerns that Europe might retaliate against tariff threats by dropping US tech [1]. Debate also intensified over China's position, with Hassabis claiming Chinese firms remain six months behind while Mistral CEO Arthur Mensch called that view a "fairy tale."
NTT Data CEO Abhijit Dubey, seeing both supply and demand sides of the industry, noted that data center buildout is outpacing enterprise AI adoption [4]. Yet he expects any correction to be short-lived, with companies adopting AI far faster than the decade-long cloud migration. Meta CTO Andrew Bosworth argued that society won't regret the buildout, comparing it to railroads and telecom fiber, though acknowledging many companies won't survive to reap the rewards [4]. Goldman Sachs Research countered crash predictions, noting corporate debt remains relatively low and that double-digit earnings growth provides "the fundamental base for a continued bull market" [3]. As the debate over the overvaluation of AI companies intensifies, investors face a critical question: whether trillions in spending will deliver transformative returns or repeat history's cautionary tales of technological exuberance outpacing economic reality.
Summarized by Navi