6 Sources
[1]
AI luminaries at Davos clash over how close human level intelligence really is | Fortune
Some of the world's best-known names in artificial intelligence descended on the small ski resort town of Davos, Switzerland, this week for the World Economic Forum (WEF). AI dominated many of the discussions among corporations, government leaders, academics, and non-governmental groups. Yet a clear contrast emerged over how close current models are to replicating human intelligence and what the likely near-term economic impacts of the technology will be.

The large language models (LLMs) that have captivated the world are not a path to human-level intelligence, two AI experts asserted in separate remarks at Davos. Demis Hassabis, the Nobel Prize-winning CEO of Google DeepMind and the executive who leads the development of Google's Gemini models, said today's AI systems, as impressive as they are, are "nowhere near" human-level artificial general intelligence, or AGI. Yann LeCun -- an AI pioneer who won a Turing Award, computer science's most prestigious prize, for his work on neural networks -- went further, saying that the LLMs that underpin all of the leading AI models will never be able to achieve human-like intelligence and that a completely different approach is needed.

Their views differ starkly from those of top executives at Google's leading AI rivals, OpenAI and Anthropic, who assert that their AI models are about to rival human intelligence. Dario Amodei, the CEO of Anthropic, told an audience in Davos that AI models would replace the work of all software developers within a year and would reach "Nobel-level" scientific research in multiple fields within two years. He said 50% of white-collar jobs would disappear within five years. OpenAI CEO Sam Altman (who was not at Davos this year) has said we are already beginning to slip past human-level AGI toward "superintelligence," or AI that would be smarter than all humans combined.

Can LLMs lead to general intelligence?

In a joint WEF appearance with Amodei, Hassabis said there was a 50% chance AGI might be achieved within the decade, though not through models built exactly like today's AI systems. In a later, Google-sponsored talk, he elaborated that "maybe we need one or two more breakthroughs before we'll get to AGI." He identified several key gaps, including the ability to learn from just a few examples, the ability to learn continuously, better long-term memory, and improved reasoning and planning capabilities. "My definition of [AGI] is a system that can exhibit all the cognitive capabilities humans can -- and I mean all," he said, including the "highest levels of human creativity that we always celebrate, the scientists and artists we admire." While advanced AI systems have begun to solve difficult math equations and tackle previously unproved conjectures, he said, AI will need to develop its own breakthrough conjectures -- a "much harder" task -- to be considered on par with human intelligence.

LeCun, speaking at the AI House in Davos, was even more pointed in his criticism of the industry's singular focus on LLMs. "The reason...LLMs have been so successful is because language is easy," he argued. He contrasted this with the challenges posed by the physical world. "We have systems that can pass the bar exam, they can write code...but they don't really deal with the real world. Which is the reason we don't have domestic robots [and] we don't have level five self-driving cars," he said.
LeCun, who left Meta in November to found Advanced Machine Intelligence Labs (AMI), argued that the AI industry has become dangerously monolithic. "The AI industry is completely LLM-pilled," he said. He said that Meta's decision to focus exclusively on LLMs and to invest tens of billions of dollars to build colossal data centers contributed to his decision to leave the tech giant. LeCun added that his view that LLMs and generative AI were not the path to human-level AI, let alone the "superintelligence" desired by CEO Mark Zuckerberg, made him unpopular at the company. "In Silicon Valley, everybody is working on the same thing. They're all digging the same trench," he said.

The fundamental limitation, according to LeCun, is that current systems cannot build a "world model" that can predict what is most likely to happen next and connect cause and effect. "I cannot imagine that we can build agentic systems without those systems having an ability to predict in advance what the consequences of their actions are going to be," he said. "The way we act in the world is that we know we can predict the consequences of our actions, and that's what allows us to plan."

LeCun's new venture hopes to develop these world models through video data. But while some video AI models try to predict pixels frame-by-frame, LeCun's work is designed to operate at a higher level of abstraction that better corresponds to objects and concepts. "This is going to be the next AI revolution," he said. "We're never going to get to human-level intelligence by training LLMs or by training on text only. We need the real world."

What business thinks

Hassabis put the timeline for genuine human-level AGI at "five to 10 years." Yet the trillions of dollars flowing into AI show the business world isn't waiting to find out. The debate over AGI may be somewhat academic for many business leaders. The more pressing question, says Cognizant CEO Ravi Kumar, is whether companies can capture the enormous value that AI already offers. According to Cognizant research released ahead of Davos, current AI technology could unlock approximately $4.5 trillion in U.S. labor productivity -- if businesses can implement it effectively. But Kumar told Fortune that most businesses had not yet done the hard work of restructuring their operations or reskilling their workforces to take advantage of AI's potential. "That $4.5 trillion will generate real value in enterprises if you start to think about reinvention [of existing businesses]," he said. He said it also required what he called "the integration" of human labor and digital labor conducted by AI. "Skilling is no longer a side thing," he argued. "It has to be a part of the infrastructure story for you to pivot people to the future, create higher wages and upward social mobility and make this an endeavor which creates shared prosperity."
[2]
Expect AGI Within a Few Years, Says Anthropic CEO -- and Job Losses Too
The executives said governments are underestimating the speed and scale of economic and geopolitical risks.

The timeline for artificial general intelligence (AGI) is tightening, and according to Anthropic CEO Dario Amodei, the window for policymakers to prepare is closing faster than many realize. Speaking on a panel at the World Economic Forum in Davos alongside Google DeepMind CEO Demis Hassabis, Amodei warned that the rapid evolution of AI is poised to outpace the resilience of labor markets and social institutions.

Amodei reaffirmed his aggressive forecast that human-level AI is likely only years, not decades, away. "I don't think that's going to turn out to be that far off," Amodei said, standing by his prediction that superhuman capability could arrive by 2026 or 2027. "It's very hard for me to see how it could take longer than that." The engine behind this acceleration is a burgeoning feedback loop in which AI models have begun to automate their own creation. Amodei noted that at Anthropic, the traditional role of the software engineer is already being redefined by AI. "I have engineers within Anthropic who say, 'I don't write any code anymore. I just let the model write the code, I edit it,'" he said. "We might be six to twelve months away from when the model is doing most, maybe all, of what [software engineers] do end to end."

While Amodei sees progress compounding quickly -- limited only by chip supply and training cycles -- Hassabis offered a more measured outlook. "I think there has been remarkable progress, but some areas of engineering work, coding, or mathematics are easier to see how they would be automated, partly because they're verifiable -- what the output is," he said. "Some areas of natural science are much harder. You won't necessarily know if the chemical compound you've built, or a prediction about physics, is correct. You may have to test it experimentally, and that will take longer." Hassabis said current AI systems still lack the ability to generate original questions, theories, or hypotheses, even as they improve at solving well-defined problems. "Coming up with the question in the first place, or coming up with the theory or the hypothesis, that's much harder," Hassabis said. "That's the highest level of scientific creativity, and it's not clear we will have those systems." The DeepMind chief maintained a "50% chance" of reaching AGI by 2030, citing a gap between high-speed calculation and true innovation.

Despite their differing timelines, the two leaders reached a somber consensus on the economic fallout, agreeing that white-collar jobs are in the crosshairs. Amodei has previously estimated that up to half of entry-level professional roles could vanish within five years, a sentiment he doubled down on at Davos.

A test of institutional readiness

The primary concern for both executives is not just the technology itself, but the ability of the world's governments to keep up. Hassabis warned that even the most pessimistic economists might be underestimating the speed of the transition, noting that "five to ten years away, that isn't a lot of time." For Amodei, the situation has escalated from a technical challenge to an existential "crisis" of governance. "This is happening so fast and is such a crisis, we should be devoting almost all of our effort to thinking about how to get through this," he said. While he remains optimistic that risks -- ranging from geopolitical friction to individual misuse -- are manageable, he warned that the window for error is slim.
"This is a risk that if we work together, we can address," Amodei said. "But if we go so fast that there are no guardrails, then I think there is a risk of something going wrong." Some labor analysts argue that the disruption may show up less as outright job replacement and more as a restructuring of professional work itself. Bob Hutchins, CEO of Human Voice Media, said the core issue is not whether AI replaces workers, but how it changes the nature of their jobs. "We have to quit asking whether or not AI will replace our jobs and begin asking how does it degrade them?" Hutchins said. "There isn't a direct threat that a machine will completely take the place of a person doing a writer's or coder's job. The threat is that the job is being broken down into smaller tasks and managed by an algorithm." According to Hutchins, this shift changes human roles from 'Creator' to 'Verifier." "It takes away the ability of professionals to make their own decisions and breaks down meaningful professional jobs into unskilled, low-wage jobs with a focus on completing individual tasks," he said. "Labor isn't disappearing, it's becoming less obvious, less secure, and much harder to unionize," he added.
[3]
Davos 2026 - the day after AGI arrives means crisis and crazy stuff! Actually no, that's 2026!
When will Artificial General Intelligence (AGI) become a thing? Tomorrow? Next month? Next year? Whenever the PR team decides we need a headline-grabbing thing to say? Who knows, but whatever the correct answer, it's now a de rigueur question to be aired at any major AI talking shop, and this week's Davos junket is no exception. So, step forward Dario Amodei, CEO of Anthropic, and Demis Hassabis, CEO of Google DeepMind, to update their earlier outings into the realms of speculation on this front.

First up, Amodei, who previously predicted that there would be an AI model that could do everything a human could do at the level of a Nobel Laureate across many fields by, well, this year. Still sticking by that one? Actually he sort of is:

It's always hard to know exactly when something will happen, but I don't think that's going to turn out to be that far off.

He bases his assumptions on the idea that AI firms would make models that are good at coding and good at AI research, and use those to produce the next generation of models, speeding the process up to create a loop. That would increase the speed of model development, he argues:

In terms of models that write code, I have engineers within Anthropic, who say, 'I don't write any code anymore. I just let the model write the code, I edit it, I do the things around it'....Then it's a question of, how fast does that loop close? Not every part of that loop is something that can be sped up by AI. There are chips, there's manufacture of chips, there's training time for the model, so there's a lot of uncertainty.

That said, he reckons AGI is a few years away at most:

It's very hard for me to see how it could take longer than that. But if I had to guess, I would guess that this goes faster than people imagine, that key elements of code and increasingly research [are] going faster than we imagine. That's going to be the key driver.

Questions

For his part, Hassabis has been more conservative in his assumptions, citing a 50% chance of a system that can exhibit all the cognitive capabilities humans can by the end of the decade. He's sticking to that timeline:

There has been remarkable progress. In some areas of engineering work, coding or mathematics, it is a little bit easier to see how they'll be automated, partly because they're verifiable [in terms of] what the output is. Some areas of natural science are much harder to do than that. You won't necessarily know if the chemical compound you've built or this prediction about physics is correct. It may be. But you may have to test it experimentally, and that will all take longer. So I think there are some missing capabilities at the moment in terms of not just solving existing conjectures or existing problems, but actually coming up with the question in the first place, or coming up with the theory or the hypothesis. I think that's much, much harder, and the highest level of scientific creativity.

He adds:

The full closing of the loop is an unknown. It's possible to do; you may need AGI itself to be able to do that in some domains. These domains where there's more messiness around them, it's not so easy to verify your answer very quickly...But I think in coding and mathematics and these kind of areas, I can definitely see that working. And then the question is more theoretical - what is the limit of engineering and maths to solve the natural sciences?

The day after...
OK, so all that being the case, the topic of the panel on which both gents were sitting was The Day After AGI, which does sound a lot like a disaster movie. Leaving aside the question of when AGI happens, should we be more worried about what happens when it does?

Amodei pitches himself as an optimist, but does admit that he can see "grave risks" ahead. He cites a scene from the movie adaptation of Carl Sagan's Contact as a frame of reference:

It's this international panel that's interviewing people to be humanity's representative to meet the alien. And one of the questions they asked one of the candidates is, 'If you could ask the aliens any one question, what would it be?'. And one of the characters says, 'I would ask, how did you do it? How did you manage to get through this technological adolescence without destroying yourselves? How did you make it through?' Ever since I saw it 20 years ago, it's kind of stuck with me.

That's the mindset with which he approaches AGI:

I think the next few years we're going to be dealing with how do we keep these systems under control that are highly autonomous and smarter than any human? How do we make sure that individuals don't misuse them? I have worries about things like bio-terrorism. How do we make sure that nation states don't misuse [them]? That's why I've been so concerned about the CCP and other authoritarian governments. What are the economic impacts? I've talked about labor displacement a lot. What haven't we thought of? [That] in many cases is maybe the hardest thing to deal with.

There are also the inevitable 'what happens to my job?' concerns. Amodei admits that he can see a time when Anthropic needs fewer people in junior and intermediate roles:

We might have AI that's better than humans at everything in maybe one to two years, maybe a little longer than that. Those don't seem to line up. The reason is that there's this lag, and there's this replacement thing. I know the labor market is adaptable. It's just like 80% of people used to do farming, then farming got automated, and they became factory workers, and then knowledge workers. So, there is some level of adaptability here. We should be economically sophisticated about how the labor market works. But my worry is, as this exponential keeps compounding - and I don't think it's going to take that long again, somewhere between a year and five years - it will overwhelm our ability to adapt.

Hassabis shares his concerns here:

I'm constantly surprised, even when I meet economists at places like this, that there are not more professional economists and professors thinking about what happens and not just sort of on the way to AGI...Maybe there are ways to distribute this new productivity, this new wealth, more fairly. I don't know if we have the right institutions to do that, but that's what should happen at that point...There are even bigger questions than that to do with meaning and purpose and a lot of the things that we get from our jobs, not just economically, that's one question. But I think that may be easier to solve strangely than what happens to the human condition and humanity as a whole.

Who takes charge?

And what happens to humanity is a big question right now as the macro-economic and socio-political rulebooks are torn up. Hassabis has an AGI spin here as well:

AI's a dual purpose technology, so it could be re-purposed by bad actors for harmful ends. We've needed to think about that all the way through. But I'm a big believer in human ingenuity.
But the question is having the time and the focus and all the best minds collaborating on it to solve these problems. I'm sure if we had that, we would solve the technical risk problem. It may be we don't have that, and then that will introduce risk, because it'll be fragmented, there'll be different projects, and people [will] be racing each other. Then it's much harder to make sure systems that we produce will be technically safe, but I feel like that's a very tractable problem.

There's only so much AI vendors can do, argues Amodei, before governments need to take responsibility - and we're running out of time:

We're just trying to do the best we can. We're just one company, and we're trying to operate in the environment that exists, no matter how crazy it is. My policy recommendations [for government] haven't changed - not selling chips is one of the biggest things we can do to make sure that we have the time to handle this. I wish we had five to 10 years, but assume I'm right and it can be done in one to two years. Why can't we slow down? The reason we can't do that is because we have geo-political adversaries building the same technology at a similar pace. It's very hard to have an enforceable agreement where they slow down and we slow down. And so if we can just not sell the chips, then this isn't a question of competition between the US and China. This is a question of competition between me and Demis, which I'm very confident that we can work out.

That would mean a major shift in current US economic policy, of course, which seems unlikely to say the least. So Amodei has a warning:

These random countries in different parts of the world build data centers that have Nvidia chips instead of Huawei chips. I think of this more like it's a decision - are we going to sell nuclear weapons to North Korea because that produces some profit for Boeing, where we can say, 'OK, these [bomb] cases were made by Boeing, the US is winning, this is great'? That analogy should just make clear how I see this trade-off - I just don't think it makes sense.

My take

Last word to Amodei:

There's all kinds of crazy stuff going on in the outside world outside AI, but my view is this is happening so fast, it is such a crisis, we should be devoting almost all of our effort to thinking about how to get through this.

Same time, same place next year, guys....and we still won't have AGI then!
[4]
ET@Davos 2026: AGI should lead to fresh understanding of how the world works, says Google DeepMind CEO
Demis Hassabis, CEO of Google DeepMind, believes AGI will revolutionize science and medicine, leading to radical abundance. He noted China's significant progress in AI, now trailing the West by only six to twelve months. However, Hassabis stated that groundbreaking innovations like AlphaGo have so far originated from US companies.

Artificial general intelligence (AGI) should generate new theories and lead to fresh understanding of how the world works, Demis Hassabis, chief executive and cofounder of Google DeepMind, told Sruthijith KK in an interview at the World Economic Forum in Davos. "I think that if we do that, we'll accelerate science and human health. We'll have incredible medical solutions and I think we'll be in a world of radical abundance," said the British computer scientist and Nobel laureate, who described DeepMind as the engine room of Google. He said the Chinese have made up much ground in the artificial intelligence race, driven by skilled teams and funding. "A few years ago, I would have said they were one or two years behind. Maybe now they're only six to 12 months behind," he said. "But what I think they're yet to demonstrate is innovating beyond the frontier. So the next transformers, the next AlphaGo, you know... I think so far all of that's come from the West and the US companies."
[5]
Davos 2026: China has caught up a lot with US, but yet to innovate beyond frontier, says Google DeepMind CEO Demis Hassabis
Demis Hassabis, CEO of Google DeepMind, predicts artificial general intelligence will transform science and health within a decade, ushering in an era of abundance. He notes China's progress in AI capabilities but highlights its current lack of frontier innovations. Hassabis also discusses AI's impact on jobs and energy, emphasizing potential benefits and new opportunities.

Over the next decade, artificial general intelligence (AGI) will dramatically accelerate science and human health, unleashing an era of "radical abundance," believes Demis Hassabis, chief executive and cofounder of Google DeepMind, who began his pursuit of AGI back in 2010. The British computer scientist and Nobel laureate, who is now building the "engine room of Google," envisions taking AI to billions of people alongside Alphabet CEO Sundar Pichai. China has largely succeeded in catching up with the US in the tech race among nations, but still lacks breakthrough innovations that define entirely new frontiers, Hassabis told ET's Sruthijith KK in an interview on the sidelines of the World Economic Forum in Davos. Edited excerpts:

You have said AGI might be possible to achieve in the next 5-10 years. Give us a glimpse into how the world will change once humanity hits AGI?

Well, once we get to AGI, at least (by) definition, it should be able to come up with new scientific theories, not just prove an existing theory or maths conjecture, but actually come up with new ideas about how the world works. I think if we do that, we'll accelerate science and human health. We'll have incredible medical solutions and we'll be in a world of radical abundance. I sometimes call it a post-scarcity world. That would be the dream if things go well.

2025 has been a great year for Google. With Gemini models and TPU chips, Google has emerged from playing 'catch-up' to now being at the forefront. What changed within Google and what is the road map ahead?

We've always had the best and deepest research bench. Over the last 10-15 years, almost all of the inventions that the modern AI industry relies on...transformers, deep reinforcement learning, AlphaGo...these were invented by Google and DeepMind. So, I was confident, but we had to translate that into shipping faster and...combining that research with our products. I think that's what we've managed to do over the last couple of years.

You've said China is quite close now to the companies based out of the US in terms of capabilities. How do you see this arms race playing out?

The Chinese have extremely capable teams, and also have a lot of resources. I think that they've caught up a lot. A few years ago, I would have said they were one or two years behind. Maybe now, they're only 6-12 months behind. But what they're yet to demonstrate is innovating beyond the frontier. The next transformers, the next AlphaGo, you know...so far all of that has come from the West. So, it's easier to catch up to the frontier than it is to push the frontier yourself. That's the part that is the unknown.

AI needs a tremendous amount of investment, and only China and the US are seen as being capable of it. How should countries like India approach this era of AI? What should we optimise for?

I think each nation needs to decide that. But I feel like there's so much opportunity in applying these technologies to revolutionise industries, to do more startups and regional versions of these technologies. So...there's probably no point trying to create frontier models. There are maybe half-a-dozen providers of that now.
One can pick which one you want, and for pretty cheap in most cases. The question now is, what are the capability overhangs? I think even (for) those of us who are building these technologies, it's all happening so fast (that) we don't have enough time to explore what all of the possibilities are, even with existing technology. So, there's a lot there to be explored and exploited by any country that wants to apply it to their local strengths. I think India has a huge part to play in this.

Give us a glimpse into when you and Sundar (Pichai) are chatting about AI and how to integrate it into Google's various products. What are your top priorities right now?

We talk pretty much every day and there's always a million things to discuss. But the main thing is, the easiest way to explain it is if you think of Google DeepMind, the organisation that I run, as the engine room of Google. That's how we describe it internally. So, we're creating the engine - obviously that's Gemini - but there are other models as well: our world models, VEO, our video models, Nano Banana, our image models. So, these are like the pieces of the engine, and our job is to give that to the rest of Google, all these amazing products and services that billions of people use every day, and as quickly as possible, ship that engine under the hood into those products and features.

Two of the most important themes from a wider societal perspective: one is job displacement as a consequence of the advancements in AI. And the second is the inexorable need for energy. What do you think about these and how can the AI industry be mindful and solve as you go along on these two fronts - jobs and energy?

On energy, obviously, we're seeing enormous demands. The AI models are getting more efficient, maybe 10X more efficient every year... The reason the demand keeps going up is we still haven't got to AGI yet. So, you still need more energy. What I would say though is that, although they're using some energy, the benefits are going to far outweigh the usage... (AI is working on) new materials, optimising existing grids, getting more out of the same technologies and also developing new technologies like fusion. We're working with Commonwealth Fusion on containing plasma in fusion reactors. So, AI in the medium term is going to more than pay for itself in energy savings and energy breakthroughs than it does today.

On jobs, there's going to be some disruption. Usually what happens, though, is that new jobs get created...that are higher level, more creative, perhaps more fulfilling. That's what we're going to see, at least in the early, next five years of the AI revolution.
[6]
Google DeepMind CEO discusses AI progress and timeline for AGI By Investing.com
Investing.com -- Google DeepMind CEO Demis Hassabis shared insights on artificial intelligence development during a conversation at Bloomberg House in Davos, Switzerland. Hassabis highlighted Google's competitive advantage in AI, noting the company has "invented most of the breakthroughs that the modern AI industry relies on." He emphasized Google's full-stack capabilities, from TPU hardware to research labs and consumer products that integrate with AI technology. The CEO described the intense work environment at the frontier of AI development, revealing he typically works 100-hour weeks for 50 weeks per year. "It's ferociously competitive out there. Maybe the most intense competition there has ever been in technology and the stakes are incredibly high," Hassabis said. Regarding physical robotics, Hassabis believes a breakthrough moment is approaching but estimates it's still 18-24 months away. Google recently announced a collaboration with Boston Dynamics, with impressive demonstrations potentially coming "in a year or two" that could be scaled up. When discussing Chinese AI competition, Hassabis acknowledged companies like ByteDance are "maybe six months behind, not one or two years behind the frontier." However, he questioned whether Chinese companies can innovate beyond the current technological frontier. Hassabis maintained his prediction that artificial general intelligence (AGI) has a 50% chance of arriving by 2030, though he noted his definition sets a high bar requiring capabilities like scientific creativity and continuous learning. On AI's impact on employment, Hassabis disagreed with predictions of rapid job displacement, suggesting AI systems need greater consistency before they can fully replace human workers. He believes the long-term potential of AGI could create "a post-scarcity world" with solutions to fundamental challenges like energy production. Hassabis advised young people to become proficient with AI tools, which he compared to "superpowers in the creative arts," enabling individuals to accomplish what previously required multiple people.
Leading AI figures at the World Economic Forum in Davos presented sharply divergent views on artificial general intelligence. Anthropic CEO Dario Amodei maintains AGI could arrive within years, potentially displacing 50% of white-collar jobs within five years. Google DeepMind's Demis Hassabis estimates a 50% chance by 2030, while Yann LeCun argues current large language models will never achieve human-level intelligence without fundamental breakthroughs.
The World Economic Forum in Davos became a battleground for competing visions of artificial general intelligence this week, as leading AI experts presented starkly different assessments of when machines might match human cognitive abilities. Dario Amodei, CEO of Anthropic, doubled down on his aggressive forecast that human-level intelligence could arrive by 2026 or 2027, telling attendees "it's very hard for me to see how it could take longer than that" [1][2]. His optimism stems from observing engineers at Anthropic who no longer write code manually, instead letting AI models handle the work while they edit and manage the output. Amodei estimates the industry might be "six to twelve months away from when the model is doing most, maybe all, of what software engineers do end to end" [2].
Demis Hassabis, the Nobel Prize-winning CEO of Google DeepMind, offered a more measured perspective during the same panel discussion. While maintaining his earlier estimate of a 50% chance of reaching AGI by 2030, Hassabis emphasized that current AI systems remain "nowhere near" true human-level intelligence [1]. He identified critical gaps in AI development, including the ability to learn from few examples, continuous learning capabilities, improved long-term memory, and enhanced reasoning and planning [1]. The Google DeepMind leader noted that while AI systems excel at solving well-defined problems in mathematics and coding, they struggle with scientific creativity, particularly the ability to generate original questions, theories, or hypotheses rather than merely solving existing conjectures [2].
Yann LeCun, the Turing Award-winning AI pioneer who recently left Meta to found Advanced Machine Intelligence Labs, delivered perhaps the sharpest critique of current AI development approaches at Davos. Speaking at the AI House, LeCun argued that large language models -- the foundation of systems like ChatGPT and Claude -- will never achieve human-level intelligence [1]. "The AI industry is completely LLM-pilled," he said, criticizing what he sees as a dangerous focus on a single technological approach [1].

LeCun's fundamental objection centers on LLMs' inability to build "world models" that predict consequences and connect cause and effect. "I cannot imagine that we can build agentic systems without those systems having an ability to predict in advance what the consequences of their actions are going to be," he explained [1]. He pointed to the absence of domestic robots and level-five self-driving cars as evidence that current systems fail to deal with real-world complexity, despite passing bar exams and writing code. His new venture aims to develop world models through video data, working at higher levels of abstraction that correspond to objects and concepts rather than predicting pixels frame-by-frame. "This is going to be the next AI revolution," LeCun declared [1].

Despite their disagreements on timing, the AI experts at Davos reached a sobering consensus on economic impacts. Amodei warned that AI models would replace all software developers within a year and reach "Nobel-level" scientific research across multiple fields within two years [1]. He maintained his earlier prediction that 50% of white-collar jobs could disappear within five years [1][2]. The Anthropic CEO described the situation as "happening so fast and is such a crisis, we should be devoting almost all of our effort to thinking about how to get through this" [3].

Hassabis acknowledged that even pessimistic economists might underestimate the transition speed, noting that "five to ten years away, that isn't a lot of time" for labor markets and institutions to adapt [2]. Bob Hutchins, CEO of Human Voice Media, offered a nuanced perspective on these changes, arguing that the real threat isn't outright job replacement but job degradation. "The threat is that the job is being broken down into smaller tasks and managed by an algorithm," Hutchins said, describing a shift from "Creator" to "Verifier" roles that transforms meaningful professional work into "unskilled, low-wage jobs with a focus on completing individual tasks" [2].
The Davos discussions also highlighted the global AI race, particularly China's rapid progress. Hassabis told the Economic Times that Chinese teams have closed the gap significantly: "A few years ago, I would have said they were one or two years behind. Maybe now they're only six to 12 months behind" [4][5]. However, he emphasized that China has yet to demonstrate frontier innovations like transformers or AlphaGo, noting "it's easier to catch up to the frontier than it is to push the frontier yourself" [5].

Amodei raised concerns about authoritarian governments potentially misusing advanced AI systems, specifically mentioning worries about bio-terrorism and how nation states might deploy these technologies [3]. Both executives stressed that the primary challenge isn't just the technology itself, but whether governments can keep pace with AI development. "This is a risk that if we work together, we can address," Amodei said. "But if we go so fast that there are no guardrails, then I think there is a risk of something going wrong" [2].

Looking beyond the risks, Hassabis painted an optimistic long-term picture of AGI's potential. He envisions artificial general intelligence generating new scientific theories and fresh understanding of how the world works, leading to "radical abundance" or what he calls a "post-scarcity world" [4][5]. "We'll accelerate science and human health. We'll have incredible medical solutions," he told the Economic Times [4]. This vision depends on AI systems achieving the "highest level of scientific creativity," not just solving existing problems but formulating entirely new questions and hypotheses [1].

The competing visions presented at Davos underscore the uncertainty surrounding AI's trajectory. While Amodei sees a feedback loop rapidly accelerating development through AI-assisted AI research, Hassabis identifies fundamental capability gaps that may require "one or two more breakthroughs" beyond current architectures [1]. LeCun's call for entirely new approaches through world models suggests the path to human-level intelligence may require abandoning the current LLM paradigm altogether. What remains clear is that policymakers, businesses, and workers face mounting pressure to prepare for transformative changes, whether those arrive in months, years, or decades.