3 Sources
[1]
Yann LeCun, Demis Hassabis Clash Over What 'General Intelligence' Means | AIM
"The most incomprehensible thing about the world is that the world is comprehensible." A public disagreement between AI researchers Yann LeCun and Demis Hassabis has reopened a long-running debate on whether human intelligence can be described as "general". In a recent podcast appearance, LeCun said the idea of general intelligence, when used to mean human-level intelligence, is flawed. LeCun argues that "there is no such thing as general intelligence", saying the term is largely used to describe human-level intelligence, which he believes is a mistake. Human intelligence, he said, is "super specialised", shaped by evolution to handle the physical world and social interaction efficiently. While humans navigate real-world environments and deal with other people well, LeCun pointed out that they perform poorly at many structured tasks, like chess, and are outperformed by other animals in several domains. This, he said, shows that humans are not broadly general but highly specialised. "We think of ourselves as being general, but it's simply an illusion because all of the problems that we can apprehend are the ones that we can think of," LeCun said. Hassabis responded that LeCun was conflating general intelligence with universal intelligence. "Brains are the most exquisite and complex phenomena we know of in the universe (so far), and they are in fact extremely general," he wrote in his post on X. He argued that while no system can escape the no free lunch theorem, a general system can still learn any computable function in principle. "In the Turing machine sense, the architecture of such a general system is capable of learning anything computable given enough time and memory," he said, adding that human brains and AI foundation models are "approximate Turing machines". Hassabis also rejected the idea that human performance in narrow domains undermines generality. 
Referring to chess, he said it was notable that humans invented the game at all and reached elite levels of play. LeCun later said the dispute was largely about terminology. "I object to the use of 'general' to designate 'human level' because humans are extremely specialised," he wrote in his response. He argued that intelligence should be judged not just by theoretical capability but by efficiency under limited resources. "For the vast majority of computational problems, [the human brain is] horribly inefficient," he said, citing time and memory constraints in tasks such as chess. To support his argument, LeCun used an analogy from deep learning, noting that while a simple neural network can approximate any function in theory, it becomes impractical for most real-world problems. He also pointed to biological limits, arguing that the number of functions the human brain can represent is vanishingly small compared to the space of all possible functions. "Not only are we not general, we are [also] ridiculously specialised," he said. LeCun concluded by noting that humans mistake this specialisation for generality because most possible functions are incomprehensible. Quoting Albert Einstein, he wrote, "The most incomprehensible thing about the world is that the world is comprehensible."
[2]
Battle of the Nerds: Godfather of AI, Google DeepMind Chief Argue Over AGI
* Hassabis said LeCun is "plain incorrect"
* He also equated human brains with Turing machines
* LeCun believes the human brain is highly inefficient

The AI researcher beef was on nobody's 2025 Bingo card, but it has happened (before GTA 6). X (formerly Twitter) was called the "digital town square" by Elon Musk, and it is generally an acceptable platform to argue with those whose views you disagree with, without anyone batting an eye. But when the Godfather of AI and the 2024 Nobel Prize in Chemistry winner tangle in a war of words, it turns heads. On Monday, Yann LeCun and Demis Hassabis engaged in a heated conversation over whether general intelligence exists.

Demis Hassabis and Yann LeCun Battle Over General Intelligence

While the main argument was about the existence of general intelligence as a concept, there is a deeper link to the technology in which both are heavily invested. Hassabis is the CEO of Google DeepMind, the division that leads Google's major AI projects, from research to deployment. LeCun, on the other hand, served as Meta's Chief AI Scientist for years and recently launched his own AI startup, Advanced Machine Intelligence (AMI) Labs. So, when they argue over the concept of general intelligence, what they are really debating is whether building artificial general intelligence (AGI) is a feasible goal. Notably, every major AI company, including Anthropic, Google, Meta, Microsoft, OpenAI, and xAI, is invested in that goal. The first post came from Hassabis, who replied to an interview clip of LeCun in which the Turing Award winner declared that "general intelligence as a concept" does not make sense. Put simply, he believes that human minds are super-specialised to complete tasks in our physical world. Calling the word "general" a misnomer, he claims that humans believe in general intelligence only because we cannot even imagine the problems our brains cannot solve.
He also illustrates this with the example of chess, a game at which machines are far superior to humans. "Yann is just plain incorrect here; he's confusing general intelligence with universal intelligence," replied Hassabis, adding that the human brain is extremely general. He argued that the super-specialised nature of the brain is an acquired trait due to its finite memory and energy. "But the point about generality is that in theory, in the Turing Machine sense, the architecture of such a general system is capable of learning anything computable given enough time and memory (and data), and the human brain (and AI foundation models) are approximate Turing Machines," the Google DeepMind CEO said. He also refuted the chess argument, adding that humans inventing chess in the first place is evidence of their general capacity. With nearly 10,000 likes and 1,200 reshares (at the time of publishing), the post captured the attention of LeCun. He replied, "I object to the use of 'general' to designate 'human level' because humans are extremely specialised." Giving the example of the optic nerve, he argued that while it could in principle support a massively large number of vision functions, in reality the eye sees only a fraction of the rays that exist in the world (visible light, or VIBGYOR). Reiterating his earlier position, he added, "Clearly, a properly trained human brain with an infinite supply of pens and paper is Turing complete. But for the vast majority of computational problems, it's horribly inefficient, which makes it highly suboptimal under bounded resources (like playing a chess game)." Hassabis has yet to respond to this argument. So, how does it connect to AGI? The rationale is that if humans themselves do not possess general intelligence, how can we create a machine that does? By that logic, AGI is a meaningless goalpost, and the real goal has always been superintelligence.
However, many experts in the field have disagreed with this notion and have called AGI the midpoint to superintelligence or sentient AI, with human-level intelligence and the ability to perform general-purpose tasks.
[3]
Demis Hassabis vs Yann LeCun: Is human intelligence general or specialized?
The debate hinges on definitions, efficiency, and bounded intelligence

In the often abstract, occasionally esoteric world of artificial intelligence research, it's rare to see a philosophical disagreement surface so publicly - and so bluntly. Yet that's exactly what happened when Yann LeCun, Meta's chief AI scientist, declared that "there is no such thing as general intelligence," calling the concept "complete BS." Within hours, Demis Hassabis, CEO of Google DeepMind, stepped in to disagree - politely, but firmly. What followed wasn't a petty Twitter spat. It was a crystallized debate about how we define intelligence itself - and whether humans, often the implicit benchmark for "general intelligence," are even worthy of that status in the first place. In a short clip (barely a minute, taken from a longer interview) uploaded on X.com, LeCun's argument begins with an uncomfortable demotion of our own species. Humans, he says, only seem general because the world conveniently matches the problems we evolved to solve. "Human intelligence is super specialized," LeCun argues. We're good at navigating the physical world and reading other humans because evolution shaped us that way. "And chess we suck at," he adds pointedly, noting that many animals outperform us in tasks we barely comprehend. The illusion of generality, LeCun claims, exists only because "all of the problems that we can apprehend are the ones that we can think of." Demis Hassabis, who apart from being CEO of DeepMind is also a Nobel laureate, sees this as a category error. "Yann is just plain incorrect here," he writes, accusing LeCun of confusing 'general intelligence' with 'universal intelligence'. The distinction matters. No finite system can be optimal at everything - Hassabis freely acknowledges the no free lunch theorem - but that doesn't preclude generality.
In theory, he says, systems like the human brain are capable of learning "anything computable given enough time and memory (and data)." In the Turing Machine sense, humans - and modern AI foundation models - are "approximate Turing Machines." That framing leads Hassabis to an almost poetic defense of human cognition. Yes, humans aren't optimal chess engines. But "it's amazing that humans could have invented chess in the first place," let alone produce someone like Magnus Carlsen. The real marvel, Hassabis suggests, isn't bounded performance - it's the capacity to traverse domains at all, from science to aviation to abstract games, using brains evolved for hunting and gathering. In his rebuttal, Yann LeCun doesn't deny the theoretical power of human brains. He concedes that "a properly trained human brain with an infinite supply of pens and paper is Turing complete." The problem, he insists, is efficiency. Intelligence in the real world is always resource-bounded. Under those constraints, the human brain is wildly suboptimal for most conceivable tasks. To make his case, LeCun turns mathematical - and devastatingly so. The optic nerve, he explains, carries roughly one million fibers. A vision task, simplified, is a Boolean function from one million bits to one bit. The number of such possible functions? 2^(2^1,000,000). The number of functions the human brain can actually represent, given its roughly 10^14 synapses? At most 2^(3.2×10^15). "This is a teeny-tiny number," LeCun writes, "compared to 2^(10^301030)." His conclusion is blunt: "Not only are we not general, we are ridiculously specialized." This is where the debate zooms out from AI architecture to something more existential. LeCun quotes Einstein: "The most incomprehensible thing about the world is that the world is comprehensible."
We understand only a vanishingly small, highly structured slice of reality. The rest, LeCun argues, we call entropy - and ignore. So who's right? Both, if you think about it. Hassabis is defending a theoretical notion of generality - the ability of a single architecture to span domains - while LeCun is defending a practical one - what systems can efficiently do under constraints. The disagreement, as LeCun himself concedes, is "largely one of vocabulary." But the stakes aren't semantic. As AI systems inch closer to human-level performance across tasks, how we define "general intelligence" will shape what we build - and what we expect from it. Whether intelligence is broad or narrow may matter less than the shared realization that whatever it is, it's rarer, stranger, and more constrained than our species has long liked to believe.
Meta's Chief AI Scientist Yann LeCun and Google DeepMind CEO Demis Hassabis engaged in a heated public disagreement over whether general intelligence exists. LeCun argues human intelligence is highly specialized, while Hassabis defends it as genuinely general. The debate carries significant implications for artificial general intelligence development across the AI industry.
A public disagreement between two of AI's most influential figures has reignited fundamental questions about the nature of intelligence itself. Yann LeCun, Meta's Chief AI Scientist, and Demis Hassabis, CEO of Google DeepMind and 2024 Nobel Prize winner in Chemistry, clashed over whether general intelligence exists as a meaningful concept [1]. The dispute, which unfolded on X, carries profound implications for artificial general intelligence development and how the AI industry defines its most ambitious goals.
Source: Digit
In a recent podcast appearance, LeCun declared that "there is no such thing as general intelligence," arguing that the term is fundamentally flawed when used to describe human-level intelligence [1]. According to LeCun, human intelligence is "super specialized," shaped by evolution to handle the physical world and social interaction efficiently. While humans navigate real-world environments well, they perform poorly at structured tasks like chess and are outperformed by other animals in several domains [1].

LeCun's position challenges a core assumption in AI research. "We think of ourselves as being general, but it's simply an illusion because all of the problems that we can apprehend are the ones that we can think of," he explained [1]. To support his argument, LeCun turned to mathematics. The optic nerve carries roughly one million fibers, and a vision task can be simplified as a Boolean function from one million bits to one bit. The number of such possible functions is 2^(2^1,000,000), while the human brain, with its approximately 10^14 synapses, can represent at most 2^(3.2×10^15) functions [3]. "Not only are we not general, we are ridiculously specialized," LeCun concluded [1].
Hassabis responded forcefully, stating that LeCun was "plain incorrect" and confusing general intelligence with universal intelligence [2]. "Brains are the most exquisite and complex phenomena we know of in the universe (so far), and they are in fact extremely general," Hassabis wrote on X [1]. While acknowledging that no system can escape the no free lunch theorem, Hassabis argued that a general system can still learn any computable function in principle. "In the Turing machine sense, the architecture of such a general system is capable of learning anything computable given enough time and memory," he said, adding that human brains and foundation models are "approximate Turing machines" [1]. Regarding chess, Hassabis noted it was remarkable that humans invented the game at all and reached elite levels of play [1].

LeCun later clarified that the dispute was largely about terminology. "I object to the use of 'general' to designate 'human level' because humans are extremely specialized," he wrote [1]. He argued that intelligence should be judged not just by theoretical capability but by efficiency under limited resources. "Clearly, a properly trained human brain with an infinite supply of pens and paper is Turing complete. But for the vast majority of computational problems, it's horribly inefficient," LeCun explained, citing time and memory constraints in tasks such as chess [2]. To illustrate his point, LeCun used an analogy from deep learning, noting that while a simple neural network can approximate any function in theory, it becomes impractical for most real-world problems [1].

The debate over general intelligence carries direct implications for AGI as a research goal. If humans themselves do not possess general intelligence, as LeCun argues, then creating machines with truly general capabilities may be a misguided objective [2]. Every major AI company, including Anthropic, Google, Meta, Microsoft, OpenAI, and xAI, is investing heavily in AGI development [2]. The philosophical debate between these AI researchers thus has practical consequences for how the industry allocates resources and defines success. Many experts view AGI as the midpoint to superintelligence, with human-level intelligence and the ability to perform general-purpose tasks [2]. However, LeCun's position suggests that superintelligence, rather than AGI, should be the real goalpost. As AI systems inch closer to human-level performance across tasks, how we define general intelligence will shape what we build and what we expect from it [3]. The terminology dispute between LeCun and Hassabis may seem academic, but it reflects deeper questions about learning capabilities, computable functions, and the bounded nature of all intelligence. LeCun concluded his argument by quoting Albert Einstein: "The most incomprehensible thing about the world is that the world is comprehensible" [1]. This suggests humans understand only a highly structured slice of reality, mistaking this specialized intelligence for true generality.
Source: Gadgets 360
Summarized by Navi