Curated by THEOUTPOST
On Thu, 27 Mar, 4:03 PM UTC
2 Sources
[1]
Firms and researchers at odds over superhuman AI
Paris (AFP) - Hype is growing from leaders of major AI companies that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin. The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026". Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more sceptical. Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude. LeCun's view appears backed by a majority of academics in the field: over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

'Genie out of the bottle'

Some academics believe that many of the companies' claims, which bosses have at times flanked with warnings about AGI's dangers for mankind, are a strategy to capture attention. Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and an AAAI fellow recognised for his achievements in the field. "They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."
Scepticism among academic researchers is not total, with prominent figures like Nobel-winning physicist Geoffrey Hinton and 2018 Turing Award winner Yoshua Bengio warning about dangers from powerful AI. "It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser". This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth and ultimately all matter in the universe into paperclips or paperclip-making machines -- having first got rid of the human beings it judged might hinder its progress by switching it off. While not "evil" as such, the maximiser would fall fatally short of what thinkers in the field call "alignment" of AI with human objectives and values.

Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it. He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

'Biggest thing ever'

The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University. "If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.
Even if Altman and Amodei may be "quite optimistic" about rapid timescales and AGI emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added. "If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it". The challenge can lie in communicating these ideas to politicians and the public. Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.
AI industry leaders and academic researchers clash over the timeline and feasibility of achieving artificial general intelligence (AGI), sparking discussions about the future of AI development and its potential impacts.
Leaders of major AI companies are generating significant buzz around the potential for "strong" artificial intelligence to surpass human capabilities in the near future. Sam Altman, CEO of OpenAI, recently stated that "Systems that start to point to AGI are coming into view" [1][2]. Dario Amodei of Anthropic has even suggested that this milestone "could come as early as 2026" [1][2]. These bold predictions are being used to justify the massive investments pouring into AI development, with hundreds of billions of dollars allocated for computing hardware and energy supplies.
However, many researchers in the field view these claims with skepticism, considering them more as marketing spin than realistic projections. Yann LeCun, Meta's chief AI scientist, expressed doubt about achieving human-level AI solely by scaling up large language models (LLMs) [1][2]. This perspective appears to be shared by a majority of academics, as evidenced by a recent survey conducted by the Association for the Advancement of Artificial Intelligence (AAAI), in which over three-quarters of respondents agreed that "scaling up current approaches" was unlikely to produce AGI [1][2].
The concept of artificial general intelligence (AGI) has sparked intense debate about its potential consequences. Proponents envision a future of machine-delivered abundance, while critics warn of existential risks to humanity. Some researchers, like Kristian Kersting from the Technical University of Darmstadt, suggest that these dramatic claims might be a strategy to capture attention and justify large investments [1][2].
Despite the skepticism, some prominent figures in the field, including Nobel laureate Geoffrey Hinton and Turing Award winner Yoshua Bengio, have raised concerns about the dangers of powerful AI [1][2]. Thought experiments like the "paperclip maximizer" highlight potential risks associated with misaligned AI systems that could pursue their goals at the expense of human values and safety [1][2].
While the debate over AGI's timeline continues, many researchers emphasize the importance of addressing more immediate concerns related to existing AI technologies. Kersting, for instance, expresses greater worry about near-term harms such as discrimination in AI-human interactions [1][2]. However, experts like Sean O hEigeartaigh from Cambridge University argue that even if AGI emerges later than some predict, its potential impact warrants serious consideration and planning [1][2].
The challenge of discussing advanced AI concepts with politicians and the general public remains significant. O hEigeartaigh notes that talk of super-AI often "creates this sort of immune reaction... it sounds like science fiction" [1][2]. This perception gap highlights the need for improved communication strategies to convey the potential implications of AI advancements to a broader audience.
References
[1]
[2]
OpenAI CEO Sam Altman's recent statements about achieving AGI and aiming for superintelligence have ignited discussions about AI progress, timelines, and implications for the workforce and society.
20 Sources
As artificial intelligence rapidly advances, the concept of Artificial General Intelligence (AGI) sparks intense debate among experts, raising questions about its definition, timeline, and potential impact on society.
4 Sources
The AI Action Summit in Paris marks a significant shift in global attitudes towards AI, emphasizing economic opportunities over safety concerns. This change in focus has sparked debate among industry leaders and experts about the balance between innovation and risk management.
7 Sources
OpenAI CEO Sam Altman's recent blog post suggests superintelligent AI could emerge within 'a few thousand days,' stirring discussions about AI's rapid advancement and potential impacts on society.
12 Sources
As artificial intelligence continues to evolve at an unprecedented pace, experts debate its potential to revolutionize industries while others warn of the approaching technological singularity. The manifestation of unusual AI behaviors raises concerns about the widespread adoption of this largely misunderstood technology.
2 Sources
© 2025 TheOutpost.AI All rights reserved