6 Sources
[1]
Worried about superintelligence? So are these AI leaders - here's why
* Leaders argue that AI could existentially threaten humans.
* Prominent AI figures, alongside 1,300 others, endorsed the worry.
* The public is equally concerned about "superintelligence."

The surprise release of ChatGPT just under three years ago was the starting gun for an AI race that has been rapidly accelerating ever since. Now, a group of industry experts is warning -- and not for the first time -- that AI labs should slow down before humanity drives itself off a cliff.

A statement published Wednesday by the Future of Life Institute (FLI), a nonprofit organization focused on existential AI risk, argues that the development of "superintelligence" -- an AI industry buzzword that usually refers to a hypothetical machine intelligence that can outperform humans on any cognitive task -- presents an existential risk and should therefore be halted until a safe pathway forward can be established. The unregulated competition among leading AI labs to build superintelligence could bring risks ranging from "human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction," the authors of the statement wrote. They go on to argue that a prohibition on the development of superintelligent machines should be enacted and not lifted until there is (1) "broad scientific consensus that it will be done safely and controllably," as well as (2) "strong public buy-in."

The petition had more than 1,300 signatures as of late Wednesday morning. Prominent signatories include Geoffrey Hinton and Yoshua Bengio, both of whom shared a Turing Award in 2018 (along with fellow researcher Yann LeCun) for their pioneering work on neural networks and are now known as two of the "Godfathers of AI." Computer scientist Stuart Russell, Apple cofounder Steve Wozniak, Virgin Group founder Sir Richard Branson, former Trump administration Chief Strategist Steve Bannon, political commentator Glenn Beck, author Yuval Noah Harari, and many other notable figures in tech, government, and academia have also signed the statement.

And they aren't the only ones who appear worried about superintelligence. On Sunday, the FLI published the results of a poll it conducted with 2,000 American adults that found that 64% of respondents "feel that superhuman AI should not be developed until it is proven safe and controllable, or should never be developed."

It isn't always easy to draw a neat line between marketing bluster and technical legitimacy, especially when it comes to a technology as buzzy as AI. Like artificial general intelligence, or AGI, "superintelligence" is a hazily defined term that's recently been co-opted by some tech developers to describe the next rung in the evolutionary ladder of AI: an as-yet-unrealized machine that can do anything the human brain can do, only better. In June, Meta launched an internal R&D arm devoted to building the technology, which it calls Superintelligence Labs. At around the same time, OpenAI CEO Sam Altman published a personal blog post arguing that the advent of superintelligence was imminent. (The FLI petition cited a 2015 blog post from Altman in which he described "superhuman machine intelligence" as "probably the greatest threat to the continued existence of humanity.")

The term "superintelligence" was popularized by the Oxford philosopher Nick Bostrom's 2014 book of the same name, which was largely written as a warning about the dangers of building self-improving AI systems that could one day escape human control. Bengio, Russell, and Wozniak were also among the signatories of a 2023 open letter, also published by the FLI, that called for a six-month pause on the training of powerful AI models. Though that letter received widespread attention in the media and helped kindle public debate about AI safety, the momentum to quickly build and commercialize new AI models -- which, by that point, had thoroughly overtaken the tech industry -- ultimately overpowered the will to implement a wide-scale moratorium. Significant AI regulation, at least in the US, is also still lacking.

That momentum has only grown as competition has spilled over the boundaries of Silicon Valley and across international borders. President Donald Trump and some prominent tech leaders, including Altman, have framed the AI race as a geopolitical and economic competition between the US and China. At the same time, safety researchers from prominent AI companies including OpenAI, Anthropic, Meta, and Google have issued occasional, smaller-scale statements about the importance of monitoring certain components of AI models for risky behavior as the field evolves.
[2]
AI heavyweights call for end to 'superintelligence' research
I have worked in AI for more than three decades, including with pioneers such as John McCarthy, who coined the term "artificial intelligence" in 1955. In the past few years, scientific breakthroughs have produced AI tools that promise unprecedented advances in medicine, science, business and education. At the same time, leading AI companies have the stated goal to create superintelligence: not merely smarter tools, but AI systems that significantly outperform all humans on essentially all cognitive tasks.

Superintelligence isn't just hype. It's a strategic goal determined by a privileged few, and backed by hundreds of billions of dollars in investment, business incentives, frontier AI technology, and some of the world's best researchers. What was once science fiction has become a concrete engineering goal for the coming decade. In response, I and hundreds of other scientists, global leaders and public figures have put our names to a public statement calling for superintelligence research to stop.

What the statement says

The new statement, released today by the AI safety nonprofit Future of Life Institute, is not a call for a temporary pause, as we saw in 2023. It is a short, unequivocal call for a global ban: "We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in."

The list of signatories represents a remarkably broad coalition, bridging divides that few other issues can. The "godfathers" of modern AI are present, such as Yoshua Bengio and Geoff Hinton. So are leading safety researchers such as UC Berkeley's Stuart Russell. But the concern has broken free of academic circles. The list includes tech and business leaders such as Apple cofounder Steve Wozniak and Virgin's Richard Branson. It includes high-level political and military figures from both sides of US politics, such as former National Security Advisor Susan Rice and former chairman of the Joint Chiefs of Staff Mike Mullen. It also includes prominent media figures such as Glenn Beck and former Trump strategist Steve Bannon, together with artists such as will.i.am and respected historians such as Yuval Noah Harari.

Why superintelligence poses a unique challenge

Human intelligence has reshaped the planet in profound ways. We have rerouted rivers to generate electricity and irrigate farmland, transforming entire ecosystems. We have webbed the globe with financial markets, supply chains, air traffic systems: enormous feats of coordination that depend on our ability to reason, predict, plan, innovate and build technology.

Superintelligence could extend this trajectory, but with a crucial difference: people would no longer be in control. The danger is not so much a machine that wants to destroy us, but one that pursues its goals with superhuman competence and indifference to our needs. Imagine a superintelligent agent tasked with ending climate change. It might logically decide to eliminate the species that's producing greenhouse gases. Instruct it to maximise human happiness, and it might find a way to trap every human brain in a perpetual dopamine loop. Or, in Swedish philosopher Nick Bostrom's famous example, a superintelligence tasked with producing as many paperclips as possible might try to convert all of Earth's matter, including us, into raw material for its factories.

The issue is not malice but mismatch: a system that understands its instructions too literally, with the power to act cleverly and swiftly. History shows what can go wrong when our systems grow beyond our capacity to predict, contain or control them. The 2008 financial crisis began with financial instruments so intricate that even their creators could not foresee how they would interact until the entire system collapsed. Cane toads introduced in Australia to fight pests have instead devastated native species. The COVID pandemic exposed how global travel networks can turn local outbreaks into worldwide crises. Now we stand on the verge of creating something far more complex: a mind that can rewrite its own code, redesign and pursue its own goals, and out-think every human combined.

A history of inadequate governance

For years, efforts to manage AI have focused on risks such as algorithmic bias, data privacy, and the impact of automation on jobs. These are important issues. But they fail to address the systemic risks of creating superintelligent autonomous agents. The focus has been on applications, not the ultimate stated goal of AI companies to create superintelligence.

The new statement on superintelligence aims to start a global conversation not just on specific AI tools, but on the very destination AI developers are steering us toward. The goal of AI should be to create powerful tools that serve humanity, not autonomous superintelligent agents that operate beyond human control and without alignment with human well-being. We can have a future of AI-powered medical breakthroughs, scientific discovery, and personalised education. None of these require us to build an uncontrollable superintelligence that could unilaterally decide the fate of humanity.
[3]
AI experts push to pause superintelligence
Why it matters: AI development is moving at breakneck speed with minimal oversight and with the full-throated endorsement of the Trump administration.
* AI "doomers" have lost their foothold with U.S. policymakers. But they're still trying to be heard, and are highly involved in global AI policy debates.
Driving the news: The call to action, organized by the Future of Life Institute, has more than 800 signatures from a diverse group, including:
* AI pioneers Yoshua Bengio and Geoffrey Hinton, Apple co-founder Steve Wozniak, Sir Richard Branson, Steve Bannon, Susan Rice, will.i.am and Joseph Gordon-Levitt.
* The group also released polling that found that three-quarters of U.S. adults want strong regulations on AI development, with 64% of those polled saying they want an "immediate pause" on advanced AI development, per a survey of 2,000 adults from Sept. 29 - Oct. 5.
Yes, but: In early 2023, the Future of Life Institute and many of the same signatories published a similar letter calling for a six-month pause on training any models more powerful than GPT-4.
* That pause was largely ignored.
What they're saying: "We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in," a statement from the group's website reads.
[4]
AI heavyweights call for end to 'superintelligence' research
Republished version of source [2] from The Conversation under a Creative Commons license; the text is identical apart from regional spelling.
[5]
One thing most of us seem to agree on, from Sir Stephen Fry to Steve Bannon, is that artificial superintelligence development should be paused while we figure out, y'know, the safety concerns
Artificial superintelligence is, for most of us I think, quite a scary thought. A human-like AI, far more intelligent and capable than any we've seen before, developed by companies that often seem unprepared for (or uncaring about) the potential knock-on effects of its deployment. Goody.

Still, we've been told by industry leaders like Mark Zuckerberg that artificial superintelligence is coming, and we should all be very excited. Many are not, it seems, and that many includes a substantial list of celebrities, former government officials, philosophers, engineers, and scientists who've signed an online statement calling for its prohibition, created by The Future of Life Institute.

The statement reads: "We call for a prohibition on the development of superintelligence, not lifted before there is: 1. Broad scientific consensus that it will be done safely and controllably, and 2. Strong public buy-in." Good luck on that last one, that's all I'll say.

Still, the statement has been signed by a vast array of notable figures, from Sir Stephen Fry to former White House chief strategist Steve Bannon (via The Guardian). Who could have guessed those two would ever stand on the same side? Not me, that's for sure. Scrolling down the list, it's surprising to see the range of public figures seemingly happy to put their name down against the rise of human-like AI, at least until we can figure out, as a species, whether it can be implemented safely and controllably. Sir Richard Branson is in there. Apple co-founder Steve Wozniak, too. Oh, and will.i.am, Grimes, and Prince Harry, among others.

It makes for quite a dinner party list, at the very least. I'd love to see, say, Paolo Benanti, the current papal AI advisor, sitting alongside Joseph Gordon-Levitt discussing future technologies over a cocktail or two. Actually, the latter left a rather astute point underneath their signature: "Yeah, we want specific AI tools that can help cure diseases, strengthen national security, etc.," said Gordon-Levitt. "But does AI also need to imitate humans, groom our kids, turn us all into slop junkies and make zillions of dollars serving ads? Most people don't want that. But that's what these big tech companies mean when they talk about building 'Superintelligence'." Powerful words, Mr Gordon-Levitt. I also rather enjoyed Looper, just so you're aware. We can be friends.

All joking aside, while the point against the creation of superintelligent AI without scientific consensus and public support seems a noble one, I can't help but feel that the private companies beavering away at it in their labs will take little notice. At least, not without some serious government intervention, which looks unlikely given that many administrations around the world seem only too happy to let AI investment boost their respective economies until the wheels, seemingly inevitably, fall off. But keep fighting the good fight, you motley band of dissenters, you.

It's worth mentioning that anyone can sign alongside the great, the good, and the dubious listed here, so if you agree with their point, feel free to jot your details down using the form linked at the top of the page. Your name may be published alongside some esteemed company, at the very least, and won't that be fun?
[6]
Geoffrey Hinton, Yoshua Bengio sign statement urging suspension of AGI development - SiliconANGLE
Hundreds of scientists, politicians, entrepreneurs and artists have signed a statement urging the suspension of efforts to develop artificial general intelligence, or AGI. The brief document was published today on the website of the Future of Life Institute, a nonprofit co-founded by Skype co-creator Jaan Tallinn. Tallinn was also an early investor in DeepMind, a startup that eventually became Alphabet Inc.'s Google DeepMind machine learning lab.

AGI is a hypothetical form of AI capable of performing many tasks better than humans. The signatories to today's statement argue that development of AGI should be suspended because of the technology's potential for harm. They wrote: "We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in."

The latter point ties into a survey the Future of Life Institute published on Monday. The survey, which polled 2,000 U.S. adults, found that 64% of the respondents believe efforts to develop superhuman AI should be suspended until the technology is proven safe or banned outright.

The signatories to today's statement include two of the most prominent figures in AI research: Geoffrey Hinton and Yoshua Bengio. The two computer scientists developed several of the technologies that underpin today's large language models. Hinton, who won half the 2024 Nobel Prize in Physics for his contributions to AI research, helped develop and popularize the backpropagation algorithm that neural networks use to learn new skills. Hinton earlier received the Turing Award, the most prestigious prize in computer science, together with two other researchers. One of those researchers was Bengio, who led early work on embeddings and developed a predecessor to LLMs' attention mechanism.

Today's AGI statement also drew the backing of numerous other prominent figures. The group includes four other Nobel laureates besides Hinton, dozens of professors and Apple Inc. co-founder Steve Wozniak. Beyond the academic and technology ecosystems, the statement drew the backing of a half dozen former members of Congress, three members of the European Parliament, Prince Harry, Meghan Markle, Richard Branson and others. Notably, the statement was also signed by two current employees of OpenAI and Anthropic PBC. The two companies would likely be among the first affected by initiatives to regulate AGI development.
A diverse coalition of AI experts, tech leaders, and public figures has signed a statement calling for a prohibition on superintelligence development, citing potential existential risks and the need for scientific consensus on safety measures.
A diverse coalition of AI experts, tech leaders, and public figures has united to call for a global prohibition on the development of superintelligence. The statement, organized by the Future of Life Institute (FLI), has garnered over 1,300 signatures from prominent individuals across various fields [1].

Superintelligence refers to hypothetical AI systems that significantly outperform humans on essentially all cognitive tasks [2]. The signatories argue that the development of such systems could pose existential risks to humanity, including potential human extinction, economic obsolescence, and loss of control over critical systems [1].
The list of signatories represents a remarkably broad coalition, bridging divides across various sectors:
* AI pioneers Geoffrey Hinton and Yoshua Bengio [1]
* Leading safety researchers such as UC Berkeley's Stuart Russell, along with tech and business leaders including Apple cofounder Steve Wozniak and Virgin's Richard Branson [2]
* Political and military figures from both sides of US politics, such as former National Security Advisor Susan Rice and former chairman of the Joint Chiefs of Staff Mike Mullen [2]
* Media figures, artists, and authors, including Glenn Beck, Steve Bannon, will.i.am, Joseph Gordon-Levitt, and Yuval Noah Harari [3]

The statement calls for a prohibition on superintelligence development until two conditions are met: broad scientific consensus that it will be done safely and controllably, and strong public buy-in [2].
This marks a significant escalation from the previous call for a six-month pause on AI development in early 2023, which was largely ignored by the industry [3].
A recent poll conducted by the FLI found that 64% of American adults believe superhuman AI should not be developed until proven safe and controllable, or should never be developed [1]. However, the momentum to build and commercialize new AI models has continued to grow, with competition spilling over international borders [1].

The call for prohibition faces significant challenges, as superintelligence development is backed by hundreds of billions of dollars in investment and some of the world's best researchers [4]. Supporters of the statement argue that current AI governance efforts fail to address the systemic risks of creating superintelligent autonomous agents [4].
As the debate continues, the global community must grapple with the potential benefits and risks of superintelligence, balancing technological progress with safety and ethical considerations.
Summarized by Navi