AI Experts and Public Figures Call for Ban on Superintelligence Development

Reviewed by Nidhi Govil


Over 700 prominent figures, including AI pioneers and celebrities, have signed a statement calling for a prohibition on AI superintelligence development. The move comes amid growing concerns about the potential risks of advanced AI systems.

AI Luminaries and Public Figures Unite Against Superintelligence Development

In a striking display of concern over the rapid advancement of artificial intelligence, more than 700 prominent figures have signed a statement calling for the prohibition of AI superintelligence development. The signatories include AI pioneers, tech leaders, celebrities, and political figures, reflecting a growing unease about the potential risks associated with highly advanced AI systems [1].

Source: ZDNet

The Call for Prohibition

The statement, published by the Future of Life Institute (FLI), argues that the development of AI systems capable of outperforming humans on nearly all cognitive tasks poses significant risks. These concerns range from human economic obsolescence and loss of freedom to potential national security threats and even human extinction [2].

The signatories are calling for a halt on superintelligence development until two key conditions are met:

  1. Broad scientific consensus that it can be done safely and controllably
  2. Strong public buy-in for such development [3]
Notable Signatories and Public Opinion

The list of signatories includes AI "godfathers" Yoshua Bengio and Geoffrey Hinton, Apple co-founder Steve Wozniak, Virgin Group founder Richard Branson, and celebrities such as Kate Bush and Joseph Gordon-Levitt [1][4].

A national poll conducted by FLI revealed that only 5% of Americans support the current fast, unregulated development towards superintelligence. Moreover, 64% believe that superintelligent AI shouldn't be developed until proven safe and controllable, while 73% want robust regulation on advanced AI [1].

The Superintelligence Debate

The concept of superintelligence, popularized by philosopher Nick Bostrom, refers to a hypothetical AI system that can outperform humans on any cognitive task. While some view it as the next evolutionary step in AI development, others warn of potentially catastrophic consequences [2].

Source: Tech Xplore

Critics argue that a superintelligent AI might pursue its goals with indifference to human needs, potentially leading to unintended and harmful outcomes. Examples range from drastic solutions to climate change to the conversion of Earth's resources for singular purposes [3].

Source: CNET

Industry Response and Future Implications

Despite previous calls for pauses in AI development, the momentum to build and commercialize new AI models has continued to grow. The race for AI supremacy has even been framed as a geopolitical and economic competition between nations [2].

As the debate intensifies, the call for a prohibition on superintelligence development marks a significant escalation in efforts to address the potential risks of advanced AI systems. The diverse coalition of signatories underscores the growing recognition of AI's far-reaching implications across various sectors of society [5].
