AI Heavyweights and Public Figures Unite to Call for Prohibition on Superintelligence Development

Reviewed by Nidhi Govil

A diverse coalition of AI experts, tech leaders, and public figures has signed a statement calling for a prohibition on superintelligence development. The group cites potential existential risks and the need for scientific consensus on safety measures.

AI Experts and Public Figures Call for Superintelligence Prohibition

A diverse coalition of AI experts, tech leaders, and public figures has united to call for a global prohibition on the development of superintelligence. The statement, organized by the Future of Life Institute (FLI), has garnered over 1,300 signatures from prominent individuals across various fields [1].

Source: ZDNet

Defining Superintelligence and Its Risks

Superintelligence refers to hypothetical AI systems that significantly outperform humans on essentially all cognitive tasks [2]. The signatories argue that the development of such systems could pose existential risks to humanity, including potential human extinction, economic obsolescence, and loss of control over critical systems [1].

Source: Tech Xplore

Notable Signatories and Their Concerns

The list of signatories represents a remarkably broad coalition, bridging divides across various sectors:

  1. AI pioneers: Geoffrey Hinton and Yoshua Bengio, both Turing Award winners [1]

  2. Tech leaders: Apple co-founder Steve Wozniak and Virgin Group founder Sir Richard Branson [2]

  3. Political figures: former National Security Advisor Susan Rice and former Chairman of the Joint Chiefs of Staff Mike Mullen [2]

  4. Public figures: Glenn Beck, Steve Bannon, and historian Yuval Noah Harari [3]

The Call for Prohibition

The statement calls for a prohibition on superintelligence development until two conditions are met:

  1. Broad scientific consensus that it can be done safely and controllably
  2. Strong public buy-in [2]
This marks a significant escalation from FLI's earlier call, in March 2023, for a six-month pause on training the most powerful AI systems, which was largely ignored by the industry [3].

Public Opinion and Industry Momentum

A recent poll conducted by the FLI found that 64% of American adults believe superhuman AI should not be developed until it is proven safe and controllable [1]. Nevertheless, the momentum to build and commercialize new AI models has continued to grow, with competition spilling across international borders [1].

Challenges and Future Implications

The call for prohibition faces significant challenges, as superintelligence development is backed by hundreds of billions of dollars in investment and some of the world's best researchers [4]. Critics argue that current AI governance efforts fail to address the systemic risks of creating superintelligent autonomous agents [4].

Source: The Conversation

As the debate continues, the global community must grapple with the potential benefits and risks of superintelligence, balancing technological progress with safety and ethical considerations.
