8 Sources
[1]
A long list of public figures is calling for a ban on superintelligent AI.
The initiative, announced Wednesday by the Future of Life Institute, is an open letter signed by public figures ranging from Nobel laureates and national security experts to prominent AI researchers and religious leaders, calling for "a prohibition on the development of superintelligence until the technology is reliably safe and controllable, and has public buy-in - which it sorely lacks." Signatories included actor Joseph Gordon-Levitt; musician will.i.am; leading computer scientist Geoffrey Hinton; billionaire investor Richard Branson; and Apple co-founder Steve Wozniak.
[2]
Prince Harry, Geoffrey Hinton Call for Ban on AI Superintelligence
Prince Harry and Meghan, the Duke and Duchess of Sussex, Steve Bannon and artificial intelligence pioneer Geoffrey Hinton are part of a group calling for a ban on AI superintelligence until that technology can be deployed safely. In a statement organized by the nonprofit the Future of Life Institute, the group of scientists and other public figures advocated for a prohibition on the development of superintelligence -- or AI that is vastly more capable than humans -- until there is "broad scientific consensus that it will be done safely and controllably." Other notable signatories include Apple Inc. co-founder Steve Wozniak, economist Daron Acemoglu, and former National Security Adviser Susan Rice.
[3]
Steve Bannon and Meghan Markle among 800 public figures calling for AI 'superintelligence' ban
Steve Bannon, Meghan Markle and Stephen Fry have joined a group of public figures calling for a "prohibition" on the development of so-called superintelligence in an unlikely alliance against advanced artificial intelligence systems. More than 800 people including AI scientists, politicians, celebrities and religious leaders have signed a statement seeking to prevent the creation of "superintelligence", AI systems that are more intelligent than most humans.

"We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in," the statement said. Its signatories include the godfathers of AI Geoffrey Hinton and Yoshua Bengio, former Irish president Mary Robinson and Prince Harry.

The Future of Life Institute (FLI), a non-profit campaign group, published the letter on Wednesday along with a poll showing only 5 per cent of Americans support "the current status quo of unregulated development". Nearly three-quarters of respondents were in favour of robust regulation, according to the survey conducted by the campaign group.

The Institute's president Max Tegmark told the Financial Times: "It is our humanity that brings us all together here... More and more people are starting to think that the biggest threat isn't the other company or even the other country but maybe the machines we are building."

Several leading Big Tech groups and AI start-ups, including OpenAI, Meta and Google, are locked in fierce competition to be the first to develop "superintelligence" or artificial general intelligence. Both terms generally refer to AI systems that can outperform humans on most tasks.

In March 2023, five months after the launch of ChatGPT, tech experts including Elon Musk released a similar FLI-organised statement calling for a six-month moratorium on the development of the most powerful AI systems. Musk's xAI, however, continues to build AI systems. The latest statement is narrower and is "absolutely not calling for a pause on AI development", Tegmark said. "You don't need superintelligence for curing cancer, for self-driving cars, or to massively improve productivity and efficiency," he added.

Several prominent Chinese scientists have signed the statement, including Andrew Yao and Ya-Qin Zhang, former president of Baidu. Other signatories include former government officials Susan Rice, national security adviser under then-president Barack Obama, and Mike Mullen, chairman of the joint chiefs of staff in the Obama and George W Bush administrations.

Tegmark said: "Loss of control is something that is viewed as a national security threat both by the West and in China. They will be against it for their own self-interests, so they don't need to trust each other at all."

Other signatories include Apple co-founder Steve Wozniak and Virgin co-founder Richard Branson, as well as faith leaders across religions.

The global regulatory landscape for AI is moving slowly, with the most advanced legislation, the EU AI Act, being rolled out in stages despite fierce criticism from industry. In the US, states including California, Utah and Texas have enacted specific laws on AI. A proposed 10-year moratorium on AI regulation was pulled from the federal budget bill in July.
[4]
Prince Harry, Meghan join call for ban on development of AI 'superintelligence'
Prince Harry and his wife Meghan have joined prominent computer scientists, economists, artists, evangelical Christian leaders and American conservative commentators Steve Bannon and Glenn Beck to call for a ban on AI "superintelligence" that threatens humanity. The letter, released Wednesday by a politically and geographically diverse group of public figures, is squarely aimed at tech giants like Google, OpenAI and Meta Platforms that are racing each other to build a form of artificial intelligence designed to surpass humans at many tasks.

The 30-word statement says: "We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in."

In a preamble, the letter notes that AI tools may bring health and prosperity, but alongside those tools, "many leading AI companies have the stated goal of building superintelligence in the coming decade that can significantly outperform all humans on essentially all cognitive tasks. This has raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction."

Prince Harry added in a personal note that "the future of AI should serve humanity, not replace it. I believe the true test of progress will be not how fast we move, but how wisely we steer. There is no second chance." Signing alongside the Duke of Sussex was his wife Meghan, the Duchess of Sussex.

"This is not a ban or even a moratorium in the usual sense," wrote another signatory, Stuart Russell, an AI pioneer and computer science professor at the University of California, Berkeley. "It's simply a proposal to require adequate safety measures for a technology that, according to its developers, has a significant chance to cause human extinction. Is that too much to ask?"

Also signing were AI pioneers Yoshua Bengio and Geoffrey Hinton, co-winners of the Turing Award, computer science's top prize. Hinton also won a Nobel Prize in physics last year. Both have been vocal in bringing attention to the dangers of a technology they helped create.

But the list also has some surprises, including Bannon and Beck, in an attempt by the letter's organizers at the nonprofit Future of Life Institute to appeal to President Donald Trump's Make America Great Again movement even as Trump's White House staff has sought to reduce limits on AI development in the U.S.

Also on the list are Apple co-founder Steve Wozniak; British billionaire Richard Branson; the former Chairman of the U.S. Joint Chiefs of Staff Mike Mullen, who served under Republican and Democratic administrations; and Democratic foreign policy expert Susan Rice, who was national security adviser to President Barack Obama. Former Irish President Mary Robinson and several British and European parliamentarians signed, as did actors Stephen Fry and Joseph Gordon-Levitt, and musician will.i.am, who has otherwise embraced AI in music creation.

"Yeah, we want specific AI tools that can help cure diseases, strengthen national security, etc.," wrote Gordon-Levitt, whose wife Tasha McCauley served on OpenAI's board of directors before the upheaval that led to CEO Sam Altman's temporary ouster in 2023. "But does AI also need to imitate humans, groom our kids, turn us all into slop junkies and make zillions of dollars serving ads? Most people don't want that."
The letter is likely to provoke ongoing debate within the AI research community about the likelihood of superhuman AI, the technical paths to reach it and how dangerous it could be. "In the past, it's mostly been the nerds versus the nerds," said Max Tegmark, president of the Future of Life Institute and a professor at the Massachusetts Institute of Technology. "I feel what we're really seeing here is how the criticism has gone very mainstream."

Confounding the broader debate is that the same companies striving toward what some call superintelligence and others call artificial general intelligence, or AGI, are also sometimes inflating the capabilities of their products, claims that can make those products more marketable but have also contributed to concerns about an AI bubble. OpenAI was recently met with ridicule from mathematicians and AI scientists when one of its researchers claimed ChatGPT had figured out unsolved math problems, when what it really did was find and summarize work that was already online.

"There's a ton of stuff that's overhyped and you need to be careful as an investor, but that doesn't change the fact that -- zooming out -- AI has gone much faster in the last four years than most people predicted," Tegmark said.

Tegmark's group was also behind a March 2023 letter -- still in the dawn of a commercial AI boom -- that called on tech giants to temporarily pause the development of more powerful AI models. None of the major AI companies heeded that call. And the 2023 letter's most prominent signatory, Elon Musk, was at the same time quietly founding his own AI startup to compete with the very companies he had asked to pause for six months.

Asked whether he had reached out to Musk again this time, Tegmark said he wrote to the CEOs of all major AI developers in the U.S. but didn't expect them to sign. "I really empathize for them, frankly, because they're so stuck in this race to the bottom that they just feel an irresistible pressure to keep going and not get overtaken by the other guy," Tegmark said. "I think that's why it's so important to stigmatize the race to superintelligence, to the point where the U.S. government just steps in."
[5]
Harry and Meghan join AI pioneers in call for ban on superintelligent systems
Nobel laureates also sign letter saying ASI technology should be barred until there is consensus that it can be developed 'safely'

The Duke and Duchess of Sussex have joined artificial intelligence pioneers and Nobel laureates in calling for a ban on developing superintelligent AI systems. Harry and Meghan are among the signatories of a statement calling for "a prohibition on the development of superintelligence". Artificial superintelligence (ASI) is the term for AI systems, yet to be developed, that exceed human levels of intelligence at all cognitive tasks. The statement calls for the ban to stay in place until there is "broad scientific consensus" on developing ASI "safely and controllably" and until there is "strong public buy-in".

It has also been signed by the AI pioneer and Nobel laureate Geoffrey Hinton, along with his fellow "godfather" of modern AI, Yoshua Bengio; the Apple co-founder Steve Wozniak; the UK entrepreneur Richard Branson; Susan Rice, a former US national security adviser under Barack Obama; the former Irish president Mary Robinson; and the British author and broadcaster Stephen Fry. Other Nobel laureates who signed include Beatrice Fihn, Frank Wilczek, John C Mather and Daron Acemoğlu.

The statement, targeted at governments, tech firms and lawmakers, was organised by the Future of Life Institute (FLI), a US-based AI safety group that called for a hiatus in developing powerful AI systems in 2023, soon after the emergence of ChatGPT made AI a political and public talking point around the world.

In July, Mark Zuckerberg, the chief executive of the Facebook parent Meta, one of the big AI developers in the US, said development of superintelligence was "now in sight". However, some experts have said talk of ASI reflects competitive positioning among tech companies spending hundreds of billions of dollars on AI this year alone, rather than the sector being close to any technical breakthrough.

Nonetheless, FLI says the prospect of ASI being achieved "in the coming decade" carries a host of threats, ranging from the loss of all human jobs to losses of civil liberties, national security risks and even the threat of human extinction. Existential fears about AI focus on the potential ability of a system to evade human control and safety guidelines and trigger actions contrary to human interests.

FLI released a US national poll showing that approximately three-quarters of Americans want robust regulation of advanced AI, with six out of 10 believing that superhuman AI should not be made until it is proven safe or controllable. The survey of 2,000 US adults also found that only 5% supported the status quo of fast, unregulated development.

The leading AI companies in the US, including the ChatGPT developer OpenAI and Google, have made the development of artificial general intelligence - the theoretical state where AI matches human levels of intelligence at most cognitive tasks - an explicit goal of their work. Although this is one notch below ASI, some experts warn it too could carry an existential risk, for instance by being able to improve itself towards superintelligent levels, while also posing an implicit threat to the modern labour market.
[6]
Hundreds of Power Players, From Steve Wozniak to Steve Bannon, Just Signed a Letter Calling for Prohibition on Development of AI Superintelligence
Hundreds of public figures -- including multiple AI "godfathers" and a staggeringly idiosyncratic array of religious, media, and tech figures -- just signed a letter calling for a "prohibition" on the race to build AI superintelligence.

Simply titled the "Statement on Superintelligence," the letter, which was put forward by the Future of Life Institute (FLI), is extremely concise: it calls for a "prohibition on the development of superintelligence," which it says should not be "lifted before there is broad scientific consensus that it will be done safely and controllably" as well as with "strong public buy-in."

The letter cites recent polling from FLI, which was cofounded by the Massachusetts Institute of Technology professor Max Tegmark, showing that only five percent of Americans are in favor of the rapid and unregulated development of advanced AI tools, while more than 73 percent support "robust" regulatory action on AI. Around 64 percent, meanwhile, said they felt that superintelligence -- an AI model that surpasses human-level intelligence -- shouldn't be built until it could be proven to be safe or controllable.

Signatories include prominent tech and business figures like Apple cofounder Steve Wozniak and Virgin founder Richard Branson; influential right-wing media voices like "War Room" host Steve Bannon and talk radio host Glenn Beck, as well as left-leaning entertainers like Joseph Gordon-Levitt; Prince Harry and Meghan, Duke and Duchess of Sussex; Mike Mullen, the retired US Navy admiral who served as chairman of the Joint Chiefs of Staff under former presidents George W. Bush and Barack Obama; friar Paolo Benanti, who serves as the Pope's AI advisor; and a large consortium of AI experts and scientists including Turing Award winner Yoshua Bengio and Nobel Prize laureate Geoffrey Hinton, two consequential AI researchers who each hold the title "godfather of AI."

"Frontier AI systems could surpass most individuals across most cognitive tasks within just a few years," Bengio, a professor at the University of Montreal, said in a press release. "These advances could unlock solutions to major global challenges, but they also carry significant risks."

"To safely advance toward superintelligence, we must scientifically determine how to design AI systems that are fundamentally incapable of harming people, whether through misalignment or malicious use," he added. "We also need to make sure the public has a much stronger say in decisions that will shape our collective future."

Crucially, the letter takes the position that superintelligence -- a lofty, sci-fi-esque vision for AI's future -- is an achievable technical goal, a position some experts are skeptical of (or at least believe to be a long way off).

It's also worth pointing out that the letter doesn't acknowledge that AI doesn't have to reach superintelligence to cause chaos: as it stands, generative AI tools like chatbots and image- and video-creation tools -- primitive technologies compared with imagined future superintelligent AI systems -- are upending education, transforming the web into an increasingly misinformation-prone and unreal environment, expediting the creation and dissemination of nonconsensual and illegal pornography, and sending users of all ages spinning into mental health crises and reality breaks that have resulted in outcomes like divorce, homelessness, jail, involuntary commitments, self-harm, and death.

It's also interesting who didn't sign the letter.
Notable missing names include OpenAI CEO Sam Altman, DeepMind cofounder and Microsoft AI CEO Mustafa Suleyman, Anthropic CEO Dario Amodei, White House AI and crypto czar David Sacks, and xAI founder Elon Musk, the latter of whom signed a previous FLI letter from 2023 calling for a pause on the development of AI models more advanced than OpenAI's GPT-4. (That letter, of course, pretty much did nothing: GPT-5 was released this past summer.) Altman, too, has signed similar letters calling for awareness of large-scale future AI risks, making his silence at this juncture striking.

Given that this is one of multiple please-stop-advanced-AI-development-until-we-regulate letters to crop up since the release of ChatGPT in late 2022, whether this one will prove to have any bite is an open question. Still, the latest FLI letter does highlight the breadth of ideologies united on the belief that AI should be regulated, and that how we build AI, and by whom, should be a democratic process. In other words, the public should have a say in what humanity's technological future looks like -- and shaping AI development shouldn't be done in a Wild West-like Silicon Valley vacuum lacking regulatory oversight and accountability.

"Many people want powerful AI tools for science, medicine, productivity, and other benefits," FLI cofounder Anthony Aguirre said in a press release. "But the path AI corporations are taking, of racing toward smarter-than-human AI that is designed to replace people, is wildly out of step with what the public wants, scientists think is safe, or religious leaders feel is right."

"Nobody developing these AI systems has been asking humanity if this is OK," Aguirre added. "We did -- and they think it's unacceptable."
[7]
Open Letter Calls for Ban on Superintelligent AI Development
Among the signatories are five Nobel laureates; two so-called "Godfathers of AI"; Steve Wozniak, a co-founder of Apple; Steve Bannon, a close ally of President Trump; Paolo Benanti, an adviser to the Pope; and even Harry and Meghan, the Duke and Duchess of Sussex.

"We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in," the statement reads.

The letter was coordinated and published by the Future of Life Institute, a nonprofit that in 2023 published a different open letter calling for a six-month pause on the development of powerful AI systems. Although widely circulated, that letter did not achieve its goal. Organizers said they decided to mount a new campaign, with a more specific focus on superintelligence, because they believe the technology -- which they define as a system that can surpass human performance on all useful tasks -- could arrive in as little as one to two years.

"Time is running out," says Anthony Aguirre, the FLI's executive director, in an interview with TIME. The only thing likely to stop AI companies barreling toward superintelligence, he says, "is for there to be widespread realization among society at all its levels that this is not actually what we want."
[8]
From Prince Harry to Steve Bannon, unlikely coalition calls for ban on superintelligent AI
Hundreds of public figures, including Nobel Prize-winning scientists, former military leaders, artists and British royalty, signed a statement Wednesday calling for a ban on work that could lead to computer superintelligence, a yet-to-be-reached stage of artificial intelligence that they said could one day pose a threat to humanity.

The statement proposes "a prohibition on the development of superintelligence" until there is both "broad scientific consensus that it will be done safely and controllably" and "strong public buy-in."

Organized by AI researchers concerned about the fast pace of technological advances, the statement had more than 800 signatures Tuesday night from a diverse group of people. The signers include Nobel laureate and AI researcher Geoffrey Hinton, former Joint Chiefs of Staff Chairman Mike Mullen, rapper will.i.am, former Trump White House aide Steve Bannon and the U.K.'s Prince Harry and his wife, Meghan Markle.

The statement adds to a growing list of calls for an AI slowdown at a time when AI is threatening to remake large swaths of the economy and culture. OpenAI, Google, Meta and other tech companies are pouring billions of dollars into new AI models and the data centers that power them, while businesses of all kinds are looking for ways to add AI features to a broad range of products and services.

Some AI researchers believe AI systems are advancing fast enough that they will soon demonstrate what's known as artificial general intelligence, or the ability to perform intellectual tasks as a human could. From there, researchers and tech executives believe, what could follow is superintelligence, in which AI models perform better than even the most expert humans.

The statement is a product of the Future of Life Institute, a nonprofit group that works on large-scale risks such as nuclear weapons, biotechnology and AI. Among its early backers in 2015 was tech billionaire Elon Musk, who's now part of the AI race with his startup xAI. Now, the institute says, its biggest recent donor is Vitalik Buterin, a co-founder of the Ethereum blockchain, and it says it doesn't accept donations from big tech companies or from companies seeking to build artificial general intelligence.

Its executive director, Anthony Aguirre, a physicist at the University of California, Santa Cruz, said AI developments are happening faster than the public can understand what's happening or what's next. "We've, at some level, had this path chosen for us by the AI companies and founders and the economic system that's driving them, but no one's really asked almost anybody else, 'Is this what we want?'" he said in an interview.

"It's been quite surprising to me that there has been less outright discussion of 'Do we want these things? Do we want human-replacing AI systems?'" he said. "It's kind of taken as: Well, this is where it's going, so buckle up, and we'll just have to deal with the consequences. But I don't think that's how it actually is. We have many choices as to how we develop technologies, including this one."

The statement isn't aimed at any one organization or government in particular. Aguirre said he hopes to force a conversation that includes not only major AI companies but also politicians in the United States, China and elsewhere. He said the Trump administration's pro-industry views on AI need balance. "This is not what the public wants. They don't want to be in a race for this," he said.

He said there might eventually need to be an international treaty on advanced AI, as there is for other potentially dangerous technologies. The White House didn't immediately respond to a request for comment on the statement Tuesday, ahead of its official release.

Americans are almost evenly split over the potential impact of AI, according to an NBC News Decision Desk Poll powered by SurveyMonkey this year. While 44% of U.S. adults surveyed said they thought AI would make their and their families' lives better, 42% said they thought it would make their futures worse.

Top tech executives, who have offered predictions about superintelligence and signaled that they are working toward it as a goal, didn't sign the statement. Meta CEO Mark Zuckerberg said in July that superintelligence was "now in sight." Musk posted on X in February that the advent of digital superintelligence "is happening in real-time" and has previously warned about "robots going down the street killing people," though Tesla, where Musk is CEO, is now working to develop humanoid robots. OpenAI CEO Sam Altman said last month that he'd be surprised if superintelligence didn't arrive by 2030 and wrote in a January blog post that his company was turning its attention there. Several tech companies didn't immediately respond to requests for comment on the statement.

Last week, the Future of Life Institute told NBC News that OpenAI had issued subpoenas to it and its president as a form of retaliation for calling for AI oversight. OpenAI Chief Strategy Officer Jason Kwon wrote on Oct. 11 that the subpoena was a result of OpenAI's suspicions about the funding sources of several nonprofit groups that had been critical of its restructuring.

Other signers of the statement include Apple co-founder Steve Wozniak, Virgin Group co-founder Richard Branson, conservative talk show host Glenn Beck, former U.S. national security adviser Susan Rice, Nobel-winning physicist John Mather, Turing Award winner and AI researcher Yoshua Bengio and the Rev. Paolo Benanti, a Vatican AI adviser. Several AI researchers based in China also signed the statement.

Aguirre said the goal was to have a broad set of signers from across society. "We want this to be social permission for people to talk about it, but also we want to very much represent that this is not a niche issue of some nerds in Silicon Valley, who are often the only people at the table. This is an issue for all of humanity," he said.
A diverse group of more than 800 public figures, including AI pioneers, celebrities, and political leaders, has signed a statement calling for a prohibition on the development of AI superintelligence. The initiative, organized by the Future of Life Institute, seeks to make safety and strong public buy-in preconditions for any such development.
In a remarkable display of unity across diverse fields, over 800 public figures have joined forces to call for a prohibition on the development of AI superintelligence. The initiative, spearheaded by the Future of Life Institute (FLI), has garnered support from an eclectic group of signatories, including AI pioneers, celebrities, political leaders, and religious figures [1][3].

The statement, released on Wednesday, calls for "a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in" [4]. This move comes amidst growing concerns about the potential risks associated with advanced AI systems that could significantly outperform humans on most cognitive tasks [2].

The list of signatories is as diverse as it is impressive: AI pioneers Geoffrey Hinton and Yoshua Bengio [4], Apple co-founder Steve Wozniak and Virgin co-founder Richard Branson [3], Prince Harry and Meghan, the Duke and Duchess of Sussex [3], conservative commentators Steve Bannon and Glenn Beck [4], and former US national security adviser Susan Rice [5].
The signatories express a range of concerns about the rapid development of superintelligent AI systems, including human economic obsolescence, loss of civil liberties, national security risks, and even the possibility of human extinction [4]. Prince Harry emphasized that "the future of AI should serve humanity, not replace it" [4].
A poll conducted by FLI revealed that approximately 75% of Americans support robust regulation of advanced AI, with only 5% favoring the current unregulated development [3][5]. Despite this public sentiment, the global regulatory landscape for AI is moving slowly: the EU AI Act is being rolled out in stages, while in the US only a few states have enacted specific AI laws [3].

Major tech companies and AI startups, including OpenAI, Meta, and Google, are in fierce competition to develop superintelligence or artificial general intelligence [3]. However, the initiative is not calling for a complete pause on AI development; Max Tegmark, president of FLI, clarified that many beneficial AI applications don't require superintelligence [3].
As the debate intensifies, this unprecedented coalition of diverse voices highlights the growing mainstream concern about the potential risks and ethical implications of advanced AI systems. The call for a ban on superintelligence development until safety and public consensus are achieved marks a significant moment in the ongoing dialogue about the future of AI and its impact on society.