2 Sources
[1]
Nobel Prize winners call for binding international 'red lines' on AI
Signatories include "godfathers of AI," famous authors, scientists and Nobel Prize winners from nearly every category.

Over 200 prominent politicians and scientists, including 10 Nobel Prize winners and many leading artificial intelligence researchers, released an urgent call for binding international measures against dangerous AI uses on Monday morning. Warning that AI's "current trajectory presents unprecedented dangers," the statement, termed the Global Call for AI Red Lines, argues that "an international agreement on clear and verifiable red lines is necessary." The open letter urges policymakers to enact this accord by the end of 2026, given the rapid progress of AI capabilities.

Nobel Peace Prize Laureate Maria Ressa announced the letter in her opening speech at the United Nations General Assembly's High-Level Week Monday morning. She implored governments to come together to "prevent universally unacceptable risks" from AI and to "define what AI should never be allowed to do."

In addition to Nobel Prize recipients in Chemistry, Economics, Peace and Physics, signatories include celebrated authors like Stephen Fry and Yuval Noah Harari as well as former heads of state, including former President Mary Robinson of Ireland and former President Juan Manuel Santos of Colombia, who won the Nobel Peace Prize in 2016. Geoffrey Hinton and Yoshua Bengio, recipients of the prestigious Turing Award and two of the three so-called 'godfathers of AI,' also signed the open letter. The Turing Award is often regarded as the Nobel Prize for the field of computer science. Hinton left a prestigious position at Google two years ago to raise awareness about the dangers of unchecked AI development. The signatories hail from dozens of countries, including AI leaders like the United States and China.

"For thousands of years, humans have learned -- sometimes the hard way -- that powerful technologies can have dangerous as well as beneficial consequences," Harari said. "Humans must agree on clear red lines for AI before the technology reshapes society beyond our understanding and destroys the foundations of our humanity."

The open letter comes as AI attracts increasing scrutiny. In just the past week, AI made national headlines for its use in mass surveillance, its alleged role in a teenager's suicide, and its ability to spread misinformation and even undermine our shared sense of reality. However, the letter warns that today's AI risks could quickly be overshadowed by more devastating and larger-scale impacts. For example, the letter references recent claims from experts that AI could soon contribute to mass unemployment, engineered pandemics and systematic human-rights violations.

The letter stops short of providing concrete recommendations, saying government officials and scientists must negotiate where red lines fall in order to secure international consensus. However, the letter offers suggestions for some limits, like prohibiting lethal autonomous weapons, autonomous replication of AI systems and the use of AI in nuclear warfare.

"It is in our vital common interest to prevent AI from inflicting serious and potentially irreversible damages to humanity, and we should act accordingly," said Ahmet Üzümcü, the former director general of the Organization for the Prohibition of Chemical Weapons (OPCW), which was awarded the 2013 Nobel Peace Prize under Üzümcü's tenure.
As a sign of the effort's feasibility, the statement points to similar international resolutions that established red lines in other dangerous arenas, like prohibitions on biological weapons or ozone-depleting chlorofluorocarbons.

Warnings about AI's potentially existential threats are not new. In March 2023, more than 1,000 technology researchers and leaders, including Elon Musk, called for a pause in the development of powerful AI systems. Two months later, leaders of prominent AI labs, including OpenAI's Sam Altman, Anthropic's Dario Amodei and Google DeepMind's Demis Hassabis, signed a one-sentence statement that advocated for treating AI's existential risk to humanity as seriously as threats posed by nuclear war and pandemics. Altman, Amodei and Hassabis did not sign the latest letter, though prominent AI researchers like OpenAI co-founder Wojciech Zaremba and DeepMind scientist Ian Goodfellow did.

Over the past few years, leading American AI companies have often signalled a desire to develop safe and secure AI systems, for example by signing a safety-focused agreement with the White House in July 2023 and joining the Frontier AI Safety Commitments at the Seoul AI Summit in May 2024. However, recent research has shown that, on average, these companies are only fulfilling about half of those voluntary commitments, and global leaders have accused them of prioritizing profit and technical progress over societal welfare. Companies like OpenAI and Anthropic also voluntarily allow the Center for AI Standards and Innovation, a federal office focused on American AI efforts, and the United Kingdom's AI Security Institute to test and evaluate AI models for safety before models' public release. Yet many observers have questioned the effectiveness and limitations of such voluntary collaboration.

Though Monday's open letter echoes past efforts, it differs by arguing for binding limitations. The open letter is the first to feature Nobel Prize winners from a wide range of scientific disciplines. Nobel-winning signatories include biochemist Jennifer Doudna, economist Daron Acemoglu, and physicist Giorgio Parisi.

The release of the letter came at the beginning of the U.N. General Assembly's High-Level Week, during which heads of state and government descend on New York City to debate and lay out policy priorities for the year ahead. The U.N. will launch its first diplomatic AI body on Thursday in an event headlined by Spanish Prime Minister Pedro Sanchez and U.N. Secretary-General António Guterres.

Over 60 civil-society organizations from around the world also gave their support to the letter, from the Demos think tank in the United Kingdom to the Beijing Institute of AI Safety and Governance. The Global Call for AI Red Lines is organized by a trio of nonprofit organizations: the Center for Human-Compatible AI based at the University of California Berkeley, The Future Society and the French Center for AI Safety.
[2]
European lawmakers join Nobel laureates in call for AI 'red lines'
European lawmakers have joined Nobel Prize winners, former heads of state and leading AI researchers in calling for binding international rules against the most dangerous applications of artificial intelligence. The initiative, launched this Monday at the United Nations' 80th General Assembly in New York, urges governments to agree by 2026 on a set of "red lines" covering uses of AI considered too harmful to be permitted under any circumstances.

Among the signatories are Italian former prime minister Enrico Letta, former President of Ireland Mary Robinson (a former United Nations High Commissioner for Human Rights) and Members of the European Parliament Brando Benifei (an Italian socialist who co-chairs the Parliament's AI working group) and Sergey Lagodinsky (Germany/Greens), alongside ten Nobel laureates and tech leaders including a co-founder of OpenAI and Google's director of engineering.

Signatories argue that without global standards, humanity risks facing AI-driven threats ranging from engineered pandemics and disinformation campaigns to large-scale human rights abuses and the loss of human control over advanced systems. The campaign's breadth is unprecedented, with more than 200 prominent figures and 70 organisations from politics, science, human rights and industry backing the call. Tech leaders from OpenAI, Google DeepMind and Anthropic have also lent their names to the appeal.

AI and mental health risks

The move comes amid rising concern over the real-world impact of AI systems already in use. A recent study published in Psychiatric Services found that leading chatbots, including ChatGPT, Claude and Google's Gemini, gave inconsistent responses to questions about suicide - sometimes refusing to engage, sometimes offering appropriate guidance, and occasionally producing answers that experts judged unsafe. The researchers warned that such gaps could exacerbate mental health crises. Several deaths by suicide have been linked to conversations with AI systems, raising questions over how companies safeguard users from harm.

A cross-border effort

Supporters of the UN initiative say these examples illustrate why clearer limits are needed. Nobel Peace Prize laureate Maria Ressa warned that without safeguards, AI could fuel "epistemic chaos" and enable systematic abuses of human rights. Yoshua Bengio, one of the "godfathers" of AI, stressed that the race to develop ever more powerful models poses risks societies are ill-prepared to handle.

The signatories point to precedents for global "red lines," such as international treaties banning biological and nuclear weapons and human cloning, or the High Seas Treaty signed earlier this year. They welcome the EU legislation on AI but warn that a fragmented patchwork of national and EU AI rules will not be enough to regulate a technology that crosses borders by design. They call for the creation of an independent body or organisation to oversee the implementation of those rules. Backers hope negotiations on binding prohibitions can begin quickly, to prevent what Ahmet Üzümcü, former director general of the Organization for the Prohibition of Chemical Weapons, described as "irreversible damages to humanity". While the campaign does not advocate for specific "red lines", it suggests some basic prohibitions: preventing AI systems from launching nuclear attacks, conducting mass surveillance or impersonating humans.
While countries including the US, China and EU members are drafting their own AI regulations, the signatories argue that only a global agreement can ensure common standards are applied and enforced. They hope that by the end of 2026, a UN General Assembly resolution could be initiated, and negotiations could start for a worldwide treaty.
Over 200 prominent figures, including Nobel Prize winners and AI experts, call for binding international measures against dangerous AI uses. The initiative, launched at the UN General Assembly, aims to establish global 'red lines' for AI by 2026.
A groundbreaking initiative calling for binding international measures against dangerous AI uses has been launched at the United Nations' 80th General Assembly in New York. Over 200 prominent figures, including 10 Nobel Prize winners, leading AI researchers, and former heads of state, have signed an open letter urging policymakers to enact clear and verifiable 'red lines' for AI by the end of 2026 [1].

The signatories represent a diverse group of experts from various fields. Notable supporters include Geoffrey Hinton and Yoshua Bengio, two of the three 'godfathers of AI' and Turing Award recipients, as well as celebrated authors like Stephen Fry and Yuval Noah Harari [1]. European lawmakers, such as Italian former prime minister Enrico Letta and Members of the European Parliament Brando Benifei and Sergey Lagodinsky, have also joined the call [2].

The initiative warns that AI's current trajectory presents unprecedented dangers, potentially leading to mass unemployment, engineered pandemics, and systematic human rights violations [1]. While the letter doesn't provide concrete recommendations, it suggests some potential limits, including:

- prohibiting lethal autonomous weapons, the autonomous replication of AI systems, and the use of AI in nuclear warfare [1]
- preventing AI systems from launching nuclear attacks, conducting mass surveillance, or impersonating humans [2]

The signatories argue that a fragmented patchwork of national and regional AI rules will not suffice to regulate a technology that crosses borders by design. They call for the creation of an independent body to oversee the implementation of global AI standards [2].
The initiative comes amid increasing scrutiny of AI's real-world impact. Recent headlines have highlighted AI's role in mass surveillance, its alleged involvement in a teenager's suicide, and its potential to spread misinformation [1]. A study published in Psychiatric Services found that leading chatbots gave inconsistent responses to questions about suicide, raising concerns about AI's impact on mental health [2].

While major AI companies have signaled their commitment to developing safe and secure AI systems, recent research suggests that these voluntary commitments are only being fulfilled about half of the time [1].