9 Sources
[1]
A 'global call for AI red lines' sounds the alarm about the lack of international AI policy
On Monday, more than 200 former heads of state, diplomats, Nobel laureates, AI leaders, scientists, and others all agreed on one thing: There should be an international agreement on "red lines" that AI should never cross -- for instance, not allowing AI to impersonate a human being or self-replicate. They, along with more than 70 organizations that address AI, have all signed the Global Call for AI Red Lines initiative, a call for governments to reach an "international political agreement on 'red lines' for AI by the end of 2026." Signatories include British-Canadian computer scientist Geoffrey Hinton, OpenAI cofounder Wojciech Zaremba, Anthropic CISO Jason Clinton, Google DeepMind research scientist Ian Goodfellow, and others.
"The goal is not to react after a major incident occurs... but to prevent large-scale, potentially irreversible risks before they happen," Charbel-Raphaël Segerie, executive director of the French Center for AI Safety (CeSIA), said during a Monday briefing with reporters. He added, "If nations cannot yet agree on what they want to do with AI, they must at least agree on what AI must never do."
The announcement comes ahead of the 80th United Nations General Assembly high-level week in New York, and the initiative was led by CeSIA, The Future Society, and UC Berkeley's Center for Human-Compatible Artificial Intelligence. Nobel Peace Prize laureate Maria Ressa mentioned the initiative during her opening remarks at the assembly when calling for efforts to "end Big Tech impunity through global accountability."
Some regional AI red lines do exist. For example, the European Union's AI Act bans some uses of AI deemed "unacceptable" within the EU. There is also an agreement between the US and China that nuclear weapons should stay under human, not AI, control. But there is not yet a global consensus.
In the long term, more is needed than "voluntary pledges," Niki Iliadis, director for global governance of AI at The Future Society, said to reporters on Monday. Responsible scaling policies made within AI companies "fall short for real enforcement." Eventually, an independent global institution "with teeth" is needed to define, monitor, and enforce the red lines, she said.
"They can comply by not building AGI until they know how to make it safe," Stuart Russell, a professor of computer science at UC Berkeley and a leading AI researcher, said during the briefing. "Just as nuclear power developers did not build nuclear plants until they had some idea how to stop them from exploding, the AI industry must choose a different technology path, one that builds in safety from the beginning, and we must know that they are doing it."
Red lines do not impede economic development or innovation, as some critics of AI regulation argue, Russell said. "You can have AI for economic development without having AGI that we don't know how to control," he said. "This supposed dichotomy, if you want medical diagnosis then you have to accept world-destroying AGI -- I just think it's nonsense."
[2]
AI experts urge UN to draw red lines around the tech
ai-pocalypse
Ten Nobel Prize winners are among the more than 200 people who've signed a letter calling on the United Nations to define and enforce "red lines" that prohibit some uses of AI. "Some advanced AI systems have already exhibited deceptive and harmful behavior, and yet these systems are being given more autonomy to take actions and make decisions in the world," the signers argue, before warning that AI "could soon far surpass human capabilities and escalate risks such as engineered pandemics, widespread disinformation, large-scale manipulation of individuals including children, national and international security concerns, mass unemployment, and systematic human rights violations."
Their letter is posted at the group's website, redlines.ai, where the signatories call on the UN to prohibit use of AI in circumstances that the group feels are too dangerous, including giving AI systems direct control of nuclear weapons, using AI for mass surveillance, and impersonating humans without disclosure of AI involvement. The group asks the UN to set up global enforced controls on AI by the end of 2026 and warns that, once unleashed, such systems may be beyond anyone's control.
Signatories to the call include Geoffrey Hinton, who won a Nobel Prize for work on AI, Turing Award winner Yoshua Bengio, OpenAI co-founder and ChatGPT developer Wojciech Zaremba, Anthropic's CISO Jason Clinton, and Google DeepMind's research scientist Ian Goodfellow, along with a host of Chocolate Factory colleagues. DeepMind's CEO Demis Hassabis didn't sign the proposal, nor did OpenAI's Sam Altman, which could make for some awkward meetings.
The group wants the UN to act by next year, because they fear that slower action will come too late to effectively regulate AI. "Left unchecked, many experts, including those at the forefront of development, warn that it will become increasingly difficult to exert meaningful human control in the coming years," the call argues.
The signatories to the red lines proposal point out that the UN has already developed similar agreements such as the 1970 Treaty on the Non-Proliferation of Nuclear Weapons, although it glosses over the fact that several nuclear-armed nations either didn't sign up for it (India, Israel, and Pakistan) or withdrew from the pact like North Korea in 2003. It fired off its first bomb three years later. On the other hand, the 1987 Montreal Protocol to ban the use of ozone-depleting chemicals has largely worked. Most of the major AI builders have also signed up to the Frontier AI Safety Commitments, decided last May, under which signatories made a non-binding commitment to pull the plug on an AI system that looks like it's getting too dangerous.
Despite the noble intentions of the authors, it's unlikely the UN is going to give this much attention: between the ongoing war in Ukraine, the situation in Gaza, and many other pressing world problems, the agenda at this week's UN General Assembly is already packed. ®
[3]
AI Experts Urgently Call on Governments to Think About Maybe Doing Something
Everyone seems to recognize that artificial intelligence is a rapidly developing and emerging technology that has the potential for immense harm if operated without safeguards, but basically no one (except for the European Union, sort of) can agree on how to regulate it. So, instead of trying to set up a clear and narrow path for how we will allow AI to operate, experts in the field have opted for a new approach: how about we just figure out what extreme examples we all think are bad and just agree to that?
On Monday, a group of politicians, scientists, and academics took to the United Nations General Assembly to announce the Global Call for AI Red Lines, a plea for the governments of the world to come together and agree on the broadest of guardrails to prevent "universally unacceptable risks" that could result from the deployment of AI. The goal of the group is to get these red lines established by the end of 2026. The proposal has amassed more than 200 signatures thus far from industry experts, political leaders, and Nobel Prize winners. The former President of Ireland, Mary Robinson, and the former President of Colombia, Juan Manuel Santos, are on board, as are authors Stephen Fry and Yuval Noah Harari. Geoffrey Hinton and Yoshua Bengio, two of the three men commonly referred to as the "Godfathers of AI" due to their foundational work in the space, also added their names to the list.
Now, what are those red lines? Well, that's still up to governments to decide. The call doesn't include specific policy prescriptions or recommendations, though it does call out a couple of examples of what could be a red line. Prohibiting AI from launching nuclear weapons or being used in mass surveillance efforts would be potential red lines for AI uses, the group says, while prohibiting the creation of AI that cannot be terminated by human override would be a possible red line for AI behavior. But they're very clear: don't set these in stone, they're just examples, you can make your own rules. The only thing the group offers concretely is that any global agreement should be built on three pillars: "a clear list of prohibitions; robust, auditable verification mechanisms; and the appointment of an independent body established by the Parties to oversee implementation."
The details, though, are for governments to agree to. And that's kinda the hard part. The call recommends that countries host some summits and working groups to figure this all out, but there are surely many competing motives at play in those conversations. The United States, for instance, has already committed to not allowing AI to control nuclear weapons (an agreement made under the Biden administration, so lord knows if that is still in play). But recent reports indicated that parts of the Trump administration's intelligence community have already gotten annoyed by the fact that some AI companies won't let them use their tools for domestic surveillance efforts. So would America get on board for such a proposal? Maybe we'll find out by the end of 2026... if we make it that long.
[4]
U.N. experts want AI 'red lines.' Here's what they might be.
Maria Angelita Ressa, Nobel Peace Prize winner 2021, tells the United Nations why AI "red lines" are needed. Credit: Timothy A. Clary / AFP
The AI Red Lines initiative launched at the United Nations General Assembly Tuesday -- the perfect place for a very nonspecific declaration. More than 200 Nobel laureates and other artificial intelligence experts (including OpenAI co-founder Wojciech Zaremba), plus 70 organizations that deal with AI (including Google DeepMind and Anthropic), signed a letter calling for global "red lines to prevent unacceptable AI risks." However, it was marked as much by what it didn't say as what it did.
"AI systems have already exhibited deceptive and harmful behavior, and yet these systems are being given more autonomy," the letter said, laying out a deadline of 2026 for its recommendation to be implemented: "An international agreement on clear and verifiable red lines is necessary for preventing universally unacceptable risks." Fair enough, but what red lines, exactly? The letter says only that these parameters "should build upon and enforce existing global frameworks and voluntary corporate commitments, ensuring that all advanced AI providers are accountable to shared thresholds."
The lack of specifics may be necessary to keep a very loose coalition of signatories together. They include AI alarmists like 77-year-old Geoffrey Hinton, the so-called "AI godfather" who has spent the last three years predicting various forms of doom from the impending arrival of AGI (artificial general intelligence); the list also includes AI skeptics like cognitive scientist Gary Marcus, who has spent the last three years telling us that AGI isn't coming any time soon. What could they all agree on? For that matter, what could governments already at loggerheads over AI, mainly the U.S. and China, agree on, and trust each other to implement? Good question.
Probably the most concrete answer by a signatory came from Stuart Russell, veteran computer science professor at UC Berkeley, in the wake of a previous attempt to talk red lines at the 2023 Global AI Safety Summit. In a paper titled "Make AI safe or make safe AI?" Russell wrote that AI companies offer "after-the-fact attempts to reduce unacceptable behavior once an AI system has been built." He contrasted that with the red lines approach: ensure built-in safety in the design from the very start, and "unacceptable behavior" won't be possible in the first place. "It should be possible for developers to say, with high confidence, that their systems will not exhibit harmful behaviors," Russell wrote. "An important side effect of red line regulation will be to substantially increase developers' safety engineering capabilities."
In his paper, Russell got as far as four red line examples: AI systems should not attempt to replicate themselves; they should never attempt to break into other computer systems; they should not give instructions on manufacturing bioweapons; and their output should not include "false and harmful statements about real people." From the standpoint of 2025, we might add red lines that deal with the ongoing threats of AI psychosis, and AI chatbots that can allegedly be manipulated to give advice on suicide. We can all agree on that, right? Trouble is, Russell also believes that no Large Language Model (LLM) is "capable of demonstrating compliance", even with his four minimal red-line requirements. Why?
Because they are predictive word engines that fundamentally don't understand what they're saying. They are not capable of reasoning, even on basic logic puzzles, and increasingly "hallucinate" answers to satisfy their users. So true AI red line safety, arguably, would mean none of the current AI models would be allowed on the market. That doesn't bother Russell; as he points out, we don't care that compliance is difficult when it comes to medicine or nuclear power. We regulate regardless of outcome. But the notion that AI companies will just voluntarily shut down their models until they can prove to regulators that no harm will come to users? This is a greater hallucination than anything ChatGPT can come up with.
[5]
The UN's AI warnings grow louder
What to Know: The UN Takes On AI
AI takes the podium -- The United Nations General Assembly met this week in New York. While the assembly members spent much of their time on the crises in Palestine and Sudan, they also devoted a good chunk to AI. On Monday, Nobel Peace Prize laureate Maria Ressa called attention to a campaign for "AI Red Lines," imploring governments to come together to "prevent universally unacceptable risks" from AI. Over 200 prominent politicians and scientists, including 10 Nobel Prize winners, signed onto the statement.
"A new curtain" -- On Wednesday, the Security Council engaged in an open debate on "artificial intelligence and international peace and security." Over three hours, each country took turns delivering roughly the same spiel: that AI held the promise for both good and harm. Over and over, representatives declared that AI was not sci-fi but a fact of modern life, and that international regulatory guardrails needed to be developed immediately, especially around autonomous weapons and nuclear.
[6]
Scientists urge global AI 'red lines' as leaders gather at UN
Technology veterans, politicians and Nobel Prize winners called on nations around the world Monday to quickly establish "red lines" that artificial intelligence must never cross. More than 200 prominent figures, including 10 Nobel laureates and scientists working at AI giants Anthropic, Google DeepMind, Microsoft and OpenAI, signed on to a letter released at the start of the latest session of the United Nations General Assembly. "AI holds immense potential to advance human well-being, yet its current trajectory presents unprecedented dangers," the letter read. "Governments must act decisively before the window for meaningful intervention closes." AI red lines would be internationally agreed bans on uses deemed too risky under any circumstances, according to creators of the letter. Examples given included entrusting AI systems with command of nuclear arsenals or any kind of lethal autonomous weapons system. Other red lines could be allowing AI to be used for mass surveillance, social scoring, cyberattacks, or impersonating people, according to those behind the campaign. Those who signed the message urged governments to have AI red lines in place by the end of next year given the pace at which the technology is advancing. "AI could soon far surpass human capabilities and escalate risks such as engineered pandemics, widespread disinformation, large-scale manipulation of individuals including children, national and international security concerns, mass unemployment, and systematic human rights violations," the letter read. "Left unchecked, many experts, including those at the forefront of development, warn that it will become increasingly difficult to exert meaningful human control in the coming years."
[7]
Nobel Prize winners call for binding international 'red lines' on AI
Signatories include "godfathers of AI," famous authors, scientists and Nobel Prize winners from nearly every category. Over 200 prominent politicians and scientists, including 10 Nobel Prize winners and many leading artificial intelligence researchers, released an urgent call for binding international measures against dangerous AI uses on Monday morning. Warning that AI's "current trajectory presents unprecedented dangers," the statement, termed the Global Call for AI Red Lines, argues that "an international agreement on clear and verifiable red lines is necessary." The open letter urges policymakers to enact this accord by the end of 2026, given the rapid progress of AI capabilities. Nobel Peace Prize Laureate Maria Ressa announced the letter in her opening speech at the United Nations General Assembly's High-Level Week Monday morning. She implored governments to come together to "prevent universally unacceptable risks" from AI and to "define what AI should never be allowed to do." In addition to Nobel Prize recipients in Chemistry, Economics, Peace and Physics, signatories include celebrated authors like Stephen Fry and Yuval Noah Harari as well as former heads of state, including former President Mary Robinson of Ireland and former President Juan Manuel Santos of Colombia, who won the Nobel Peace Prize in 2016. Geoffrey Hinton and Yoshua Bengio, recipients of the prestigious Turing Award and two of the three so-called 'godfathers of AI,' also signed the open letter. The Turing Award is often regarded as the Nobel Prize for the field of computer science. Hinton left a prestigious position at Google two years ago to raise awareness about the dangers of unchecked AI development. The signatories hail from dozens of countries, including AI leaders like the United States and China. "For thousands of years, humans have learned -- sometimes the hard way -- that powerful technologies can have dangerous as well as beneficial consequences," Harari said. "Humans must agree on clear red lines for AI before the technology reshapes society beyond our understanding and destroys the foundations of our humanity." The open letter comes as AI attracts increasing scrutiny. In just the past week, AI made national headlines for its use in mass surveillance, its alleged role in a teenager's suicide, and its ability to spread misinformation and even undermine our shared sense of reality. However, the letter warns that today's AI risks could quickly be overshadowed by more devastating and larger-scale impacts. For example, the letter references recent claims from experts that AI could soon contribute to mass unemployment, engineered pandemics and systematic human-rights violations. The letter stops short of providing concrete recommendations, saying government officials and scientists must negotiate where red lines fall in order to secure international consensus. However, the letter offers suggestions for some limits, like prohibiting lethal autonomous weapons, autonomous replication of AI systems and the use of AI in nuclear warfare. "It is in our vital common interest to prevent AI from inflicting serious and potentially irreversible damages to humanity, and we should act accordingly," said Ahmet Üzümcü, the former director general of the Organization for the Prohibition of Chemical Weapons (OPCW), which was awarded the 2013 Nobel Peace Prize under Üzümcü's tenure. 
As a sign of the effort's feasibility, the statement points to similar international resolutions that established red lines in other dangerous arenas, like prohibitions on biological weapons or ozone-depleting chlorofluorocarbons. Warnings about AI's potentially existential threats are not new. In March 2023, more than 1,000 technology researchers and leaders, including Elon Musk, called for a pause in the development of powerful AI systems. Two months later, leaders of prominent AI labs, including OpenAI's Sam Altman, Anthropic's Dario Amodei and Google DeepMind's Demis Hassabis, signed a one-sentence statement that advocated for treating AI's existential risk to humanity as seriously as threats posed by nuclear war and pandemics. Altman, Amodei and Hassabis did not sign the latest letter, though prominent AI researchers like OpenAI co-founder Wojciech Zaremba and DeepMind scientist Ian Goodfellow did. Over the past few years, leading American AI companies have often signalled a desire to develop safe and secure AI systems, for example by signing a safety-focused agreement with the White House in July 2023 and joining the Frontier AI Safety Commitments at the Seoul AI Summit in May 2024. However, recent research has shown that, on average, these companies are only fulfilling about half of those voluntary commitments, and global leaders have accused them of prioritizing profit and technical progress over societal welfare. Companies like OpenAI and Anthropic also voluntarily allow the Center for AI Standards and Innovation, a federal office focused on American AI efforts, and the United Kingdom's AI Security Institute to test and evaluate AI models for safety before models' public release. Yet many observers have questioned the effectiveness and limitations of such voluntary collaboration. Though Monday's open letter echoes past efforts, it differs by arguing for binding limitations. The open letter is the first to feature Nobel Prize winners from a wide range of scientific disciplines. Nobel-winning signatories include biochemist Jennifer Doudna, economist Daron Acemoglu, and physicist Giorgio Parisi. The release of the letter came at the beginning of the U.N. General Assembly's High-Level Week, during which heads of state and government descend on New York City to debate and lay out policy priorities for the year ahead. The U.N. will launch its first diplomatic AI body on Thursday in an event headlined by Spanish Prime Minister Pedro Sanchez and U.N. Secretary-General António Guterres. Over 60 civil-society organizations from around the world also gave their support to the letter, from the Demos think tank in the United Kingdom to the Beijing Institute of AI Safety and Governance. The Global Call for AI Red Lines is organized by a trio of nonprofit organizations: the Center for Human-Compatible AI based at the University of California Berkeley, The Future Society and the French Center for AI Safety.
[8]
European lawmakers join Nobel laureates in call for AI 'red lines'
European lawmakers have joined Nobel Prize winners, former heads of state and leading AI researchers in calling for binding international rules against the most dangerous applications of artificial intelligence. The initiative, launched this Monday at the United Nations' 80th General Assembly in New York, urges governments to agree by 2026 on a set of "red lines" on the uses of AI considered too harmful to be permitted under any circumstances.
Among the signatories are Italian former prime minister Enrico Letta, former President of Ireland Mary Robinson (a former United Nations High Commissioner for Human Rights) and Members of the European Parliament Brando Benifei, an Italian socialist MEP who co-chairs the European Parliament's AI working group, and Sergey Lagodinsky (Germany/Green), alongside ten Nobel laureates and tech leaders including an OpenAI co-founder and Google's director of engineering. Signatories argue that without global standards, humanity risks facing AI-driven threats ranging from engineered pandemics and disinformation campaigns to large-scale human rights abuses and the loss of human control over advanced systems. The campaign's breadth is unprecedented, with more than 200 prominent figures and 70 organisations from politics, science, human rights and industry backing the call. Tech leaders from OpenAI, Google DeepMind and Anthropic have also lent their names to the appeal.
AI and risks for mental health
The move comes amid rising concern over the real-world impact of AI systems already in use. A recent study published in Psychiatric Services found that leading chatbots, including ChatGPT, Claude and Google's Gemini, gave inconsistent responses to questions about suicide - sometimes refusing to engage, sometimes offering appropriate guidance, and occasionally producing answers that experts judged unsafe. The researchers warned that such gaps could exacerbate mental health crises. Several deaths by suicide have been linked to conversations with AI systems, raising questions over how companies safeguard users from harm.
A cross-border effort
Supporters of the UN initiative say these examples illustrate why clearer limits are needed. Nobel Peace Prize laureate Maria Ressa warned that without safeguards, AI could fuel "epistemic chaos" and enable systematic abuses of human rights. Yoshua Bengio, one of the "godfathers" of AI, stressed that the race to develop ever more powerful models poses risks societies are ill-prepared to handle. Global "red lines" have precedents, the signatories suggest, in international treaties banning biological and nuclear weapons and human cloning, as well as the High Seas Treaty signed earlier this year. They welcome the EU legislation on AI but warn that a fragmented patchwork of national and EU AI rules will not be enough to regulate a technology that crosses borders by design. They call for the creation of an independent body to oversee the implementation of those rules. Backers hope negotiations on binding prohibitions can begin quickly, to prevent what Ahmet Üzümcü, former director general of the Organization for the Prohibition of Chemical Weapons, described as "irreversible damages to humanity". While the campaign does not advocate for specific "red lines", it suggests some basic prohibitions: preventing AI systems from launching nuclear attacks, conducting mass surveillance or impersonating humans.
While countries including the US, China and EU members are drafting their own AI regulations, the signatories argue that only a global agreement can ensure common standards are applied and enforced. They hope that by the end of 2026, a UN General Assembly resolution could be initiated, and negotiations could start for a worldwide treaty.
[9]
Mary Robinson, Geoffrey Hinton call for AI 'red lines' in new letter
Former president of Ireland Mary Robinson. Image: University of Michigan School for Environment and Sustainability/ Flickr (CC BY 2.0)
The letter warns of AI escalating widespread disinformation and mass unemployment.
More than 200 prominent figures globally, including 10 Nobel laureates and eight former key political leaders, have signed a petition calling for guardrails to protect against the "unprecedented dangers" presented by AI. The signatories include Geoffrey Hinton, also known as the "Godfather of AI", former Irish president Mary Robinson and Wojciech Zaremba, one of OpenAI's co-founders. The AI race's biggest face, Sam Altman, however, did not take part in the petition.
"AI holds immense potential to advance human wellbeing, yet its current trajectory presents unprecedented dangers," the letter reads. It warns of artificial intelligence technology escalating the risks of engineered pandemics, widespread disinformation, mass unemployment and systematic human rights violations. The letter comes as the United Nations General Assembly is convening in New York this week. "Some advanced AI systems have already exhibited deceptive and harmful behaviour," the letter says. Environmental and economic risks aside, large language models such as OpenAI's ChatGPT have also been blamed by users for a new wave of AI-induced mental health conditions.
The signatories want governments to constrain the development of AI under conditions that mitigate its potential risks. They are asking for robust and operational protective guardrails by the end of 2026. "The current race towards ever more capable and autonomous AI systems poses major risks to our societies and we urgently need international collaboration to address them," said Yoshua Bengio, a signatory and the 2018 Turing Award winner. "Establishing red lines is a crucial step towards preventing unacceptable AI risks."
While the signatories have not outlined exact remedies, a similar letter from 2024, also signed by Hinton and Bengio, demands that no AI system should "substantially" increase the capability to design weapons of mass destruction, autonomously execute mass cyberattacks, or improve itself without human approval. "In the depths of the Cold War, international scientific and governmental coordination helped avert thermonuclear catastrophe. Humanity again needs to coordinate to avert a catastrophe that could arise from unprecedented technology," last year's letter by the International Dialogues on AI Safety read.
Over 200 prominent figures, including Nobel laureates and AI experts, have signed a petition calling for the United Nations to establish 'red lines' for AI development and use by the end of 2026. The initiative aims to prevent potential catastrophic risks associated with unchecked AI advancement.
Over 200 leading figures, including Nobel laureates and AI experts, have launched the Global Call for AI Red Lines at the UN General Assembly [1]. The initiative urges governments worldwide to establish international 'red lines' for AI by the end of 2026, aiming to proactively prevent catastrophic risks from unchecked AI development [2].
The core objective is prevention, as highlighted by Charbel-Raphaël Segerie of the French Center for AI Safety (CeSIA), who stressed preempting large-scale, irreversible dangers [1]. Signatories are concerned that advanced AI systems have already exhibited deceptive and harmful behaviors even as they are given more autonomy. They warn of risks like engineered pandemics, widespread disinformation, mass manipulation, national security threats, and systematic human rights violations [2].
While specific red lines are for governments to define, examples include prohibiting AI control of nuclear weapons, banning AI in mass surveillance, preventing undisclosed AI impersonation, and ensuring human override capabilities [3]. The initiative advocates for a global agreement built on three pillars: clear prohibitions; robust, auditable verification mechanisms; and an independent oversight body [3].
UC Berkeley professor Stuart Russell argues that red lines do not hinder economic progress, stating, "You can have AI for economic development without having AGI that we don't know how to control" [1]. He also proposes building safety into AI design from the outset [4]. However, critics note the proposal's lack of concrete policy, which leaves red-line specifics to governments, and some experts doubt that current large language models (LLMs) can demonstrate even minimal compliance, given their nature as predictive engines lacking true understanding [4].
The call has gained traction at the UN, with Nobel Peace Prize laureate Maria Ressa referencing it in her remarks [5]. The UN Security Council also debated "artificial intelligence and international peace and security," with nations emphasizing the urgent need for international regulatory guardrails, especially for autonomous weapons and nuclear technology [5].
Summarized by Navi