10 Sources
[1]
AI joins list of global challenges on agenda for UN meeting
Artificial intelligence is joining the list of big and complex global challenges that world leaders and diplomats will tackle at this week's annual high-level United Nations meetup. Since the AI boom kicked off with ChatGPT's debut about three years ago, the technology's breathtaking capabilities have amazed the world. Tech companies have raced to develop bigger and better AI systems even as experts warn of its risks, including existential threats like engineered pandemics and large-scale disinformation, and call for safeguards. The U.N.'s adoption of a new governance architecture is the latest and biggest effort to rein in AI. Previous multilateral efforts, including three AI summits organized by Britain, South Korea and France, have resulted only in non-binding pledges. Last month, the General Assembly adopted a resolution to set up two key bodies on AI -- a global forum and an independent scientific panel of experts -- in a milestone move to shepherd global governance efforts for the technology. On Wednesday, a U.N. Security Council meeting will convene an open debate on the issue. Among the questions to be addressed: How can the Council help ensure the responsible application of AI to comply with international law and support peace processes and conflict prevention? And on Thursday, as part of the body's annual meeting, U.N. Secretary-General António Guterres will hold a meeting to launch the forum, called the Global Dialogue on AI Governance. It's a venue for governments and "stakeholders" to discuss international cooperation and share ideas and solutions. It's scheduled to meet formally in Geneva next year and in New York in 2027. Meanwhile, recruitment is expected to get underway to find 40 experts for the scientific panel, including two co-chairs, one from a developed country and one from a developing nation. The panel has drawn comparisons with the U.N.'s climate change panel and its flagship annual COP meeting.
The new bodies represent "a symbolic triumph." They are "by far the world's most globally inclusive approach to governing AI," Isabella Wilkinson, a research fellow at the London-based think tank Chatham House, wrote in a blog post. "But in practice, the new mechanisms look like they will be mostly powerless," she added. Among the possible issues is whether the U.N.'s lumbering administration is able to regulate a fast-moving technology like AI. Ahead of the meeting, a group of influential experts called for governments to agree on so-called red lines for AI to take effect by the end of next year, saying that the technology needs "minimum guardrails" designed to prevent the "most urgent and unacceptable risks." The group, including senior employees at ChatGPT maker OpenAI, Google's AI research lab DeepMind and chatbot maker Anthropic, wants governments to sign an internationally binding agreement on AI. They point out that the world has previously agreed on treaties banning nuclear testing and biological weapons and protecting the high seas. "The idea is very simple," said one of the backers, Stuart Russell, an AI professor at the University of California, Berkeley. "As we do with medicines and nuclear power stations, we can require developers to prove safety as a condition of market access." Russell suggested that U.N. governance could resemble the workings of another U.N.-affiliated body, the International Civil Aviation Organization, which coordinates with safety regulators across different countries and makes sure they're all working off the same page. And rather than laying out a set of rules that are set in stone, diplomats could draw up a "framework convention" that's flexible enough to be updated to reflect AI's latest advances, he said.
[2]
The world pushes ahead on AI safety -- with or without the U.S.
Why it matters: Companies that work across the globe will be dealing with different regimes, compliance costs and expectations -- and the U.S. could get left out of this AI conversation.
Driving the news: That was made clear at last week's UN General Assembly in New York City, where the first Global Dialogue on AI Governance took place.
* Goals for this "global dialogue" are to align rules "to help build safe, secure and trustworthy AI systems," per a speech by UN Secretary-General António Guterres.
Threat level: If there are AI disasters in the future, the U.S. may not be part of any global agreements on how to mitigate or deal with them.
Zoom in: The Trump administration has its own ideas for how to best control and deploy AI.
* "We totally reject all efforts by international bodies to assert centralized control and global governance of AI," Office of Science and Technology Policy director Michael Kratsios said in his remarks to the UN debate on global AI.
Policymakers from Finland, Singapore and India -- along with participants of AI safety institutes from Canada, China, the OECD and Singapore -- were among the panelists at AI Safety Connect last week.
* No U.S. officials spoke at the event.
* AI Safety Connect, held at UNGA, was meant to spur discussion on what "red lines" for AI should look like globally.
What they're saying: "I would want [the U.S. government] to be more publicly supportive, and I wish that we could actually have a quicker move towards global governance of AI regime with full U.S. support," Nicolas Miailhe, co-founder of AI Safety Connect and founder of Paris-based startup PRISM Eval, told Axios.
* Uma Kalkar, Miailhe's chief of staff, told Axios: "We've had U.S. presence [at our events], not necessarily always government presence."
* Kalkar: "It's not something that's being ignored, and it's not something that's being sidelined... It's about who's ready to have those conversations in those specific multilateral spaces."
What we're watching: World leaders have gathered for the last two years to discuss AI governance at summits in Paris and outside London.
* Vice President J.D. Vance attended this year's summit, telling the world that "the AI future is not going to be won by hand-wringing about safety."
* India will host the next global AI summit in New Delhi in February, and how the U.S. approaches that forum will send a major signal to the international community.
[3]
US and Russia criticise plans to limit AI weapons
Dmitriy Polyanskiy, Russia's deputy representative to the UN, said that his country would oppose proposed AI restrictions from the Security Council, describing the technology as a "crucial element of national security." He criticised the Council's composition - 15 member states including the US, Britain, and France - as reflecting a "disproportionate over-representation" of Western nations and interests, but said that Russia would welcome broader UN regulation on AI not decided by the council. The high-level debate on the technology's potential for transforming warfare came against the backdrop of an arms race in Ukraine in which both sides are beginning to field weapons - particularly drones - enhanced by AI. A day earlier, Volodymyr Zelensky, the Ukrainian President, used his speech to the UN General Assembly to call for global rules on how AI can be utilised in warfare. "It's only a matter of time, not much, before drones are fighting drones, attacking critical infrastructure and targeting people all by themselves, fully autonomous and [with] no human involved, except the few who control AI systems," he warned. Amongst the key issues discussed during the debate was the creation of a legally binding treaty by 2026 to ban lethal autonomous weapons that operate using AI without human control. António Guterres, the UN Secretary General, has previously described such weapons as "morally repugnant" and, in his opening speech, warned that AI posed an existential threat to humanity. "Humanity's fate cannot be left to an algorithm," he said. "AI is no longer a distant horizon - it is here, transforming daily life, the information space, and the global economy at breathtaking speed." AI, he said, could help to predict food insecurity, support de-mining operations, and even identify outbreaks of violence before they get out of control - if it is used responsibly.
"But without guardrails, it can also be weaponised," he warned, pointing to the increased use of AI targeting in recent conflicts, as well as cyberattacks and deepfakes - realistic AI-generated imagery - that are proving to be highly destabilising. Decisions on nuclear weapons, he added, "must rest with humans - not machines". The secretary general's warning was echoed by David Lammy, the British Deputy Prime Minister, who said "no aspect of life, war or peace, will escape" the AI revolution. The technology could fuel future conflicts, such as by making "ultra-novel" chemical and biological weapons available to "malign actors". "The United Kingdom committed to using AI responsibly, and together, here at the United Nations, we must ensure AI strengthens peace and security," Mr Lammy added. Mr Guterres reiterated his four priorities for global AI governance: ensuring human authority over "life-and-death decisions"; creating strong regulation; curbing disinformation; and closing the divide between rich and poor countries so that AI benefits all of humanity rather than deepening inequality. He stressed that military applications of AI must remain compliant with international law and that "human control and judgment must be preserved in every use of force." Since the global boom in artificial intelligence - kicked off by ChatGPT's launch three years ago - the technology has been described as a turning point for humanity, with some saying its impact could become far greater than that of the industrial revolution. Wednesday's open debate at the Security Council revolved around how world leaders can help ensure the responsible use of AI to comply with international law. Yoshua Bengio, professor at the Université de Montréal and chair of the International AI Safety Report, told the Council that AI could surpass human intelligence "within five years" and may soon act "irreversibly out of anyone's control, putting humanity at risk."
As part of its General Assembly this week, the UN said it was launching a "global dialogue on artificial intelligence governance" to assemble ideas and best practices on AI governance. The organisation also said it would create a panel of scientific experts to analyse the research on AI risks and benefits, in the vein of previous efforts to research climate change and nuclear policy.
[4]
U.S. rejects international AI oversight at U.N. General Assembly
NEW YORK -- The United States clashed with world leaders over artificial intelligence at the United Nations General Assembly this week, rejecting calls for global oversight as many pushed for new collaborative frameworks. While many heads of state, corporate leaders and prominent figures endorsed a need for urgent international collaboration on AI, the U.S. delegation criticized the role of the U.N. and pushed back on the idea of centralized governance of AI. Representing the U.S. in Wednesday's Security Council meeting on AI, Michael Kratsios, the director of the Office of Science and Technology Policy, said, "We totally reject all efforts by international bodies to assert centralized control and global governance of AI." The path to a flourishing future powered by AI does not lie in "bureaucratic management," Kratsios said, but instead in "the independence and sovereignty of nations." While Kratsios shot down the idea of combined AI governance, President Donald Trump said in his speech to the General Assembly on Tuesday that the White House will be "pioneering an AI verification system that everyone can trust" to enforce the Biological Weapons Convention. "Hopefully, the U.N. can play a constructive role, and it will also be one of the early projects under AI," Trump said. AI "could be one of the great things ever, but it also can be dangerous, but it can be put to tremendous use and tremendous good." In a statement to NBC News, a State Department spokesperson said, "The United States supports like-minded nations working together to encourage the development of AI in line with our shared values. The US position in international bodies is to vigorously advocate for international AI governance approaches that promote innovation, reflect American values, and counter authoritarian influence." The comments rejecting collaborative efforts around AI governance stood in stark contrast to many of the initiatives being launched at the General Assembly.
On Thursday, the U.N. introduced the Global Dialogue on AI Governance, the U.N.'s first body dedicated to AI governance involving all member states. U.N. Secretary-General António Guterres said the body would "lay the cornerstones of a global AI ecosystem that can keep pace with the fastest-moving technology in human history." Speaking after Guterres, Nobel Prize recipient Daron Acemoglu outlined the growing stakes of AI's rapid development, arguing that "AI is the biggest threat that humanity has faced." But in an interview with NBC News, Amandeep Singh Gill, the U.N.'s special envoy for digital and emerging technologies, said the United States was mischaracterizing the U.N.'s role in international AI governance. "I think it's a misrepresentation to say that the U.N. is somehow getting into the regulation of AI," Gill said. "These are not top-down power grabs in terms of regulation. The regulation stays where regulation can be done in sovereign jurisdictions." Instead, the U.N.'s mechanisms "will provide platforms for international cooperation on AI governance," Gill said. In remarks immediately following Kratsios' comments, China's Vice Minister of Foreign Affairs Ma Zhaoxu said, "It is vital to jointly foster an open, inclusive, fair and nondiscriminatory environment for technological development and firmly oppose unilateralism and protectionism." "We support the U.N. playing a central role in AI governance," Ma said. One day after Kratsios' remarks at the Security Council, Spanish Prime Minister Pedro Sánchez seemed to push back on Kratsios and gave full-throated support for international cooperation on AI and the U.N.'s role in AI governance. "We need to coordinate a shared vision of AI at a global level, with the U.N. as the legitimate and inclusive forum to forge consensus around common interests," Sánchez said.
"The time is now, when multilateralism is being most questioned and attacked, that we need to reaffirm how suitable it is in addressing challenges such as those represented by AI." Reacting to the week's developments, Renan Araujo, director of programs for the Washington, D.C.-based Institute for AI Policy and Strategy, told NBC News that "no one wants to see a burdensome, bureaucratic governance structure, and the U.S. has succeeded in starting bilateral and minilateral coalitions. At the same time, we should expect AI-related challenges to become more transnational in nature as AI capabilities become more advanced." This is not the first time the U.N. has addressed AI, having passed the Global Digital Compact last year. The compact laid the foundation for the AI dialogue and for an independent international scientific panel to evaluate AI's abilities, risks and pathways forward. Guterres announced that nominations to this panel are now open. While Thursday's event marked the launch of the global dialogue and panel, the dialogue will have its first full meeting in Geneva in summer 2026, in tandem with the International Telecommunication Union's annual AI for Good summit. The dialogue's exact functions and first actions will be charted out over the coming months.
[5]
Could we see global regulation on AI with UN's new AI forum?
As world leaders weigh its promise and peril at this week's high-level meetings, the United Nations heralds a COP-style body for international AI governance and an expert panel that will present annual reports to the forum. Artificial intelligence (AI) took center stage at this week's annual high-level United Nations (UN) meeting in New York. Leaders at the UN Security Council addressed AI's possible benefits and harms in security, military use and misinformation. "The question is not whether AI will influence international peace and security, but how we will shape its influence," UN Secretary-General Antonio Guterres said in opening remarks at Wednesday's meeting. "AI can strengthen prevention and protection, anticipating food insecurity and displacement, supporting de-mining, helping identify potential outbreaks of violence, and so much more. But without guardrails, it can also be weaponised," Guterres added. Wednesday's general debate centred around how the Council can help ensure the responsible application of AI to comply with international law and support peace processes and conflict prevention.
How have world leaders reacted?
Several European leaders stressed the need for the Council to lead the way on ensuring that AI is not used by militaries without human oversight, to avert potentially devastating escalations or misfires. Greek Prime Minister Kyriakos Mitsotakis called on the Council to "rise to the occasion - just as it once rose to meet the challenges of nuclear weapons or peacekeeping, so too now it must rise to govern the age of AI." British Deputy Prime Minister David Lammy stressed that deep AI analysis of situational data holds promise for peace, saying AI is capable of delivering "ultra-accurate, real-time logistics, ultra-accurate real-time sentiment analysis, ultra-early warning systems".
UN sets up new bodies for AI
Last month, the UN General Assembly (UNGA) announced that it will set up two key bodies on AI - an independent scientific panel of experts and a global forum. The UN said in a statement that the new governance architecture will be a much more inclusive form of international governance, addressing the issues surrounding AI and ensuring that it benefits all people. The scientific panel, whose 40 experts will be appointed through nominations, will present annual reports to the forum, named the Global Dialogue on AI Governance, which is to meet in Geneva in 2026 and in New York in 2027. The new architecture is seen as the latest and biggest effort to rein in AI. The new bodies have been called "a symbolic triumph" and "by far the world's most globally inclusive approach to governing AI" by Isabella Wilkinson, a research fellow at the London-based think tank Chatham House, in a blog post. Britain, France and South Korea have all held global AI summits, but none of them has resulted in binding pledges for AI safety. However, Wilkinson is sceptical that the UN's lumbering administration can regulate a fast-moving technology such as AI: "But in practice, the new mechanisms look like they will be mostly powerless," she added. The UN chief will hold a meeting to officially launch the two new bodies on Thursday. It will be the first time that all 193 Member States of the UN have a say in the way international AI governance is developed, according to the UN. Previously, leading AI experts and Nobel Prize winners, including senior figures from OpenAI, Google DeepMind and Anthropic, had issued a call for the United Nations to spearhead a binding global treaty setting "minimum guardrails" for AI designed to prevent the "most urgent and unacceptable risks".
Among those who signed the call were European lawmakers, including former Italian prime minister Enrico Letta, and Mary Robinson, the former president of Ireland and a former United Nations High Commissioner for Human Rights.
[6]
How the U.N.'s 2025 General Assembly will address the global AI boom
Diplomats will attempt to rein in the technology's explosive growth at this week's high-level meetings in New York. Artificial intelligence is joining the list of big and complex global challenges that world leaders and diplomats will tackle at this week's annual high-level United Nations meetup. Since the AI boom kicked off with ChatGPT's debut about three years ago, the technology's breathtaking capabilities have amazed the world. Tech companies have raced to develop better AI systems even as experts warn of its risks, including existential threats like engineered pandemics, large-scale misinformation or rogue AIs running out of control, and call for safeguards. The U.N.'s adoption of a new governance architecture is the latest and biggest effort to rein in AI. Previous multilateral efforts, including three AI summits organized by Britain, South Korea and France, have resulted only in non-binding pledges. Last month, the General Assembly adopted a resolution to set up two key bodies on AI -- a global forum and an independent scientific panel of experts -- in a milestone move to shepherd global governance efforts for the technology. On Wednesday, a U.N. Security Council meeting will convene an open debate on the issue. Among the questions to be addressed: How can the Council help ensure the responsible application of AI to comply with international law and support peace processes and conflict prevention? And on Thursday, as part of the body's annual meeting, U.N. Secretary-General António Guterres will hold a meeting to launch the forum, called the Global Dialogue on AI Governance. It's a venue for governments and "stakeholders" to discuss international cooperation and share ideas and solutions. It's scheduled to meet formally in Geneva next year and in New York in 2027. Meanwhile, recruitment is expected to get underway to find 40 experts for the scientific panel, including two co-chairs, one from a developed country and one from a developing nation. 
The panel has drawn comparisons with the U.N.'s climate change panel and its flagship annual COP meeting. The new bodies represent "a symbolic triumph." They are "by far the world's most globally inclusive approach to governing AI," Isabella Wilkinson, a research fellow at the London-based think tank Chatham House, wrote in a blog post. "But in practice, the new mechanisms look like they will be mostly powerless," she added. Among the possible issues is whether the U.N.'s lumbering administration is able to regulate a fast-moving technology like AI. Ahead of the meeting, a group of influential experts called for governments to agree on so-called red lines for AI to take effect by the end of next year, saying that the technology needs "minimum guardrails" designed to prevent the "most urgent and unacceptable risks." The group, including senior employees at ChatGPT maker OpenAI, Google's AI research lab DeepMind and chatbot maker Anthropic, wants governments to sign an internationally binding agreement on AI. They point out that the world has previously agreed on treaties banning nuclear testing and biological weapons and protecting the high seas. "The idea is very simple," said one of the backers, Stuart Russell, a computer science professor and director of University of California, Berkeley's Center for Human Compatible AI. "As we do with medicines and nuclear power stations, we can require developers to prove safety as a condition of market access." Russell suggested that U.N. governance could resemble the workings of another U.N.-affiliated body, the International Civil Aviation Organization, which coordinates with safety regulators across different countries and makes sure they're all working off the same page. And rather than laying out a set of rules that are set in stone, diplomats could draw up a "framework convention" that's flexible enough to be updated to reflect AI's latest advances, he said.
[7]
AI joins list of global challenges on agenda for UN meeting
[8]
AI joins list of global challenges on agenda for UN meeting
[9]
AI Joins List of Global Challenges on Agenda for UN Meeting
[10]
AI joins list of global challenges on agenda for UN meeting - The Economic Times
World leaders at the UN are addressing AI's global challenges, establishing a global forum and scientific panel for governance. A Security Council debate will explore AI's responsible application for peace, while the Global Dialogue on AI Governance launches to foster international cooperation.
The United Nations is launching initiatives for global AI governance, sparking debates on international cooperation and regulation. This move highlights growing concerns about AI's impact on security, ethics, and global stability, alongside differing national approaches.
The United Nations is launching key initiatives for global AI governance, bringing world leaders to the General Assembly to address AI's profound impact [1][2]. The goal is a coordinated international approach to manage AI's benefits and risks responsibly and ethically. (Source: AP News)
Last month, the UN established a global forum and an independent scientific panel for AI oversight [1]. The Global Dialogue on AI Governance will convene in 2026 and 2027 to foster cooperation; Secretary-General Guterres stated these efforts will "lay the cornerstones of a global AI ecosystem" [4][5]. However, divisions are clear: European leaders advocate for UN Security Council involvement in military AI [5], while the US firmly rejects "all efforts by international bodies to assert centralized control and global governance of AI" [4]. This highlights significant geopolitical hurdles to unified regulation [2]. (Source: Axios)
The UN Security Council meeting revealed severe concerns about AI's impact on international peace. Guterres warned AI could be weaponized without safeguards [3][5]. Experts, including Nobel laureate Daron Acemoglu, call AI "the biggest threat humanity has faced" [4], citing risks ranging from autonomous weapons to novel biological agents [3]. Unified global AI oversight remains challenging due to divergent major-power views, especially from the US [2][4]. Success hinges on international cooperation for responsible AI development [1][3][5]. (Source: The Telegraph)
Summarized by
Navi
[3]