5 Sources
[1]
AI joins list of global challenges on agenda for UN meeting
Artificial intelligence is joining the list of big and complex global challenges that world leaders and diplomats will tackle at this week's annual high-level United Nations meetup. Since the AI boom kicked off with ChatGPT's debut about three years ago, the technology's breathtaking capabilities have amazed the world. Tech companies have raced to develop bigger and better AI systems even as experts warn of its risks, including existential threats like engineered pandemics and large-scale disinformation, and call for safeguards.

The U.N.'s adoption of a new governance architecture is the latest and biggest effort to rein in AI. Previous multilateral efforts, including three AI summits organized by Britain, South Korea and France, have resulted only in non-binding pledges.

Last month, the General Assembly adopted a resolution to set up two key bodies on AI -- a global forum and an independent scientific panel of experts -- in a milestone move to shepherd global governance efforts for the technology. On Wednesday, the U.N. Security Council will convene an open debate on the issue. Among the questions to be addressed: How can the Council help ensure the responsible application of AI to comply with international law and support peace processes and conflict prevention?

And on Thursday, as part of the body's annual meeting, U.N. Secretary-General António Guterres will hold a meeting to launch the forum, called the Global Dialogue on AI Governance. It's a venue for governments and "stakeholders" to discuss international cooperation and share ideas and solutions. It's scheduled to meet formally in Geneva next year and in New York in 2027.

Meanwhile, recruitment is expected to get underway to find 40 experts for the scientific panel, including two co-chairs, one from a developed country and one from a developing nation. The panel has drawn comparisons with the U.N.'s climate change panel and its flagship annual COP meeting.

The new bodies represent "a symbolic triumph." They are "by far the world's most globally inclusive approach to governing AI," Isabella Wilkinson, a research fellow at the London-based think tank Chatham House, wrote in a blog post. "But in practice, the new mechanisms look like they will be mostly powerless," she added. Among the possible issues is whether the U.N.'s lumbering administration is able to regulate a fast-moving technology like AI.

Ahead of the meeting, a group of influential experts called for governments to agree on so-called red lines for AI to take effect by the end of next year, saying that the technology needs "minimum guardrails" designed to prevent the "most urgent and unacceptable risks." The group, including senior employees at ChatGPT maker OpenAI, Google's AI research lab DeepMind and chatbot maker Anthropic, wants governments to sign an internationally binding agreement on AI. They point out that the world has previously agreed on treaties banning nuclear testing and biological weapons and protecting the high seas.

"The idea is very simple," said one of the backers, Stuart Russell, an AI professor at the University of California, Berkeley. "As we do with medicines and nuclear power stations, we can require developers to prove safety as a condition of market access."

Russell suggested that U.N. governance could resemble the workings of another U.N.-affiliated body, the International Civil Aviation Organization, which coordinates with safety regulators across different countries and makes sure they're all working off the same page. And rather than laying out a set of rules that are set in stone, diplomats could draw up a "framework convention" that's flexible enough to be updated to reflect AI's latest advances, he said.
[2]
AI joins list of global challenges on agenda for UN meeting
World leaders and diplomats are tackling artificial intelligence at the United Nations' annual high-level meeting.
[3]
AI joins list of global challenges on agenda for UN meeting
[4]
AI Joins List of Global Challenges on Agenda for UN Meeting
[5]
AI joins list of global challenges on agenda for UN meeting - The Economic Times
World leaders at the UN are addressing AI's global challenges, establishing a global forum and scientific panel for governance. A Security Council debate will explore AI's responsible application for peace, while the Global Dialogue on AI Governance launches to foster international cooperation.
The United Nations is set to address artificial intelligence as a major global challenge during its annual high-level meeting. New initiatives include the establishment of a global forum and an expert panel to guide AI governance efforts.
The United Nations is stepping up to address the challenges posed by artificial intelligence (AI) at its annual high-level meeting. This move comes as AI's rapid advancement has captivated the world, prompting both excitement and concern among experts and policymakers [1].
Last month, the UN General Assembly adopted a resolution to establish two key bodies focused on AI governance: a global forum, the Global Dialogue on AI Governance, and an independent scientific panel of experts.
These initiatives represent the UN's most significant effort to date in reining in AI technology. Previous multilateral efforts, including AI summits organized by Britain, South Korea, and France, have only resulted in non-binding pledges [3].
The UN's agenda includes a Security Council open debate on how AI can be applied responsibly and in line with international law, the launch of the Global Dialogue on AI Governance by Secretary-General António Guterres, and the recruitment of 40 experts for the scientific panel [4].
While these new bodies represent a "symbolic triumph" in global AI governance, some experts express skepticism about their effectiveness. Isabella Wilkinson, a research fellow at Chatham House, notes that "in practice, the new mechanisms look like they will be mostly powerless," questioning whether the UN's bureaucracy can keep pace with rapidly evolving AI technology [5].
A group of influential experts, including employees from OpenAI, Google's DeepMind, and Anthropic, are calling for governments to agree on "red lines" for AI by the end of next year. They advocate for minimum guardrails against the most urgent and unacceptable risks, backed by an internationally binding agreement on AI.
Stuart Russell, an AI professor at UC Berkeley, suggests that UN governance could follow a model similar to the International Civil Aviation Organization, coordinating safety regulations across countries [1].
As the UN embarks on this ambitious effort to govern AI on a global scale, the challenge lies in creating a framework that is both effective and flexible enough to adapt to AI's rapid advancements. The success of these initiatives will depend on the ability of world leaders and experts to collaborate and establish meaningful guidelines that can keep pace with technological progress.
Summarized by Navi