Curated by THEOUTPOST
On Mon, 23 Sept, 4:02 PM UTC
2 Sources
[1]
The United Nations has a plan to govern AI - but has it bought the industry's hype?
The United Nations Secretary-General's Advisory Body on Artificial Intelligence (AI) has released its final report on governing AI for humanity. The report presents a blueprint for addressing AI-related risks while still enabling the potential of this technology. It also includes a call to action for all governments and stakeholders to work together in governing AI to foster the development and protection of all human rights.
On the surface, this report seems to be a positive step forward for AI, encouraging development while also mitigating potential harms. However, the finer details of the report expose a number of concerns.
Reminiscent of the IPCC
The UN advisory body on AI was first convened on October 26, 2023. Its purpose is to advance recommendations for the international governance of AI. It says this approach is needed to ensure the benefits of AI, such as opening new areas of scientific inquiry, are evenly distributed, while the risks of the technology, such as mass surveillance and the spread of misinformation, are mitigated.
The advisory body consists of 39 members drawn from a diverse range of regions and professional sectors. Among them are industry representatives from Microsoft, Mozilla, Sony, Collinear AI and OpenAI.
The committee is reminiscent of the UN's Intergovernmental Panel on Climate Change (IPCC), which aims to provide key input into international climate change negotiations. The inclusion of prominent industry representatives in the advisory body on AI is a point of difference from the IPCC. This may have advantages, such as a more informed understanding of AI technologies. But it may also have disadvantages, such as viewpoints biased in favour of commercial interests.
The recent release of the final report on governing AI for humanity provides a vital insight into what we can likely expect from this committee.
What's in the report?
The final report on governing AI for humanity follows an interim report released in December 2023. It proposes seven recommendations for addressing gaps in current AI governance arrangements, including an independent international scientific panel on AI, an AI standards exchange and a global AI data framework. The report ends with a call to action for all governments and relevant stakeholders to collectively govern AI.
What's disconcerting about the report are the imbalanced and at times contradictory claims made throughout. For example, the report rightly advocates for governance measures to address the impact of AI on concentrated power and wealth, as well as its geopolitical and geoeconomic implications. However, it also claims that:
no one currently understands all of AI's inner workings enough to fully control its outputs or predict its evolution.
This claim is not factually correct on several counts. It is true that there are some "black box" systems - those in which the input is known, but the computational process for generating outputs is not. But AI systems more generally are well understood on a technical level. AI reflects a spectrum of capabilities, ranging from generative AI systems such as ChatGPT through to deep learning systems such as facial recognition. The assumption that all these systems embody the same level of impenetrable complexity is not accurate.
The inclusion of this claim calls into question the advantage of having industry representatives on the advisory body, as they should be bringing a more informed understanding of AI technologies.
The other issue this claim raises is the notion of AI evolving of its own accord. What has been interesting about the rise of AI over recent years is the accompanying narratives that falsely position AI as having agency of its own. This inaccurate narrative shifts perceived liability and responsibility away from those who design and develop these systems, providing a convenient scapegoat for industry.
Despite the subtle undertone of powerlessness in the face of AI technologies and the imbalanced claims made throughout, the report does positively progress the discourse in some ways.
A small step forward
Overall, the report and its call to action are a positive step forward because they emphasise that AI can be governed and regulated, despite contradictory claims throughout the report which imply otherwise.
The inclusion of the term "hallucinations" is a salient example of these contradictions. The term was popularised by OpenAI's chief executive Sam Altman, who used it to reframe nonsensical outputs as part of the "magic" of AI. "Hallucinations" is not a technically accepted term - it is creative marketing. Pushing for governance of AI while simultaneously endorsing a term which implies a technology that cannot be governed is not constructive.
What the report lacks is consistency in how AI is perceived and understood. It also lacks application specificity - a common limitation among many AI initiatives. A global approach to AI governance will only work if it is able to capture the nuances of application and domain specificity.
The report is a step in the right direction. However, it will need refinement and amendment to ensure it encourages development while mitigating the many harms of AI. (The Conversation)
[2]
The United Nations has a plan to govern AI - but has it bought the industry's hype?
The United Nations has proposed a plan to govern artificial intelligence, raising questions about whether it has been influenced by industry hype. This development comes as AI technology rapidly advances, prompting global discussions on regulation and ethical use.
The United Nations has stepped into the global artificial intelligence arena with a proposed plan for AI governance. This move comes as AI technology continues to advance at an unprecedented pace, raising concerns about its potential impacts on society, economy, and human rights 1.
At the heart of this initiative is the High-Level Advisory Body on AI, established by UN Secretary-General António Guterres. This 39-member panel, composed of experts from various fields, has been tasked with developing recommendations for the international governance of AI, beginning with an interim report delivered at the end of 2023 2.
While the UN's effort is laudable, some experts have raised concerns about the potential influence of industry hype on the advisory body's recommendations. The rapid commercialization of AI technologies, particularly large language models like ChatGPT, has created a buzz that may overshadow critical issues 2.
The UN faces a delicate balancing act between fostering innovation and implementing necessary regulations. The advisory body's interim report acknowledges both the potential benefits of AI in addressing global challenges and the risks it poses to human rights, privacy, and social cohesion 1.
The UN's initiative joins a growing landscape of AI governance efforts worldwide. Countries like the United States, China, and members of the European Union have already begun developing their own AI regulations. The challenge for the UN lies in creating a framework that can be universally applied while respecting national sovereignty 2.
One of the most contentious aspects of the UN's approach is its focus on potential existential risks posed by AI. Critics argue that this emphasis may divert attention from more immediate concerns, such as AI's impact on employment, privacy, and social inequality 2.
As the High-Level Advisory Body prepares to deliver its final recommendations, the global community watches with keen interest. The success of the UN's AI governance plan will depend on its ability to cut through the hype, address real-world concerns, and create a framework that can adapt to the rapidly evolving AI landscape 1.
United Nations experts urge the establishment of a global governance framework for artificial intelligence, emphasizing the need to address both risks and benefits of AI technology on an international scale.
11 Sources
The United Nations' advisory body has put forward seven recommendations for governing artificial intelligence globally. This comes as major tech companies like Meta and OpenAI face regulatory challenges and calls for responsible AI development.
10 Sources
As AI rapidly advances, experts and policymakers stress the critical need for a global governance framework to ensure responsible development and implementation.
2 Sources
The AI Action Summit in Paris marks a significant shift in global attitudes towards AI, emphasizing economic opportunities over safety concerns. This change in focus has sparked debate among industry leaders and experts about the balance between innovation and risk management.
7 Sources
As artificial intelligence continues to evolve at an unprecedented pace, some experts debate its potential to revolutionize industries while others warn of an approaching technological singularity. Unusual AI behaviors raise concerns about the widespread adoption of this largely misunderstood technology.
2 Sources