Curated by THEOUTPOST
On Thu, 19 Sept, 4:05 PM UTC
10 Sources
[1]
AI News: UN Calls for Global AI Governance As Meta & OpenAI Face Challenges
OpenAI restructures safety oversight amid criticism, creating an independent body to oversee AI model safety. AI News: The United Nations has issued seven recommendations for reducing the risks of artificial intelligence (AI), based on input from a UN advisory body. The advisory body's final report focuses on the importance of developing a unified approach to the regulation of AI and will be considered at a UN meeting scheduled for later this month.

AI News: UN Calls for Global AI Governance

The council of 39 experts noted that large multinational corporations have come to dominate the development of AI technologies as the field grows at an accelerating pace, which it flagged as a major concern. The panel stressed that there is an 'unavoidable' need for governance of artificial intelligence on a global scale, since the creation and use of AI cannot be left solely to market mechanisms. To counter the information gap between AI labs and the rest of the world, the UN report suggests forming a panel to disseminate accurate and independent information on artificial intelligence. The recommendations include the creation of a global AI fund to address gaps in capacity and collaboration, especially in developing countries that cannot afford to adopt AI. The report also recommends establishing a global artificial intelligence data framework to increase transparency and accountability, along with a policy dialogue aimed at addressing all matters concerning the governance of artificial intelligence. While the report did not propose a new international regulatory organization, it pointed out that if the risks associated with the technology escalate, a more powerful global body with the mandate to enforce regulation may be needed. The United Nations' approach differs from that of some countries, including the United States, which recently approved a 'blueprint for action' to manage AI in military use - something China has not endorsed.

Calls for Regulatory Harmonization in Europe

Concurrent with the AI news, leaders including Yann LeCun, Meta's Chief AI Scientist, along with many CEOs and academics from Europe, have demanded clarity on how AI regulation will work in Europe. In an open letter, they stated that the EU can reap the economic benefits of AI only if its rules do not hinder the freedom of research and the ethical implementation of AI. Meta's upcoming multimodal artificial intelligence model, Llama, will not be released in the EU due to regulatory restrictions, illustrating the conflict between innovation and regulation. The open letter argues that excessively stringent rules can hinder the EU's ability to advance in the field, and calls on policymakers to adopt measures that allow a robust artificial intelligence industry to develop while addressing the risks. The letter emphasizes the need for coherent laws that foster the advancement of AI without hindering its growth, a concern echoed in the warning about Apple's iPhone OS reported by CoinGape.

OpenAI Restructures Safety Oversight Amid Criticism

In addition, there are concerns about how OpenAI has positioned itself on the principles of AI safety and regulation.
Following criticism from US politicians and former employees, the company's CEO, Sam Altman, stepped down from its Safety and Security Committee. The committee, originally formed to monitor the safety of the company's artificial intelligence technology, has been reshaped into an independent oversight body that can delay new model releases until safety risks are addressed. The new oversight group includes Nicole Seligman, former US Army General Paul Nakasone and Quora CEO Adam D'Angelo, whose role is to ensure that OpenAI's safety measures are in line with the organization's objectives. This United Nations AI news comes on the heels of allegations of internal strife, with former researchers claiming that OpenAI is more focused on profit-making than on genuine artificial intelligence governance.
[2]
UN advisory body makes seven recommendations for governing AI
An artificial-intelligence advisory body at the United Nations on Thursday released its final report proposing seven recommendations to address AI-related risks and gaps in governance. The UN last year created a 39-member advisory body to address issues in the international governance of AI. The recommendations will be discussed during a UN summit held in September. The advisory body called for the establishment of a panel to provide impartial and reliable scientific knowledge about AI and address information asymmetries between AI labs and the rest of the world. Since the release of Microsoft-backed OpenAI's ChatGPT in 2022, the use of AI has spread rapidly, raising concerns about fuelling misinformation, fake news and infringement of copyrighted material. Only a handful of countries have created laws to govern the spread of AI tools. The European Union has been ahead of the rest by passing a comprehensive AI Act compared with the United States' approach of voluntary compliance while China has aimed to maintain social stability and state control. Finally, the UN report proposed setting up a small AI office to support and coordinate the implementation of these proposals.
[3]
UN advisory body makes seven recommendations for governing AI
An artificial-intelligence advisory body at the United Nations on Thursday released its final report proposing seven recommendations to address AI-related risks and gaps in governance. The U.N. last year created a 39-member advisory body to address issues in the international governance of AI. The recommendations will be discussed during a U.N. summit held in September. The advisory body called for the establishment of a panel to provide impartial and reliable scientific knowledge about AI and address information asymmetries between AI labs and the rest of the world. Since the release of Microsoft-backed OpenAI's ChatGPT in 2022, the use of AI has spread rapidly, raising concerns about fueling misinformation, fake news and infringement of copyrighted material. Only a handful of countries have created laws to govern the spread of AI tools. The European Union has been ahead of the rest by passing a comprehensive AI Act compared with the United States' approach of voluntary compliance while China has aimed to maintain social stability and state control. The United States was among about 60 countries that endorsed a "blueprint for action" to govern responsible use of AI in the military on September 10, while China did not support the legally non-binding document. With the development of AI in the hands of a few multinational companies, there is a danger that the technology could be imposed on people without them having a say in how it is used, the U.N. said in a statement. It also recommended a new policy dialogue on AI governance, creating an AI standards exchange and a global AI capacity development network to boost governance capacities. Among other proposals, the U.N. wants a global AI fund to be established, which would address gaps in capacity and collaboration. It also advocates the formation of a global AI data framework to ensure transparency and accountability. Finally, the U.N. report proposed setting up a small AI office to support and coordinate the implementation of these proposals. Published - September 19, 2024 02:50 pm IST
[4]
UN Advisory Body Makes Seven Recommendations for Governing AI
STOCKHOLM (Reuters) - An artificial-intelligence advisory body at the United Nations on Thursday released its final report proposing seven recommendations to address AI-related risks and gaps in governance. The U.N. last year created a 39-member advisory body to address issues in the international governance of AI. The recommendations will be discussed during a U.N. summit held in September. The advisory body called for the establishment of a panel to provide impartial and reliable scientific knowledge about AI and address information asymmetries between AI labs and the rest of the world. Since the release of Microsoft-backed OpenAI's ChatGPT in 2022, the use of AI has spread rapidly, raising concerns about fueling misinformation, fake news and infringement of copyrighted material. Only a handful of countries have created laws to govern the spread of AI tools. The European Union has been ahead of the rest by passing a comprehensive AI Act compared with the United States' approach of voluntary compliance while China has aimed to maintain social stability and state control. The United States was among about 60 countries that endorsed a "blueprint for action" to govern responsible use of AI in the military on Sept. 10, while China did not support the legally non-binding document. With the development of AI in the hands of a few multinational companies, there is a danger that the technology could be imposed on people without them having a say in how it is used, the U.N. said in a statement. It also recommended a new policy dialogue on AI governance, creating an AI standards exchange and a global AI capacity development network to boost governance capacities. Among other proposals, the U.N. wants a global AI fund to be established, which would address gaps in capacity and collaboration. It also advocates the formation of a global AI data framework to ensure transparency and accountability. Finally, the U.N. report proposed setting up a small AI office to support and coordinate the implementation of these proposals. (Reporting by Supantha Mukherjee in Stockholm; Editing by Matthew Lewis)
[5]
UN advisory body makes seven recommendations for governing AI
STOCKHOLM (Reuters) - An artificial-intelligence advisory body at the United Nations on Thursday released its final report proposing seven recommendations to address AI-related risks and gaps in governance. The U.N. last year created a 39-member advisory body to address issues in the international governance of AI. The recommendations will be discussed during a U.N. summit held in September. The advisory body called for the establishment of a panel to provide impartial and reliable scientific knowledge about AI and address information asymmetries between AI labs and the rest of the world. Since the release of Microsoft-backed OpenAI's ChatGPT in 2022, the use of AI has spread rapidly, raising concerns about fueling misinformation, fake news and infringement of copyrighted material. Only a handful of countries have created laws to govern the spread of AI tools. The European Union has been ahead of the rest by passing a comprehensive AI Act compared with the United States' approach of voluntary compliance while China has aimed to maintain social stability and state control. The United States was among about 60 countries that endorsed a "blueprint for action" to govern responsible use of AI in the military on Sept. 10, while China did not support the legally non-binding document. With the development of AI in the hands of a few multinational companies, there is a danger that the technology could be imposed on people without them having a say in how it is used, the U.N. said in a statement. It also recommended a new policy dialogue on AI governance, creating an AI standards exchange and a global AI capacity development network to boost governance capacities. Among other proposals, the U.N. wants a global AI fund to be established, which would address gaps in capacity and collaboration. It also advocates the formation of a global AI data framework to ensure transparency and accountability. Finally, the U.N. report proposed setting up a small AI office to support and coordinate the implementation of these proposals. (Reporting by Supantha Mukherjee in Stockholm; Editing by Matthew Lewis)
[6]
UN advisory body makes seven recommendations for governing AI
STOCKHOLM, Sept 19 (Reuters) - An artificial-intelligence advisory body at the United Nations on Thursday released its final report proposing seven recommendations to address AI-related risks and gaps in governance. The U.N. last year created a 39-member advisory body to address issues in the international governance of AI. The recommendations will be discussed during a U.N. summit held in September. The advisory body called for the establishment of a panel to provide impartial and reliable scientific knowledge about AI and address information asymmetries between AI labs and the rest of the world. Since the release of Microsoft-backed OpenAI's ChatGPT in 2022, the use of AI has spread rapidly, raising concerns about fueling misinformation, fake news and infringement of copyrighted material. Only a handful of countries have created laws to govern the spread of AI tools. The European Union has been ahead of the rest by passing a comprehensive AI Act compared with the United States' approach of voluntary compliance while China has aimed to maintain social stability and state control. The United States was among about 60 countries that endorsed a "blueprint for action" to govern responsible use of AI in the military on Sept. 10, while China did not support the legally non-binding document. With the development of AI in the hands of a few multinational companies, there is a danger that the technology could be imposed on people without them having a say in how it is used, the U.N. said in a statement. It also recommended a new policy dialogue on AI governance, creating an AI standards exchange and a global AI capacity development network to boost governance capacities. Among other proposals, the U.N. wants a global AI fund to be established, which would address gaps in capacity and collaboration. It also advocates the formation of a global AI data framework to ensure transparency and accountability. Finally, the U.N. report proposed setting up a small AI office to support and coordinate the implementation of these proposals. (Reporting by Supantha Mukherjee in Stockholm; Editing by Matthew Lewis)
[7]
UN advisory body makes seven recommendations for governing AI
An artificial-intelligence advisory body at the United Nations on Thursday released its final report proposing seven recommendations to address AI-related risks and gaps in governance. The U.N. last year created a 39-member advisory body to address issues in the international governance of AI. The recommendations will be discussed during a U.N. summit held in September. The advisory body called for the establishment of a panel to provide impartial and reliable scientific knowledge about AI and address information asymmetries between AI labs and the rest of the world. Since the release of Microsoft-backed OpenAI's ChatGPT in 2022, the use of AI has spread rapidly, raising concerns about fueling misinformation, fake news and infringement of copyrighted material. Only a handful of countries have created laws to govern the spread of AI tools. The European Union has been ahead of the rest by passing a comprehensive AI Act compared with the United States' approach of voluntary compliance while China has aimed to maintain social stability and state control. The United States was among about 60 countries that endorsed a "blueprint for action" to govern responsible use of AI in the military on Sept. 10, while China did not support the legally non-binding document. With the development of AI in the hands of a few multinational companies, there is a danger that the technology could be imposed on people without them having a say in how it is used, the U.N. said in a statement. It also recommended a new policy dialogue on AI governance, creating an AI standards exchange and a global AI capacity development network to boost governance capacities. Among other proposals, the U.N. wants a global AI fund to be established, which would address gaps in capacity and collaboration. It also advocates the formation of a global AI data framework to ensure transparency and accountability. Finally, the U.N. report proposed setting up a small AI office to support and coordinate the implementation of these proposals.
[8]
UN advisory body makes seven recommendations for governing AI
Since the release of Microsoft-backed OpenAI's ChatGPT in 2022, the use of AI has spread rapidly, raising concerns about fueling misinformation, fake news and infringement of copyrighted material. Only a handful of countries have created laws to govern the spread of AI tools. The European Union has been ahead of the rest by passing a comprehensive AI Act compared with the United States' approach of voluntary compliance while China has aimed to maintain social stability and state control. The United States was among about 60 countries that endorsed a "blueprint for action" to govern responsible use of AI in the military on Sept. 10, while China did not support the legally non-binding document. With the development of AI in the hands of a few multinational companies, there is a danger that the technology could be imposed on people without them having a say in how it is used, the U.N. said in a statement. It also recommended a new policy dialogue on AI governance, creating an AI standards exchange and a global AI capacity development network to boost governance capacities. Among other proposals, the U.N. wants a global AI fund to be established, which would address gaps in capacity and collaboration. It also advocates the formation of a global AI data framework to ensure transparency and accountability. Finally, the U.N. report proposed setting up a small AI office to support and coordinate the implementation of these proposals. (Reporting by Supantha Mukherjee in Stockholm; Editing by Matthew Lewis)
[9]
UN AI body: Global AI governance gaps must be addressed
The final report from the UN's AI advisory body pointed to a 'global governance deficit with respect to AI' and made several recommendations to address this. Entire parts of the world have been completely left out of conversations around AI governance. That's according to a report from the United Nations (UN) Secretary-General's High-level Advisory Body on Artificial Intelligence (HLAB-AI), which was released today (19 September). The Governing AI for Humanity report discussed in detail the risks and challenges around governing AI and set out key recommendations for addressing these risks. "As experts, we remain optimistic about the future of AI and its potential for good. That optimism depends, however, on realism about the risks and the inadequacy of structures and incentives currently in place," HLAB-AI stated in its report. "We also need to be realistic about international suspicions that could get in the way of the global collective action needed for effective and equitable governance. The technology is too important, and the stakes are too high, to rely only on market forces and a fragmented patchwork of national and multilateral action." The AI advisory body was launched in October 2023 and includes well-known cognitive scientist Dr Abeba Birhane and OpenAI's chief technology officer Mira Murati. At the time of its launch, UN secretary-general António Guterres said he set it up to maximise AI's benefits to humanity while containing and diminishing the risks. Today's report said that while there is no shortage of documents and dialogues focused on AI governance, none of them are "truly global in reach", adding that while seven countries are parties to all the sampled AI governance efforts, 118 are parties to none - particularly in the global south. "In terms of representation, whole parts of the world have been left out of international AI governance conversations," it stated. "Equity demands that more voices play meaningful roles in decisions about how to govern technology that affects us. The concentration of decision-making in the AI technology sector cannot be justified."

Risks and challenges

While problems such as bias in AI systems, questionable AI-enabled facial recognition and AI-generated disinformation are undoubtedly a threat to society, the report stated that putting together a comprehensive list of AI risks for all time is "a fool's errand" given how quickly the tech is evolving. "We believe that it is more useful to look at risks from the perspective of vulnerable communities and the commons," it said. "Framing risks based on vulnerabilities can shift the focus of policy agendas from the 'what' of each risk to 'who' is at risk and 'where', as well as who should be accountable in each case." In an interview with SiliconRepublic.com, international lawyer and advisory board member Jimena Sofía Viveros Álvarez said she believes the biggest threats are the weaponised uses of AI and autonomous weapons systems. "Without proper governance, these technologies pose existential risks to humanity, not to mention they threaten human dignity as the decision over life or death is reduced to a set of zeros and ones," she said. "Additionally, an immediate threat is the dual-use nature of AI, as it allows for virtually any system to be easily repurposed to accommodate other use-cases.
In this regard, 'civilian' technology could be easily misused by non-State actors, such as organised crime and terrorist groups, who could weaponise it and scale their unlawful operations." Among the challenges that need to be addressed is closing the global gap, particularly in ensuring that countries across the world have a voice. The report said that no global framework currently exists to govern AI and, with the technology's development in the hands of a few multinational companies in a few countries, the risks that come with AI could be imposed on most people without their having any say in the decision-making process. "AI governance regimes must be global to be effective. Through a global dialogue we expect global governance will have meaningful and harmonised impact," said Viveros Álvarez. "Our recommendations are aimed at the UN's member states and would need to be taken forward in cooperation with the private sector, technical community, civil society and academic actors, as well as existing international AI governance initiatives."

Key recommendations

Within the report, the HLAB-AI members made several recommendations to address the concerns and challenges that come with the evolution of AI, including the establishment of an international scientific panel on AI, an AI standards exchange and an AI capacity development network to link up a set of collaborating, United Nations-affiliated capacity development centres. These would all be designed to bring together or source expertise on AI-related initiatives and issue annual reports identifying further risks as well as opportunities and trends. The AI standards exchange would also be responsible for developing and maintaining a register of definitions and applicable standards for measuring and evaluating AI systems and identifying gaps where new standards are needed. The members also recommended creating a suite of online educational opportunities on AI targeted at university students, young researchers, social entrepreneurs and public sector officials, as well as a fellowship programme for promising individuals to spend time in academic institutions or technology companies. The report also suggested the creation of a global fund for AI to "put a floor under the AI divide". The final recommendation was to create an AI office within the UN Secretariat, which would report to the secretary-general and act as the 'glue' that brings the report's proposals together. Viveros Álvarez told SiliconRepublic.com that, while the body does not yet recommend an AI agency with enforcement functions at the international level, "plans for such an organisation should continue to be discussed" so it can be fully effective and operational.

A sign of hope

The stark report showed that there is much work to do to ensure AI is deployed safely and fairly across the world. And while industry players with vested interests can sometimes complain that regulation stifles innovation, Viveros Álvarez said that this is false. "Establishing safe guidelines and facilitating active dialogues are vital for an innovative environment that ensures AI can be safely used for the benefit and protection of all of humanity," she said. And while the advisory body said that there are divergences across countries and sectors when it comes to AI governance discussions, the "strong desire for dialogue" has given way to hope. "When we look back in five years, the technology landscape could appear drastically different from today.
However, if we stay the course and overcome hesitation and doubt, we can look back in five years at an AI governance landscape that is inclusive and empowering for individuals, communities and States everywhere. "It is not technological change itself, but how humanity responds to it, that ultimately matters."
[10]
The United Nations Wants to Treat AI With the Same Urgency as Climate Change
A United Nations report released today proposes having the international body oversee the first truly global effort for monitoring and governing artificial intelligence. The report, produced by the UN secretary general's High Level Advisory Body on AI, recommends the creation of a body similar to the Intergovernmental Panel on Climate Change to gather up-to-date information on AI and its risks. The report calls for a new policy dialog on AI so that the UN's 193 members can discuss risks and agree upon actions. It further recommends that the UN take steps to empower poorer nations, especially those in the global south, to benefit from AI and contribute to its governance. These should include, it says, creating an AI fund to back projects in these nations, establishing AI standards and data-sharing systems, and creating resources such as training to help nations with AI governance. Some of the report's recommendations could be facilitated by the Global Digital Compact, an existing plan to address digital and data divides between nations. It finally suggests creating an AI office within the UN dedicated to coordinating existing efforts within the UN to meet the report's goals. "You've got an international community that agrees there are both harms and risks as well as opportunities presented by AI," says Alondra Nelson, a professor at the Institute for Advanced Study who served on the UN advisory body at the recommendation of the White House and State Department. The remarkable abilities demonstrated by large language models and chatbots in recent years have sparked hopes of a revolution in economic productivity but have also prompted some experts to warn that AI may be developing too rapidly and could soon become difficult to control. Not long after ChatGPT appeared, many scientists and entrepreneurs signed a letter calling for a six-month pause on the technology's development so that the risks could be assessed. More immediate concerns include the potential for AI to automate disinformation, generate deepfake video and audio, replace workers en masse, and exacerbate societal algorithmic bias on an industrial scale. "There is a sense of urgency, and people feel we need to work together," Nelson says. The UN proposals reflect high interest among policymakers worldwide in regulating AI to mitigate these risks. But they also come as major powers -- especially the United States and China -- jostle to lead in a technology that promises to have huge economic, scientific, and military benefits, and as these nations stake out their own visions for how it should be used and controlled. In March, the United States introduced a resolution to the UN calling on member states to embrace the development of "safe, secure, and trustworthy AI." In July, China introduced a resolution of its own that emphasized cooperation in the development of AI and making the technology widely available. All UN member states signed both agreements. "AI is part of US-China competition, so there is only so much that they are going to agree on," says Joshua Meltzer, an expert at the Brookings Institution, a Washington, DC, think tank. Key differences, he says, include what norms and values should be embodied by AI and protections around privacy and personal data.
The United Nations' advisory body has put forward seven recommendations for governing artificial intelligence globally. This comes as major tech companies like Meta and OpenAI face regulatory challenges and calls for responsible AI development.
In a significant move towards regulating the rapidly evolving field of artificial intelligence (AI), the United Nations' advisory body has proposed a framework for global AI governance. The recommendations come at a crucial time when tech giants are facing increasing scrutiny and challenges in the AI space 1.
The UN advisory body, comprising 39 experts from various fields, has put forward seven comprehensive recommendations for governing AI on a global scale 2. These recommendations aim to address the potential risks and harness the benefits of AI technology. They include an international scientific panel on AI, a policy dialogue on AI governance, an AI standards exchange, an AI capacity development network, a global AI fund, a global AI data framework and a small AI office within the UN Secretariat to coordinate implementation.
The UN advisory body stressed the importance of inclusivity in AI governance, particularly highlighting the need to involve developing countries in decision-making processes. The recommendations emphasize the protection of human rights and the promotion of shared prosperity in the context of AI development and deployment 4.
As the UN pushes for global AI governance, major tech companies are grappling with their own set of challenges. Meta, formerly known as Facebook, has faced criticism over its handling of user data and content moderation. Meanwhile, OpenAI, the creator of ChatGPT, has encountered regulatory hurdles and concerns about the potential misuse of its advanced language models 1.
The UN's recommendations mark a significant step towards creating a unified approach to AI governance. However, implementing these proposals on a global scale will require cooperation from governments, tech companies, and international organizations. As AI continues to advance, balancing innovation with responsible development and use remains a critical challenge for policymakers and industry leaders alike 5.
The coming months will likely see increased dialogue and negotiations as stakeholders work towards establishing a framework that can effectively govern AI while fostering innovation and protecting human rights in an increasingly AI-driven world.