Curated by THEOUTPOST
On September 10, 2024
3 Sources
[1]
China refuses to sign agreement to ban AI from controlling nuclear weapons - Times of India
China opted out of the 'Blueprint for Action' agreement, which seeks to ban artificial intelligence from controlling nuclear weapons. The agreement was adopted at the Responsible AI in the Military Domain (REAIM) summit in Seoul on Tuesday, where over 100 countries including the US were present. The agreement is not legally binding; it seeks to "maintain human control and involvement for all actions...concerning nuclear weapons employment." "AI applications should be ethical and human-centric," it said. Calling AI a "double-edged sword," South Korean defence minister Kim Yong-Hyun said, "As AI is applied to the military domain, the military's operational capabilities are dramatically improved. However, it is like a double-edged sword, as it can cause damage from abuse." The declaration from the summit did not specify sanctions or penalties for violations. It acknowledged that significant progress is needed for states to keep up with advancements in military AI, emphasizing the need for further discussions to establish clear policies and procedures. The Seoul summit, co-hosted by Britain, the Netherlands, Singapore, and Kenya, builds on the inaugural event held in The Hague in February last year. It positions itself as the "most comprehensive and inclusive platform for addressing AI in the military domain." Russia was excluded from the summit due to its ongoing invasion of Ukraine.
[2]
China refuses to sign agreement to ban AI from controlling nuclear weapons
Humans, not artificial intelligence, should make the key decisions on using nuclear weapons, a global summit on AI in the military domain agreed Tuesday in a non-binding declaration. Officials at the Responsible AI in the Military Domain (REAIM) summit in Seoul, which involved nearly 100 countries including the United States, China and Ukraine, adopted the "Blueprint for Action" after two days of talks. The agreement -- which is not legally binding, and was not signed by China -- said it was essential to "maintain human control and involvement for all actions ... concerning nuclear weapons employment". It added that AI capabilities in the military domain "must be applied in accordance with applicable national and international law". "AI applications should be ethical and human-centric." The Chinese embassy in Seoul did not immediately respond to a request for comment. Militarily, AI is already used for reconnaissance, surveillance, and analysis, and in the future could be used to pick targets autonomously. Russia was not invited to the summit due to its invasion of Ukraine. The declaration did not outline what sanctions or other punishment would ensue in case of violations. The declaration acknowledged there was a long way to go for states to keep pace with the development of AI in the military domain, noting they "need to engage in further discussions... for clear policies and procedures". The Seoul summit, co-hosted by Britain, the Netherlands, Singapore, and Kenya, follows the inaugural event held in The Hague in February last year. It bills itself as the "most comprehensive and inclusive platform for AI in the military domain".
[3]
AI in military: Humans, not AI, should control nuclear weapons, agree around 100 nations; agreement signed
The two-day 'Responsible AI in the Military Domain (REAIM)' summit held in Seoul wrapped up with a non-binding declaration called the "Blueprint for Action." It emphasises the necessity of maintaining human control in decisions concerning nuclear weapons deployment. At the global summit on Artificial Intelligence (AI) in the military domain, which involved nearly 100 countries including the United States, China, and Ukraine, nations agreed that humans -- not AI -- should make critical decisions regarding the use of nuclear weapons, signing a non-binding agreement to that effect. The agreement says it is essential to "maintain human control and involvement for all actions... concerning nuclear weapons employment". It adds that AI applications in the military "must be applied in accordance with applicable national and international law". "AI applications should be ethical and human-centric," it adds. The summit also noted that there was a need for "further discussions... for clear policies and procedures". However, the declaration stopped short of outlining sanctions or consequences for any violations of these principles. Although the declaration is not legally binding, China declined to sign it. Russia, a Chinese ally, was not invited due to its invasion of Ukraine. AI is already utilised in military operations for tasks like reconnaissance, surveillance, and analysis. It also has the potential to autonomously select targets in the future, as made evident by an AI-based tool, "Lavender," which is reportedly being used by Israel during the war in Gaza against the Hamas militant group.
The Lavender system is said to mark suspected operatives in the military wings of Hamas and Palestinian Islamic Jihad (PIJ) as potential bombing targets, including low-ranking individuals. The software analyses data collected through mass surveillance on most of Gaza's 2.3 million residents, assessing and ranking the likelihood of each person's involvement in the military wing of Hamas or PIJ. Individuals are given a rating of 1 to 100, indicating their likelihood of being a militant. As per reports, even though the AI system has an error rate of 10 per cent, its outputs were treated "as if it were a human decision".
China declines to join nearly 100 nations in signing a declaration prohibiting AI control of nuclear weapons, citing concerns over the agreement's potential impact on military AI development.
In a significant move to address the intersection of artificial intelligence and nuclear warfare, nearly 100 countries have signed an agreement aimed at keeping humans in control of nuclear weapons. This international declaration, spearheaded by the Netherlands and South Korea, emphasizes the critical importance of maintaining human oversight in nuclear decision-making processes [1].
Despite widespread support for the initiative, China has notably refused to sign the agreement. Beijing's decision reportedly stems from concerns that the declaration could hinder the development of military AI technologies, with Chinese officials arguing that the agreement's scope is overly broad and may impede legitimate research and development in the field of military artificial intelligence [2].
The declaration, while not legally binding, represents a significant step towards establishing international norms regarding AI and nuclear weapons. It explicitly states that humans should maintain ultimate decision-making authority over nuclear weapons, reflecting growing global apprehension about the potential risks of delegating such critical decisions to AI systems [3].
The agreement has garnered support from major nuclear powers, including the United States, France, and the United Kingdom. Russia, another significant nuclear state, also did not sign; it was excluded from the summit over its invasion of Ukraine. This divide among global powers highlights the complex geopolitical landscape surrounding AI and nuclear policy [1].
China's refusal to sign the agreement raises questions about the future development of AI in military contexts. While the country acknowledges the importance of responsible AI use, it maintains that the technology plays a crucial role in modern defense strategies. This stance underscores the ongoing debate between national security interests and international efforts to regulate emerging technologies in warfare [2].
The initiative reflects growing global concerns about the potential risks associated with AI-controlled nuclear weapons systems. Proponents of the agreement argue that maintaining human control is essential for ensuring accountability and preventing unintended escalations or accidents. The ethical implications of AI in warfare continue to be a subject of intense international discourse and negotiation [3].