Curated by THEOUTPOST
On Sun, 9 Feb, 4:00 PM UTC
2 Sources
[1]
Experts call for regulation to avoid 'loss of control' over AI
PARIS (AFP) - Experts from around the world have called for greater regulation of AI to prevent it escaping human control, as global leaders gather in Paris for a summit on the technology.

France, co-hosting the Monday and Tuesday gathering with India, has chosen to spotlight AI 'action' in 2025 rather than put safety concerns front and centre, as at the previous meetings in Britain's Bletchley Park in 2023 and the South Korean capital Seoul in 2024. The French vision is for governments, businesses and other actors to come out in favour of global governance for AI and make commitments on sustainability, without setting binding rules.

"We don't want to spend our time talking only about the risks. There's the very real opportunity aspect as well," said Anne Bouverot, AI envoy for President Emmanuel Macron.

Max Tegmark, head of the US-based Future of Life Institute, which has regularly warned of AI's dangers, told AFP that France should not miss the opportunity to act. "France has been a wonderful champion of international collaboration and has the opportunity to really lead the rest of the world," the MIT physicist said. "There is a big fork in the road here at the Paris summit and it should be embraced."

Tegmark's institute has backed the Sunday launch of a platform dubbed Global Risk and AI Safety Preparedness (GRASP) that aims to map major risks linked to AI and the solutions being developed around the world. "We've identified around 300 tools and technologies in answer to these risks," said GRASP coordinator Cyrus Hodes.

Results from the survey will be passed to the OECD rich-countries club and members of the Global Partnership on Artificial Intelligence (GPAI), a grouping of almost 30 nations including major European economies, Japan, South Korea and the United States, which will meet in Paris on Sunday.

The past week also saw the presentation on Thursday of the first International AI Safety Report, compiled by 96 experts and backed by 30 countries, the UN, EU and OECD. Risks outlined in the document range from the familiar, such as fake content online, to the far more alarming. "Proof is steadily appearing of additional risks like biological attacks or cyberattacks," the report's coordinator, noted computer scientist Yoshua Bengio, told AFP. In the longer term, Bengio, winner of the 2018 Turing Award, fears a possible "loss of control" by humans over AI systems, potentially motivated by "their own will to survive".

"A lot of people thought that mastering language at the level of ChatGPT-4 was science fiction as recently as six years ago, and then it happened," said Tegmark, referring to OpenAI's chatbot. "The big problem now is that a lot of people in power still have not understood that we're closer to building artificial general intelligence (AGI) than to figuring out how to control it."

AGI refers to an artificial intelligence that would equal or better humans in all fields, and its arrival within a few years has been heralded by the likes of OpenAI chief Sam Altman. "If you just eyeball the rate at which these capabilities are increasing, it does make you think that we'll get there by 2026 or 2027," Dario Amodei, Altman's counterpart at rival Anthropic, said in November.

"At worst, these American or Chinese companies lose control over this, and then after that Earth will be run by machines," Tegmark said.
Stuart Russell, a computer science professor at the University of California, Berkeley, said one of his greatest fears is "weapons systems where the AI that is controlling that weapon system is deciding who to attack, when to attack, and so on". Russell, who is also coordinator of the International Association for Safe and Ethical AI (IASEI), places the responsibility firmly on governments to set up safeguards against armed AIs.

Tegmark said the solution is very simple: treat the AI industry the same way every other industry is treated. "Before somebody can build a new nuclear reactor outside of Paris they have to demonstrate to government-appointed experts that this reactor is safe. That you're not going to lose control over it... it should be the same for AI," said Tegmark.
[2]
Experts call for regulation to avoid 'loss of control' over AI
As world leaders gather in Paris for an AI summit, experts emphasize the need for greater regulation to prevent AI from escaping human control. The summit aims to address both risks and opportunities associated with AI development.
As global leaders convene in Paris for a summit on artificial intelligence (AI), experts worldwide are calling for increased regulation to prevent AI from escaping human control. The two-day gathering, co-hosted by France and India, aims to address both the risks and opportunities associated with AI development [1][2].
France has chosen to spotlight AI 'action' in 2025, shifting focus from the safety concerns that dominated the previous meetings at Britain's Bletchley Park in 2023 and in Seoul in 2024. The French vision promotes global governance for AI and sustainability commitments without imposing binding rules. Anne Bouverot, AI envoy for President Emmanuel Macron, emphasized the importance of discussing opportunities alongside risks [1].
Max Tegmark, head of the US-based Future of Life Institute, urged France to seize the opportunity to lead international collaboration on AI regulation. The institute has backed the launch of the Global Risk and AI Safety Preparedness (GRASP) platform, which aims to map major AI-related risks and the solutions being developed worldwide [1].
The first International AI Safety Report, compiled by 96 experts and backed by 30 countries, the UN, EU, and OECD, was presented last week. The report outlines risks ranging from fake content online to more alarming scenarios such as biological attacks and cyberattacks [1][2].
Experts including Yoshua Bengio and Max Tegmark voice concern about the rapid advance towards artificial general intelligence (AGI) and a potential loss of human control over AI systems. Dario Amodei of Anthropic suggested that AGI could arrive as early as 2026 or 2027 [1][2].
Stuart Russell, a computer science professor at Berkeley, highlighted the danger of autonomous weapons systems and placed the responsibility for safeguards against armed AIs firmly on governments. Tegmark proposed treating the AI industry like other high-risk industries, such as nuclear power, by requiring safety demonstrations before deployment [1][2].
The summit will also involve discussions among members of the Global Partnership on Artificial Intelligence (GPAI), a group of almost 30 nations including major economies. As the AI landscape rapidly evolves, the outcomes of this summit could significantly shape the future of AI governance and safety measures worldwide [1][2].
References
[1] AFP, "Experts call for regulation to avoid 'loss of control' over AI"
[2] AFP, "Experts call for regulation to avoid 'loss of control' over AI"
The Paris AI Action Summit brings together world leaders and tech executives to discuss AI's future, with debates over regulation, safety, and economic benefits taking center stage.
47 Sources
Leading computer scientists and AI experts issue warnings about the potential dangers of advanced AI systems. They call for international cooperation and regulations to ensure human control over AI development.
3 Sources
The AI Action Summit in Paris marks a significant shift in global attitudes towards AI, emphasizing economic opportunities over safety concerns. This change in focus has sparked debate among industry leaders and experts about the balance between innovation and risk management.
7 Sources
The Paris AI Action Summit concluded with a declaration signed by 60 countries, but the US and UK's refusal to sign highlights growing divisions in global AI governance approaches.
18 Sources
As the Paris AI summit approaches, countries worldwide are at various stages of regulating artificial intelligence, from the US's "Wild West" approach to the EU's comprehensive rules.
3 Sources