2 Sources
[1]
AI firms 'unprepared' for dangers of building human-level systems, report warns
Future of Life Institute says companies pursuing artificial general intelligence lack credible plans to ensure safety

Artificial intelligence companies are "fundamentally unprepared" for the consequences of creating systems with human-level intellectual performance, according to a leading AI safety group. The Future of Life Institute (FLI) said none of the firms on its AI safety index scored higher than a D for "existential safety planning".

One of the five reviewers of the FLI's report said that, despite aiming to develop artificial general intelligence (AGI), none of the companies scrutinised had "anything like a coherent, actionable plan" to ensure the systems remained safe and controllable. AGI refers to a theoretical stage of AI development at which a system is capable of matching a human in carrying out any intellectual task. OpenAI, the developer of ChatGPT, has said its mission is to ensure AGI "benefits all of humanity". Safety campaigners have warned that AGI could pose an existential threat by evading human control and triggering a catastrophic event.

The FLI's report said: "The industry is fundamentally unprepared for its own stated goals. Companies claim they will achieve artificial general intelligence (AGI) within the decade, yet none scored above D in existential safety planning."

The index evaluates seven AI developers - Google DeepMind, OpenAI, Anthropic, Meta, xAI, and China's Zhipu AI and DeepSeek - across six areas including "current harms" and "existential safety". Anthropic received the highest overall safety score with a C+, followed by OpenAI with a C and Google DeepMind with a C-. The FLI is a US-based non-profit that campaigns for safer use of cutting-edge technology and is able to operate independently due to an "unconditional" donation from crypto entrepreneur Vitalik Buterin.

SaferAI, another safety-focused non-profit, also released a report on Thursday warning that advanced AI companies have "weak to very weak risk management practices" and labelled their current approach "unacceptable".

The FLI safety grades were assigned and reviewed by a panel of AI experts, including British computer scientist Stuart Russell and Sneha Revanur, founder of AI regulation campaign group Encode Justice.

Max Tegmark, a co-founder of FLI and a professor at Massachusetts Institute of Technology, said it was "pretty jarring" that cutting-edge AI firms were aiming to build super-intelligent systems without publishing plans to deal with the consequences. He said: "It's as if someone is building a gigantic nuclear power plant in New York City and it is going to open next week - but there is no plan to prevent it having a meltdown."

Tegmark said the technology was continuing to outpace expectations, citing a previously held belief that experts would have decades to address the challenges of AGI. "Now the companies themselves are saying it's a few years away," he said. He added that progress in AI capabilities had been "remarkable" since the global AI summit in Paris in February, with new models such as xAI's Grok 4, Google's Gemini 2.5, and its video generator Veo 3 all showing improvements on their forebears.

A Google DeepMind spokesperson said the reports did not take into account "all of Google DeepMind's AI safety efforts". They added: "Our comprehensive approach to AI safety and security extends well beyond what's captured." OpenAI, Anthropic, Meta, xAI, Zhipu AI and DeepSeek have also been approached for comment.
[2]
Top AI Firms Fall Short on Safety, New Studies Find
The studies were carried out by the nonprofits SaferAI and the Future of Life Institute (FLI). Each was the second of its kind, in what the groups hope will be a running series that incentivizes top AI companies to improve their practices. "We want to make it really easy for people to see who is not just talking the talk, but who is also walking the walk," says Max Tegmark, president of the FLI.

SaferAI assessed top AI companies' risk management protocols (also known as responsible scaling policies) to score each company on its approach to identifying and mitigating AI risks. No AI company scored better than "weak" in SaferAI's assessment of their risk management maturity. The highest scorer was Anthropic (35%), followed by OpenAI (33%), Meta (22%), and Google DeepMind (20%). Elon Musk's xAI scored 18%.

Two companies, Anthropic and Google DeepMind, received lower scores than the first time the study was carried out, in October 2024. The result means that OpenAI has overtaken Google for second place in SaferAI's ratings. Siméon Campos, founder of SaferAI, said Google scored comparatively low despite doing some good safety research, because the company makes few solid commitments in its policies. The company also released a frontier model earlier this year, Gemini 2.5, without sharing safety information, in what Campos called an "egregious failure."

A spokesperson for Google DeepMind told TIME: "We are committed to developing AI safely and securely to benefit society. AI safety measures encompass a wide spectrum of potential mitigations. These recent reports don't take into account all of Google DeepMind's AI safety efforts, nor all of the industry benchmarks. Our comprehensive approach to AI safety and security extends well beyond what's captured."
Recent reports from AI safety organizations reveal that leading AI companies are ill-prepared for the potential dangers of developing human-level AI systems, with most scoring poorly on safety and risk management assessments.
Two prominent AI safety organizations, the Future of Life Institute (FLI) and SaferAI, have released reports highlighting significant shortcomings in the safety and risk management practices of top artificial intelligence companies. These findings come at a crucial time, as the race to develop artificial general intelligence (AGI) intensifies, raising alarm about the consequences of pressing ahead without adequate preparation 1 2.
The Future of Life Institute's report paints a concerning picture of the AI industry's readiness for the challenges posed by advanced AI systems. According to its AI safety index, none of the seven companies evaluated scored higher than a D for existential safety planning, with Anthropic receiving the highest overall grade (C+), followed by OpenAI (C) and Google DeepMind (C-) 1.
Max Tegmark, co-founder of FLI and professor at MIT, expressed shock at the industry's approach: "It's as if someone is building a gigantic nuclear power plant in New York City and it is going to open next week - but there is no plan to prevent it having a meltdown" 1.
SaferAI's study focused on the risk management protocols of leading AI companies, revealing equally troubling results: no company scored better than "weak" in its assessment of risk management maturity, with Anthropic leading at 35%, followed by OpenAI (33%), Meta (22%), Google DeepMind (20%), and xAI (18%) 2.
The reports have sparked debate within the AI community and raised questions about the industry's ability to self-regulate. Google DeepMind responded to the findings, stating that the reports did not account for "all of Google DeepMind's AI safety efforts" and that their approach to AI safety and security extends beyond what was captured in the studies 1 2.
The urgency of addressing these safety concerns is underscored by the rapid advancements in AI capabilities. Companies like OpenAI have stated their mission to develop AGI that "benefits all of humanity," yet safety campaigners warn of potential existential threats if such systems evade human control 1.
Recent progress in AI models, such as xAI's Grok 4, Google's Gemini 2.5, and its video generator Veo 3, demonstrates the accelerating pace of development in the field 1. This progress, coupled with the lack of robust safety measures, has intensified calls for more stringent oversight and regulation of AI development.
As the AI industry continues to push the boundaries of technological capabilities, these reports serve as a wake-up call for both companies and policymakers. The gap between ambition and safety preparedness highlighted by FLI and SaferAI underscores the need for coherent, actionable safety plans and more stringent oversight of AI development.
The future of AI holds immense promise, but as these reports indicate, realizing that potential safely and responsibly remains a significant challenge for the industry to address.