Top AI labs score D's and F's on existential safety as race to superintelligence accelerates

Reviewed by Nidhi Govil

The Future of Life Institute's latest AI safety index reveals alarming gaps in how tech companies prepare for extreme risks from advanced AI. Even top performers like Anthropic, OpenAI, and Google DeepMind barely scraped by with C+ and C grades, while all eight companies scored D's or F's in existential safety—the category measuring preparedness for managing AI systems that could match or exceed human capabilities.

Major AI Labs Fail to Meet Basic Safety Standards

The world's leading AI developers are struggling to demonstrate adequate safety measures, according to a comprehensive assessment released by the Future of Life Institute. The nonprofit organization gathered eight prominent AI experts to evaluate AI safety policies across eight major tech companies: Anthropic, OpenAI, Google DeepMind, Meta, xAI, DeepSeek, Z.ai, and Alibaba Cloud [1]. The results paint a troubling picture of an industry racing toward superintelligence without concrete safeguards in place.

Source: Mashable

The AI safety index assessed companies across 35 different safety indicators, evaluating everything from watermarking AI images to protections for internal whistleblowers [2]. Anthropic and Google DeepMind received the highest grades at C+, followed by OpenAI with a C; these scores would barely be considered passing in an academic setting [1]. The remaining five companies fared even worse, all receiving D grades except Alibaba Cloud, which earned the lowest mark of D- [1].

Existential Safety Scores Reveal Critical Gaps

The most alarming findings emerged in the existential safety category, where companies demonstrated inadequate strategies for managing extreme risks from advanced AI systems. The top three performers, Anthropic, OpenAI, and Google DeepMind, received D grades, while every other company failed with an F [2]. This marks the second consecutive report in which no company achieved better than a D on this critical measure [3].

Source: Axios

"AI CEOs claim they know how to build superhuman AI, yet none can show how they'll prevent us from losing control," said Stuart Russell, a UC Berkeley computer science professor and one of the index's expert reviewers

5

. The assessment found that while company leaders have spoken extensively about addressing existential risks, this "rhetoric has not yet translated into quantitative safety plans, concrete alignment-failure mitigation strategies, or credible internal monitoring and control interventions"

3

.

Max Tegmark, MIT professor and president of the Future of Life Institute, described the findings as "kind of jarring" for reviewers [4]. Companies acknowledge that the probability of a catastrophic outcome could be as high as one in three, yet they lack concrete plans to reduce that risk to acceptable levels [5].

Widening Gap Between Leaders and Stragglers

The assessment revealed a significant divide between the top performers and the rest of the industry. While Anthropic, OpenAI, and Google DeepMind scored A's and B's on information sharing, risk assessment, and governance and accountability, a massive gap separates these front-runners from companies like xAI, Meta, DeepSeek, and Alibaba Cloud [3]. The Chinese companies, DeepSeek, Z.ai, and Alibaba Cloud, do not publish any safety framework and received failing marks in that category [3].

xAI and Meta have risk-management frameworks but lack commitments to safety monitoring and have not presented evidence of substantial investment in safety research [3]. Meta was the only one of the five American companies assessed that declined to respond to the institute's questionnaire [4].

Current Harms and Real-World Consequences

Beyond theoretical risks, the report highlighted immediate safety concerns through its "current harms" category, which draws on tests like the Stanford Holistic Evaluation of Language Models (HELM) benchmark [2]. Reviewers found that "frequent safety failures, weak robustness, and inadequate control of serious harms are universal patterns" across all companies [5].

Real-world incidents have intensified scrutiny of AI safety policies. Earlier this year, the parents of 16-year-old Adam Raine sued OpenAI after their son's death by suicide in April 2025, alleging that ChatGPT encouraged his self-destructive thoughts [2]. OpenAI now faces seven lawsuits alleging its technology contributed to wrongful death, assisted suicide, and involuntary manslaughter [2]. The Future of Life Institute specifically recommended that OpenAI "increase efforts to prevent AI psychosis and suicide" [2].

Source: BNN

Calls for Regulatory Intervention

Tegmark argues that a lack of regulation has created perverse incentives, pushing companies to prioritize speed over safety to stay ahead of competitors. "The AI industry is quite unique in that it's the only industry in the US making powerful technology that's less regulated than sandwiches," he said [4]. He advocates for "an FDA for AI," where companies must demonstrate their models are safe before releasing them [2].

California recently passed the first law requiring frontier AI companies to disclose safety information around catastrophic risks, and New York is pursuing similar legislation [4]. However, prospects for federal regulation remain uncertain. In September, the Future of Life Institute published a statement, signed by AI pioneers Geoffrey Hinton and Yoshua Bengio, calling for an industry-wide pause on superintelligence development until leaders can chart a safe path forward [1].

As companies like Meta and Microsoft explicitly pursue AGI and superintelligence, the tension between innovation and safety continues to define the AI age [3]. The assessment methodology relied on publicly available materials and on survey responses from all but three companies, providing transparency into how the industry approaches preventing catastrophic misuse [1].
