California AI Policy Group Urges Proactive Approach to Future AI Risks in New Report


A report co-authored by AI pioneer Fei-Fei Li recommends that AI safety laws anticipate future risks and increase transparency in frontier AI development, sparking discussion about the future of AI governance.

California Policy Group Releases Interim Report on AI Safety

A new report from the Joint California Policy Working Group on Frontier AI Models, co-led by AI pioneer Fei-Fei Li, suggests that lawmakers should consider potential AI risks that "have not yet been observed in the world" when crafting regulatory policies [1]. The 41-page interim report, released on Tuesday, comes in response to Governor Gavin Newsom's veto of California's controversial AI safety bill, SB 1047, last year [2].

Key Recommendations for AI Regulation

The report, co-authored by Li; Jennifer Chayes, dean of UC Berkeley's College of Computing, Data Science, and Society; and Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace, advocates for several key measures [1][3]:

  1. Increased transparency into frontier AI labs' activities
  2. Public reporting of safety tests, data acquisition practices, and security measures
  3. Enhanced standards for third-party evaluations of AI safety metrics and corporate policies
  4. Expanded whistleblower protections for AI company employees and contractors

Anticipating Future Risks

While acknowledging an "inconclusive level of evidence" for AI's potential to aid in cyberattacks or biological weapons creation, the authors argue that AI policy should anticipate future consequences:

"For example, we do not need to observe a nuclear weapon [exploding] to predict reliably that it could and would cause extensive harm," the report states

1

.

Two-Pronged Strategy for Transparency

The report recommends a "trust but verify" approach to increase transparency in AI model development [1][3]:

  1. Provide avenues for AI model developers and employees to report on areas of public concern
  2. Require submission of testing claims for third-party verification

Industry Stakeholder Input

The report was reviewed by experts across the ideological spectrum, including [1][2]:

  • Yoshua Bengio, Turing Award winner and AI safety advocate
  • Ion Stoica, Databricks co-founder, who argued against SB 1047

Reception and Future Implications

The interim report has been well-received by experts on both sides of the AI policymaking debate:

  • Dean Ball, an AI-focused research fellow at George Mason University, called it a promising step for California's AI safety regulation [1]
  • California State Senator Scott Wiener, who introduced SB 1047, stated that the report builds on "urgent conversations around AI governance" [1][3]

While the report does not endorse specific legislation, it aligns with several components of SB 1047 and Wiener's follow-up bill, SB 53 [1]. The final version of the report is due in June 2025, and its recommendations may significantly influence future AI governance discussions and policies [2][3].
