Curated by THEOUTPOST
On Thu, 20 Mar, 8:03 AM UTC
2 Sources
[1]
Group co-led by Fei-Fei Li suggests that AI safety laws should anticipate future risks
In a new report, a California-based policy group co-led by Fei-Fei Li, an AI pioneer, suggests that lawmakers should consider AI risks that "have not yet been observed in the world" when crafting AI regulatory policies.

The 41-page interim report released on Tuesday comes from the Joint California Policy Working Group on Frontier AI Models, an effort organized by Governor Gavin Newsom following his veto of California's controversial AI safety bill, SB 1047. While Newsom found that SB 1047 missed the mark, he acknowledged last year the need for a more extensive assessment of AI risks to inform legislators.

In the report, Li, along with co-authors UC Berkeley College of Computing Dean Jennifer Chayes and Carnegie Endowment for International Peace President Mariano-Florentino Cuéllar, argues in favor of laws that would increase transparency into what frontier AI labs such as OpenAI are building. Industry stakeholders from across the ideological spectrum reviewed the report before its publication, including staunch AI safety advocates like Turing Award winner Yoshua Bengio as well as those who argued against SB 1047, such as Databricks co-founder Ion Stoica.

According to the report, the novel risks posed by AI systems may necessitate laws that would force AI model developers to publicly report their safety tests, data acquisition practices, and security measures. The report also advocates for increased standards around third-party evaluations of these metrics and corporate policies, in addition to expanded whistleblower protections for AI company employees and contractors.

Li and her co-authors write that there is an "inconclusive level of evidence" for AI's potential to help carry out cyberattacks, create biological weapons, or bring about other "extreme" threats. They also argue, however, that AI policy should not only address current risks but also anticipate future consequences that might occur without sufficient safeguards. "For example, we do not need to observe a nuclear weapon [exploding] to predict reliably that it could and would cause extensive harm," the report states. "If those who speculate about the most extreme risks are right -- and we are uncertain if they will be -- then the stakes and costs for inaction on frontier AI at this current moment are extremely high."

To boost transparency into AI model development, the report recommends a two-pronged strategy: trust but verify. AI model developers and their employees should be given avenues to report on areas of public concern, the report says, such as internal safety testing, while also being required to submit testing claims for third-party verification.

While the report, the final version of which is due out in June 2025, endorses no specific legislation, it has been well received by experts on both sides of the AI policymaking debate. Dean Ball, an AI-focused research fellow at George Mason University who was critical of SB 1047, said in a post on X that the report was a promising step for California's AI safety regulation. It is also a win for AI safety advocates, according to California State Senator Scott Wiener, who introduced SB 1047 last year. Wiener said in a press release that the report builds on "urgent conversations around AI governance we began in the legislature [in 2024]." The report appears to align with several components of SB 1047 and Wiener's follow-up bill, SB 53, such as requiring AI model developers to report the results of safety tests.
Taking a broader view, it seems to be a much-needed win for AI safety folks, whose agenda has lost ground in the last year.
[2]
Report co-authored by Fei-Fei Li stresses need for AI regulations to consider future risks - SiliconANGLE
A new report co-authored by the artificial intelligence pioneer Fei-Fei Li urges lawmakers to anticipate future risks that have not yet been conceived when drawing up regulations to govern how the technology should be used.

The 41-page report by the Joint California Policy Working Group on Frontier AI Models comes after California Governor Gavin Newsom shot down the state's original AI safety bill, SB 1047. He vetoed that divisive legislation last year, saying that legislators needed a more extensive assessment of AI risks before attempting to craft better legislation.

Li co-authored the report alongside Carnegie Endowment for International Peace President Mariano-Florentino Cuéllar and University of California at Berkeley College of Computing Dean Jennifer Tour Chayes. In it, they stress the need for regulations that would ensure more transparency into so-called "frontier models" being built by companies such as OpenAI, Google LLC and Anthropic PBC. They also urge lawmakers to consider forcing AI developers to publicly release information such as their data acquisition methods, security measures and safety test results. In addition, the report stresses the need for more rigorous standards regarding third-party evaluations of AI safety and corporate policies. There should also be protections put in place for whistleblowers at AI companies, it recommends.

The report was reviewed by numerous AI industry stakeholders prior to publication, including the AI safety advocate Yoshua Bengio and Databricks Inc. co-founder Ion Stoica, who argued against the original SB 1047 bill.

One section of the report notes that there is currently an "inconclusive level of evidence" regarding the potential of AI to be used in cyberattacks and the creation of biological weapons. The authors wrote that any AI policies must therefore not only address existing risks, but also any future risks that might arise if sufficient safeguards are not put in place. They use an analogy to stress this point, noting that no one needs to see a nuclear weapon explode to predict the extensive harm it would cause. "If those who speculate about the most extreme risks are right -- and we are uncertain if they will be -- then the stakes and costs for inaction on frontier AI at this current moment are extremely high," the report states.

Given this fear of the unknown, the co-authors say the government should implement a two-pronged strategy around AI transparency, focused on the concept of "trust but verify." As part of this, AI developers and their employees should have a legal way to report any new developments that might pose a safety risk without threat of legal action.

The current report is only an interim version; the completed report won't be published until June. It does not endorse any specific legislation, but the safety concerns it highlights have been well received by experts. For instance, the AI researcher Dean Ball at George Mason University, who notably criticized the SB 1047 bill and was happy to see it vetoed, posted on X that it's a "promising step" for the industry. At the same time, California State Senator Scott Wiener, who first introduced the SB 1047 bill, noted that the report continues the "urgent conversations around AI governance" that were originally raised in his vetoed legislation.
A report co-authored by AI pioneer Fei-Fei Li recommends that AI safety laws anticipate future risks and increase transparency into frontier AI development, sparking discussion about the future of AI governance.
A new report from the Joint California Policy Working Group on Frontier AI Models, co-led by AI pioneer Fei-Fei Li, suggests that lawmakers should consider potential AI risks that "have not yet been observed in the world" when crafting regulatory policies [1]. The 41-page interim report, released on Tuesday, comes in response to Governor Gavin Newsom's veto of California's controversial AI safety bill, SB 1047, last year [2].
The report, co-authored by Li, UC Berkeley College of Computing Dean Jennifer Chayes, and Carnegie Endowment for International Peace President Mariano-Florentino Cuéllar, advocates for several key measures:
- Requiring frontier AI developers to publicly report their safety tests, data acquisition practices, and security measures
- Stronger standards for third-party evaluations of those metrics and of corporate policies
- Expanded whistleblower protections for AI company employees and contractors
While acknowledging an "inconclusive level of evidence" for AI's potential to aid in cyberattacks or biological weapons creation, the authors argue that AI policy should anticipate future consequences:
"For example, we do not need to observe a nuclear weapon [exploding] to predict reliably that it could and would cause extensive harm," the report states 1.
The report recommends a "trust but verify" approach to boost AI model development transparency:
The report was reviewed by experts across the ideological spectrum before publication, including:
- Turing Award winner and AI safety advocate Yoshua Bengio
- Databricks co-founder Ion Stoica, who argued against SB 1047
The interim report has been well received by experts on both sides of the AI policymaking debate:
- Dean Ball, an AI-focused research fellow at George Mason University and a critic of SB 1047, called it a promising step for California's AI safety regulation
- California State Senator Scott Wiener, who introduced SB 1047, said it builds on "urgent conversations around AI governance" begun in the legislature in 2024
While the report does not endorse specific legislation, it aligns with several components of SB 1047 and Wiener's follow-up bill, SB 53 [1]. The final version of the report is due in June 2025, and its recommendations may significantly influence future AI governance discussions and policies [2].
References
[1] Group co-led by Fei-Fei Li suggests that AI safety laws should anticipate future risks
[2] Report co-authored by Fei-Fei Li stresses need for AI regulations to consider future risks - SiliconANGLE
California Governor Gavin Newsom's veto of Senate Bill 1047, a proposed AI safety regulation, has ignited discussions about balancing innovation with public safety in the rapidly evolving field of artificial intelligence.
9 Sources
California's proposed AI safety bill, SB 1047, has ignited a fierce debate in the tech world. While some industry leaders support the legislation, others, including prominent AI researchers, argue it could stifle innovation and favor large tech companies.
3 Sources
A proposed California bill aimed at regulating artificial intelligence has created a divide among tech companies in Silicon Valley. The legislation has garnered support from some firms while facing opposition from others, highlighting the complex challenges in AI governance.
4 Sources
California's legislature has approved a groundbreaking bill to regulate large AI models, setting the stage for potential nationwide standards. The bill, if signed into law, would require companies to evaluate AI systems for risks and implement mitigation measures.
7 Sources
California's AI safety bill, SB 1047, moves forward with significant amendments following tech industry input. The bill aims to regulate AI development while balancing innovation and safety concerns.
10 Sources