2 Sources
[1]
California is trying to regulate its AI giants -- again
Last September, all eyes were on Senate Bill 1047 as it made its way to California Governor Gavin Newsom's desk -- and died there as he vetoed the buzzy piece of legislation. SB 1047 would have required makers of all large AI models, particularly those that cost $100 million or more to train, to test them for specific dangers. AI industry whistleblowers weren't happy about the veto, and most large tech companies were.

But the story didn't end there. Newsom, who had felt the legislation was too stringent and one-size-fits-all, tasked a group of leading AI researchers with helping propose an alternative plan -- one that would support the development and the governance of generative AI in California, along with guardrails for its risks. On Tuesday, that report was published.

The authors of the 52-page "California Report on Frontier AI Policy" said that AI capabilities -- including models' chain-of-thought "reasoning" abilities -- have "rapidly improved" since Newsom's decision to veto SB 1047. Using historical case studies, empirical research, modeling, and simulations, they suggested a new framework that would require more transparency and independent scrutiny of AI models. Their report arrives against the backdrop of a possible 10-year moratorium on states regulating AI, backed by a Republican Congress and companies like OpenAI.

The report -- co-led by Fei-Fei Li, Co-Director of the Stanford Institute for Human-Centered Artificial Intelligence; Mariano-Florentino Cuéllar, President of the Carnegie Endowment for International Peace; and Jennifer Tour Chayes, Dean of the UC Berkeley College of Computing, Data Science, and Society -- concluded that frontier AI breakthroughs in California could heavily impact agriculture, biotechnology, clean tech, education, finance, medicine, and transportation. Its authors agreed it's important not to stifle innovation and to "ensure regulatory burdens are such that organizations have the resources to comply." But reducing risks is still paramount, they wrote: "Without proper safeguards... powerful AI could induce severe and, in some cases, potentially irreversible harms."

The group published a draft version of the report in March for public comment. But even since then, they wrote in the final version, evidence that these models contribute to "chemical, biological, radiological, and nuclear (CBRN) weapons risks... has grown." Leading companies, they added, have self-reported concerning spikes in their models' capabilities in those areas.

The authors made several changes to the draft. They now note that California's new AI policy will need to navigate quickly changing "geopolitical realities." They added more context about the risks that large AI models pose, and they took a harder line on how companies are categorized for regulation, arguing that a focus purely on how much compute their training required is not the best approach. AI's training needs are changing all the time, the authors wrote, and a compute-based definition ignores how these models are adopted in real-world use cases. Compute can serve as an "initial filter to cheaply screen for entities that may warrant greater scrutiny," but factors like initial risk evaluations and downstream impact assessments are key.
That's especially important because the AI industry is still the Wild West when it comes to transparency, with little agreement on best practices and "systemic opacity in key areas" like how data is acquired, safety and security processes, pre-release testing, and potential downstream impact, the authors wrote. The report calls for whistleblower protections, third-party evaluations with safe harbor for researchers conducting those evaluations, and sharing information directly with the public, to enable transparency that goes beyond what current leading AI companies choose to disclose.

One of the report's lead writers, Scott Singer, told The Verge that AI policy conversations have "completely shifted on the federal level" since the draft report. He argued that California, however, could help lead a "harmonization effort" among states for "commonsense policies that many people across the country support." That's a contrast to the jumbled patchwork that AI moratorium supporters claim state laws will create.

In an op-ed earlier this month, Anthropic CEO Dario Amodei called for a federal transparency standard, requiring leading AI companies "to publicly disclose on their company websites ... how they plan to test for and mitigate national security and other catastrophic risks." But even steps like that aren't enough, the authors of Tuesday's report wrote, because "for a nascent and complex technology being developed and adopted at a remarkably swift pace, developers alone are simply inadequate at fully understanding the technology and, especially, its risks and harms."

That's why one of the key tenets of Tuesday's report is the need for third-party risk assessment. The authors concluded that such assessments would incentivize companies like OpenAI, Anthropic, Google, Microsoft, and others to amp up model safety, while helping paint a clearer picture of their models' risks. Currently, leading AI companies typically do their own evaluations or hire second-party contractors to do so. But third-party evaluation is vital, the authors say. Not only are "thousands of individuals... willing to engage in risk evaluation, dwarfing the scale of internal or contracted teams," but groups of third-party evaluators also have "unmatched diversity, especially when developers primarily reflect certain demographics and geographies that are often very different from those most adversely impacted by AI."

But if you're allowing third-party evaluators to test the risks and blind spots of your powerful AI models, you have to give them access -- for meaningful assessments, a lot of access. And that's something companies are hesitant to do. It's not even easy for second-party evaluators to get that level of access. Metr, a company OpenAI partners with to safety-test its models, wrote in a blog post that the firm wasn't given as much time to test OpenAI's o3 model as it had been with past models, and that OpenAI didn't give it enough access to data or to the model's internal reasoning. Those limitations, Metr wrote, "prevent us from making robust capability assessments." OpenAI later said it was exploring ways to share more data with firms like Metr.

Even an API or disclosure of a model's weights may not let third-party evaluators test effectively for risks, the report noted, and companies could use "suppressive" terms of service to ban or threaten legal action against independent researchers who uncover safety flaws.
Last March, more than 350 AI industry researchers and others signed an open letter calling for a "safe harbor" for independent AI safety testing, similar to existing protections for third-party cybersecurity testers in other fields. Tuesday's report cites that letter and calls for big changes, as well as reporting options for people harmed by AI systems. "Even perfectly designed safety policies cannot prevent 100% of substantial, adverse outcomes," the authors write. "As foundation models are widely adopted, understanding harms that arise in practice is increasingly important."
[2]
California AI Policy Report Warns of 'Irreversible Harms'
"The opportunity to establish effective AI governance frameworks may not remain open indefinitely," says the report, which was published on June 17. Citing new evidence that AI can help users source nuclear-grade uranium and is on the cusp of letting novices create biological threats, it notes that the cost for inaction at this current moment could be "extremely high." The 53-page document stems from a working group established by Governor Newsom, in a state that has emerged as a central arena for AI legislation. With no comprehensive federal regulation on the horizon, state-level efforts to govern the technology have taken on outsized significance, particularly in California, which is home to many of the world's top AI companies. In 2023, California Senator Scott Wiener sponsored a first-of-its-kind bill, SB 1047, which would have required that large-scale AI developers implement rigorous safety testing and mitigation for their systems, but which critics feared would stifle innovation and squash the open-source AI community. The bill passed both state houses despite fierce industry opposition, but Governor Newsom ultimately vetoed it last September, deeming it "well-intentioned" but not the "best approach to protecting the public." Following that veto, Newsom launched the working group to "develop workable guardrails for deploying GenAI." The group was co-led by "godmother of AI" Fei-Fei Li, a prominent opponent of SB 1047, alongside Mariano-Florentino Cuéllar, member of the National Academy of Sciences Committee on Social and Ethical Implications of Computing Research, and Jennifer Tour Chayes dean of the College of Computing, Data Science, and Society at UC Berkeley. The working group evaluated AI's progress, SB 1047's weak points, and solicited feedback from more than 60 experts. "As the global epicenter of AI innovation, California is uniquely positioned to lead in unlocking the transformative potential of frontier AI," Li said in a statement. "Realizing this promise, however, demands thoughtful and responsible stewardship -- grounded in human-centered values, scientific rigor, and broad-based collaboration," she said.
California releases a comprehensive report proposing a new framework for AI regulation, emphasizing transparency, third-party evaluations, and risk mitigation in response to rapidly advancing AI capabilities and potential threats.
In a significant move toward regulating artificial intelligence, California has released a comprehensive report proposing a new framework for AI governance. The 52-page "California Report on Frontier AI Policy" comes in the wake of Governor Gavin Newsom's veto of Senate Bill 1047 last September, which would have required extensive testing of large AI models for specific dangers 1.
The report, co-led by prominent AI researchers and policy experts, acknowledges the rapid improvement in AI capabilities since the veto of SB 1047. It emphasizes the need for a balanced approach that supports AI development while implementing necessary guardrails 1.
Key recommendations include:
Increased Transparency: The report calls for more openness from AI companies regarding data acquisition, safety processes, and potential downstream impacts 1.
Third-Party Evaluations: Independent assessments of AI models are proposed to ensure unbiased risk analysis 1.
Whistleblower Protections: To encourage transparency and accountability within the AI industry 1.
The report stresses the urgency of establishing effective AI governance frameworks, warning that the window for action may not remain open indefinitely. It cites new evidence of AI's potential to aid in sourcing nuclear-grade uranium and creating biological threats, underscoring the high cost of inaction 2.
This initiative comes against the backdrop of a proposed 10-year moratorium on state-level AI regulation, supported by some in Congress and AI companies. However, the report's authors argue that California could lead a "harmonization effort" among states for common-sense policies 1.
The proposed framework moves away from the compute-based definition of large AI models, recognizing that training needs are constantly evolving. Instead, it suggests a more nuanced approach considering factors like initial risk evaluations and downstream impact assessments 1.
As home to many of the world's top AI companies, California's efforts to govern AI technology have taken on significant importance, especially given the lack of comprehensive federal regulation. The state's approach could set a precedent for AI governance nationwide 2.
This report represents a crucial step in California's ongoing efforts to balance innovation with responsible AI development, potentially shaping the future of AI regulation both within the state and beyond.