2 Sources
[1]
California lawmaker behind SB 1047 reignites push for mandated AI safety reports | TechCrunch
California State Senator Scott Wiener on Wednesday introduced new amendments to his latest bill, SB 53, that would require the world's largest AI companies to publish safety and security protocols and issue reports when safety incidents occur. If signed into law, the measure would make California the first state to impose meaningful transparency requirements on leading AI developers, likely including OpenAI, Google, Anthropic, and xAI.
Senator Wiener's previous AI bill, SB 1047, included similar requirements for AI model developers to publish safety reports. However, Silicon Valley fought ferociously against that bill, and it was ultimately vetoed by Governor Gavin Newsom. The governor then called for a group of AI leaders, including Fei-Fei Li, the leading Stanford researcher and co-founder of World Labs, to form a policy group and set goals for the state's AI safety efforts.
California's AI policy group recently published its final recommendations, citing a need for "requirements on industry to publish information about their systems" in order to establish a "robust and transparent evidence environment." Senator Wiener's office said in a press release that SB 53's amendments were heavily influenced by this report.
"The bill continues to be a work in progress, and I look forward to working with all stakeholders in the coming weeks to refine this proposal into the most scientific and fair law it can be," Senator Wiener said in the release.
SB 53 aims to strike the balance that Governor Newsom claimed SB 1047 failed to achieve: creating meaningful transparency requirements for the largest AI developers without thwarting the rapid growth of California's AI industry.
"These are concerns that my organization and others have been talking about for a while," said Nathan Calvin, VP of State Affairs for the nonprofit AI safety group Encode, in an interview with TechCrunch. "Having companies explain to the public and government what measures they're taking to address these risks feels like a bare minimum, reasonable step to take."
The bill also creates whistleblower protections for employees of AI labs who believe their company's technology poses a "critical risk" to society, defined in the bill as contributing to the death or injury of more than 100 people, or more than $1 billion in damage. Additionally, the bill aims to create CalCompute, a public cloud computing cluster to support startups and researchers developing large-scale AI.
With the new amendments, SB 53 now heads to the California State Assembly Committee on Privacy and Consumer Protection for approval. Should it pass there, the bill will need to clear several other legislative bodies before reaching Governor Newsom's desk.
On the other side of the U.S., New York Governor Kathy Hochul is considering a similar AI safety bill, the RAISE Act, which would also require large AI developers to publish safety and security reports.
The fate of state AI laws like the RAISE Act and SB 53 was briefly in jeopardy as federal lawmakers considered a 10-year moratorium on state AI regulation, an attempt to limit the "patchwork" of AI laws that companies would have to navigate. However, that proposal failed in a 99-1 Senate vote earlier in July.
"Ensuring AI is developed safely should not be controversial -- it should be foundational," said Geoff Ralston, the former president of Y Combinator, in a statement to TechCrunch. "Congress should be leading, demanding transparency and accountability from the companies building frontier models. But with no serious federal action in sight, states must step up. California's SB 53 is a thoughtful, well-structured example of state leadership."
Up to this point, lawmakers have failed to get AI companies on board with state-mandated transparency requirements. Anthropic has broadly endorsed the need for increased transparency into AI companies, and has even expressed modest optimism about the recommendations from California's AI policy group. But companies such as OpenAI, Google, and Meta have been more resistant to these efforts.
Leading AI model developers typically publish safety reports for their models, but they have been less consistent in recent months. Google, for example, did not publish a safety report for Gemini 2.5 Pro, its most advanced model to date, until months after it was made available. OpenAI likewise declined to publish a safety report for its GPT-4.1 model; a third-party study later suggested the model may be less aligned than its predecessors.
SB 53 represents a toned-down version of previous AI safety bills, but it could still force AI companies to publish more information than they do today. For now, they will be watching closely as Senator Wiener once again tests those boundaries.
[2]
California Lawmaker Pushes to Require AI Companies to Release Safety Policies
A California lawmaker is making another effort to regulate artificial intelligence in the state after legislation that would have held large companies liable for harm caused by their technology was vetoed last year by Governor Gavin Newsom. State Senator Scott Wiener, a San Francisco Democrat, has introduced a bill that would require companies developing AI models above a certain computing performance threshold to publicly release safety and security protocols that assess the potential catastrophic risks to humanity from the technology. Under the law, AI companies also would need to report any "critical safety incidents," such as theft of sensitive technical details, to the state attorney general. Companies that may be affected by the proposed legislation include OpenAI, Alphabet Inc.'s Google and Anthropic.
California State Senator Scott Wiener introduces amendments to SB 53, aiming to require large AI companies to publish safety protocols and incident reports, potentially making California the first state to impose such transparency requirements.
California State Senator Scott Wiener has reignited the debate on AI regulation with new amendments to Senate Bill 53 (SB 53). The bill aims to mandate the publication of safety and security protocols by major AI companies, as well as require reporting of safety incidents [1]. If passed, this legislation would make California the first state to impose significant transparency requirements on leading AI developers, potentially affecting industry giants like OpenAI, Google, Anthropic, and xAI [1].
This move follows the veto of Senator Wiener's previous bill, SB 1047, which faced strong opposition from Silicon Valley. Governor Gavin Newsom vetoed the bill and subsequently established a policy group of AI leaders to set goals for the state's AI safety efforts [1]. The amendments to SB 53 have been heavily influenced by the recommendations of this policy group, which emphasized the need for "requirements on industry to publish information about their systems" [1].
The amended bill includes several notable provisions:
Mandatory Safety Reports: Companies developing AI models above a certain computing performance threshold would be required to publicly release safety and security protocols [2].
Incident Reporting: AI companies would need to report any "critical safety incidents" to the state attorney general [2].
Whistleblower Protections: The bill offers protection for employees of AI labs who believe their company's technology poses a "critical risk" to society [1].
Public Computing Resource: SB 53 proposes the creation of CalCompute, a public cloud computing cluster to support startups and researchers in developing large-scale AI [1].
The AI industry has shown mixed reactions to transparency requirements. While companies like Anthropic have broadly endorsed the need for increased transparency, others such as OpenAI, Google, and Meta have been more resistant [1]. The bill represents a toned-down version of previous AI safety proposals but could still force AI companies to disclose more information than they currently do [1].
California's push for AI regulation is not isolated. In New York, Governor Kathy Hochul is considering a similar AI safety bill, the RAISE Act [1]. These state-level efforts gained momentum after a federal proposal for a 10-year moratorium on state AI regulation failed in a 99-1 Senate vote [1].
The amended SB 53 now heads to the California State Assembly Committee on Privacy and Consumer Protection for approval. If successful, it will need to pass through several other legislative bodies before reaching Governor Newsom's desk [1]. Senator Wiener has expressed openness to further refinements, stating, "The bill continues to be a work in progress, and I look forward to working with all stakeholders in the coming weeks to refine this proposal into the most scientific and fair law it can be" [1].
As the debate over AI regulation continues, the fate of SB 53 will be closely watched by both the tech industry and policymakers across the nation. Its success or failure could set a precedent for how states approach the challenge of ensuring AI safety and transparency in the absence of comprehensive federal regulation.