6 Sources
[1]
California lawmakers pass AI safety bill SB 53 -- but Newsom could still veto | TechCrunch
California's state senate gave final approval early on Saturday morning to a major AI safety bill setting new transparency requirements on large companies. As described by its author, state senator Scott Wiener, SB 53 "requires large AI labs to be transparent about their safety protocols, creates whistleblower protections for [employees] at AI labs & creates a public cloud to expand compute access (CalCompute)." The bill now goes to California Governor Gavin Newsom to sign or veto.

He has not commented publicly on SB 53, but last year, he vetoed a more expansive safety bill also authored by Wiener, while signing narrower legislation targeting issues like deepfakes. At the time, Newsom acknowledged the importance of "protecting the public from real threats posed by this technology," but criticized Wiener's previous bill for applying "stringent standards" to large models regardless of whether they were "deployed in high-risk environments, [involved] critical decision-making or the use of sensitive data." Wiener said the new bill was influenced by recommendations from a policy panel of AI experts that Newsom convened after his veto.

Politico also reports that SB 53 was recently amended so that companies developing "frontier" AI models while bringing in less than $500 million in annual revenue will only need to disclose high-level safety details, while companies making more than that will need to provide more detailed reports.

The bill has been opposed by a number of Silicon Valley companies, VC firms, and lobbying groups. In a recent letter to Newsom, OpenAI did not mention SB 53 specifically but argued that to avoid "duplication and inconsistencies," companies should be considered compliant with statewide safety rules as long as they meet federal or European standards. And Andreessen Horowitz's head of AI policy and chief legal officer recently claimed that "many of today's state AI bills -- like proposals in California and New York -- risk" crossing a line by violating constitutional limits on how states can regulate interstate commerce. a16z's co-founders had previously pointed to tech regulation as one of the factors leading them to back Donald Trump's bid for a second term. The Trump administration and its allies subsequently called for a 10-year ban on state AI regulation.

Anthropic, meanwhile, has come out in favor of SB 53. "We have long said we would prefer a federal standard," said Anthropic co-founder Jack Clark in a post. "But in the absence of that this creates a solid blueprint for AI governance that cannot be ignored."
[2]
Anthropic endorses California's AI safety bill, SB 53 | TechCrunch
On Monday, Anthropic announced an official endorsement of SB 53, a California bill from state Senator Scott Wiener that would impose first-in-the-nation transparency requirements on the world's largest AI model developers. Anthropic's endorsement marks a rare and major win for SB 53, at a time when major tech groups like the CTA and the Chamber of Progress are lobbying against the bill.

"While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won't wait for consensus in Washington," said Anthropic in a blog post. "The question isn't whether we need AI governance -- it's whether we'll develop it thoughtfully today or reactively tomorrow. SB 53 offers a solid path toward the former."

If passed, SB 53 would require frontier AI model developers like OpenAI, Anthropic, Google, and xAI to develop safety frameworks, as well as release public safety and security reports before deploying powerful AI models. The bill would also establish whistleblower protections for employees who come forward with safety concerns. Senator Wiener's bill specifically focuses on limiting AI models from contributing to "catastrophic risks," which the bill defines as the death of at least 50 people or more than a billion dollars in damages. SB 53 focuses on the extreme side of AI risk -- limiting AI models from being used to provide expert-level assistance in the creation of biological weapons, or being used in cyberattacks -- rather than more near-term concerns like AI deepfakes or sycophancy.

California's Senate approved a prior version of SB 53, but still needs to hold a final vote on the bill before it can advance to the governor's desk. Governor Gavin Newsom has stayed silent on the bill so far, although he vetoed Senator Wiener's last AI safety bill, SB 1047.

Bills regulating frontier AI model developers have faced significant pushback from both Silicon Valley and the Trump administration, both of which argue that such efforts could limit America's innovation in the race against China. Investors like Andreessen Horowitz and Y Combinator led some of the pushback against SB 1047, and in recent months, the Trump administration has repeatedly threatened to block states from passing AI regulation altogether. One of the most common arguments against AI safety bills is that states should leave the matter to the federal government. Andreessen Horowitz's Head of AI Policy, Matt Perault, and Chief Legal Officer, Jai Ramaswamy, published a blog post last week arguing that many of today's state AI bills risk violating the Constitution's Commerce Clause -- which limits state governments from passing laws that go beyond their borders and impair interstate commerce.

However, Anthropic co-founder Jack Clark argues in a post on X that the tech industry will build powerful AI systems in the coming years, and can't wait for the federal government to act. "We have long said we would prefer a federal standard," said Clark. "But in the absence of that this creates a solid blueprint for AI governance that cannot be ignored."

OpenAI's Chief Global Affairs Officer, Chris Lehane, sent a letter to Governor Newsom in August arguing that he should not pass any AI regulation that would push startups out of California -- although the letter did not mention SB 53 by name. OpenAI's former Head of Policy Research, Miles Brundage, said in a post on X that Lehane's letter was "filled with misleading garbage about SB 53 and AI policy generally."
Notably, SB 53 aims to regulate only the world's largest AI companies -- specifically those that generated gross revenue of more than $500 million. Despite the criticism, policy experts say SB 53 is a more modest approach than previous AI safety bills. Dean Ball, a Senior Fellow at the Foundation for American Innovation and former White House AI policy advisor, said in an August blog post that he believes SB 53 now has a good chance of becoming law. Ball, who criticized SB 1047, said SB 53's drafters have "shown respect for technical reality," as well as a "measure of legislative restraint." Senator Wiener previously said that SB 53 was heavily influenced by an expert policy panel Governor Newsom convened -- co-led by leading Stanford researcher and co-founder of World Labs, Fei-Fei Li -- to advise California on how to regulate AI.

Most AI labs already have some version of the internal safety policy that SB 53 requires. OpenAI, Google DeepMind, and Anthropic regularly publish safety reports for their models. However, these companies are bound by nothing but their own commitments, and they sometimes fall behind their self-imposed safety pledges. SB 53 aims to set these requirements as state law, with financial repercussions if an AI lab fails to comply. Earlier in September, California lawmakers amended SB 53 to remove a section of the bill that would have required AI model developers to be audited by third parties. Tech companies have previously fought these types of third-party audits in other AI policy battles, arguing that they're overly burdensome.
[3]
California Lawmakers Pass AI Safety Bill, Pending Newsom's Approval
California's state Senate has approved an AI safety bill that would force AI labs working on "frontier models" to be transparent about their safety protocols and create whistleblower protections for employees, Politico reports. The bill, SB 53, would also establish new technical initiatives, including the creation of a public cloud to expand compute access, known as CalCompute, which could be based at the University of California.

However, the legislation still needs approval from Governor Gavin Newsom to become law. He vetoed a comparable bill late last year, put forward by state Sen. Scott Wiener (D-San Francisco). That bill was later revised following consultations with a California tech policy group.

Frontier models are AI systems trained on broad datasets for general-purpose applications, for example ChatGPT or Google Gemini, rather than niche mobile apps. Not all AI firms working on these frontier models are treated equally under the bill. While companies making less than $500 million a year in revenue would still face scrutiny, they would not be expected to conform to the same level of oversight as firms generating more than that.

The bill has garnered support from some of the largest AI firms in the industry, including Anthropic, maker of the Claude chatbot. The company's co-founder, Jack Clark, said it "creates a solid blueprint for AI governance that cannot be ignored" in the absence of a federal standard.

But many other firms have been far less keen on the legislation. The head of AI policy and chief legal officer at Andreessen Horowitz -- one of the largest investors in the tech industry -- argued that bills like this "require complex, costly administrative processes that don't meaningfully improve safety, and may even make AI products worse." He added that "little tech," such as startups, has the fewest options to avoid these burdens. Meanwhile, lobbying groups such as the California Chamber of Commerce and TechNet criticized "the bill's focus on 'large developers' to the exclusion of other developers of models with advanced capabilities that pose risks of catastrophic harm," in a letter seen by Politico.
[4]
California Lawmakers Once Again Challenge Newsom's Tech Ties with AI Bill
Last year, California Governor Gavin Newsom vetoed a wildly popular (among the public) and wildly controversial (among tech companies) bill that would have established robust safety guidelines for the development and operation of artificial intelligence models. Now he'll have a second shot, this time with at least part of the tech industry giving him the green light.

On Saturday, California lawmakers passed Senate Bill 53, a landmark piece of legislation that would require AI companies to submit to new safety tests. Senate Bill 53, which now awaits the governor's signature to become law in the state, would require companies building "frontier" AI models (systems that require massive amounts of data and computing power to operate) to provide more transparency into their processes. That would include disclosing safety incidents involving dangerous or deceptive behavior by autonomous AI systems, providing more clarity into safety and security protocols and risk evaluations, and providing protections for whistleblowers who are concerned about the potential harms that may come from models they are working on.

The bill, which would apply to the work of companies like OpenAI, Google, xAI, Anthropic, and others, has certainly been dulled from previous attempts to set up a broad safety framework for the AI industry. The bill that Newsom vetoed last year, for instance, would have established a mandatory "kill switch" for models to address the potential of them going rogue. That's nowhere to be found here. An earlier version of SB 53 also applied the safety requirements to smaller companies, but that has changed. In the version that passed the Senate and Assembly, companies bringing in less than $500 million in annual revenue only have to disclose high-level safety details rather than more granular information, per Politico, a change made in part at the behest of the tech industry.

Whether that's enough to satisfy Newsom (or more specifically, satisfy the tech companies from whom he would like to continue receiving campaign contributions) is yet to be seen. Anthropic recently softened on the legislation, opting to throw its support behind it just days before it officially passed. But trade groups like the Consumer Technology Association (CTA) and the Chamber of Progress, which count companies like Amazon, Google, and Meta among their members, have come out in opposition to the bill. OpenAI also signaled its opposition to regulations California has been pursuing without specifically naming SB 53.

After the Trump administration tried and failed to implement a 10-year moratorium on states implementing regulations on AI, California has the opportunity to lead on the issue, which makes sense, given that most of the companies at the forefront of the space are operating within its borders. But that fact also seems to be part of the reason Newsom is so shy to pull the trigger on regulations despite all his bluster on many other issues. His political ambitions require money to run, and those companies have a whole lot of it to offer.
[5]
Anthropic backs California bill that mandates AI transparency measures
Artificial intelligence developer Anthropic became the first major tech company on Monday to endorse a California bill that would regulate the most advanced AI models. Proposed by state Sen. Scott Wiener, S.B. 53 would, if passed, create the first broad legal requirements for large developers of AI models in the United States. Among other conditions, the bill would require large AI companies offering services in California to create, publicly share, and adhere to safety-focused guidelines and procedures stipulating how each company attempts to mitigate risks from AI. The bill would also strengthen whistleblower protections by creating stronger pathways for employees to flag concerns about severe or potentially catastrophic risks that might otherwise go unreported.

"With SB 53, developers can compete while ensuring they remain transparent about AI capabilities that pose risks to public safety," Anthropic said in a statement.

The bill largely codifies existing voluntary commitments made by the world's largest AI companies, emphasizing transparency and attention to risks from advanced AI systems. For example, companies including Anthropic, OpenAI, Google, and Meta have already committed to assessing how their products could be used for nefarious purposes and to laying out mitigations to prevent these threats. Recent research has shown that AI models can help users execute cyberattacks and lower barriers to acquiring biological weapons. S.B. 53 would make many of these commitments mandatory, requiring companies to post their approaches to AI risk on their websites and to share summaries of "catastrophic risk" assessments directly with a state-level office.

The new California bill would only apply to AI companies building cutting-edge models that demand massive computing power. Within that subset of AI companies, the strictest requirements in the bill would only apply to those with annual revenues exceeding $500 million. S.B. 53 would also establish an emergency reporting system through which the AI developer or members of the public could report critical safety incidents related to the model.

"Anthropic is a leader on AI safety, and we're really grateful for the company's support," Sen. Wiener told NBC News. The bill appears likely to pass, having received overwhelming support in both the California Assembly and Senate in recent voting rounds. California's Legislature must cast its final vote on the bill by Friday night.

"Frontier AI companies have made many voluntary commitments for safety, often without following through. This legislation takes a small but important first step toward making AI safer by making many of these voluntary commitments mandatory," Center for AI Safety executive director Dan Hendrycks told NBC News. "While we need much more rigorous regulation to manage AI risks, S.B. 53 -- and Anthropic's public support for it -- are an encouraging development."

However, industry trade groups like the Consumer Technology Association (CTA) and the Chamber of Progress are highly critical of the bill. In a post on X last week, the CTA said "California SB 53 and similar bills will weaken California and U.S. leadership in AI by driving investment and jobs to states or countries with less burdensome and conflicting frameworks."

S.B. 53 is an updated, somewhat narrower version of a similar bill proposed by Sen. Wiener last year.
That bill, called S.B. 1047, attracted widespread scrutiny from AI developers including OpenAI and initially Anthropic, in addition to industry trade groups like the Chamber of Progress and prominent Silicon Valley investing firms like Andreessen Horowitz. Critics attacked S.B. 1047's scope and its language about potential penalties in case AI models caused "critical harm." Unlike S.B. 53, S.B. 1047 required developers to undergo annual third-party audits of their adherence to the law and barred developers from releasing models that carried an "unreasonable risk" of individuals using the model to cause critical harms.

S.B. 1047 was passed by California's state Legislature but was ultimately vetoed by Gov. Gavin Newsom, who said the bill would throttle AI development and "slow the pace of innovation." Several commentators and bill proponents argued that critics had misrepresented the bill's contents and that industry lobbying played a key role in the bill's veto.

Following the veto, Newsom formed a working group charged with providing recommendations for a revised version of S.B. 1047. Led by a group of AI experts, the working group provided its recommendations in the California Report on Frontier AI Policy in June. Originally introduced in January, S.B. 53 incorporates many of the working group's recommendations, emphasizing transparency and the verification of commitments from leading AI labs. "We modeled the bill on that report," Sen. Wiener said. "Whereas S.B. 1047 was more of a liability-focused bill, S.B. 53 is more focused on transparency."

Helen Toner, interim director of the Center for Security and Emerging Technology at Georgetown University, highlighted the growing consensus on the need for more insight into frontier AI companies' practices. "S.B. 53 is primarily a transparency bill, and that's no coincidence," Toner said. "The need for more transparency from frontier AI developers is one of the AI policy ideas with the most consensus behind it."

Anthropic agrees. "We've long advocated for thoughtful AI regulation and our support for this bill comes after careful consideration of the lessons learned from California's previous attempt at AI regulation," Anthropic said in its statement.

Any AI regulation passed in California will likely have a significant impact on AI development nationally and around the world, as California is home to dozens of the world's leading AI companies. "California is really at the beating heart of AI innovation, and we should also be at the heart of a creative AI safety approach," Sen. Wiener said.

The role of state legislation is a key issue in AI policy debates, as industry actors, including Anthropic competitor OpenAI, argue that a comprehensive, uniform approach to AI at the federal level is required -- not a collage of state laws. The recently enacted Big Beautiful Bill federal spending package nearly included an amendment prohibiting states from passing AI-related legislation for 10 years, but the amendment was scratched in a late-night reversal. OpenAI's director of global affairs, Chris Lehane, responded to Anthropic's announcement by reaffirming OpenAI's preference for federal regulation. "America leads best with clear, nationwide rules, not a patchwork of state or local regulations," he wrote early Monday afternoon on LinkedIn. Anthropic acknowledged this tension in its Monday statement, but said S.B. 53 is a step in the right direction given federal inaction.
"While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won't wait for consensus in Washington," Anthropic wrote. "Ideally we would have comprehensive, strong pro-safety, pro-innovation federal law in this space. But that has not happened, so California has a responsibility to act," Sen. Wiener said. "I would prefer federal regulation, too, but I'm not holding my breath for that."
[6]
AI Safety Bill Passed By California Legislature, Awaits Governor Gavin Newsom's Signature
The state legislature of California passed a crucial AI safety bill in the early hours of Saturday. The bill, if signed by Governor Gavin Newsom, will require AI companies to disclose and certify their safety testing procedures. The legislation could potentially set a national standard for AI safety, given California's significant influence in the tech industry. However, it presents a challenge for Newsom, who vetoed a similar, more comprehensive bill last year, citing concerns about stifling innovation.

According to the report by Politico, state Senator Scott Wiener, the proposer of both bills, has touted the latest one as the "strongest AI safety regulation in the country." This year's bill is a more streamlined approach, based on a report commissioned by Newsom after his veto last year. The bill has garnered support from AI companies like Anthropic, while others like OpenAI have given tentative approval. However, lobbying groups such as the California Chamber of Commerce and TechNet continue to oppose the bill.

The legislation could set Newsom apart from President Donald Trump, who has been advocating for accelerated AI development to outpace China in the tech race. It also serves as a political test for Wiener, who is vying for Rep. Nancy Pelosi's seat.

The passage of this bill is a significant step toward ensuring the safety of AI technologies. It mandates transparency from AI companies about their safety testing procedures, which could help prevent mishaps and misuse of AI technologies. The bill also sets a precedent for other states and countries to follow, potentially leading to a global standard for AI safety. The legislation likewise serves as a political litmus test for both men: Newsom's decision on the bill could distinguish him from President Trump and set a new direction for AI development in the US, while the bill's success could bolster Wiener's campaign for Rep. Nancy Pelosi's seat.
California lawmakers have passed SB 53, a groundbreaking AI safety bill requiring transparency from large AI companies. The bill, which now awaits Governor Gavin Newsom's signature, has garnered support from some tech firms but faces opposition from others.
California's legislature has passed Senate Bill 53 (SB 53), a pivotal AI safety bill targeting "frontier" models like ChatGPT and Google Gemini [1]. Authored by Senator Scott Wiener, it now awaits Governor Gavin Newsom's signature [3]. (Source: Benzinga)

SB 53 mandates transparency and safety measures for large AI companies. Provisions include disclosing safety protocols, whistleblower protections, a public cloud (CalCompute), reporting dangerous AI incidents, and detailing security measures [2]. A tiered system applies stricter rules to firms earning over $500 million annually [1]. Industry response is mixed: Anthropic supports SB 53 as a blueprint for responsible AI governance [2]. Conversely, trade groups like the Consumer Technology Association and the Chamber of Progress oppose it, citing concerns over innovation and regulatory overreach [5]. OpenAI has indirectly signaled a preference for federal or European standards [1]. (Source: TechCrunch)
Governor Newsom's decision is critical, especially after his veto of a similar bill (SB 1047) last year [4]. The current bill incorporates feedback from a post-veto policy panel [1]. (Source: TechCrunch)
Critics argue state regulations could impede innovation and face constitutional challenges [1]. If signed, SB 53 would make California the first U.S. state with comprehensive legal requirements for major AI developers, potentially setting a national precedent and influencing future federal and global AI governance. This underscores the complex balance between fostering AI innovation and ensuring safety and ethical oversight.

Summarized by Navi