3 Sources
[1]
California AI bill inches closer to the finish line
Why it matters: Gov. Gavin Newsom has not definitively said whether he will sign the bill, SB53 -- but its momentum is strong, and if signed, the law would have major implications for the country's biggest AI names.

What they're saying: Speaking to Bill Clinton about AI at a Clinton Global Initiative event in New York on Wednesday, Newsom said that California "has a sense of responsibility and accountability to lead, so we support risk-taking, but not recklessness."
* "We have a bill that's on my desk that we think strikes the right balance. And we worked with industry, but we didn't submit to industry. We're not doing things to them, but we're not doing things necessarily for them."
Newsom's office, when asked by Axios if the governor meant SB53, said that he "did not specify which bill he was referencing."
* "Beyond that, we don't typically comment on pending legislation," said Newsom spokesperson Tara Gallegos.

How it works: SB53, authored by state Sen. Scott Wiener, would require large AI developers to publicly disclose their safety protocols and to report safety incidents.
* The legislation would also create whistleblower protections and establish a public cloud computing cluster to give smaller developers and researchers access to compute.
* Anthropic is the only major AI company to publicly support the bill.

What's next: Newsom has until Oct. 12 to sign or veto the legislation.
[2]
'We have no peers:' California AI efforts clash with GOP push for hands-off approach
California state lawmakers have ramped up efforts to regulate artificial intelligence (AI) in their latest session, putting the Golden State on a collision course with a Republican effort to impose a national ban on such policies. As the AI race heats up, President Trump and GOP lawmakers have sought to eliminate regulations they argue could stifle innovation, while states forge ahead with attempts to place guardrails on the technology.

But California sits in a unique position. As the home of Silicon Valley and the center of the AI boom, it could play an outsized role in defining the future of AI regulation -- both inside and outside its borders.

"We dominate in artificial intelligence. We have no peers," California Gov. Gavin Newsom (D) said Wednesday. "As a consequence of having so much leadership residing in such a concentrated place, California, we have a sense of responsibility and accountability to lead, so we support risk-taking, but not recklessness," he added.

The California legislature passed several AI bills in the session that ended in mid-September. Most closely watched is Senate Bill 53, legislation that would require developers of large frontier models to publish frameworks detailing how they assess and mitigate catastrophic risks. It is currently awaiting the governor's signature.

"Because California is such a large state, any AI regulations that it enacts could serve as a potential de facto national standard," said Andrew Lokay, a senior research analyst at Beacon Policy Advisors. "Companies could decide to simplify compliance by applying California's rules to their operations beyond the Golden State," he continued.

Washington, D.C., is taking notice. Sriram Krishnan, a White House senior policy advisor for AI, argued last week that they "don't want California to set the rules for AI across the country."

Rep. Kevin Kiley (R-Calif.) acknowledged his home state "continues to be the center of breathtaking innovation worldwide" but called into question whether it should be the one to regulate AI. "The notion that this is the right body to regulate the most powerful technology in human history, whose workings are actually largely beyond the understanding even of the technology's creators, is a fairly fantastical notion," he said at a hearing last week. "I do think the risk that California is going to drive AI policy for the entire country is a very real one, and I think that a national framework that seeks to stop that from happening is needed and appropriate," Kiley added.

With a heavy focus on boosting innovation, the Trump administration and GOP lawmakers have increasingly pushed to preempt state AI laws that they argue could weigh down the technology. Earlier this year, several Republicans sought to include a provision in Trump's "big, beautiful bill" that would have barred state AI regulation for 10 years.

The effort exposed a rift within the GOP. Some lawmakers, including Sen. Marsha Blackburn (R-Tenn.) and Rep. Marjorie Taylor Greene (R-Ga.), voiced concerns about the restrictions on states' rights and the preemption of AI-related protections. The Senate ultimately voted 99-1 to remove the provision. Despite the setback, Sen. Ted Cruz (R-Texas), chair of the Senate Commerce Committee, said last week that the moratorium push is "not dead at all."
This focus on state laws was also reflected in Trump's AI Action Plan, which called for limiting funding to states over AI rules, tasking the Federal Communications Commission (FCC) with evaluating whether state laws interfere with its mandate and reviewing Federal Trade Commission (FTC) investigations that could "unduly burden AI innovation."

As the president charges ahead with this endeavor, Lokay noted that California's push for AI regulation could provide more momentum to efforts to preempt state rules. However, he underscored that there are still many obstacles to passing such a moratorium. Beyond GOP infighting, Congress has long struggled to pass tech legislation, with kids' online safety and digital privacy efforts repeatedly falling short. While lawmakers have taken an interest in AI, a federal framework still appears far off.

"A year ago my response to this kind of legislation was the states should not be doing this. We should leave it to the federal government," Appian CEO Matt Calkins said in a statement to The Hill. "One year later and the situation has changed. The federal government is not taking the lead."

"In fact, it is flirting with the idea of forbidding or preventing states from creating AI regulation, and so, in the face of that aggressive pronunciation of a federal level interest in AI regulation, I do think it should come from somewhere," he continued. "I'm sorry to see it come from the states, but I think that is one possible way we could arrive at it, it's just going to be more painful."

Anthropic appeared to make a similar calculus in throwing its support behind California's S.B. 53, endorsing the legislation in early September. "While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won't wait for consensus in Washington," it wrote in a blog post earlier this month.

The reception to S.B. 53 so far has been markedly better than its predecessor's. The bill, put forward by California state Sen. Scott Wiener (D), is widely viewed as a successor to S.B. 1047 from last year, which cleared the state legislature but was vetoed by Newsom. S.B. 1047 offered a much more heavy-handed approach to AI regulation, pushing for models to undergo safety testing before public release and seeking to hold developers liable for potential severe harm. The measure drew a rare rebuke from several California Democrats in Congress. Former Speaker Nancy Pelosi, Reps. Zoe Lofgren, Ro Khanna and others argued the legislation was overly restrictive and sought to tackle an issue that should be left to federal lawmakers. Anthropic offered lukewarm support for S.B. 1047 last year, suggesting "its benefits likely outweigh its costs" after some amendments were made to the legislation.

S.B. 53 hasn't won over all of its detractors. OpenAI has remained critical of California's approach, warning Newsom of the risk that the state could "inadvertently create a 'CEQA for AI innovation.'" The California Environmental Quality Act (CEQA) has faced criticism for making it more difficult to build in the state. But the state's AI bill has largely received less pushback this year. Khanna told The Hill he views the legislation as a "strong start," underscoring that "they made a number of revisions on it that avoid downstream excessive liability."
And Newsom hinted at his support Wednesday, noting that "we have a bill that's on my desk that we think strikes the right balance, and we worked with industry, but we didn't submit to industry."
[3]
California Eyes Regulating AI Models
Newsom has publicly suggested support for AI oversight legislation. According to Politico, at the Clinton Global Initiative, he said, "We have a bill that's on my desk that we think strikes the right balance ... We worked with industry, but we didn't submit to industry. We're not doing things to them, but we're not necessarily doing things for them." He did not specify which measure he meant, but observers widely believe he was referring to SB 53. Politico reported that his remarks follow months of AI policy maneuvering, and state senators told reporters they hope he was referring to SB 53. Bloomberg Law reported that Newsom has endorsed stronger oversight of the AI sector and highlighted California's role in setting early guardrails.

What SB 53 Would Require

Under SB 53, large frontier AI developers would be required to publish safety protocols explaining how they test for catastrophic risks and apply mitigations. Developers would also have to publish transparency reports for new or modified models, summarizing risk assessments, dangerous capability thresholds, intended uses and safeguards. Critical safety incidents would have to be reported to California's Office of Emergency Services within 15 days, or within 24 hours if an imminent threat exists. A catastrophic incident is defined as one causing at least 50 deaths or $1 billion in damage. The bill also protects whistleblowers, requiring anonymous reporting channels and barring retaliation. Starting in 2030, annual independent audits would be mandatory, with summaries sent to the Attorney General. Penalties escalate up to $10 million for repeated catastrophic-risk violations.

Rooted in the Governor's Expert Commission

SB 53 is closely aligned with the California Report on Frontier AI Policy, released June 17 by an expert working group convened by Newsom. That report prioritized evidence-based oversight, standardized disclosures, incident tracking and protections for insiders over rigid micromanagement. It also advocated for adaptive thresholds and infrastructure support, core ideas that are echoed in SB 53.

"The final version of SB 53 will ensure California continues to lead not only on AI innovation, but on responsible practices to help ensure that innovation is safe and secure," said Sen. Wiener, the bill's sponsor.

AI company Anthropic has publicly endorsed SB 53, stating that it provides clearer expectations and establishes guardrails without imposing rigid mandates. Its support reinforces the perception that the bill strikes a nuanced balance between regulation and innovation.

The Stakes for California and AI Policy

California's push comes amid growing uncertainty at the federal level. PYMNTS recently reported that Colorado has delayed the implementation of its AI law until June 2026, giving businesses more time to adjust. PYMNTS has also tracked proposals for AI regulatory sandboxes, a framework that allows firms to test new systems under limited oversight.

California's AB 2013 takes effect January 1, 2026. That law requires generative AI developers to post summaries of their training data, including sources and collection timelines, creating a baseline of transparency alongside SB 53's frontier model rules. If Newsom signs SB 53, California would become the first U.S. jurisdiction to impose binding risk rules on AI developers.
California's SB 53, awaiting Governor Newsom's signature, could set a national standard for AI regulation. The bill would require large AI developers to disclose safety protocols and report incidents, potentially reshaping the AI industry landscape.
California is on the brink of setting a new standard for artificial intelligence (AI) regulation with Senate Bill 53 (SB 53), a groundbreaking piece of legislation that could have far-reaching implications for the AI industry. The bill, currently awaiting Governor Gavin Newsom's signature, has gained significant momentum and is poised to make California a leader in AI oversight [1].

SB 53, authored by state Senator Scott Wiener, introduces several crucial requirements for large AI developers [3]:
* Publishing safety protocols explaining how they test for catastrophic risks and apply mitigations
* Publishing transparency reports for new or modified models, summarizing risk assessments, dangerous capability thresholds, intended uses and safeguards
* Reporting critical safety incidents to California's Office of Emergency Services within 15 days, or within 24 hours if an imminent threat exists
* Protecting whistleblowers through anonymous reporting channels and a bar on retaliation
* Undergoing annual independent audits starting in 2030, with summaries sent to the Attorney General
The bill defines a catastrophic incident as one causing at least 50 deaths or $1 billion in damage. Penalties for repeated violations can escalate up to $10 million, underscoring the seriousness of compliance [3].

As the home of Silicon Valley and the epicenter of AI innovation, California carries significant weight when it makes regulatory moves. Governor Newsom emphasized this unique position, stating, "We dominate in artificial intelligence. We have no peers" [2]. This concentration of AI leadership places a sense of responsibility on California to lead in both innovation and responsible practices.

Interestingly, Anthropic stands out as the only major AI company to publicly support SB 53. The company views the bill as striking a balance between regulation and innovation, providing clearer expectations without imposing rigid mandates [3].
California's move toward AI regulation has sparked a debate at the national level. Some Republican lawmakers, including Rep. Kevin Kiley (R-Calif.), have expressed concerns about California setting AI policy for the entire country, and there is a push for a national framework to prevent state-level regulations from dominating the landscape [2].

The Trump administration and GOP lawmakers have advocated for a hands-off approach focused on boosting innovation. Earlier attempts to include a provision barring state AI regulation for 10 years in a federal bill exposed rifts within the Republican Party [2].

Governor Newsom has until October 12 to sign or veto SB 53 [1]. If signed, it would make California the first U.S. jurisdiction to impose binding risk rules on AI developers. This move, coupled with AB 2013 (effective January 1, 2026), which requires generative AI developers to disclose training data information, could establish California as a trailblazer in comprehensive AI regulation [3].
Summarized by Navi