29 Sources
[1]
California's newly signed AI law just gave Big Tech exactly what it wanted
On Monday, California Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act into law, requiring AI companies to disclose their safety practices while stopping short of mandating actual safety testing. The law requires companies with annual revenues of at least $500 million to publish safety protocols on their websites and report incidents to state authorities, but it lacks the stronger enforcement teeth of the bill Newsom vetoed last year after tech companies lobbied heavily against it. The legislation, S.B. 53, replaces Senator Scott Wiener's previous attempt at AI regulation, known as S.B. 1047, which would have required safety testing and "kill switches" for AI systems. Instead, the new law asks companies to describe how they incorporate "national standards, international standards, and industry-consensus best practices" into their AI development, without specifying what those standards are or requiring independent verification. "California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive," Newsom said in a statement, though the law's actual protective measures remain largely voluntary beyond basic reporting requirements. According to the California state government, the state houses 32 of the world's top 50 AI companies, and more than half of global venture capital funding for AI and machine learning startups went to Bay Area companies last year. So while the recently signed bill is state-level legislation, what happens in California AI regulation will have a much wider impact, both by setting legislative precedent and by affecting companies that craft AI systems used around the world.
Transparency instead of testing
Where the vetoed SB 1047 would have mandated safety testing and kill switches for AI systems, the new law focuses on disclosure. Companies must report what the state calls "potential critical safety incidents" to California's Office of Emergency Services and provide whistleblower protections for employees who raise safety concerns. The law defines catastrophic risk narrowly as incidents potentially causing 50+ deaths or $1 billion in damage through weapons assistance, autonomous criminal acts, or loss of control. The attorney general can levy civil penalties of up to $1 million per violation for noncompliance with these reporting requirements. The shift from mandatory safety testing to voluntary disclosure follows a year of intense lobbying. According to The New York Times, Meta and venture capital firm Andreessen Horowitz have pledged up to $200 million to two separate super PACs supporting politicians friendly to the AI industry, while companies have pushed for federal legislation that would preempt state AI rules. The original SB 1047 had been drafted by AI safety advocates who warned about existential threats from AI, drawing heavily on hypothetical scenarios and tropes from science fiction, but it met pushback from AI firms that found the requirements too vague and the potential reporting burdens too onerous. The new law follows recommendations from AI experts convened by Newsom, including Stanford's Fei-Fei Li and former California Supreme Court Justice Mariano-Florentino Cuéllar. As with SB 1047, the new law creates CalCompute, a consortium within the Government Operations Agency, to develop a public computing cluster framework.
The California Department of Technology will recommend annual updates to the law, though such recommendations require no legislative action. Senator Wiener described the law as establishing "commonsense guardrails," and Anthropic's co-founder, Jack Clark, called the law's safeguards "practical." Still, the transparency requirements likely mirror practices already standard at major AI companies, and disclosure requirements without enforcement mechanisms or specific standards may offer limited protection against potential AI harms in the long run.
[2]
California's new AI safety law shows regulation and innovation don't have to clash | TechCrunch
SB 53, the AI safety and transparency bill that California Gov. Gavin Newsom signed into law this week, is proof that state regulation doesn't have to hinder AI progress. So says Adam Billen, vice president of public policy at youth-led advocacy group Encode AI, on today's episode of Equity. "The reality is that policy makers themselves know that we have to do something, and they know from working on a million other issues that there is a way to pass legislation that genuinely does protect innovation -- which I do care about -- while making sure that these products are safe," Billen told TechCrunch. At its core, SB 53 is a first-in-the-nation bill that requires large AI labs to be transparent about their safety and security protocols - specifically around how they prevent their models from catastrophic risks, like being used to commit cyber attacks on critical infrastructure or build bio-weapons. The law also mandates that companies stick to those protocols, which will be enforced by the Office of Emergency Services. "Companies are already doing the stuff that we ask them to do in this bill," Billen told TechCrunch. "They do safety testing on their models. They release model cards. Are they starting to skimp in some areas at some companies? Yes. And that's why bills like this are important." Billen also noted that some AI firms have a policy around relaxing safety standards under competitive pressure. OpenAI, for example, has publicly stated that it may "adjust" its safety requirements if a rival AI lab releases a high-risk system without similar safeguards. Billen argues that policy can enforce companies' existing safety promises, preventing them from cutting corners under competitive or financial pressure. While public opposition to SB 53 was muted in comparison to its predecessor SB 1047, which Newsom vetoed last year, the rhetoric in Silicon Valley and among most AI labs has been that almost any AI regulation is anathema to progress and will ultimately hinder the U.S. in its race to beat China. It's why companies like Meta, VCs like Andreessen Horowitz, and powerful individuals like OpenAI president Greg Brockman are collectively pumping hundreds of millions into super PACs to back pro-AI politicians in state elections. And it's why those same forces earlier this year pushed for an AI moratorium that would have banned states from regulating AI for 10 years. Encode AI ran a coalition of more than 200 organizations to work to strike down the proposal, but Billen says the fight isn't over. Senator Ted Cruz, who championed the moratorium, is attempting a new strategy to achieve the same goal of federal preemption of state laws. In September, Cruz introduced the SANDBOX Act, which would allow AI companies to apply for waivers to temporarily bypass certain federal regulations for up to 10 years. Billen also anticipates a forthcoming bill establishing a federal AI standard that would be pitched as a middle-ground solution but would in reality override state laws. He warned that narrowly scoped federal AI legislation could "delete federalism for the most important technology of our time." "If you told me SB 53 was the bill that would replace all the state bills on everything related to AI and all of the potential risks, I would tell you that's probably not a very good idea and that this bill is designed for a particular subset of things," Billen said. 
While he agrees that the AI race with China matters, and that policymakers need to enact regulation that will support American progress, he says killing state bills -- which mainly focus on deepfakes, transparency, algorithmic discrimination, children's safety, and governmental use of AI -- isn't the way to go about doing that. "Are bills like SB 53 the thing that will stop us from beating China? No," he said. "I think it is just genuinely intellectually dishonest to say that that is the thing that will stop us in the race." "If the thing you care about is beating China in the race on AI -- and I do care about that -- then the things you would push for are stuff like export controls in Congress," Billen said. "You would make sure that American companies have the chips. But that's not what the industry is pushing for." Legislative proposals like the Chip Security Act aim to prevent the diversion of advanced AI chips to China through export controls and tracking devices, and the existing CHIPS and Science Act seeks to boost domestic chip production. However, some major tech companies, including OpenAI and Nvidia, have expressed reluctance or opposition to certain aspects of these efforts, citing concerns about effectiveness, competitiveness, and security vulnerabilities. Nvidia has its reasons -- it has a strong financial incentive to continue selling chips to China, which has historically represented a significant portion of its global revenue. Billen speculated that OpenAI could hold back on chip export advocacy to stay in the good graces of crucial suppliers like Nvidia. There's also been inconsistent messaging from the Trump administration. Three months after expanding an export ban on advanced AI chips to China in April 2025, the administration reversed course, allowing Nvidia and AMD to sell some chips to China in exchange for 15% of the revenue. "You see people on the Hill moving towards bills like the Chip Security Act that would put export controls on China," Billen said. "In the meantime, there's going to continue to be this propping up of the narrative to kill state bills that are actually quite light touch." Billen added that SB 53 is an example of democracy in action -- of industry and policymakers working together to get to a version of a bill that everyone can agree on. It's "very ugly and messy," but "that process of democracy and federalism is the entire foundation of our country and our economic system, and I hope that we will keep doing that successfully." "I think SB 53 is one of the best proof points that that can still work," he said.
[3]
Why California's new AI safety law succeeded where SB 1047 failed | TechCrunch
California just made history as the first state to require AI safety transparency from the biggest labs in the industry. Governor Newsom signed SB 53 into law this week, mandating that AI giants like OpenAI and Anthropic disclose, and stick to, their safety protocols. The decision is already sparking debate about whether other states will follow suit. Adam Billen, vice president of public policy at Encode AI, joined Equity to break down what California's new AI transparency law actually means -- from whistleblower protections to safety incident reporting requirements. He also explains why SB 53 succeeded where SB 1047 failed, what "transparency without liability" looks like in practice, and what's still on Governor Newsom's desk, including rules for AI companion chatbots.
[4]
SB 53, the landmark AI transparency bill, is now law in California
On Monday, California governor Gavin Newsom signed the "Transparency in Frontier Artificial Intelligence Act," which was authored by Senator Scott Wiener (D-San Francisco). It's the second draft of such a bill, as Newsom vetoed the first version -- SB 1047 -- last year due to concerns it was too strict and could stifle AI innovation in the state. It would have required all AI developers, especially makers of models with training costs of $100 million or more, to test for specific risks. After the veto, Newsom tasked AI researchers with coming up with an alternative, which was published in the form of a 52-page report -- and formed the basis of SB 53.
[5]
California Governor Newsom signs landmark AI safety bill SB 53 | TechCrunch
California Gov. Gavin Newsom has signed SB 53, a first-in-the-nation bill that sets new transparency requirements on large AI companies. SB 53, which passed the state legislature two weeks ago, requires large AI labs -- including OpenAI, Anthropic, Meta, and Google DeepMind -- to be transparent about safety protocols. It also ensures whistleblower protections for employees at those companies. In addition, SB 53 creates a mechanism for AI companies and the public to report potential critical safety incidents to California's Office of Emergency Services. Companies also have to report incidents related to crimes committed without human oversight, such as cyberattacks, and deceptive behavior by a model -- reporting that isn't required under the EU AI Act. The bill has received mixed reactions from the AI industry. Tech firms have broadly argued that state-level AI policy risks creating a "patchwork of regulation" that would hinder innovation, although Anthropic endorsed the bill. Meta and OpenAI lobbied against it. OpenAI even wrote and published an open letter to Gov. Newsom that discouraged his signing of SB 53. The new bill comes as some of Silicon Valley's tech elite have poured hundreds of millions into super PACs to back candidates that support a light-touch approach to AI regulation. Leaders at OpenAI and Meta have in recent weeks launched pro-AI super PACs that aim to back candidates and bills that are friendly to AI. Still, other states might look to California for inspiration as they attempt to curb the potential harms caused by the unmitigated advancement of such a powerful emerging technology. In New York, a similar bill was passed by state lawmakers and is awaiting Gov. Kathy Hochul's signature or veto. "California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive," Newsom said in a statement. "This legislation strikes that balance. AI is the new frontier in innovation, and California is not only here for it - but stands strong as a national leader by enacting the first-in-the-nation frontier AI safety legislation that builds public trust as this emerging technology rapidly evolves." The governor is also weighing another bill -- SB 243 -- that passed both the State Assembly and Senate with bipartisan support this month. The bill would regulate AI companion chatbots, requiring operators to implement safety protocols and holding them legally accountable if their bots fail to meet those standards. SB 53 is Senator Scott Wiener's second attempt at an AI safety bill after Newsom vetoed his more sweeping SB 1047 last year amid major pushback from AI companies. With this bill, Wiener reached out to major AI companies to help them understand the changes he made to the bill.
[6]
California becomes first US state to require AI transparency
LLM makers have to file a steady stream of reports in the name of transparency
A year after vetoing a tougher bill, California Gov Gavin Newsom has signed the nation's first AI transparency law, forcing big model developers to publish frameworks and file incident reports, but critics argue it's more paperwork than protection. Newsom signed California Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act, into law on Monday. The law largely does what it says on the tin, placing a number of transparency requirements on the frontier developers. Large AI firms, defined by the bill as those with annual gross revenue in excess of $500 million, including affiliates, must publish and update frontier AI frameworks, include added disclosures in their transparency reports, report critical safety incidents to the state Office of Emergency Services, and not retaliate against whistleblowers, among other requirements. Safety incidents, per the law, include unauthorized access or compromise of a model "that results in death or bodily injury," harm resulting from a "catastrophic risk" (defined by the bill as use of an AI model that causes death or harm to more than 50 people or over $1 billion in property damage), "loss of control" of a frontier model, or a model being able to deceive its developers to subvert its own restrictions. Newsom signed SB 53, authored by California State Senator Scott Wiener, a year after he vetoed a similar but tougher bill from the same senator, SB 1047. Newsom agreed with Wiener's efforts, but said he was unhappy with limitations in the bill that restricted management of AI models to only the largest ones. "By focusing only on the most expensive and largest-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology," Newsom said last year. Newsom convened a working group following the veto in order to figure out a better way forward on an AI transparency bill. The group released a draft report in March that Wiener explained on Monday played a key part in developing the new version of his bill. "I'm grateful to the Governor for his leadership in convening the Joint California AI Policy Working Group, working with us to refine the legislation, and now signing it into law," Wiener said. Despite those committee-led changes in the newly signed law, the core of SB 53 and who it applies to remains unchanged: it still targets developers of 'frontier models' trained with more than 10²⁶ operations, with extra obligations for firms pulling in over $500 million a year. That, says Chamber of Progress policy manager Aden Hizkias, leaves the bill falling short of the mark needed to actually create meaningful progress on regulating AI. "This [10²⁶ FLOPS] threshold is an arbitrary proxy for capability and risk," Hizkias wrote in a July description of SB 1047. "It fails to capture the nuances of model behavior, deployment context or intent. Some smaller-scale models could still pose serious real-world risks, while some large-scale models may present very low risk." The Chamber of Progress' criticism of the bill (note that the group's tech industry partners include Amazon, Google, NVIDIA, and others) goes beyond the model size threshold. Per Hizkias, SB 53's biggest change from SB 1047 is its shift from enforcing a "do no harm" mandate to a "show your work" approach.
"Penalties and injunctions are now tied to paperwork failures, for example, missed deadlines or misleading statements, rather than actual harm," Hizkias wrote. "A single misfiled report or overzealous disclosure can trigger injunctions, fines, or reputational harm even if the model never causes damage." This, the CoP policy analyst said, is a massive change from SB 1047, which required AI developers to certify that their new models didn't pose a risk of exacerbating critical harms like mass casualty events, development of deadly weapons and cyberattacks. Most penalties would only have occurred under SB 1047 after such incidents; SB 53, she argued, only creates "a compliance minefield without clear standards." "SB 53 does not impose any meaningful safety duty and instead enforces a burdensome transparency regime requiring exhaustive disclosures and reporting," Hizkias warned. Anthropic, which isn't a member of the CoP and helped work with Wiener to draft the new bill, has a different take. "Governor Newsom's signature on SB 53 establishes meaningful transparency requirements for frontier AI companies without imposing prescriptive technical mandates," Anthropic cofounder and policy chief Jack Clark told The Register in an email. Clark doesn't want AI regulation to stop there, however. "While federal standards remain essential to avoid a patchwork of state regulations, California has created a strong framework that balances public safety with continued innovation," Clark added. Microsoft had no comment on the passage of the bill; none of the other AI companies that meet SB 53's enforcement threshold responded to questions. Wiener's office declined to comment beyond its press release on passage of the bill. "California has long been a leader in technology and innovation," Newsom said in a signing statement accompanying the bill. "We are once again demonstrating our leadership, by protecting our residents today while pressing the federal government to act on national standards." The accountability rules might still be contentious, but there's one thing SB 53's critics and proponents seem to agree on: Its provision creating a publicly available computing cluster dubbed "CalCompute" that's available for AI startups and researchers to use as an alternative to spinning up their own hardware. "Despite the flaws of the amended SB 53, we strongly support CalCompute," Hizkias said. CalCompute was proposed as part of SB 1047 as well. "The creation of a public option for computing power through CalCompute will democratize access to critical AI infrastructure," said Teri Olle, director of Economic Security California Action, a sponsor of SB 53. CalCompute, per the bill, will be led by a consortium within the state's Government Operations Agency (GOA) and aims to provide resources to the public for development of AI "that is safe, ethical, equitable, and sustainable." The provisions of SB 53 take effect on January 1, 2026, but extra time is being provided to stand up CalCompute - the GOA doesn't even have to submit a report on what it'll take to create the cluster until the beginning of 2027. ®
[7]
California's Newsom signs law requiring AI safety disclosures
Sept 29 (Reuters) - California Governor Gavin Newsom signed a state law on Monday that would require ChatGPT developer OpenAI and other big players to disclose their approach to mitigating potential catastrophic risk from their cutting-edge AI models. California is home to top AI players including OpenAI, Alphabet's (GOOGL.O) Google, Meta Platforms (META.O), Nvidia (NVDA.O) and Anthropic, and with this bill wants to be seen as leading on regulation of an industry potentially critical to its economy in the years ahead. "California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive," Newsom said in a press release on the law. Newsom's office said the law, known as SB 53, fills the gap left by Congress, which so far has not passed broad AI legislation, and provides a model for the U.S. to follow. Reporting by Jody Godoy in New York; editing by Chris Sanders
[8]
California Gov. Gavin Newsom signs landmark bill creating AI safety measures
SACRAMENTO, Calif. (AP) -- California Gov. Gavin Newsom on Monday signed a law that aims to prevent people from using powerful artificial intelligence models for potentially catastrophic activities like building a bioweapon or shutting down a bank system. The move comes as Newsom touted California as a leader in AI regulation and criticized the inaction at the federal level in a recent conversation with former President Bill Clinton. The new law will establish some of the first-in-the-nation regulations on large-scale AI models without hurting the state's homegrown industry, Newsom said. Thirty-two of the top 50 AI companies are located in California and will have to follow the requirements. "California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive. This legislation strikes that balance," Newsom said in a statement. The legislation requires AI companies to implement and disclose publicly safety protocols to prevent their most advanced models from being used to cause major harm. The rules are designed to cover AI systems if they meet a "frontier" threshold that signals they run on a huge amount of computing power. Such thresholds are based on how many calculations the computers are performing. Those who crafted the regulations have acknowledged the numerical thresholds are an imperfect starting point to distinguish today's highest-performing generative AI systems from the next generation that could be even more powerful. The existing systems are largely made by California-based companies like Anthropic, Google, Meta Platforms and OpenAI. The legislation defines a catastrophic risk as something that would cause at least $1 billion in damage or more than 50 injuries or deaths. It's designed to guard against AI being used for activities that could cause mass disruption, such as hacking into a power grid. Companies also have to report to the state any critical safety incidents within 15 days. The law creates whistleblower protections for AI workers and establishes a public cloud for researchers. It includes a fine of $1 million per violation. It drew opposition from some tech companies, which argued that AI legislation should be done at the federal level. But Anthropic said the regulations help level the playing field and make official the safety practices many companies are already doing voluntarily. "Developers can compete while ensuring they remain transparent about AI capabilities that pose risks to public safety, creating a level playing field where disclosure is mandatory, not optional," the company said in its endorsing announcement. The signing comes after Newsom last year vetoed a broader version of the legislation, siding with tech companies that said the requirements were too rigid and would have hampered innovation. Newsom instead asked a group of several industry experts, including AI pioneer Fei-Fei Li, to develop recommendations on guardrails around powerful AI models. The new law incorporates recommendations and feedback from Newsom's group of AI experts and the industry, supporters said. The legislation also doesn't put the same level of reporting requirements on startups to avoid hurting innovation, said state Sen. Scott Wiener of San Francisco, the bill's author. "With this law, California is stepping up, once again, as a global leader on both technology innovation and safety," Wiener said in a statement. 
Newsom's decision comes as President Donald Trump in July announced a plan to eliminate what his administration sees as "onerous" regulations to speed up AI innovation and cement the U.S.' position as the global AI leader. Republicans in Congress earlier this year unsuccessfully tried to ban states and localities from regulating AI for a decade. Without stronger federal regulations, states across the country have spent the last few years trying to rein in the technology, tackling everything from deepfakes in elections to AI "therapy." In California, the Legislature this year passed a number of bills to address safety concerns around AI chatbots for children and the use of AI in the workplace. California has also been an early adopter of AI technologies. The state has deployed generative AI tools to spot wildfires and address highway congestion and road safety, among other things. Associated Press reporter Matt O'Brien contributed to this report.
[9]
California's Newsom signs law requiring AI safety disclosures
California Governor Gavin Newsom signed into state law on Monday a requirement that ChatGPT developer OpenAI and other big players disclose how they plan to mitigate potential catastrophic risks from their cutting-edge AI models. California is home to top AI companies including OpenAI, Alphabet's Google, Meta Platforms, Nvidia and Anthropic, and with this law seeks to lead on regulation of an industry critical to its economy, Newsom said. "California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive," Newsom said in a press release. Newsom's office said the law, known as SB 53, fills a gap left by the U.S. Congress, which so far has not passed broad AI legislation, and provides a model for the U.S. to follow. If federal standards are put in place, Newsom said, the state legislature should "ensure alignment with those standards - all while maintaining the high bar established by SB 53." Last year, Newsom vetoed California's first attempt at AI legislation, which had faced fierce industry pushback. The bill would have required companies that spent more than $100 million on their AI models to hire third-party auditors annually to review risk assessments and allowed the state to levy penalties in the hundreds of millions of dollars. The new law requires companies with more than $500 million in revenue to assess the risk that their cutting-edge technology could break free of human control or aid the development of bioweapons, and disclose those assessments to the public. It allows for fines of up to $1 million per violation. Jack Clark, co-founder of AI company Anthropic, called the law "a strong framework that balances public safety with continued innovation." The industry still hopes for a federal framework that would replace the California law, as well as others like it enacted recently in Colorado and New York. Earlier this year, a bid by some Republicans in the U.S. Congress to block states from regulating AI was voted down in the Senate 99-1. "The biggest danger of SB 53 is that it sets a precedent for states, rather than the federal government, to take the lead in governing the national AI market - creating a patchwork of 50 compliance regimes that startups don't have the resources to navigate," said Collin McCune, head of government affairs at Silicon Valley venture capital firm Andreessen Horowitz. U.S. Representative Jay Obernolte, a California Republican, is working on AI legislation that could preempt some state laws, his office said, although it declined to comment further on pending legislation. Some Democrats are also discussing how to enact a federal standard. "It's not whether we're gonna regulate AI, it's do you want 17 states doing it, or do you want Congress to do it?" U.S. Representative Ted Lieu, a Democrat from Los Angeles, said at a recent hearing on AI legislation in the U.S. House of Representatives.
[10]
California becomes first state to require AI companies to disclose safety protocols
Looking ahead: California's new law cements the state's role at the forefront of the national debate. By enshrining transparency and reporting requirements into statute, it establishes a precedent that other states and federal lawmakers will now have to confront as they consider how far to go in regulating the technology. California has become the first state to require major artificial intelligence companies to make their safety practices public. Governor Gavin Newsom signed the bill, known as SB 53, after months of debate between lawmakers and technology firms including OpenAI, Meta, and Anthropic. The law is already attracting attention in Washington and abroad, as Congress and foreign governments weigh how to regulate a field that has advanced faster than policymakers have been able to respond. The legislation marks a policy shift for Newsom, who last year vetoed a more expansive AI safety bill. That earlier proposal would have imposed sweeping restrictions and faced fierce resistance from Silicon Valley. This time, the bill's scope was narrowed, and legislators worked with both the governor's office and major companies to address some of the industry's concerns. Under SB 53, advanced AI companies must publish their safety and security protocols and notify the state of safety incidents. These include criminal uses of AI without human involvement, such as autonomous cyberattacks, or instances where a model behaves deceptively or manipulatively. The law also provides a pathway for companies and individuals to report risks, with protections in place for whistleblowers. Additionally, the measure sets in motion plans for a state-run cloud computing project, dubbed CalCompute, intended to support research under California's regulatory framework. Industry reactions to SB 53 have been more restrained than during last year's debate. Anthropic openly supported the measure in its final stages, with co-founder Jack Clark stating that the law provides "real accountability" in AI development and deployment. OpenAI expressed encouragement that California had created a framework that could potentially align with federal regulations. Meta echoed a similar sentiment, describing the law as a constructive step. However, not all responses were positive. The Chamber of Progress, a lobbying group, warned that the law could discourage startups and signal to younger companies that they might be better off building elsewhere. Venture capital firm Andreessen Horowitz noted that, while the bill included thoughtful provisions, its regulatory approach could slow innovation and further consolidate market power among the largest players. At the federal level, efforts to establish a national AI framework remain unsettled. Some members of Congress have proposed federal standards that would allow states to set complementary rules, while others including Senator Ted Cruz argue that a state-by-state approach risks fracturing the national market and undermining US competitiveness with China. Cruz has pledged to revive his push to prevent states from enacting their own AI regulations, calling a patchwork of standards "cataclysmic."
[11]
Newsom signs major California AI bill
Why it matters: The law will have major implications for the country's biggest AI players -- and shows the appetite to regulate AI at the state level. * The Trump administration and key Republicans have advocated for a pause on AI laws and regulation coming out of the states. What's inside: The Transparency in Frontier Artificial Intelligence Act, authored by state Sen. Scott Wiener, requires large AI developers to make public disclosures about safety protocols and to report safety incidents. * The legislation will also create whistleblower protections and make cloud computing available for smaller developers and researchers. * Anthropic was the only major AI company to publicly support the bill. What they're saying: "This bill will strengthen California's ability to monitor, evaluate, and respond to critical safety incidents associated with these advanced systems," Newsom wrote. * "With this law, California is stepping up, once again, as a global leader on both technology innovation and safety," Wiener said. * Anthropic's head of policy Jack Clark: "While federal standards remain essential to avoid a patchwork of state regulations, California has created a strong framework that balances public safety with continued innovation." Newsom also addressed the possibility of any federal effort to preempt state law in his signing message. * "Should the federal government or Congress adopt national AI standards that maintain or exceed the protections in this bill, subsequent action will be necessary to provide alignment between policy frameworks -- ensuring businesses are not subject to duplicative or conflicting requirements across jurisdictions." Flashback: Last year, Newsom vetoed an AI safety bill that was designed to have first-in-the-nation safety regulations against AI misuse. The bottom line: The debate over another key tech policy in the U.S. will revolve once again around what California is doing. * The new law may inspire other states to act -- which could put new pressure on Congress to get involved. Editor's note: This story has been updated to include a statement from Anthropic.
[12]
Gavin Newsom Signs Law Cracking Down on AI Industry
The AI industry has been a Wild West. Is that starting to change? The nascent AI industry has attracted untold hundreds of billions of dollars in investment over the past few years, but it's still operating in a near-total regulatory vacuum. That's not to say it's had no negative impact. The tech has been linked to a wave of mental health breakdowns, suicides and even murder -- and that's without getting into allegations about the sector's surveillance of users, copyright violations, and other alleged negative effects on users and society. Now, lawmakers are starting to play catch-up. This week, California governor Gavin Newsom signed what proponents say is the first AI safety and transparency law in the US. The Transparency in Frontier Artificial Intelligence Act, also known as SB 53, requires AI companies with over $500 million in revenue to publicly disclose their safety and security protocols in fairly granular detail. "California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive," Newsom said in a statement. "This legislation strikes that balance." Critics might disagree. It's true that in some senses, the bill's scope is sweeping: it requires AI companies do everything from sharing how they plan to mitigate potential "Terminator"-esque scenarios of rogue AI rising up against humanity, to reporting critical safety incidents, to creating new protections for whistleblowers. On the other hand, its penalties feel distinctly feeble. Newsom vetoed a previous attempt at similar AI regulation last year, which would've demanded far more from the industry: it would've applied to a vastly larger number of companies by targeting any that spend upwards of $100 million on an AI model, for instance, and penalties could've reached hundreds of millions for severe transgressions. The bill Newsom just signed, in contrast, caps fines at a drop-in-the-bucket $1 million per violation, which would be a mosquito bite to a centibillion dollar company like OpenAI. Tellingly, AI companies are trumpeting their support for the new bill. An OpenAI spokesperson said the company was "pleased to see that California has created a critical path toward harmonization with the federal government -- the most effective approach to AI safety." Meta and Anthropic quickly parroted that praise. State lawmakers have proposed similar legislation in New York, and there have been piecemeal attempts to regulate aspects of AI elsewhere. But as the home of virtually every important AI company, California is in a unique position to set the agenda for meaningful regulation. This bill might be better than nothing. But as a rule of thumb, if new regulation is greeted with open arms by the industry it's supposed to oversee, it's probably not much of a threat.
[13]
California Governor Gavin Newsom signs landmark AI safety regulation | Fortune
California has taken a significant step toward regulating artificial intelligence with Governor Gavin Newsom signing a new state law that will require major AI companies, many of which are headquartered in the state, to publicly disclose how they plan to mitigate the potentially catastrophic risks posed by advanced AI models. The law also creates mechanisms for reporting critical safety incidents, extends whistleblower protections to AI company employees, and initiates the development of CalCompute, a government consortium tasked with creating a public computing cluster for safe, ethical, and sustainable AI research and innovation. By compelling companies, including OpenAI, Meta, Google DeepMind, and Anthropic, to follow these new rules at home, California may effectively set the standard for AI oversight. Newsom framed the law as a balance between safeguarding the public and encouraging innovation. In a statement, he wrote: "California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive. This legislation strikes that balance." The legislation, authored by State Sen. Scott Wiener, follows a failed attempt to pass a similar AI law last year. Wiener said that the new law, which was known by the shorthand SB 53 (for Senate Bill 53), focuses on transparency rather than liability, a departure from his prior SB 1047 bill, which Newsom vetoed last year. "SB 53's passage marks a notable win for California and the AI industry as a whole," said Sunny Gandhi, VP of Political Affairs at Encode AI, a co-sponsor of SB 53. "By establishing transparency and accountability measures for large-scale developers, SB 53 ensures that startups and innovators aren't saddled with disproportionate burdens, while the most powerful models face appropriate oversight. This balanced approach sets the stage for a competitive, safe, and globally respected AI ecosystem." Industry reactions to the new legislation have been divided. Jack Clark, co-founder of AI company Anthropic, which backed SB 53, wrote on X: "We applaud [the California Governor] for signing [Scott Wiener's] SB 53, establishing transparency requirements for frontier AI companies that will help us all have better data about these systems and the companies building them. Anthropic is proud to have supported this bill." He emphasized that while federal standards are still important to prevent a patchwork of state rules, California has created a framework that balances public safety with ongoing innovation. OpenAI, which did not endorse the bill, told news outlets it was "pleased to see that California has created a critical path toward harmonization with the federal government -- the most effective approach to AI safety," adding that if implemented correctly, the law would enable cooperation between federal and state governments on AI deployment. Meta spokesperson Christopher Sgro similarly told media the company "supports balanced AI regulation," calling SB 53 "a positive step in that direction," and said Meta looks forward to working with lawmakers to protect consumers while fostering innovation. Despite being a state-level law, the California legislation will have a global reach, since 32 of the world's top 50 AI companies are based in the state. The bill requires AI firms to report incidents to California's Office of Emergency Services and protects whistleblowers, allowing engineers and other employees to raise safety concerns without risking their careers.
SB 53 also includes civil penalties for noncompliance, enforceable by the state attorney general, though AI policy experts like Miles Brundage note these penalties are relatively weak, even compared to those enforced by the EU's AI Act. Brundage, who was formerly the head of policy research at OpenAI, said in an X post that while SB 53 represented "a step forward," there was a need for "actual transparency" in reporting, stronger minimum risk thresholds, and technically robust third-party evaluations. Collin McCune, head of government affairs at Andreessen Horowitz, also warned the law "risks squeezing out startups, slowing innovation, and entrenching the biggest players," and said it sets a dangerous precedent for state-by-state regulation that could create "a patchwork of 50 compliance regimes that startups don't have the resources to navigate." Several AI companies that lobbied against the bill also made similar arguments. California is aiming to promote transparency and accountability in the AI sector with the requirement for public disclosures and incident reporting; however, critics like McCune argue that the law could make compliance challenging for smaller firms and entrench Big Tech's AI dominance. Thomas Woodside, a co-founder at Secure AI Project, a co-sponsor of the law, called the concerns around startups "overblown." "This bill is only applying to companies that are training AI models with a huge amount of compute that costs hundreds of millions of dollars, something that tiny startups can't do," he told Fortune. "Reporting very serious things that go wrong, and whistleblower protections, is a very basic level of transparency; and the obligations don't even apply to companies that have less than $500 million in annual revenue."
[14]
California enacts AI safety law targeting tech giants
California Governor Gavin Newsom has signed into law groundbreaking legislation requiring the world's largest artificial intelligence companies to publicly disclose their safety protocols and report critical incidents, state lawmakers announced Monday. Senate Bill 53 marks California's most significant move yet to regulate Silicon Valley's rapidly advancing AI industry while also maintaining its position as a global tech hub. "With a technology as transformative as AI, we have a responsibility to support that innovation while putting in place commonsense guardrails," State Senator Scott Wiener, the bill's sponsor, said in a statement. The new law represents a successful second attempt by Wiener to establish AI safety regulations after Newsom vetoed his previous bill, SB 1047, after furious pushback from the tech industry. It also comes after a failed attempt by the Trump administration to prevent states from enacting AI regulations, under the argument that they would create regulatory chaos and slow US-made innovation in a race with China. The new law says major AI companies have to publicly disclose their safety and security protocols in redacted form to protect intellectual property. They must also report critical safety incidents -- including model-enabled weapons threats, major cyber-attacks, or loss of model control -- within 15 days to state officials. The legislation also establishes whistleblower protections for employees who reveal evidence of dangers or violations. According to Wiener, California's approach differs from the European Union's landmark AI Act, which requires private disclosures to government agencies. SB 53, meanwhile, mandates public disclosure to ensure greater accountability. In what advocates describe as a world-first provision, the law requires companies to report instances where AI systems engage in dangerous deceptive behavior during testing. For example, if an AI system lies about the effectiveness of controls designed to prevent it from assisting in bioweapon construction, developers must disclose the incident if it materially increases catastrophic harm risks. The working group behind the law was led by prominent experts including Stanford University's Fei-Fei Li, known as the "godmother of AI."
[15]
California Gov. Gavin Newsom signs first-in-nation AI transparency law
Gov. Gavin Newsom has signed a new bill into law in California that aims to regulate the artificial intelligence industry. Named SB 53, the law targets larger AI models and implements a number of transparency, reporting and safety obligations that developers will need to follow. Legislators crafted the bill using recommendations from a recent report on AI guardrails. Newsom hailed the law as a first of its kind in the U.S. "California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive," Newsom said in a statement after signing the bill Monday. Industry groups have voiced their concern over the guardrails as proposed, calling the law flawed. "For true AI safety, California should pursue standards based on empirical risk and measurable harm. Solutions should empower transparency and accountability without penalizing innovators," wrote the Chamber of Progress tech industry trade group in a July critique of the bill. The law was authored by Sen. Scott Wiener (D-San Francisco). Newsom has until Oct. 12 to sign or veto bills passed by the California legislature during their most recent session. Some new laws already signed by Newsom this session include a series of bills aimed at expanding access to reproductive health care in the state.
[17]
California bill regulating top AI companies signed into law
California Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act, otherwise known as SB 53, into law Monday afternoon. The bill is the first of its kind in the United States, placing new AI-specific regulations on the top players in the industry, requiring these companies to fulfill transparency requirements and report AI-related safety incidents. Though several states have recently passed laws regulating aspects of AI, SB 53 is the first to explicitly focus on the safety of cutting-edge and powerful AI models. In a statement, Newsom said that "California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive. This legislation strikes that balance." The law requires leading AI companies to publish public documents detailing how they are following best practices to create safe AI systems. The law creates a new pathway for companies to report severe AI-related incidents to California's Office of Emergency Services, while also strengthening protections for whistleblowers who raise concerns about health and safety risks. The law is backed by civil penalties for non-compliance, to be enforced by the California Attorney General's office. While SB 53 attracted intense criticism from industry groups like the Chamber of Progress and the Consumer Technology Association, leading AI company Anthropic endorsed the bill while others like Meta said it was a "step in the right direction." While these companies expressed their support for the bill, they made clear that their preference was for federal legislation to avoid a patchwork of state-by-state laws. In a LinkedIn statement several weeks ago, OpenAI's chief global affairs officer Chris Lehane wrote, "America leads best with clear, nationwide rules, not a patchwork of state or local regulations. Fragmented state-by-state approaches create friction, duplication, and missed opportunities." Monday morning, U.S. Senators Josh Hawley and Richard Blumenthal proposed a new federal bill that would require leading AI developers to "evaluate advanced AI systems and collect data on the likelihood of adverse AI incidents." As currently written, the federal bill would create an Advanced Artificial Intelligence Evaluation Program within the Department of Energy. Participation in the evaluation program would be mandatory, much like SB 53's mandatory transparency and reporting requirements. The passage of SB 53 and the introduction of the federal bill from Sen. Hawley and Sen. Blumenthal come as world leaders increasingly call for AI regulation in the face of growing risks from advanced AI systems. In remarks to the United Nations General Assembly last week, President Donald Trump said AI "could be one of the great things ever, but it also can be dangerous, but it can be put to tremendous use and tremendous good." Addressing the U.N. one day after President Trump, President Volodymyr Zelensky of Ukraine said, "We are now living through the most destructive arms race in human history because this time, it includes artificial intelligence."
[18]
California AI bill inches closer to the finish line
Why it matters: Gov. Gavin Newsom has not definitively said whether he will sign the bill, SB53 -- but its momentum is strong, and if signed, the law would have major implications for the country's biggest AI names. What they're saying: Speaking to Bill Clinton about AI at a Clinton Global Initiative event in New York on Wednesday, Newsom said that California "has a sense of responsibility and accountability to lead, so we support risk-taking, but not recklessness." * "We have a bill that's on my desk that we think strikes the right balance. And we worked with industry, but we didn't submit to industry. We're not doing things to them, but we're not doing things necessarily for them." Newsom's office, when asked by Axios if the governor meant SB53, said that he "did not specify which bill he was referencing." * "Beyond that, we don't typically comment on pending legislation," said Newsom spokesperson Tara Gallegos. How it works: SB53, authored by state Sen. Scott Wiener, would require large AI developers to make public disclosures about safety protocols and to report safety incidents. * The legislation would also create whistleblower protections and make public cloud computing available to expand access to compute for smaller developers and researchers. * Anthropic is the only major AI company to publicly support the bill. What's next: Newsom has until Oct. 12 to sign or veto the legislation.
[19]
Governor of California signs landmark AI safety bill into law - SiliconANGLE
California Governor Gavin Newsom today signed into law a sweeping measure requiring major artificial intelligence developers to meet state-mandated safety standards and disclose potential risks associated with their products. The Transparency in Frontier Artificial Intelligence Act, S.B. 53, also strengthens protections for employees and executives who blow the whistle on harms or unsafe practices. While other states have passed laws regulating AI, SB 53 is said to be the first to put the emphasis on the safety of cutting-edge products. Since the bill was first introduced by California Senator Scott Wiener, a Democrat from San Francisco, it and a similar previous bill, SB 1047, which Newsom vetoed, have sparked debate: critics argue such state laws could slow innovation at a time when the U.S. is racing China in AI development, while supporters say oversight is vital, particularly in California, home to industry giants such as Meta Platforms Inc., OpenAI, Google LLC, Anthropic PBC and chipmaker Nvidia Corp. The state is home to many lesser giants, with a recent Forbes list of the top 50 AI firms putting 32 of them in or close to Silicon Valley. Given the reach of these companies, Newsom's signing of S.B. 53 will be felt all over the world. He called the law "a blueprint for well-balanced AI policies beyond our borders, especially in the absence of a comprehensive federal AI policy framework." Wiener says the law shows that the state is "stepping up" to meet potential present and future safety issues head on, introducing what he calls "commonsense guardrails" that embrace "trust, fairness, and accountability." "California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive," said Newsom. "This legislation strikes that balance." Some of the AI companies that might be affected have expressed support for the bill, while others have shown a preference for federal laws over state regulations. Just today, Republican Senator Josh Hawley and Democratic Senator Richard Blumenthal proposed a federal bill that asks AI firms to "evaluate advanced AI systems and collect data on the likelihood of adverse AI incidents." An OpenAI spokesperson said the law is "a critical path toward harmonization with the federal government," which he called "the most effective approach to AI safety," adding the caveat, "if implemented correctly." Anthropic co-founder and head of policy Jack Clark was similarly positive, describing SB 53 as establishing "meaningful transparency requirements for frontier AI companies without imposing prescriptive technical mandates."
[20]
California enacts SB 53 AI transparency law
California Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act (SB 53) into law Monday, establishing the first U.S. state regulations for leading AI companies through mandatory transparency and safety reporting requirements. The bill is the first law in the United States to explicitly regulate the safety of powerful, frontier AI models. While other states have passed laws addressing certain AI aspects, SB 53 creates a distinct regulatory framework focused on top developers. Governor Newsom stated, "California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive. This legislation strikes that balance." The law's provisions are intended to create public insight into the development processes of these advanced systems. The law requires leading AI companies to publish public documents detailing how they are following best practices to create safe systems. It also creates a new pathway for companies to report severe AI-related incidents to the California Office of Emergency Services. The act strengthens protections for whistleblowers who raise concerns about health and safety risks, shielding employees from potential reprisal for coming forward. This new reporting channel and the whistleblower protections are designed to increase accountability for the systems being developed. Non-compliance with the law's mandates can trigger civil penalties, which will be enforced by the California Attorney General's office. The legislation attracted intense criticism from industry groups like the Chamber of Progress and the Consumer Technology Association. In contrast, AI company Anthropic endorsed the bill. Another major developer, Meta, called it "a step in the right direction," signaling a degree of industry division on the new regulations. Even with some support, companies expressed a clear preference for federal legislation to avoid a "patchwork" of state-by-state laws. In a LinkedIn statement published several weeks ago, OpenAI's chief global affairs officer Chris Lehane wrote, "America leads best with clear, nationwide rules, not a patchwork of state or local regulations. Fragmented state-by-state approaches create friction, duplication, and missed opportunities." This reflects a broader industry concern about navigating multiple, potentially conflicting, regulatory environments across the country. On the same Monday, U.S. Senators Josh Hawley and Richard Blumenthal introduced a federal bill that would require top AI developers to "evaluate advanced AI systems and collect data on the likelihood of adverse AI incidents." The proposal would create a mandatory Advanced Artificial Intelligence Evaluation Program within the Department of Energy. Participation in this program would be compulsory, a measure that echoes the mandatory nature of California's new reporting requirements. These domestic legislative actions coincide with growing international calls for AI regulation. At the United Nations General Assembly last week, President Donald Trump remarked that AI "could be one of the great things ever, but it also can be dangerous, but it can be put to tremendous use and tremendous good." The next day, President Volodymyr Zelensky of Ukraine told the U.N., "We are now living through the most destructive arms race in human history because this time, it includes artificial intelligence."
[21]
California Gov. Gavin Newsom signs landmark bill creating AI safety measures
SACRAMENTO, Calif. -- California Gov. Gavin Newsom on Monday signed a law that aims to prevent people from using powerful artificial intelligence models for potentially catastrophic activities like building a bioweapon or shutting down a bank system. The move comes as Newsom touted California as a leader in AI regulation and criticized the inaction at the federal level in a recent conversation with former President Bill Clinton. The new law will establish some of the first-in-the-nation regulations on large-scale AI models without hurting the state's homegrown industry, Newsom said. Many of the world's top AI companies are located in California and will have to follow the requirements. "California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive. This legislation strikes that balance," Newsom said in a statement. The legislation requires AI companies to implement and disclose publicly safety protocols to prevent their most advanced models from being used to cause major harm. The rules are designed to cover AI systems if they meet a "frontier" threshold that signals they run on a huge amount of computing power. Such thresholds are based on how many calculations the computers are performing. Those who crafted the regulations have acknowledged the numerical thresholds are an imperfect starting point to distinguish today's highest-performing generative AI systems from the next generation that could be even more powerful. The existing systems are largely made by California-based companies like Anthropic, Google, Meta Platforms and OpenAI. The legislation defines a catastrophic risk as something that would cause at least $1 billion in damage or more than 50 injuries or deaths. It's designed to guard against AI being used for activities that could cause mass disruption, such as hacking into a power grid. Companies also have to report to the state any critical safety incidents within 15 days. The law creates whistleblower protections for AI workers and establishes a public cloud for researchers. It includes a fine of $1 million per violation. It drew opposition from some tech companies, which argued that AI legislation should be done at the federal level. But Anthropic said the regulations are "practical safeguards" that make official the safety practices many companies are already doing voluntarily. "While federal standards remain essential to avoid a patchwork of state regulations, California has created a strong framework that balances public safety with continued innovation," Jack Clark, co-founder and head of policy at Anthropic, said in a statement. The signing comes after Newsom last year vetoed a broader version of the legislation, siding with tech companies that said the requirements were too rigid and would have hampered innovation. Newsom instead asked a group of several industry experts, including AI pioneer Fei-Fei Li, to develop recommendations on guardrails around powerful AI models. The new law incorporates recommendations and feedback from Newsom's group of AI experts and the industry, supporters said. The legislation also doesn't put the same level of reporting requirements on startups to avoid hurting innovation, said state Sen. Scott Wiener of San Francisco, the bill's author. "With this law, California is stepping up, once again, as a global leader on both technology innovation and safety," Wiener said in a statement. 
Newsom's decision comes as President Donald Trump in July announced a plan to eliminate what his administration sees as "onerous" regulations to speed up AI innovation and cement the U.S.' position as the global AI leader. Republicans in Congress earlier this year unsuccessfully tried to ban states and localities from regulating AI for a decade. Without stronger federal regulations, states across the country have spent the last few years trying to rein in the technology, tackling everything from deepfakes in elections to AI "therapy." In California, the Legislature this year passed a number of bills to address safety concerns around AI chatbots for children and the use of AI in the workplace. California has also been an early adopter of AI technologies. The state has deployed generative AI tools to spot wildfires and address highway congestion and road safety, among other things. ___ Associated Press reporter Matt O'Brien contributed to the report.
[23]
Newsom signs first-in-the-nation AI safety disclosures law
California Gov. Gavin Newsom (D) signed a bill Monday requiring artificial intelligence (AI) companies to disclose safety information about large-scale frontier models in a key win for the push to regulate AI in Silicon Valley's home state. S.B. 53, the Transparency in Frontier AI Act, will require developers of these leading-edge AI models to publish frameworks detailing how they assess and mitigate catastrophic risks. "California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive," Newsom said in a statement Monday. "This legislation strikes that balance." "AI is the new frontier in innovation, and California is not only here for it -- but stands strong as a national leader by enacting the first-in-the-nation frontier AI safety legislation that builds public trust as this emerging technology rapidly evolves," he continued. Newsom hinted last week that he supported the legislation, a key vote of confidence after the governor vetoed an earlier iteration of the AI bill. S.B. 53 is considered the successor to last year's S.B. 1047. That bill put forward a much stricter AI framework, seeking to require models to undergo safety testing before release and to hold developers accountable for potential harm. Despite passing the state legislature, S.B. 1047 was ultimately rejected by Newsom. It also faced pushback from federal lawmakers, with several prominent California Democrats in Congress voicing concerns about the legislation. By contrast, S.B. 53 has been better received. Anthropic endorsed the legislation earlier this month. Jack Clark, co-founder and head of policy at the AI firm, said Monday that the bill "establishes meaningful transparency requirements for frontier AI companies without imposing prescriptive technical mandates." "While federal standards remain essential to avoid a patchwork of state regulations, California has created a strong framework that balances public safety with continued innovation," Clark added. Given California's prominent role in the AI space, experts have suggested its AI legislation could create a de facto national standard, especially amid a lack of progress at the federal level. However, this is likely to face resistance from the Trump administration and Republican lawmakers, who have voiced concerns about the Golden State setting the rules for AI across the nation and have increasingly pushed to preempt state AI laws that they worry could limit innovation.
[24]
California Gov. Gavin Newsom Signs Landmark Bill Creating AI Safety Measures
SACRAMENTO, Calif. (AP) -- California Gov. Gavin Newsom on Monday signed a law that aims to prevent people from using powerful artificial intelligence models for potentially catastrophic activities like building a bioweapon or shutting down a bank system. The move comes as Newsom touted California as a leader in AI regulation and criticized the inaction at the federal level in a recent conversation with former President Bill Clinton. The new law will establish some of the first-in-the-nation regulations on large-scale AI models without hurting the state's homegrown industry, Newsom said. Thirty-two of the top 50 AI companies are located in California and will have to follow the requirements. "California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive. This legislation strikes that balance," Newsom said in a statement. The legislation requires AI companies to implement and disclose publicly safety protocols to prevent their most advanced models from being used to cause major harm. The rules are designed to cover AI systems if they meet a "frontier" threshold that signals they run on a huge amount of computing power. Such thresholds are based on how many calculations the computers are performing. Those who crafted the regulations have acknowledged the numerical thresholds are an imperfect starting point to distinguish today's highest-performing generative AI systems from the next generation that could be even more powerful. The existing systems are largely made by California-based companies like Anthropic, Google, Meta Platforms and OpenAI. The legislation defines a catastrophic risk as something that would cause at least $1 billion in damage or more than 50 injuries or deaths. It's designed to guard against AI being used for activities that could cause mass disruption, such as hacking into a power grid. Companies also have to report to the state any critical safety incidents within 15 days. The law creates whistleblower protections for AI workers and establishes a public cloud for researchers. It includes a fine of $1 million per violation. It drew opposition from some tech companies, which argued that AI legislation should be done at the federal level. But Anthropic said the regulations help level the playing field and make official the safety practices many companies are already doing voluntarily. "Developers can compete while ensuring they remain transparent about AI capabilities that pose risks to public safety, creating a level playing field where disclosure is mandatory, not optional," the company said in its endorsing announcement. The signing comes after Newsom last year vetoed a broader version of the legislation, siding with tech companies that said the requirements were too rigid and would have hampered innovation. Newsom instead asked a group of several industry experts, including AI pioneer Fei-Fei Li, to develop recommendations on guardrails around powerful AI models. The new law incorporates recommendations and feedback from Newsom's group of AI experts and the industry, supporters said. The legislation also doesn't put the same level of reporting requirements on startups to avoid hurting innovation, said state Sen. Scott Wiener of San Francisco, the bill's author. "With this law, California is stepping up, once again, as a global leader on both technology innovation and safety," Wiener said in a statement. 
Newsom's decision comes as President Donald Trump in July announced a plan to eliminate what his administration sees as "onerous" regulations to speed up AI innovation and cement the U.S.' position as the global AI leader. Republicans in Congress earlier this year unsuccessfully tried to ban states and localities from regulating AI for a decade. Without stronger federal regulations, states across the country have spent the last few years trying to rein in the technology, tackling everything from deepfakes in elections to AI "therapy." In California, the Legislature this year passed a number of bills to address safety concerns around AI chatbots for children and the use of AI in the workplace. California has also been an early adopter of AI technologies. The state has deployed generative AI tools to spot wildfires and address highway congestion and road safety, among other things. -- - Associated Press reporter Matt O'Brien contributed to the report.
[25]
'We have no peers:' California AI efforts clash with GOP push for hands-off approach
California state lawmakers have ramped up efforts to regulate artificial intelligence (AI) in their latest session, putting the Golden State on a collision course with a Republican effort to impose a national ban on such policies. As the AI race heats up, President Trump and GOP lawmakers have sought to eliminate regulations they argue could stifle innovation, while states forge ahead with attempts to place guardrails on the technology. But California sits in a unique position. As the home of Silicon Valley and the center of the AI boom, it could play an outsized role in defining the future of AI regulation -- both inside and outside its borders. "We dominate in artificial intelligence. We have no peers," California Gov. Gavin Newsom (D) said Wednesday. "As a consequence of having so much leadership residing in such a concentrated place, California, we have a sense of responsibility and accountability to lead, so we support risk-taking, but not recklessness," he added. The California legislature passed several AI bills in the session that ended in mid-September. Most closely watched is Senate Bill 53, legislation that would require developers of large frontier models to publish frameworks detailing how they assess and mitigate catastrophic risks. It is currently awaiting the governor's signature. "Because California is such a large state, any AI regulations that it enacts could serve as a potential de facto national standard," said Andrew Lokay, a senior research analyst at Beacon Policy Advisors. "Companies could decide to simplify compliance by applying California's rules to their operations beyond the Golden State," he continued. Washington, D.C., is taking notice. Sriram Krishnan, a White House senior policy advisor for AI, argued last week that they "don't want California to set the rules for AI across the country." Rep. Kevin Kiley (R-Calif.) acknowledged his home state "continues to be the center of breathtaking innovation worldwide" but called into question whether it should be the one to regulate AI. "The notion that this is the right body to regulate the most powerful technology in human history, whose workings are actually largely beyond the understanding even of the technology's creators, is a fairly fantastical notion," he said at a hearing last week. "I do think the risk that California is going to drive AI policy for the entire country is a very real one, and I think that a national framework that seeks to stop that from happening is needed and appropriate," Kiley added. With a heavy focus on boosting innovation, the Trump administration and GOP lawmakers have increasingly pushed to preempt state AI laws that they argue could weigh down the technology. Earlier this year, several Republicans sought to include a provision in Trump's "big, beautiful bill" that would have barred state AI regulation for 10 years. The effort exposed a rift within the GOP. Some lawmakers, including Sen. Marsha Blackburn (R-Tenn.) and Rep. Marjorie Taylor Greene (R-Ga.), voiced concerns about the restrictions on states' rights and the preemption of AI-related protections. The Senate ultimately voted 99-1 to remove the provision. Despite the setback, Sen. Ted Cruz (R-Texas), chair of the Senate Commerce Committee, said last week that the moratorium push is "not dead at all." 
This focus on state laws was also reflected in Trump's AI Action Plan, which called for limiting funding to states over AI rules, tasking the Federal Communications Commission (FCC) with evaluating whether state laws interfere with its mandate and reviewing Federal Trade Commission (FTC) investigations that could "unduly burden AI innovation." As the president charges ahead with this endeavor, Lokay noted that California's push for AI regulation could provide more momentum to efforts to preempt state rules. However, he underscored that there are still many obstacles to passing such a moratorium. Beyond GOP infighting, Congress has long struggled to pass tech legislation, with kids' online safety and digital privacy efforts repeatedly falling short. While lawmakers have taken an interest in AI, a federal framework still appears far off. "A year ago my response to this kind of legislation was the states should not be doing this. We should leave it to the federal government," Appian CEO Matt Calkins said in a statement to The Hill. "One year later and the situation has changed. The federal government is not taking the lead." "In fact, it is flirting with the idea of forbidding or preventing states from creating AI regulation, and so, in the face of that aggressive pronunciation of a federal level interest in AI regulation, I do think it should come from somewhere," he continued. "I'm sorry to see it come from the states, but I think that is one possible way we could arrive at it, it's just going to be more painful." Anthropic appeared to make a similar calculus in throwing its support behind California's S.B. 53. The AI firm endorsed the legislation in early September. "While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won't wait for consensus in Washington," it wrote in a blog post earlier this month. The reception to S.B. 53 so far has been markedly better than its predecessor. The bill, put forward by California state Sen. Scott Wiener (D), is widely viewed as a successor to S.B. 1047 from last year, which cleared the state legislature but was vetoed by Newsom. S.B. 1047 offered a much more heavy-handed approach to AI regulation, pushing for models to undergo safety testing before public release and seeking to hold developers liable for potential severe harm. The measure drew a rare rebuke from several California Democrats in Congress. Former Speaker Nancy Pelosi, Reps. Zoe Lofgren, Ro Khanna and others argued the legislation was overly restrictive and sought to tackle an issue that should be left to federal lawmakers. Anthropic offered lukewarm support for S.B. 1047 last year, suggesting "its benefits likely outweigh its costs" after some amendments were made to the legislation. S.B. 53 hasn't won over all of its detractors. OpenAI has remained critical of California's approach, warning Newsom of the risk they could "inadvertently create a 'CEQA for AI innovation.'" The California Environmental Quality Act (CEQA) has faced criticism for making it more difficult to build in the state. But the state's AI bill has largely received less pushback this year. Khanna told The Hill he views the legislation as a "strong start," underscoring that "they made a number of revisions on it that avoid downstream excessive liability." 
And Newsom hinted at his support Wednesday, noting that "we have a bill that's on my desk that we think strikes the right balance, and we worked with industry, but we didn't submit to industry."
[26]
Gavin Newsom Says He's Signing A Law To Install 'Common-Sense Guardrails' For AI Safety: What This Means For Google, Meta And Nvidia - Alphabet (NASDAQ:GOOG), Alphabet (NASDAQ:GOOGL)
On Monday, Governor Gavin Newsom (D-Calif.) signed a landmark law requiring artificial intelligence giants such as OpenAI, Alphabet Inc.'s (NASDAQ: GOOG) (NASDAQ: GOOGL) Google, Meta Platforms, Inc. (NASDAQ: META) and Nvidia Corporation (NASDAQ: NVDA) to disclose how they plan to prevent their most advanced models from causing potential catastrophic risks.
California Takes Lead On AI Regulation
Newsom described the new law, SB 53, as a critical step in ensuring that AI innovation thrives while protecting public safety. "California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive," he said in a press release. "AI is the new frontier in innovation, and California is not only here for it - but stands strong as a national leader by enacting the first-in-the-nation frontier AI safety legislation," the statement read. Newsom's office called the law a potential model for the rest of the U.S. If Congress enacts national standards, California lawmakers are expected to align state rules while maintaining the "high bar established by SB 53," noted Reuters.
What The Law Requires
SB 53 applies to AI companies with annual revenues exceeding $500 million. These firms must conduct public risk assessments, detailing how their technology could spiral out of human control or be misused to create bioweapons. Violations carry penalties of up to $1 million. The law comes after Newsom vetoed an earlier bill that sought annual third-party audits of companies investing more than $100 million in AI models. That proposal faced heavy industry pushback over the potential compliance burden.
Industry Pushes Back On Patchwork Rules
Jack Clark, co-founder of Anthropic, welcomed the move, saying, "Anthropic is proud to have supported this bill." California state Sen. Scott Wiener (D), who authored the bill, took to X, formerly Twitter, to say, "It's an exciting step for responsible scaling of AI innovation." However, Collin McCune, head of government affairs at Andreessen Horowitz, warned that SB 53 risks creating "a patchwork of 50 compliance regimes that startups don't have the resources to navigate."
Global Context: AI Rules Take Shape Worldwide
California's law follows similar efforts abroad. The EU's AI Act also imposes strict requirements on high-risk systems, from risk assessments to bias controls. Meanwhile, China has called for a global body to coordinate AI governance, highlighting the fragmented state of international rules.
[27]
AI Regulation Gains Momentum With California's New Law | PYMNTS.com
The law targets companies with more than $500 million in annual revenue that build "frontier" AI models. Developers must publish frameworks for managing catastrophic risks, such as systems escaping human control or being misused to create bioweapons, and must report significant safety incidents within 15 days. Violations can result in fines of up to $1 million per act, according to a Reuters report. Employees who raise safety concerns are protected by whistleblower provisions, while the state will launch "CalCompute," a public computing consortium that provides shared AI infrastructure to researchers and smaller firms. Newsom framed the measure as both a safeguard and an enabler. "AI is the new frontier in innovation, and California is not only here for it -- but stands strong as a national leader by enacting the first-in-the-nation frontier AI safety legislation that builds public trust as this emerging technology rapidly evolves," he said in the news release. California passed SB 53 after shelving a broader 2024 proposal that called for independent audits. The new measure narrows the focus to transparency and accountability while aligning the state with emerging global approaches to AI oversight. As Politico noted, the measure drew on recommendations from the state's AI guardrails report and puts California at the center of the national policy debate. Industry and policy reactions are mixed. Anthropic co-founder Jack Clark reportedly called the law "a strong framework that balances public safety with continued innovation," while venture firms including Andreessen Horowitz warned it could encourage a patchwork of state regimes. Representative Ted Lieu said the question now is whether "Congress" or "17 states" should regulate AI, underscoring the stakes for federal preemption, according to Reuters. A recent PYMNTS analysis noted that California's leadership could ripple through national and global AI regulation debates. The approval of SB 53 reflects the state's role as a hub for AI companies and its position as a state setting a precedent in ongoing regulatory conversations.
[28]
California Eyes Regulating AI Models | PYMNTS.com
Newsom has publicly suggested support for AI oversight legislation. According to Politico, at the Clinton Global Initiative, he said, "We have a bill that's on my desk that we think strikes the right balance ... We worked with industry, but we didn't submit to industry. We're not doing things to them, but we're not necessarily doing things for them." He did not specify which measure he meant. Observers widely believe he referred to SB 53. Politico reported that his remarks follow months of AI policy maneuvering, and state senators told reporters they hope he was referring to SB 53. Bloomberg Law reported that Newsom has endorsed stronger oversight of the AI sector and highlighted California's role in setting early guardrails.
What SB 53 Would Require
Under SB 53, large frontier AI developers are required to publish safety protocols explaining how they test for catastrophic risks and apply mitigations. Developers must also publish transparency reports for new or modified models, summarizing risk assessments, dangerous capability thresholds, intended uses and safeguards. Critical safety incidents must be reported to California's Office of Emergency Services within 15 days, or within 24 hours if an imminent threat exists. A catastrophic incident is defined as one causing at least 50 deaths or $1 billion in damage. The law also protects whistleblowers, requiring anonymous reporting channels and barring retaliation. Starting in 2030, annual independent audits will be mandatory, with summaries sent to the Attorney General. Penalties escalate up to $10 million for repeated catastrophic-risk violations.
Rooted in the Governor's Expert Commission
SB 53 is closely aligned with the California Report on Frontier AI Policy, released June 17 by an expert working group convened by Newsom. That report prioritized evidence-based oversight, standardized disclosures, incident tracking and protections for insiders over rigid micromanagement. It also advocated for adaptive thresholds and infrastructure support, core ideas that are echoed in SB 53. "The final version of SB 53 will ensure California continues to lead not only on AI innovation, but on responsible practices to help ensure that innovation is safe and secure," said Senator Wiener, SB 53's sponsor. AI company Anthropic has publicly endorsed SB 53, stating that it provides clearer expectations and establishes guardrails without imposing rigid mandates. Their support reinforces the perception that the bill strikes a nuanced balance between regulation and innovation.
The Stakes for California and AI Policy
California's push comes amid growing uncertainty at the federal level. PYMNTS recently reported that Colorado has delayed the implementation of its AI law until June 2026, providing businesses with more time to adjust. PYMNTS has also tracked proposals for AI regulatory sandboxes, a framework that allows firms to test new systems under limited oversight. California's AB 2013 takes effect January 1, 2026. That law requires generative AI developers to post summaries of their training data, including sources and collection timelines, creating a baseline of transparency alongside SB 53's frontier model rules. If Newsom signs SB 53, California would become the first U.S. jurisdiction to impose binding risk rules on AI developers.
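For readers who want to see how the two reporting windows above interact, here is a minimal, hypothetical Python sketch of the deadline arithmetic the article describes: 15 days by default, 24 hours when an imminent threat exists. The function and constant names are illustrative assumptions, not anything drawn from the statute or from any company's tooling.

```python
from datetime import datetime, timedelta

# Reporting windows as described in the article; the statute's exact
# wording governs in practice. These constants are assumptions for the sketch.
STANDARD_WINDOW = timedelta(days=15)
IMMINENT_THREAT_WINDOW = timedelta(hours=24)

def reporting_deadline(discovered_at: datetime, imminent_threat: bool) -> datetime:
    """Return the latest time an incident report could be filed with Cal OES."""
    window = IMMINENT_THREAT_WINDOW if imminent_threat else STANDARD_WINDOW
    return discovered_at + window

if __name__ == "__main__":
    found = datetime(2026, 3, 1, 9, 30)
    print(reporting_deadline(found, imminent_threat=False))  # 2026-03-16 09:30
    print(reporting_deadline(found, imminent_threat=True))   # 2026-03-02 09:30
```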
[29]
What Does California's New AI Law Require From Big Tech Outfits?
California Governor Gavin Newsom has signed Senate Bill (SB) 53 into law, making California the first US state to regulate frontier AI models: the advanced systems that demand vast computing resources to train. The law requires companies developing these models to follow new rules on transparency, accountability, and risk management. It builds on recommendations from California's AI Policy Working Group, which released a report earlier this year calling for enforceable safety standards. Importantly, the state's legislature noted that voluntary commitments from AI developers create gaps in oversight, and SB 53 responds by converting those commitments into binding legal obligations. The Act applies only to the most resourced companies at the frontier of AI development, not to smaller players. To define this scope, lawmakers classify "large developers" as companies that, until January 2027, have trained or begun training at least one foundation model using more than 10^26 floating-point operations (FLOPs) and that earned annual revenues exceeding $100 million in the previous calendar year. Notably, the law allows this definition to be updated over time to reflect new technologies and practices. For context, the FLOPs threshold refers to the raw computing power used to train a model. Moreover, the law defines foundation models as AI systems trained on broad datasets, designed for generality of output, and adaptable to a wide range of tasks. These models form the backbone of today's generative AI systems. In addition, it defines catastrophic risk as a foreseeable, material risk where a foundation model could cause death or serious injury to more than 50 people, or inflict at least $1 billion in damage to property or money through a single incident, scheme, or dangerous capability like cyberattacks or bioweapons development. Together, these provisions draw clear boundaries around who is covered for now, but the law also acknowledges that smaller or less resource-intensive models could pose catastrophic risks in the future. Lawmakers note that additional legislation may be required to address such risks as the technology evolves. Large developers must now operate under a detailed compliance framework. They are required to publish a safety and security protocol on their websites. This protocol must explain how the company tests for catastrophic risks, defines dangerous capabilities, and sets thresholds for risk. It must also describe mitigation strategies, cybersecurity measures to protect unreleased model weights, and response plans for critical safety incidents, including the ability to shut down deployed models under company control. For context, critical safety incidents include unauthorised access to AI model weights, the emergence of dangerous capabilities, harm from a catastrophic risk, or models subverting developer controls. Elsewhere, companies must also explain how they use third parties in assessments, whether evaluations are reproducible by outsiders, and how they decide when a model has been substantially modified enough to trigger new assessments. Before releasing or substantially modifying a foundation model, companies must publish transparency reports summarizing their risk assessments, dangerous capability thresholds, intended uses, and safeguards. If deployment proceeds despite identified risks, developers must explain the reasoning, the decision-making process, and the limitations of the assessments used. In addition, companies must report critical safety incidents to California's Office of Emergency Services within 15 days of discovery.
If an incident poses imminent risk of death or injury, they must notify public safety or law enforcement authorities within 24 hours. Finally, the law prohibits false or misleading statements about catastrophic risks, mitigation efforts, or compliance with published safety protocols. Starting in 2030, large developers will be subject to annual independent audits. These audits must assess whether companies have substantially complied with their safety and security protocols, and highlight any areas of vagueness that prevent clear assessment. Auditors must include experts in corporate compliance and in the technical safety of foundation models. Developers must retain audit reports for five years and provide high-level summaries to the Attorney General within 30 days of completion. Notably, auditors are barred from misrepresenting or omitting material facts, with penalties applying for any instance of false reporting. The Attorney General has exclusive authority to enforce SB 53. Penalties vary based on the severity and intent of the violation, escalating to as much as $10 million for repeated violations involving catastrophic risk. The law includes robust whistleblower protections for employees, contractors, freelancers, vendors, and even board members involved in AI development. Companies cannot retaliate against or silence individuals who disclose risks to the Attorney General, federal authorities, or internal authorities with investigative powers. Developers must provide anonymous internal reporting channels, give monthly updates to employees who file reports, and ensure that officers and directors are informed of disclosures, unless they themselves are implicated. Companies are also required to notify all employees of their rights under the law through workplace postings or annual written notices. Courts can award attorney's fees to whistleblowers and issue injunctions to stop retaliation. Notably, once a worker shows that protected activity contributed to adverse action, the burden shifts to the employer to prove otherwise by clear and convincing evidence. SB 53 also creates a consortium inside the Government Operations Agency to design CalCompute, a proposed state-backed public cloud computing cluster. The consortium must submit its framework to the Legislature by January 1, 2027, after which it will dissolve. CalCompute is intended to make high-performance computing accessible beyond the handful of tech giants that currently dominate the space. The framework must analyse costs, funding sources, governance structures, and workforce pathways, with a preference for housing the system at the University of California. Notably, the project will only move forward if the state budget allocates appropriate funds. To protect public safety, SB 53 limits disclosure of sensitive information. Reports of critical safety incidents, whistleblower filings, and audit summaries submitted to the Attorney General are exempt from the California Public Records Act (CPRA). For context, CPRA is a state law that ensures public access to governmental records. Instead, beginning in 2027, the Attorney General must publish anonymised, aggregated annual reports on incidents and audits. The Legislature justified this on the grounds that disclosure could compromise emergency response or national security. Despite its scope, SB 53 has notable gaps. For instance, enforcement relies heavily on companies self-reporting risks and publishing their own protocols until 2030.
Furthermore, the Act does not cover smaller firms that may still build harmful systems, and does not address bias, discrimination, or social harms from AI outputs. California dominates the global AI ecosystem. The state is home to 32 of the world's 50 leading AI companies, and in 2024 more than half of all global AI venture funding went to San Francisco Bay Area startups. Tech giants such as Google, Apple, and Nvidia, each valued at more than $3 trillion, operate from California and employ hundreds of thousands of workers. Despite this concentration of talent and capital, the US federal government has yet to pass comprehensive AI regulation. By creating binding obligations for companies at the frontier of AI development, California is positioning itself as a leader in both innovation and responsible governance. As Senator Scott Wiener, the bill's author, put it: "With a technology as transformative as AI, we have a responsibility to support that innovation while putting in place common-sense guardrails to understand and reduce risk." Notably, the law shifts the conversation from voluntary safety pledges to enforceable rules backed by penalties, setting a template that other states and even the US Congress may follow. Support for such regulation has also come from inside the AI industry. Last year, current and former employees of companies like OpenAI, Google DeepMind, Anthropic, Meta, and xAI signed a letter addressed to Governor Newsom, urging him to enact stronger AI guardrails. They argued that SB 1047, an earlier version of the bill, represented "a meaningful step forward" despite the uncertainty of regulating advanced technologies. The letter specifically highlighted the need to prevent powerful models from aiding in the development of chemical, biological, radiological, or nuclear weapons, and to mandate safeguards like pre-deployment safety testing and red-teaming. India can draw several lessons from California's approach. SB 53 shows how regulation can focus first on catastrophic risk and highly resourced developers, rather than over-regulating smaller startups. The use of compute thresholds like 10^26 FLOPs offers a clear and measurable way to define who falls under the law. India can adapt similar thresholds for its Digital India Act or other AI-related frameworks. The law's reporting requirements and whistleblower protections could also serve as a template for India, where workers often lack secure channels to raise concerns about harmful technologies. CalCompute shows how public compute infrastructure can include researchers and smaller companies in AI development, rather than leaving them out. At the same time, India may need to go further than California, as SB 53 does not address bias, misinformation, or algorithmic opacity, all of which are pressing issues in the Indian context.
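To make the applicability tests described above concrete, here is a small, hypothetical Python sketch of how the article's thresholds compose: the "large developer" definition (the 10^26-FLOP training threshold plus the revenue figure quoted in this article) and the catastrophic-risk definition (more than 50 deaths or serious injuries, or at least $1 billion in damage). All names and numbers are illustrative readings of the article, not the statute's text.

```python
from dataclasses import dataclass

# Thresholds as described in the article above; the statute's exact
# definitions (and any later updates) control in practice.
FLOP_THRESHOLD = 1e26                   # training compute for a covered foundation model
REVENUE_THRESHOLD = 100_000_000         # annual revenue figure quoted in this article
HARM_PERSON_THRESHOLD = 50              # deaths or serious injuries
HARM_DAMAGE_THRESHOLD = 1_000_000_000   # dollars of damage

@dataclass
class Developer:
    max_training_flops: float    # largest training run, in FLOPs
    annual_revenue_usd: float

def is_large_developer(dev: Developer) -> bool:
    """Coverage requires both the compute test and the revenue test."""
    return (dev.max_training_flops > FLOP_THRESHOLD
            and dev.annual_revenue_usd > REVENUE_THRESHOLD)

def is_catastrophic(deaths_or_serious_injuries: int, damage_usd: float) -> bool:
    """Either prong of the harm definition is enough."""
    return (deaths_or_serious_injuries > HARM_PERSON_THRESHOLD
            or damage_usd >= HARM_DAMAGE_THRESHOLD)

if __name__ == "__main__":
    lab = Developer(max_training_flops=3e26, annual_revenue_usd=2_500_000_000)
    print(is_large_developer(lab))    # True: exceeds both thresholds
    print(is_catastrophic(0, 1.2e9))  # True: the damage prong alone suffices
```

The sketch only shows how the tests combine: developer coverage requires both the compute and revenue prongs, while catastrophic risk requires only one of its two prongs.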
Governor Gavin Newsom signs SB 53, a groundbreaking AI safety bill requiring large AI companies to disclose safety protocols. The law aims to protect communities while fostering innovation in the AI industry.
On Monday, California Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act (SB 53) into law, marking a significant milestone in AI regulation [1][5]. This first-in-the-nation bill requires large AI companies to disclose their safety practices and report potential critical safety incidents to state authorities.
The new law applies to companies with annual revenues of at least $500 million and mandates several key provisions, including public disclosure of safety protocols, incident reporting to the state, whistleblower protections, and the creation of the CalCompute public computing consortium [1][5]. SB 53 represents a compromise between the tech industry's concerns and the need for AI regulation. It replaces the more stringent SB 1047, which Newsom vetoed last year after intense lobbying from tech companies [1][4].
The law has received mixed reactions from the AI industry. While some companies like Anthropic have endorsed the bill, others such as Meta and OpenAI lobbied against it [5]. The tech industry has broadly argued that state-level AI policies could create a "patchwork of regulation" hindering innovation.
California's new law comes amid a larger debate about AI regulation and its impact on innovation. Some tech leaders and venture capitalists have been pouring millions into super PACs to support pro-AI politicians and legislation [2]. As California houses 32 of the world's top 50 AI companies, this law is expected to have far-reaching effects beyond the state [1]. Other states, such as New York, are considering similar legislation [5]. However, there are concerns about potential federal preemption of state laws, with proposals like the SANDBOX Act aiming to allow AI companies to bypass certain regulations [2].
As the AI landscape continues to evolve rapidly, the implementation and impact of SB 53 will be closely watched by policymakers, tech companies, and AI safety advocates alike.
Summarized by Navi