3 Sources
[1]
New York Governor Kathy Hochul signs RAISE Act to regulate AI safety | TechCrunch
Governor Kathy Hochul has signed the RAISE Act, positioning New York as the second U.S. state to enact major AI safety legislation. State lawmakers passed the RAISE Act in June, but following lobbying from the tech industry, Hochul proposed changes to scale the bill back. The New York Times reports that Hochul ultimately agreed to sign the original bill, while lawmakers agreed to make her requested changes next year. The bill will require large AI developers to publish information about their safety protocols and report safety incidents to the state within 72 hours. It will also create a new office within the Department of Financial Services to monitor AI development. If companies fail to submit safety reports or make false statements, they can be fined up to $1 million ($3 million for subsequent violations). California Governor Gavin Newsom signed a similar safety bill in September, which Hochul referenced in her announcement. "This law builds on California's recently adopted framework, creating a unified benchmark among the country's leading tech states as the federal government lags behind, failing to implement common-sense regulations that protect the public," Hochul said. State Senator Andrew Gounardes, one of the bill's sponsors, posted, "Big Tech thought they could weasel their way into killing our bill. We shut them down and passed the strongest AI safety law in the country." Both OpenAI and Anthropic expressed support for New York's bill while also calling for federal legislation, with Anthropic's head of external affairs Sarah Heck telling the NYT, "The fact that two of the largest states in the country have now enacted AI transparency legislation signals the critical importance of safety and should inspire Congress to build on them." Not everyone in the tech industry has been so supportive.
In fact, a super PAC backed by Andreessen Horowitz and OpenAI President Greg Brockman is looking to challenge Assemblyman Alex Bores, who co-sponsored the bill with Gounardes. (Bores told journalists, "I appreciate how straightforward they're being about it.") This comes after President Donald Trump signed an executive order that directs federal agencies to challenge state AI laws. The order -- backed by Trump's AI czar David Sacks -- is the latest attempt by the Trump Administration to curtail states' ability to regulate AI, and will likely be challenged in court. We also discussed Trump's executive order, and the role that Sacks and a16z have played in opposing state AI regulation, on the latest episode of the Equity podcast.
[2]
Governor Hochul signs New York's AI safety act
New York Governor Kathy Hochul signed legislation on Friday aimed at holding large AI developers accountable for the safety of their models. The RAISE Act establishes rules for greater transparency, requiring these companies to publish information about their safety protocols and report any incidents within 72 hours of their occurrence. It comes a few months after California adopted similar legislation. But the penalties aren't nearly as steep as they were when the bill passed back in June. While that version included fines of up to $10 million for a company's first violation and up to $30 million for subsequent violations, according to Politico, Hochul's version sets the fines at up to $1 million for the first violation and $3 million for any violations after that. In addition to the new reporting rules, the RAISE Act creates a new oversight office dedicated to AI safety and transparency. This office will be part of the Department of Financial Services and will issue annual reports on its assessment of large AI developers. Hochul signed two other pieces of AI legislation earlier in December that focused on the use of the technology in the entertainment industry. At the same time, President Trump has been pushing to curb states' attempts at AI regulation, and signed an executive order this month calling for "a minimally burdensome national standard" instead.
[3]
N.Y. Gov. Kathy Hochul signs sweeping AI safety bill
Why it matters: New York and California are setting de facto safety rules for frontier AI companies in the U.S. as Congress struggles to settle on federal standards.
* It comes a week after President Trump signed an executive order aiming to override state AI laws.
Driving the news: After negotiations, the RAISE Act regulates frontier AI safety with incident reporting requirements for both big and small models.
* It also requires risk assessment plans and puts in place financial penalties for violations (up to $1 million for the first violation and up to $3 million for subsequent violations).
* AI companies must report safety incidents to the state within 72 hours of determining one occurred.
* It also "creates a new oversight office within the Department of Financial Services to ensure AI frontier model transparency," per a release from Hochul's office.
What they're saying: "This law builds on California's recently adopted framework, creating a unified benchmark among the country's leading tech states as the federal government lags behind, failing to implement common-sense regulations that protect the public," Hochul said in the release, calling New York a leader in AI regulation.
* "Today is a major victory in what will soon be a national fight to harness the best of AI's potential and protect Americans from the worst of its harms," said state Assemblymember and bill sponsor Alex Bores in a statement.
Flashback: Prior to Hochul signing the bill, tech and AI lobbyists were able to negotiate changes so it more closely resembled California's SB 53, arguing that state bills should be uniform for industry certainty.
* But the bill sponsors secured some changes to strengthen requirements around reporting critical safety incidents in the final version. Bores said in the release that negotiations this week among himself, Hochul and co-sponsor state Sen. Andrew Gounardes were successful.
Governor Kathy Hochul signed the RAISE Act, positioning New York as the second U.S. state to enact major AI safety legislation after California. The law requires large AI developers to publish safety protocols and report incidents within 72 hours, while creating a new oversight office. The move comes amid federal inaction and just a week after President Trump signed an executive order aimed at overriding state AI laws.

New York Governor Kathy Hochul signed the RAISE Act into law, establishing New York as the second U.S. state to implement comprehensive AI safety legislation [1]. The move positions the state alongside California in setting de facto safety standards for frontier AI companies as federal lawmakers continue to struggle with establishing national regulations [3]. State lawmakers initially passed the AI safety bill in June, but following intense lobbying from the tech industry, Hochul proposed scaling back certain provisions [1].

The RAISE Act mandates that large AI developers publish safety protocols and maintain transparency around their operations [2]. Companies must report safety incidents to the state within 72 hours of determining one occurred, ensuring rapid accountability when problems arise [1]. The legislation also requires risk assessment plans from developers working on frontier models, creating a framework for proactive safety measures rather than reactive responses [3].

The incident reporting requirements apply to both big and small models, casting a wide net across the industry [3]. This comprehensive approach aims to capture potential safety issues regardless of model size, though the focus remains on frontier AI systems that pose the greatest potential risks.

A dedicated AI safety oversight office will be established within the Department of Financial Services to monitor AI development and ensure compliance [1]. This office will issue annual reports assessing large AI developers and their adherence to safety standards [2]. The creation of this specialized unit signals New York's commitment to sustained oversight rather than one-time regulatory action.

Companies that fail to submit safety reports or make false statements face financial penalties of up to $1 million for first violations and up to $3 million for subsequent violations [1]. These penalties represent a significant reduction from the original bill passed in June, which included fines of up to $10 million for first violations and up to $30 million for subsequent ones [2]. Tech and AI lobbyists successfully negotiated these changes, arguing that state bills should be uniform for industry certainty [3].

The path to signing involved complex negotiations between Hochul and bill sponsors Andrew Gounardes and Alex Bores [3]. According to The New York Times, Hochul ultimately agreed to sign the original bill while lawmakers agreed to make her requested changes next year [1]. State Senator Gounardes posted that "Big Tech thought they could weasel their way into killing our bill. We shut them down and passed the strongest AI safety law in the country" [1].

Not everyone in the tech industry has supported the legislation. A super PAC backed by Andreessen Horowitz and OpenAI President Greg Brockman is looking to challenge Assemblyman Bores, who co-sponsored the bill with Gounardes [1]. This political targeting demonstrates the high stakes surrounding AI regulation and the willingness of some industry players to challenge lawmakers directly.

Hochul explicitly referenced the similar safety bill California Governor Gavin Newsom signed in September, stating that "this law builds on California's recently adopted framework, creating a unified benchmark among the country's leading tech states as the federal government lags behind, failing to implement common-sense regulations that protect the public" [1]. Both OpenAI and Anthropic expressed support for New York's bill while calling for federal legislation, with Anthropic's head of external affairs Sarah Heck noting that "the fact that two of the largest states in the country have now enacted AI transparency legislation signals the critical importance of safety and should inspire Congress to build on them" [1].

The signing comes just one week after President Donald Trump signed an executive order directing federal agencies to challenge state AI laws [1]. The executive order, backed by Trump's AI czar David Sacks, calls for "a minimally burdensome national standard" and represents the latest attempt by the Trump Administration to curtail states' ability to regulate AI [2]. Legal challenges to this federal intervention appear likely [1].

As New York and California establish increasingly aligned standards, other states may follow suit, potentially creating a de facto national framework even without federal action. The coordination between these two major tech states could push AI developers to adopt these safety practices nationwide, making the practical impact of state-level AI regulation far broader than their geographic boundaries might suggest.
Summarized by Navi