4 Sources
[1]
New York Governor Kathy Hochul signs RAISE Act to regulate AI safety | TechCrunch
Governor Kathy Hochul has signed the RAISE Act, positioning New York as the second U.S. state to enact major AI safety legislation. State lawmakers passed the RAISE Act in June, but following lobbying from the tech industry, Hochul proposed changes to scale the bill back. The New York Times reports that Hochul ultimately agreed to sign the original bill, while lawmakers agreed to make her requested changes next year. The bill will require large AI developers to publish information about their safety protocols and report safety incidents to the state within 72 hours. It will also create a new office within the Department of Financial Services to monitor AI development. If companies fail to submit safety reports or make false statements, they can be fined up to $1 million ($3 million for subsequent violations). California Governor Gavin Newsom signed a similar safety bill in September, which Hochul referenced in her announcement. "This law builds on California's recently adopted framework, creating a unified benchmark among the country's leading tech states as the federal government lags behind, failing to implement common-sense regulations that protect the public," Hochul said. State Senator Andrew Gounardes, one of the bill's sponsors, posted, "Big Tech thought they could weasel their way into killing our bill. We shut them down and passed the strongest AI safety law in the country." Both OpenAI and Anthropic expressed support for New York's bill while also calling for federal legislation, with Anthropic's head of external affairs Sarah Heck telling the NYT, "The fact that two of the largest states in the country have now enacted AI transparency legislation signals the critical importance of safety and should inspire Congress to build on them." Not everyone in the tech industry has been so supportive.
In fact, a super PAC backed by Andreessen Horowitz and OpenAI President Greg Brockman is looking to challenge Assemblyman Alex Bores, who co-sponsored the bill with Gounardes. (Bores told journalists, "I appreciate how straightforward they're being about it.") This comes after President Donald Trump signed an executive order that directs federal agencies to challenge state AI laws. The order -- backed by Trump's AI czar David Sacks -- is the latest attempt by the Trump Administration to curtail states' ability to regulate AI, and will likely be challenged in court. We also discussed Trump's executive order, and the role that Sacks and a16z have played in opposing state AI regulation, on the latest episode of the Equity podcast.
[2]
Governor Hochul signs New York's AI safety act
New York governor Kathy Hochul signed legislation on Friday aimed at holding large AI developers accountable for the safety of their models. The RAISE Act establishes rules for greater transparency, requiring these companies to publish information about their safety protocols and report any incidents within 72 hours of their occurrence. It comes a few months after California adopted similar legislation. But the penalties aren't nearly as steep as they were when the bill first passed back in June. While that version included fines of up to $10 million for a company's first violation and up to $30 million for subsequent violations, according to Politico, Hochul's version sets the fines at up to $1 million for the first violation and $3 million for any violations after that. In addition to the new reporting rules, the RAISE Act creates a new oversight office dedicated to AI safety and transparency. This office will be part of the Department of Financial Services and will issue annual reports on its assessment of large AI developers. Hochul signed two other pieces of AI legislation earlier in December that focused on the use of the technology in the entertainment industry. At the same time, President Trump has been pushing to curb states' attempts at AI regulation, and signed an executive order this month calling for "a minimally burdensome national standard" instead.
[3]
New York State Just Put Itself on a Legal Collision Course with Trump's AI Policy
On Friday, New York Governor Kathy Hochul signed the Responsible AI Safety and Education (RAISE) Act, meant on one hand to establish an AI safety regime, and on the other to troll Silicon Valley Republicans like Marc Andreessen who have been trying to dictate tech policy during the second Trump Administration. This comes just days after President Trump issued an executive order that ostensibly blocks states from regulating AI. Under the new state law, AI companies with more than $500 million in annual revenue must draft, publish, and follow formalized sets of safety procedures aimed at preventing "critical harm," and will have to report safety issues within 72 hours or be hit with fines, which makes it stricter than California's SB 53, which gives companies 15 days to report safety issues. About a week earlier, on December 11, Trump's executive order, called "Ensuring a National Policy Framework for Artificial Intelligence," framed AI as a federal priority and outlined an "AI Litigation Task Force" at the Department of Justice. This task force will ostensibly have the job of challenging state AI laws determined by the attorney general to be in violation of the federal program on AI (which basically does nothing). Even if the executive order turns out to lack a strong legal foundation, tying state laws up in litigation is still a dreary prospect, and New York State has rushed headlong into that eventuality with this law. In an explainer for Axios published Friday, legal experts speaking to Maria Curi and Ashley Gold said that Trump's executive order relies on a strange reading of parts of the Constitution, such as the Dormant Commerce Clause, which is usually interpreted as preventing states from writing self-dealing laws that are unfair to other states, not laws that are simply meant to fill a legal vacuum left by the federal government.
[4]
N.Y. Gov. Kathy Hochul signs sweeping AI safety bill
Why it matters: New York and California are setting de facto safety rules for frontier AI companies in the U.S. as Congress struggles to settle on federal standards.
* It comes a week after President Trump signed an executive order aiming to override state AI laws.

Driving the news: After negotiations, the RAISE Act regulates frontier AI safety with incident reporting requirements for both big and small models.
* It also requires risk assessment plans and puts in place financial penalties for violations (up to $1 million for the first violation and up to $3 million for subsequent violations).
* AI companies must report safety incidents to the state within 72 hours of determining one occurred.
* It also "creates a new oversight office within the Department of Financial Services to ensure AI frontier model transparency," per a release from Hochul's office.

What they're saying: "This law builds on California's recently adopted framework, creating a unified benchmark among the country's leading tech states as the federal government lags behind, failing to implement common-sense regulations that protect the public," Hochul said in the release, calling New York a leader in AI regulation.
* "Today is a major victory in what will soon be a national fight to harness the best of AI's potential and protect Americans from the worst of its harms," said state Assemblymember and bill sponsor Alex Bores in a statement.

Flashback: Prior to Hochul signing the bill, tech and AI lobbyists were able to negotiate changes so it more closely resembled California's SB 53, arguing that state bills should be uniform for industry certainty.
* But the bill sponsors secured some changes to strengthen requirements around reporting critical safety incidents in the final version. Bores said in the release that negotiations this week among himself, Hochul and co-sponsor state Sen. Andrew Gounardes were successful.
Governor Kathy Hochul signed the RAISE Act, making New York the second U.S. state to enact major AI safety legislation. The law requires large AI developers to publish safety protocols and report incidents within 72 hours, with penalties up to $1 million for violations. The move sets up a potential legal clash with President Trump's recent executive order aimed at blocking state AI regulation.
Governor Kathy Hochul signed the RAISE Act on Friday, positioning New York as the second U.S. state to enact comprehensive AI safety legislation [1]. The law targets AI developers with more than $500 million in annual revenue, requiring them to draft, publish, and follow formalized safety protocols aimed at preventing critical harm [3]. Companies must report safety incidents to the state within 72 hours of determining one occurred, a stricter timeline than California's 15-day requirement [3][4]. State lawmakers passed the bill in June, but following lobbying from the tech industry, Hochul proposed changes to scale it back. The New York Times reports that Hochul ultimately agreed to sign the original bill, while lawmakers agreed to make her requested changes next year [1].
Source: Gizmodo
The penalties under the final version represent a significant reduction from the initial proposal. While the June version included fines of up to $10 million for a first violation and up to $30 million for subsequent violations, the signed version sets penalties at up to $1 million for the first violation and $3 million for any violations after that [2][4]. Companies that fail to submit safety reports or make false statements face these financial consequences. The RAISE Act also creates a new oversight office within the Department of Financial Services to monitor AI development and issue annual reports assessing large AI developers [1][2].

Hochul explicitly referenced California Governor Gavin Newsom's similar safety bill signed in September, positioning New York's law as part of a broader state-level movement. "This law builds on California's recently adopted framework, creating a unified benchmark among the country's leading tech states as the federal government lags behind, failing to implement common-sense regulations that protect the public," Hochul said [1][4]. Tech and AI lobbyists negotiated changes so the bill more closely resembled California's SB 53, arguing that state bills should be uniform for industry certainty [4]. However, bill sponsors secured changes to strengthen requirements around reporting critical safety incidents in the final version.
Source: Axios
Both OpenAI and Anthropic expressed support for New York's bill while calling for federal legislation. Anthropic's head of external affairs Sarah Heck told the New York Times, "The fact that two of the largest states in the country have now enacted AI transparency legislation signals the critical importance of safety and should inspire Congress to build on them" [1]. Yet opposition remains fierce in some tech circles. A super PAC backed by Andreessen Horowitz and OpenAI President Greg Brockman is looking to challenge Assemblyman Alex Bores, who co-sponsored the bill with state Senator Andrew Gounardes [1]. Gounardes posted, "Big Tech thought they could weasel their way into killing our bill. We shut them down and passed the strongest AI safety law in the country" [1].

The RAISE Act arrives just days after President Trump signed an executive order on December 11 titled "Ensuring a National Policy Framework for Artificial Intelligence," which directs federal agencies to challenge state AI laws [1][3]. The executive order, backed by Trump's AI czar David Sacks, outlines an "AI Litigation Task Force" at the Department of Justice to challenge state AI laws determined to be in violation of the federal program on AI regulation [3]. Legal experts speaking to Axios suggested that Trump's executive order relies on a questionable reading of parts of the Constitution, such as the Dormant Commerce Clause, which is typically interpreted as preventing states from writing self-dealing laws unfair to other states, not laws meant to fill a legal vacuum left by the federal government [3]. New York and California are setting de facto safety rules for frontier AI companies as Congress struggles to settle on federal standards, and the legal battle ahead will likely determine whether states retain authority to regulate AI within their borders [4].