New York Governor Kathy Hochul signs RAISE Act, making state second to enact major AI safety law

Reviewed by Nidhi Govil

Governor Kathy Hochul signed the RAISE Act, positioning New York as the second U.S. state to enact major AI safety legislation after California. The law requires large AI developers to publish safety protocols and report incidents within 72 hours, while creating a new oversight office. The move comes amid federal inaction and just a week after President Trump signed an executive order aimed at overriding state AI laws.

New York Enacts Sweeping AI Safety Legislation

New York Governor Kathy Hochul signed the RAISE Act into law, establishing New York as the second U.S. state to implement comprehensive AI safety legislation [1]. The move positions the state alongside California in setting de facto safety standards for frontier AI companies as federal lawmakers continue to struggle with establishing national regulations [3]. State lawmakers initially passed the AI safety bill in June, but following intense lobbying from the tech industry, Hochul proposed scaling back certain provisions [1].

Key Requirements for AI Developers

The RAISE Act mandates that large AI developers publish safety protocols and maintain transparency around their operations [2]. Companies must report safety incidents to the state within 72 hours of determining one occurred, ensuring rapid accountability when problems arise [1]. The legislation also requires risk assessment plans from developers working on frontier AI models, creating a framework for proactive safety measures rather than reactive responses [3].

The incident reporting requirements apply to both big and small models, casting a wide net across the industry [3]. This comprehensive approach aims to capture potential safety issues regardless of model size, though the focus remains on frontier AI systems that pose the greatest potential risks.

New Oversight Office and Financial Penalties

A dedicated AI safety oversight office will be established within the Department of Financial Services to monitor AI development and ensure compliance [1]. This office will issue annual reports assessing large AI developers and their adherence to safety standards [2]. The creation of this specialized unit signals New York's commitment to sustained oversight rather than one-time regulatory action.

Companies that fail to submit safety reports or make false statements face financial penalties of up to $1 million for first violations and up to $3 million for subsequent violations [1]. However, these penalties represent a significant reduction from the original bill passed in June, which included fines of up to $10 million for first violations and up to $30 million for subsequent ones [2]. Tech and AI lobbyists successfully negotiated these changes, arguing that state-level bills should be uniform to give the industry regulatory certainty [3].

Political Maneuvering and Industry Pushback

The path to signing involved complex negotiations between Hochul and bill sponsors Andrew Gounardes and Alex Bores [3]. According to The New York Times, Hochul ultimately agreed to sign the original bill while lawmakers agreed to make her requested changes next year [1]. State Senator Gounardes posted that "Big Tech thought they could weasel their way into killing our bill. We shut them down and passed the strongest AI safety law in the country" [1].

Not everyone in the tech industry has supported the legislation. A super PAC backed by Andreessen Horowitz and OpenAI President Greg Brockman is looking to challenge Assemblyman Bores, who co-sponsored the bill with Gounardes [1]. This political targeting demonstrates the high stakes surrounding AI regulation and the willingness of some industry players to challenge lawmakers directly.

Building on California's Framework

Hochul explicitly referenced California Governor Gavin Newsom's similar safety bill signed in September, stating that "this law builds on California's recently adopted framework, creating a unified benchmark among the country's leading tech states as the federal government lags behind, failing to implement common-sense regulations that protect the public" [1]. Both OpenAI and Anthropic expressed support for New York's bill while calling for federal legislation, with Anthropic's head of external affairs Sarah Heck noting that "the fact that two of the largest states in the country have now enacted AI transparency legislation signals the critical importance of safety and should inspire Congress to build on them" [1].

Federal Conflict and Future Implications

The signing comes just one week after President Donald Trump signed an executive order directing federal agencies to challenge state AI laws [1]. The executive order, backed by Trump's AI czar David Sacks, aims to establish "a minimally burdensome national standard" and represents the latest attempt by the Trump Administration to curtail states' ability to regulate AI [2]. Legal challenges to this federal intervention appear likely [1].

As New York and California establish increasingly aligned standards, other states may follow suit, potentially creating a de facto national framework even without federal action. The coordination between these two major tech states could force AI developers to adopt these safety practices nationwide, making the practical impact of state-level AI regulation far broader than the two states' geographic boundaries might suggest.
