2 Sources
[1]
N.Y. Gov. Kathy Hochul proposes major changes to AI bill
Why it matters: States are increasingly the venue of intense tech lobbying as they get more active on regulating AI, at the same time that President Trump pushes ahead with plans to stop them with an executive order.

Driving the news: Hochul this week "redlined" the RAISE Act, an AI safety bill from NY state Sen. Andrew Gounardes and Assemblymember Alex Bores that was written to require incident reporting and safety plans for powerful AI models.
* According to sources who have reviewed the document, the original RAISE Act has been crossed out, with replacement text that nearly verbatim resembles a California AI safety bill that Gov. Gavin Newsom (D) signed this year, SB53.

The big picture: The debate has received national attention as pro-AI groups go after Bores, who is pushing for Hochul to sign the bill and is running for former Rep. Jerry Nadler's (D-N.Y.) seat in Congress.
* This week, pro-AI PAC Leading the Future started running ads going after Bores in his district, NY-12, saying the RAISE Act would "fail to keep people safe" and "crush jobs."
* AI companies and startups have been urging major changes to the RAISE Act, suggesting it more closely mirror SB53.

California's AI law, which Hochul is looking to adopt, requires large AI developers to make public disclosures about safety protocols and report safety incidents.
* It also creates whistleblower protections and expands cloud computing access for smaller developers and researchers.

What's inside: Per a person familiar with the negotiations, the RAISE Act no longer includes:
* A requirement for detailed safety standards from AI companies that reasonably reduce the risk of harm.
* A ban on releasing models that pose an unreasonable risk of harm.
* A requirement to report security incidents, along with requirements that would apply to future models from foreign AI developers like DeepSeek.

The bill also has a weakened catastrophic risk disclosure component, the person said.
* A spokesperson for Hochul didn't immediately respond to a request for comment.
* The news was first reported in Transformer.

What's next: Hochul has ten days (not counting Sunday) from the time she is delivered bills to sign, veto or agree to "chapter amendments" with bill sponsors.
[2]
Gov. Hochul should kill Albany's toxic AI bill -- and let Congress...
Even though Congress has punted (for now) on any national rules to regulate Artificial Intelligence development, Gov. Kathy Hochul should still veto the bill now on her desk for New York-only regulation. America is absolutely guaranteed to lose the race with China to set the global standards for AI if US developers have to cope with 50 different sets of contradictory regulation -- and the nation would pay a huge long-term economic price for that loss.

Hochul herself flagged that point months ago on Bloomberg TV: "People prefer to have a federal regulation," as "it's hard when one state has a set of rules, another state does, another state. I don't think that's a model for inspiring innovation."

Beyond that, the bill passed by New York legislators, the Responsible Artificial Intelligence Safety and Education (RAISE) Act, contains some terrible provisions designed to boost trial lawyers, not cutting-edge tech. US companies want to make their designs open-source, allowing others to build on their tech so American software can become the global standard. But the ambulance chasers want to be able to sue one tech company for sky-high damages over how another programmer uses (abuses) such open-source code; that's tantamount to banning US open-source work -- even though US lawyers will have no ability to suck the blood out of China's open-source innovators.

Similar innovation-crushing provisions are already part of the Transparency in Frontier AI Act that California passed last month; having New York second the trend is beyond ominous. At the very least, it's going to push cutting-edge tech companies to states like Texas and Florida, whose laws are likely to be far more innovation-friendly: On top of New York's high taxes and energy costs, lawsuit-friendly laws will make the Empire State a foolish place for tech to set up shop.

Again, Congress really needs to act, with lawmakers meeting in the middle to set clear, practical AI guidelines. Among the musts for any national regulation, incidentally: Copyright must be protected, so that AI companies can't mine publishers' content without compensating them. That could put countless entertainment, news and media outfits, including us, out of business fast.

But American tech companies can't be strangled, either, as the New York bill would do: It even tries to set international regulatory standards -- anyone who wants to do business in New York will have to abide by them: Another way to send more companies running full-speed from the Empire State. China's lack of any true rule of law (especially in protecting intellectual property) constitutes a real handicap in the AI race, but imposing actively bad laws in America could prove worse.

Above all else, most folks who win New York state legislative races are terrible candidates when it comes to grasping the issues involved in cutting-edge tech; how can they possibly craft sensible policy that protects consumers while allowing innovation? Veto the RAISE Act, governor, and stick to your guns: Shaping the regulatory future of AI is a job for national lawmakers.
New York Governor Kathy Hochul has significantly weakened the RAISE Act, an AI safety bill that would have required incident reporting and safety plans for powerful AI models. The redlined version now closely resembles California's lighter-touch SB53, removing key provisions including bans on risky model releases and detailed safety standards. The move comes amid intense lobbying from AI companies and pro-AI groups targeting bill sponsor Alex Bores.
New York Governor Kathy Hochul has delivered a significant blow to the state's AI safety bill this week by proposing sweeping changes that would dramatically weaken its oversight provisions. According to sources who reviewed the redlined document, the original RAISE Act has been crossed out, with replacement text that nearly verbatim resembles SB53, a California AI bill that Governor Gavin Newsom signed earlier this year [1]. The Responsible Artificial Intelligence Safety and Education Act, initially authored by state Senator Andrew Gounardes and Assemblymember Alex Bores, was written to require incident reporting and safety plans for powerful AI models developed in New York [1].
The state's AI safety bill no longer includes several critical components that made it one of the more stringent proposals for state-level AI regulation in the country. Per a person familiar with the negotiations, the revised version has removed a requirement for detailed AI safety standards from AI companies that would reasonably reduce the risk of harm, a ban on releasing AI models that pose an unreasonable risk of harm, and requirements to report security incidents [1]. The bill also features a weakened catastrophic risk disclosure component, and provisions that would have applied to future models from foreign AI developers like DeepSeek have been eliminated.

The dramatic shift in the AI bill comes amid intense tech lobbying pressure on state lawmakers. This week, pro-AI PAC Leading the Future launched advertisements targeting Bores in his district, NY-12, claiming the RAISE Act would "fail to keep people safe" and "crush jobs" [1]. Bores, who is running for former Representative Jerry Nadler's congressional seat, has been pushing for Hochul to sign the original bill, making him a focal point in the national debate over how states should regulate artificial intelligence [1]. AI companies and startups have been urging major changes to the legislation, specifically suggesting it more closely mirror the California AI bill that Hochul now appears ready to adopt [1].
The California model that Hochul is looking to replicate requires large AI developers to make public disclosures about safety protocols and report safety incidents, while also creating whistleblower protections and expanding cloud computing access for smaller developers and researchers [1]. This lighter-touch approach has drawn support from the tech industry but criticism from those who believe stronger safeguards are necessary as AI models grow more powerful.

The controversy surrounding the RAISE Act reflects broader tensions about whether states or Congress should set the rules for AI development. Hochul herself previously flagged concerns about a patchwork of state regulations, stating on Bloomberg TV months ago that "people prefer to have a federal regulation" because "it's hard when one state has a set of rules, another state does, another state. I don't think that's a model for inspiring innovation" [2]. Critics argue that if AI developers must cope with 50 different sets of contradictory regulations, America risks losing the race with China to set global standards for artificial intelligence, potentially paying a huge long-term economic price [2].

Opponents of the original RAISE Act have raised particular concerns about provisions they say would boost trial lawyers at the expense of American innovation. Critics point to language that could allow lawsuits against one tech company over how another programmer uses open-source code, effectively discouraging US companies from making their designs open-source even as this practice helps American software become the global standard [2]. Some observers warn that lawsuit-friendly laws combined with New York's high taxes and energy costs could push cutting-edge tech companies to states like Texas and Florida [2].
Hochul has ten days, not counting Sunday, from the time she receives the bill to sign, veto, or agree to "chapter amendments" with bill sponsors [1]. The decision carries significant implications beyond New York, as states increasingly become venues for intense debates over AI governance while President Trump pushes ahead with plans to stop state-level regulation through an executive order [1]. The tension between protecting intellectual property rights (including copyright protections so AI companies cannot mine publishers' content without compensation) and avoiding regulations that strangle American tech companies remains unresolved [2]. As the debate continues, observers are watching whether Congress will step in with national AI regulation that balances consumer protection with the need to maintain American competitiveness in the global AI race.

Summarized by Navi