Illinois lawmakers debate AI liability bills as OpenAI and Anthropic clash over regulations


Illinois is considering multiple AI regulation bills that could reshape how companies are held accountable for harm caused by their systems. Two competing legislative proposals have divided major AI developers, with OpenAI supporting liability limits while Anthropic pushes for stricter oversight. The debate centers on balancing innovation with user safety as AI capabilities rapidly advance.

Illinois Lawmakers Weigh Competing Legislative Proposals on AI Developer Liability

Illinois lawmakers are navigating a complex debate over AI regulation as they consider multiple bills that could fundamentally reshape how artificial intelligence companies are held accountable for harm. The Legislature is examining several measures, including the Artificial Intelligence Public Safety and Child Protection Transparency Act, which would require AI developers to publish child protection plans and face civil penalties for violations [1]. The urgency became clear during recent testimony when Rep. Daniel Didech pointed to several incidents where AI users died by suicide after communicating with chatbots, stressing the need for third-party regulation [1].

Source: CBS

OpenAI and Anthropic Back Opposing Bills on Catastrophic Harm

Two of the world's most influential AI companies have taken opposing positions on competing legislative proposals addressing AI liability for catastrophic harm. Senate Bill 3444 would shield developers from responsibility for massive harms, including deaths, serious injuries to 100 or more people, or at least $1 billion in property damage, if they did not act intentionally or recklessly and if they publicly post detailed safety and transparency plans [2]. OpenAI strongly supports this measure, stating that it advocates for legislation to improve transparency and risk reduction in the safety protocols of frontier AI companies, and hopes to establish a harmonized safety framework across states [2].

Anthropic, however, opposes the liability-limiting approach, arguing that "good transparency legislation needs to ensure public safety and accountability for the companies developing this powerful technology, not provide a get-out-of-jail-free card against all liability" [2]. Instead, Anthropic testified in support of Senate Bill 3261, which would require independent safety audits and mandate that developers report serious AI safety incidents to the Attorney General [2]. James Hartmann of Anthropic told lawmakers that companies building the most powerful systems have an obligation to do so safely [1].

Balancing Innovation Against User Safety and Accountability

The debate over guardrails and regulatory frameworks has exposed tensions between protecting user safety and fostering innovation. Industry groups warn that a patchwork of state-level rules could disproportionately harm startups while leaving Big Tech largely unaffected. Zack Kahn of American Innovators Network acknowledged that chatbots interacting with minors need meaningful protections, but cautioned that "a patchwork of state-by-state standards won't slow down Big Tech; however, it will kill the startups we're trying to out-innovate them" [1].

Source: Axios

Opponents of applying traditional product liability to AI argue that a framework designed for fixed, physical goods is ill-suited for dynamic digital services. Aden Hizkias of the Chamber of Progress wrote to lawmakers that "AI-enabled chatbots are dynamic digital services ... that can vary from interaction to interaction" [1]. Yet the potential for catastrophic harm remains real, as illustrated by Florida prosecutors investigating whether ChatGPT helped a Florida State University student accused of killing two people by answering questions about where to find the most students and what type of gun to use [2].

Rapid AI Evolution Demands Adaptive Regulatory Approach

Scott Wisor of Secure AI Project recommended giving the Attorney General the power to adapt the rules as capabilities change, noting that "we're on an exponential curve ... basically every 100 to 210 days, the capabilities of AI models doubles" [1]. Dr. David Utzke, an artificial intelligence design and cybersecurity expert, emphasized the need for stricter oversight, stating that "we really need to establish where the guardrails need to be - what should be available to the public, what should be available to specifically industry experts" [2]. Both proposals face a May 15 deadline to receive a vote from the full Illinois Senate [2], and the outcome could influence how other states approach transparency and accountability in AI development.
