2 Sources
[1]
Could Illinois AI rules hurt startups?
Why it matters: The Legislature is currently debating several bills focused on the technology, including one calling on companies to prioritize user safety and another that would hold them liable for any potential harm.

The latest: Lawmakers and tech industry experts testified this week on the Artificial Intelligence Public Safety and Child Protection Transparency Act, which would require AI developers to publish a child protection plan and make companies subject to civil penalties if they violate the law.

Context: Rep. Daniel Didech stressed the need for third-party regulation, pointing to several incidents in recent years where AI users died by suicide after communicating with chatbots.

What they're saying: "We are founded on a particular belief that AI may become one of the most consequential technologies in human history, and that the companies building the most powerful systems have an obligation to do so safely," James Hartmann of Anthropic told lawmakers.

* As tech inevitably outpaces legislation, Scott Wisor of Secure AI Project recommended giving the Attorney General the power to adapt the laws as necessary. "We're on an exponential curve ... basically every 100 to 210 days, the capabilities of AI models doubles," Wisor testified.

The other side: Industry groups warn that a patchwork of state regulations could hurt startups.

* "Chatbots that interact with minors need meaningful protections. We're not here to say don't regulate," Zack Kahn of American Innovators Network said. "We're here to say that a patchwork of state-by-state standards won't slow down Big Tech; however, it will kill the startups we're trying to out-innovate them."

The hearing also centered on a bill that would create consumer protections around chatbots similar to those for other products.

* Opponents argue that traditional product liability -- designed for fixed, physical goods -- is a poor fit.

* "AI-enabled chatbots are dynamic digital services ... that can vary from interaction to interaction," Aden Hizkias of the Chamber of Progress wrote to lawmakers.

State of play: Illinois does have AI laws on the books now, including a ban on AI in psychotherapy, except as administrative support for licensed therapists, and requirements for employers to inform applicants of any AI use during job interviews.

Meanwhile, on the national level, Democrats are at odds over how to talk about AI with constituents, Politico reports.

* Some in the party are focusing solely on the cost of data centers rather than other potential threats, like the ones Illinois is confronting with AI.

If you or someone you know may be considering suicide, call or text the National Suicide Prevention Lifeline at 988. Ayuda disponible en español.
[2]
Illinois lawmakers debating competing proposals on AI liability in catastrophes
Illinois lawmakers are trying to decide what should happen when artificial intelligence leads to serious destruction or even death, and two of the most influential AI companies in the world are backing opposing state bills trying to answer that question.

Prosecutors in Florida are investigating whether OpenAI's ChatGPT helped a Florida State University student accused of killing two people and wounding several others after opening fire on campus last year. Florida Attorney General James Uthmeier has said his team determined ChatGPT answered questions like where the accused shooter could find the most students, and what type of gun to use. "If this were a person on the other end of the screen, we would be charging them with murder," Uthmeier said.

Artificial intelligence design and cybersecurity expert Dr. David Utzke said it's not difficult to fathom how the use of AI could lead to these kinds of serious harms. "We've now built these models with such complexity and such refinement that it won't just tell you where the vulnerabilities are, it will actually build the workarounds to get you in," he said.

That's why Illinois lawmakers have started getting out ahead of the liability concerns. Under Senate Bill 3444, developers would not be held responsible for massive harms -- such as deaths, serious injuries to 100 or more people, or at least $1 billion in property damage -- if they did not act intentionally or recklessly and if they publicly post detailed safety and transparency plans.

The maker of ChatGPT said it supports that proposal. "OpenAI strongly supports and advocates for legislation and regulatory efforts to improve the transparency and risk reduction in safety protocols of frontier AI companies. That's why we have worked with states like California and New York to help establish a harmonized safety framework. In the absence of federal action, we will continue to work with states -- including Illinois -- to work towards a consistent safety framework, including with enforcement mechanisms that provide similar penalties as California and New York for noncompliance. We hope these state laws will inform a national framework that will help ensure the U.S. continues to lead," the company said in a statement.

But AI company Anthropic feels differently about the measure sponsored by Illinois state Sen. Bill Cunningham. "Good transparency legislation needs to ensure public safety and accountability for the companies developing this powerful technology, not provide a get-out-of-jail-free card against all liability. We know that Senator Cunningham cares deeply about AI safety and we look forward to working with him on changes that would instead pair transparency with real accountability for mitigating the most serious harms frontier AI systems could cause," the company said.

Instead of limiting liability, Anthropic testified in support of Illinois Senate Bill 3261, which would require artificial intelligence developers to undergo independent audits of their safety and child protection plans. They would also need to report serious AI safety incidents to the Illinois Attorney General. That bill was introduced in February.

Both proposals face a May 15 deadline to get a vote by the full Illinois Senate. Utzke said he thinks Illinois should be less focused on limiting liability and more focused on strengthening AI oversight.
"We really need to establish where the guardrails need to be - what should be available to the public, what should be available to specifically industry experts - and start to parse this thing out. That's why we have licenses for pilots," he said.
Illinois is considering multiple AI regulations that could reshape how companies are held accountable for harm. Two competing legislative proposals have divided major AI developers, with OpenAI supporting liability limits while Anthropic pushes for stricter oversight. The debate centers on balancing innovation with user safety as AI capabilities rapidly advance.
Illinois lawmakers are navigating a complex debate over AI regulations as they consider multiple bills that could fundamentally reshape how artificial intelligence companies are held accountable for harm. The Legislature is examining several measures, including the Artificial Intelligence Public Safety and Child Protection Transparency Act, which would require AI developers to publish child protection plans and face civil penalties for violations [1]. The urgency became clear during recent testimony when Rep. Daniel Didech pointed to several incidents where AI users died by suicide after communicating with chatbots, stressing the need for third-party regulation [1].
Two of the world's most influential AI companies have taken opposing positions on competing legislative proposals addressing AI liability for catastrophic harm. Senate Bill 3444 would shield developers from responsibility for massive harms -- including deaths, serious injuries to 100 or more people, or at least $1 billion in property damage -- if they did not act intentionally or recklessly and if they publicly post detailed safety and transparency plans [2]. OpenAI strongly supports this measure, stating that it advocates for legislation to improve transparency and risk reduction in safety protocols of frontier AI companies, and that it hopes to establish a harmonized safety framework across states [2].

Anthropic, however, opposes the liability-limiting approach, arguing that "good transparency legislation needs to ensure public safety and accountability for the companies developing this powerful technology, not provide a get-out-of-jail-free card against all liability" [2]. Instead, Anthropic testified in support of Senate Bill 3261, which would require independent safety audits and mandate that developers report serious AI safety incidents to the Attorney General [2]. James Hartmann of Anthropic told lawmakers that companies building the most powerful systems have an obligation to do so safely [1].

The debate over guardrails and regulatory frameworks has exposed tensions between protecting user safety and fostering innovation. Industry groups warn that a patchwork of state-level rules could disproportionately harm startups while leaving Big Tech largely unaffected. Zack Kahn of American Innovators Network acknowledged that chatbots interacting with minors need meaningful protections, but cautioned that "a patchwork of state-by-state standards won't slow down Big Tech; however, it will kill the startups we're trying to out-innovate them" [1].
Opponents of applying traditional product liability to AI argue that a framework designed for fixed, physical goods is ill-suited for dynamic digital services. Aden Hizkias of the Chamber of Progress wrote to lawmakers that "AI-enabled chatbots are dynamic digital services ... that can vary from interaction to interaction" [1]. Yet the potential for catastrophic harm remains real, as illustrated by Florida prosecutors investigating whether ChatGPT helped a Florida State University student accused of killing two people by answering questions about where to find the most students and what type of gun to use [2].
Scott Wisor of Secure AI Project recommended giving the Attorney General the power to adapt laws as necessary, noting that "we're on an exponential curve ... basically every 100 to 210 days, the capabilities of AI models doubles" [1]. Dr. David Utzke, an artificial intelligence design and cybersecurity expert, emphasized the need for stricter oversight, stating that "we really need to establish where the guardrails need to be - what should be available to the public, what should be available to specifically industry experts" [2]. Both proposals face a May 15 deadline to get a vote by the full Illinois Senate [2], and the outcome could influence how other states approach transparency and accountability in AI development.