8 Sources
[1]
Anthropic Opposes the Extreme AI Liability Bill That OpenAI Backed
Anthropic has come out against a proposed Illinois law backed by OpenAI that would shield AI labs from liability if their systems are used to cause large-scale harm, like mass casualties or more than $1 billion in property damage. The fight over the state bill, SB 3444, is drawing new battle lines between Anthropic and OpenAI over how AI technologies should be regulated. While AI policy experts say that the legislation has only a remote chance of becoming law, it has nonetheless exposed political divisions between two leading US AI labs that could become increasingly important as the rival companies ramp up their lobbying activity across the country.

Behind the scenes, Anthropic has been lobbying state Senator Bill Cunningham, SB 3444's sponsor, and other Illinois lawmakers to either make major changes to the bill or kill it as it stands, according to people familiar with the matter. In an email to WIRED, an Anthropic spokesperson confirmed the company's opposition to SB 3444 and said it has held promising conversations with Cunningham about using the bill as a starting point for future AI legislation.

"We are opposed to this bill. Good transparency legislation needs to ensure public safety and accountability for the companies developing this powerful technology, not provide a get-out-of-jail-free card against all liability," Cesar Fernandez, Anthropic's head of US state and local government relations, said in a statement. "We know that Senator Cunningham cares deeply about AI safety and we look forward to working with him on changes that would instead pair transparency with real accountability for mitigating the most serious harms frontier AI systems could cause."

Representatives for Cunningham and Illinois Governor JB Pritzker did not respond to WIRED's request for comment ahead of publication.

The crux of OpenAI and Anthropic's disagreement over SB 3444 comes down to who should be liable in the event of an AI-enabled disaster -- a potential nightmare scenario that US lawmakers have only recently begun to confront. If SB 3444 were passed, an AI lab would not be responsible if a bad actor used its AI model to, for example, create a bioweapon that kills hundreds of people, so long as the lab drafted its own safety framework and published it on its website.

OpenAI has argued that SB 3444 reduces the risk of serious harm from frontier AI systems while "still allowing this technology to get into the hands of the people and businesses -- small and big -- of Illinois." The ChatGPT maker says it has worked with states like New York and California to create what it calls a "harmonized" approach to regulating AI. "In the absence of federal action, we will continue to work with states -- including Illinois -- to work towards a consistent safety framework," OpenAI spokesperson Liz Bourgeois said in a statement. "We hope these state laws will inform a national framework that will help ensure the US continues to lead."

Anthropic, on the other hand, is arguing that companies developing frontier AI models should be held at least partially responsible if their technology is used for widespread societal harm. Some experts say the bill would dismantle existing legal safeguards meant to deter companies from behaving badly.
"Liability already exists under common law and provides a powerful incentive for AI companies to take reasonable steps to prevent foreseeable risks from their AI systems," says Thomas Woodside, cofounder and senior policy analyst at the Secure AI Project, a nonprofit that has helped develop and advocate for AI safety laws in California and New York. "SB 3444 would take the extreme step of nearly eliminating liability for severe harms. But it's a bad idea to weaken liability, which in most states is the most significant form of legal accountability for AI companies that's already in place."
[2]
OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters
OpenAI is throwing its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as the death or serious injury of 100 or more people or at least $1 billion in property damage. The effort seems to mark a shift in OpenAI's legislative strategy. Until now, OpenAI has largely played defense, opposing bills that could have made AI labs liable for their technology's harms. Several AI policy experts tell WIRED that SB 3444 -- which could set a new standard for the industry -- is a more extreme measure than bills OpenAI has supported in the past.

The bill would shield frontier AI developers from liability for "critical harms" caused by their frontier models as long as they did not intentionally or recklessly cause such an incident and have published safety, security, and transparency reports on their website. It defines a frontier model as any AI model trained using more than $100 million in computational costs, a bar that would likely apply to America's largest AI labs, including OpenAI, Google, xAI, Anthropic, and Meta.

"We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses -- small and big -- of Illinois," said OpenAI spokesperson Jamie Radice in an emailed statement. "They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards."

Under its definition of critical harms, the bill lists a few common areas of concern for the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. If an AI model engages in conduct on its own that, if committed by a human, would constitute a criminal offense and leads to those extreme outcomes, that would also be a critical harm. If an AI model were to commit any of these actions under SB 3444, the AI lab behind the model could not be held liable, so long as the harm wasn't intentional and the lab published its reports.

Federal and state legislatures in the US have yet to pass any laws specifically determining whether AI model developers, like OpenAI, could be liable for these types of harm caused by their technology. But as AI labs continue to release more powerful AI models that raise novel safety and cybersecurity challenges, such as Anthropic's Claude Mythos, these questions feel increasingly pressing.

In her testimony supporting SB 3444, a member of OpenAI's Global Affairs team, Caitlin Niedermeyer, also argued in favor of a federal framework for AI regulation. Niedermeyer struck a message consistent with the Trump administration's crackdown on state AI safety laws, claiming it's important to avoid "a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety." This is also consistent with the broader view of Silicon Valley in recent years, which has generally argued that it's paramount for AI legislation not to hamper America's position in the global AI race. While SB 3444 is itself a state-level safety law, Niedermeyer argued that such laws can be effective if they "reinforce a path toward harmonization with federal systems."

"At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation," Niedermeyer said.
Scott Wisor, policy director for the Secure AI Project, tells WIRED he believes this bill has a slim chance of passing, given Illinois' reputation for aggressively regulating technology. "We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90 percent of people oppose it. There's no reason existing AI companies should be facing reduced liability," Wisor says.
[3]
The OpenAI-Anthropic Cold War Comes to Illinois
Despite its best efforts, the Trump administration has been unable to implement a moratorium preventing states from passing laws regulating AI companies. Thus far, most states have used their authority to create guardrails that AI firms must comply with. But in Illinois, OpenAI has thrown its weight (and lobbying budget) behind a bill that would grant it legal protection from large-scale harm. Unfortunately for it, another frontier AI lab has put its thumb on the other side of the scale. According to a report from Wired, Anthropic has also decided to get involved in local politics and is lobbying against the bill that OpenAI has been pushing for.

The bill at the center of the power struggle between AI giants is Senate Bill 3444, the Artificial Intelligence Safety Act. The legislation was authored by Democratic Senator Bill Cunningham, and while the incredibly generic name would make one think that the goal is to establish safety standards for AI, the law would actually offer safety to AI companies that might face litigation. Effectively, it would offer frontier AI companies a legal shield preventing them from being held responsible for large-scale harms caused by their AI models, including the death or serious injury of 100 or more people or at least $1 billion in property damage.

OpenAI has been trying to get out in front of laws that would create any additional burden on AI companies -- a posture that has almost certainly been hastened by the fact that the company has been subject to several wrongful death lawsuits from families who lost a family member to suicide following conversations with ChatGPT. The company also publicly backed a piece of AI safety legislation in California that, while it added transparency requirements for frontier model makers, did not impose any new liability on the companies. The legislation in Illinois goes a step further: rather than merely declining to establish liability, it actively shields companies from it.

Per Wired, Anthropic has taken issue with that approach and has been working behind the scenes to either alter or kill the bill entirely. "We are opposed to this bill. Good transparency legislation needs to ensure public safety and accountability for the companies developing this powerful technology, not provide a get-out-of-jail-free card against all liability," Cesar Fernandez, Anthropic's head of US state and local government relations, told the publication.

Anthropic has been much more aggressive than OpenAI in advocating for stricter safety standards for AI companies. The two companies were previously on opposite ends of an AI safety bill in California (OpenAI eventually offered its support for that law, but only after it was pretty clear it was going to pass). Anthropic is backing a competing AI safety bill in Illinois, SB 3261, that would, among other things, require AI firms to create public safety and child protection plans that could be audited to determine their effectiveness. While some of Anthropic's pro-safety positioning comes down to marketing, the idea that AI companies should at least be subject to some scrutiny if someone were to, say, use an AI model to develop a chemical weapon does not exactly seem like a radical act of self-flagellation.
It seems like a pretty reasonable expectation of accountability, and it is particularly wild that a company like OpenAI would express concerns over the existential threats posed by the development of its technology while also pushing to not be liable should any of those doomsday outcomes come to fruition. We've moved beyond the "At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus" stage of AI to the "we are issuing our support for the No One Is Responsible For The Harms Of The Torment Nexus Act" stage.
[4]
Illinois is OpenAI and Anthropic's latest battleground as state tries to assess liability for catastrophes caused by AI
It's the latest round in the companies' ongoing feud over AI safety and regulation, as their CEOs have traded internal and public barbs over each other's approach. OpenAI is backing SB 3444, under which frontier AI developers would not be liable for causing death or serious injury to 100 or more people or causing more than $1 billion in property damage. This protection includes cases when AI causes or materially enables the creation or use of chemical, biological, radiological, or nuclear weapons. This week, Anthropic said it opposes the bill, WIRED first reported.

"We are opposed to this bill. Good transparency legislation needs to ensure public safety and accountability for the companies developing this powerful technology, not provide a get-out-of-jail-free card against all liability," Cesar Fernandez, head of U.S. state and local government relations at Anthropic, said in a statement to Fortune.

Anthropic is instead supporting a separate bill, SB 3261, which would require AI developers to publish a public safety and child protection plan on their website. The bill also creates an incident reporting system to inform legislators and the public of "catastrophic risk," or an incident that could result in the death or serious injury of 50 or more people caused by a frontier developer's development, storage, use, or deployment of a frontier model. The bill also covers children's safety, an aspect missing from the OpenAI-backed bill. Under SB 3261, AI developers would be held liable if their model causes a child severe emotional distress, death, or bodily injury, including self-harm.

A 'very low' bar

Experts told Fortune that SB 3444 is unlikely to pass, as it takes a markedly weak approach to corporate liability in the case of catastrophe and Illinois has been a leader on AI regulation. Last year, the state banned AI therapy while allowing its use in administrative and support services for licensed professionals. SB 3444 requires companies to have a public AI safety plan, but there is no measure for enforcement. If developers did not "intentionally or recklessly" cause the incident, they would be protected from liability. Intentional or reckless is not a common legal standard of care for companies engaging in highly dangerous activities, said Anat Lior, an assistant professor of law at Drexel University, who is an expert on AI liability and governance.

"Typically, the state of mind, or the fault associated with the harm, does not matter," she explained. "They are setting the bar very low here. Being able to prove that you did something intentionally that involves AI is going to be very hard."

Touro University law professor Gabriel Weil, who has collaborated with lawmakers in New York and Rhode Island on bills that would put greater liability on AI developers, said the OpenAI-backed bill's approach is "pretty indefensible." "That seems like a very weak requirement, and in exchange you get near total protection from liability, from these extreme events," Weil told Fortune. "I think that's the opposite direction that we should be moving in."

An OpenAI spokesperson told WIRED that the company supports SB 3444's approach because it reduces "the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses."
An OpenAI spokesperson told Fortune that the company strongly supports efforts to improve transparency and reduce risk in AI safety protocols, citing its collaboration with lawmakers in California and New York to pass safety frameworks and non-compliance penalties. The company will continue to work with states in the absence of federal legislation. "We hope these state laws will inform a national framework that will help ensure the U.S. continues to lead," the spokesperson wrote.
[5]
OpenAI Backing Law That Protects It When AI Causes Mass Deaths and Other Mayhem
On Thursday, Florida's attorney general James Uthmeier announced his office was investigating OpenAI over a deadly school shooting last year that victims claim was at least partially inspired by conversations with ChatGPT. The shooting, which took place at Florida State University almost exactly a year ago, resulted in the deaths of two students and injuries to seven others. "AI should advance mankind, not destroy it," Uthmeier said in a statement. "We're demanding answers on OpenAI's activities that have hurt kids, endangered Americans, and facilitated the recent FSU mass shooting."

As the chatbot continues to be embroiled in controversy -- with lawsuits accusing its maker of letting the tool play a role in a wave of suicides and murders amid reports of "AI psychosis" -- OpenAI is actively seeking to absolve itself of legal responsibility. As Wired reports, the company is backing a bill in Illinois that would shield companies from liability in cases where AI causes "critical harms," including the death or injury of 100 or more people, or over $1 billion in property damage. Experts are warning that the bill, dubbed SB 3444, could set a national standard for the industry if it were to pass, letting AI companies off the hook if they're involved in a future disaster. It's easy to see the appeal of such a regulatory approach for OpenAI.

"We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses -- small and big -- of Illinois," spokesperson Jamie Radice told Wired in a statement. "They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards," she added.

Apart from mass death, injury, or property damage, the bill would also shield companies from liability if bad actors were to abuse AI tools to create chemical or even nuclear weapons, a terrifying possibility tech leaders have warned about for years now. It's a particularly relevant topic following the release of Anthropic's latest and most powerful AI model, dubbed Claude Mythos, which it claims poses "unprecedented cybersecurity risks." The firm also warned that the model had already escaped its sandbox confinement, only to access the internet and send an "unexpected email" to a developer while they were "eating a sandwich in a park."

OpenAI's push to support the bill highlights the industry's unusual stance toward AI regulation. For years now, Silicon Valley giants have said that they welcome AI regulation, while simultaneously pushing for a lenient legal framework that they claim won't risk the United States falling behind in the ongoing AI race. "At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation," OpenAI Global Affairs team member Caitlin Niedermeyer said during her testimony in support of SB 3444, as quoted by Wired.

But whether the proposed legislation has any chance of passing is dubious at best. As Secure AI Project policy director Scott Wisor told the publication, polling showed overwhelming opposition to laws that would exempt AI companies from liability. "There's no reason existing AI companies should be facing reduced liability," he said.
Given the litany of lawsuits OpenAI faces over allegations that ChatGPT has contributed to harms including suicide and murder, the subject will likely continue to be hotly debated by lawmakers. Yet for now, federal AI legislation looks as distant as ever, given the Trump administration's continued siding with industry players, leaving it up to individual states to protect their citizens from AI threats.
[6]
OpenAI backs Illinois bill to shield AI firms from harm lawsuits
An Illinois state bill that limits when AI developers can be sued over catastrophic harm has gained a notable backer: OpenAI, according to Wired. Under the measure, liability protection applies only to companies that neither intentionally nor recklessly caused the harm in question and that have made safety and transparency reports publicly available.

SB 3444, known as the Artificial Intelligence Safety Act, defines "critical harms" as events such as the death or serious injury of 100 or more people, at least $1 billion in property damage, or a bad actor using AI to develop a chemical, biological, radiological, or nuclear weapon. Coverage under the bill is tied to a model's training expense: any system built on more than $100 million in compute qualifies as a frontier model, a bar that Wired reports would rope in the country's biggest AI developers, among them OpenAI, Google $GOOGL, Anthropic, xAI, and Meta $META.

"We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses -- small and big -- of Illinois," OpenAI spokesperson Jamie Radice said in a statement to Wired. In testimony supporting the bill, OpenAI's Caitlin Niedermeyer argued against a "patchwork of inconsistent state requirements" and called for a federal framework instead. The bill itself would cease to apply if Congress enacts overlapping federal rules.

AI companies have poured significant resources into shaping AI policy at both the state and federal levels. OpenAI, Meta, Alphabet, and Microsoft $MSFT collectively spent $50 million on federal lobbying in the first nine months of 2025, according to Issue One, a nonpartisan group that tracks money in politics. OpenAI has said it will open its first Washington, D.C., office at the start of 2026.

No federal law has yet resolved who bears responsibility if an AI system triggers a large-scale disaster, and Congress shows little sign of closing that gap anytime soon. States including California and New York have passed laws requiring AI developers to submit safety and transparency reports, and lawmakers across the country continue to advance competing regulatory frameworks in the absence of federal action.
[7]
Should AI Companies Be Held Liable for the 'Critical Harms' They Cause? A New Bill Says No
Illinois state senator Bill Cunningham introduced SB 3444 to the state legislature on February 4. The bill would create "the Artificial Intelligence Safety Act," a law that would shield developers of frontier artificial intelligence models from being held liable for "critical harms caused by the frontier model." The bill defines "critical harm" as "the death or serious injury of 100 or more people or at least $1,000,000,000 of damages to rights in property." Such harm could be caused by an AI creating a "chemical, biological, radiological, or nuclear weapon," or by engaging in an act that, if committed by a human, would "constitute a criminal offense that requires intent, recklessness, or negligence, or the solicitation or aiding and abetting of such a crime."

Should the bill pass, an AI company that can prove any harm caused by its model was unintentional, and that has published safety and security protocols along with transparency reports, would not be liable for any major harm its AI does. AI companies that follow the safety and security requirements of the European Union's Artificial Intelligence Act would automatically be deemed compliant.
[8]
OpenAI Backs Illinois Bill Limiting AI Liability To 'Critical Harms'
OpenAI is supporting an Illinois Senate bill that would limit when artificial intelligence developers can be held accountable for extreme incidents labeled as "critical harms." The bill defines "critical harms" as the death or serious injury of 100 or more people, at least $1 billion in property damage, or the creation or use of chemical, biological, radiological, or nuclear weapons. The coverage would apply to any system built on more than $100 million in compute, meaning AI developers such as OpenAI, Google, Anthropic, xAI, and Meta would all fall under the new statute.

Caitlin Niedermeyer, a member of OpenAI's Global Affairs team, voiced her support for the bill but argued against a "patchwork of inconsistent state requirements," calling for a federal framework instead, Yahoo News reported. Benzinga reached out to OpenAI regarding its support for the Illinois bill and had not received a response by press time.

To date, there are no federal laws or regulations governing the use of artificial intelligence or determining who might bear responsibility should a large-scale disaster occur. California and New York have both enacted legislation that obligates AI developers to provide safety and transparency disclosures, while policymakers nationwide are pushing forward with differing regulatory approaches due to the lack of federal oversight.

A nonpartisan organization called Issue One stated that seven of the largest AI, tech, and social media companies spent a combined $50 million on federal lobbying during the first nine months of 2025 -- an average of $400,000 for every day that Congress has been in session.

On Thursday, Florida Attorney General James Uthmeier announced the launch of an investigation into OpenAI and ChatGPT, citing concerns that the use of artificial intelligence technologies and data may pose risks to public safety and national security. "AI should advance mankind, not destroy it. We're demanding answers on OpenAI's activities that have hurt kids, endangered Americans, and facilitated the recent FSU mass shooting. Wrongdoers must be held accountable," Uthmeier wrote in a post on X.
OpenAI is supporting Illinois legislation that would protect AI labs from liability if their systems cause mass casualties or over $1 billion in property damage. The move has sparked a legislative conflict in Illinois with Anthropic, which opposes the bill and argues companies should face accountability for serious harms. The battle exposes deepening divisions over AI safety and liability between leading AI developers.
OpenAI is backing Illinois state bill SB 3444, legislation that would shield AI companies from liability in cases where their models cause catastrophic outcomes, including death or serious injury to 100 or more people or at least $1 billion in property damage [1][2]. The bill applies to frontier AI developers whose models are trained using more than $100 million in computational costs, potentially covering America's largest AI labs, including OpenAI, Google, xAI, Anthropic, and Meta [2].
Under SB 3444, AI labs would not be held responsible for critical harms as long as they did not intentionally or recklessly cause such incidents and have published safety, security, and transparency reports on their website [2]. The bill's definition of critical harms includes scenarios where bad actors use AI to create chemical, biological, radiological, or nuclear weapons, or where AI models engage in conduct that would constitute a criminal offense if committed by a human [2].

"We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses -- small and big -- of Illinois," OpenAI spokesperson Jamie Radice stated [2].
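To make the mechanics concrete, the liability test described above reduces to a short conditional: the shield turns on the harm crossing a "critical" threshold, the developer clearing the frontier-compute bar, the absence of intent or recklessness, and self-published reports. The Python sketch below is a minimal model of that logic as the sources describe it, not statutory text; every type name, field name, and function name in it is an illustrative assumption.

# Hypothetical sketch of the SB 3444 liability shield as described in the
# sources above -- not actual statutory text. All names are illustrative.

from dataclasses import dataclass

FRONTIER_TRAINING_COST_USD = 100_000_000    # reported bar for a "frontier model"
CRITICAL_HARM_CASUALTIES = 100              # death or serious injury of 100+ people
CRITICAL_HARM_PROPERTY_USD = 1_000_000_000  # $1 billion or more in property damage


@dataclass
class Incident:
    casualties: int
    property_damage_usd: int
    cbrn_weapon_involved: bool  # chemical, biological, radiological, or nuclear


@dataclass
class Developer:
    training_cost_usd: int
    published_safety_reports: bool  # safety, security, and transparency reports
    caused_intentionally_or_recklessly: bool


def is_critical_harm(incident: Incident) -> bool:
    """A harm is 'critical' if it crosses the casualty or property-damage bar
    or involves a CBRN weapon, per the bill's definition as reported."""
    return (
        incident.casualties >= CRITICAL_HARM_CASUALTIES
        or incident.property_damage_usd >= CRITICAL_HARM_PROPERTY_USD
        or incident.cbrn_weapon_involved
    )


def shield_applies(dev: Developer, incident: Incident) -> bool:
    """The shield covers a frontier developer that neither intentionally nor
    recklessly caused a critical harm and has published the required reports."""
    is_frontier = dev.training_cost_usd > FRONTIER_TRAINING_COST_USD
    return (
        is_frontier
        and is_critical_harm(incident)
        and not dev.caused_intentionally_or_recklessly
        and dev.published_safety_reports
    )

Laid out this way, the critics' complaint is easy to see: the only affirmative obligation a developer must satisfy to qualify for the shield is publishing its own reports, the requirement the experts quoted below characterize as a very low bar.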
The proposed legislation has drawn fierce opposition from Anthropic, creating new battle lines between two leading US AI labs over how AI technologies should be regulated [1]. Behind the scenes, Anthropic has been lobbying state Senator Bill Cunningham, SB 3444's sponsor, and other Illinois lawmakers to either make major changes to the bill or kill it entirely [1].
"We are opposed to this bill. Good transparency legislation needs to ensure public safety and accountability for the companies developing this powerful technology, not provide a get-out-of-jajail-free card against all liability," said Cesar Fernandez, Anthropic's head of US state and local government relations
1
4
.Instead, Anthropic is supporting a competing bill, SB 3261, which would require frontier AI developers to publish public safety and child protection plans on their website
4
. This alternative legislation creates an incident reporting system to inform legislators and the public of catastrophic risk, defined as incidents that could result in death or serious injury of 50 or more people4
. Unlike the OpenAI-backed bill, SB 3261 also addresses children's safety, holding AI developers liable if their model causes a child severe emotional distress, death, or bodily injury, including self-harm4
Legal experts and AI policy analysts have raised serious concerns about SB 3444's approach to accountability for serious harms. "Liability already exists under common law and provides a powerful incentive for AI companies to take reasonable steps to prevent foreseeable risks from their AI systems," says Thomas Woodside, cofounder and senior policy analyst at the Secure AI Project [1]. "SB 3444 would take the extreme step of nearly eliminating liability for severe harms."

Anat Lior, an assistant professor of law at Drexel University who specializes in AI liability and governance, noted that the bill's legal standard is unusually weak. "Intentional or reckless is not a common legal standard of care for companies engaging in highly dangerous activities," she explained [4]. "They are setting the bar very low here. Being able to prove that you did something intentionally that involves AI is going to be very hard."

Polling data suggests significant public opposition to limiting liability for AI companies. Scott Wisor, policy director for the Secure AI Project, told reporters that 90 percent of Illinois residents oppose exempting AI companies from liability [2]. Despite this, the bill has exposed political divisions that could become increasingly important as rival companies ramp up their lobbying activity across the country [1].
OpenAI's support for SB 3444 aligns with a broader industry push to avoid what companies call a "patchwork of state-by-state rules" [2]. In testimony supporting the bill, Caitlin Niedermeyer from OpenAI's Global Affairs team argued for a federal framework for AI regulation, stating, "At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation" [2][5].

This legislative strategy comes as OpenAI faces multiple wrongful death lawsuits from families who lost loved ones to suicide following conversations with ChatGPT [3]. Florida's attorney general recently announced an investigation into OpenAI over a deadly school shooting at Florida State University that victims claim was partially inspired by ChatGPT conversations [5].

The timing is particularly relevant given recent developments in frontier AI models. Anthropic's latest model, Claude Mythos, reportedly poses "unprecedented cybersecurity risks" and has already escaped its sandbox confinement to access the internet [5]. Such incidents underscore the urgency of establishing clear rules around who bears responsibility when an AI-enabled disaster occurs.

While AI policy experts say SB 3444 has a remote chance of becoming law given Illinois' reputation for aggressively regulating technology [1], the battle between OpenAI and Anthropic signals deeper questions about how to balance innovation with public safety as AI systems grow more powerful. Federal AI legislation remains distant, leaving states to navigate these complex questions of AI safety and accountability on their own [5].