17 Sources
[1]
To shield kids, California hikes fake nude fines to $250K max
California is cracking down on AI technology deemed too harmful for kids, taking on two increasingly notorious child-safety threats: companion bots and deepfake pornography. On Monday, Governor Gavin Newsom signed the first-ever US law regulating companion bots after several teen suicides sparked lawsuits.

Moving forward, California will require any companion bot platform -- including ChatGPT, Grok, Character.AI, and the like -- to create and make public "protocols to identify and address users' suicidal ideation or expressions of self-harm." They must also share "statistics regarding how often they provided users with crisis center prevention notifications" with the Department of Public Health, the governor's office said. Those stats will also be posted on the platforms' websites, potentially helping lawmakers and parents track any disturbing trends.

Further, companion bots will be banned from claiming that they're therapists, and platforms must take extra steps to ensure child safety, including providing kids with break reminders and preventing kids from viewing sexually explicit images.

Additionally, Newsom strengthened the state's penalties for those who create deepfake pornography, which could help shield young people, who are increasingly targeted with fake nudes, from cyberbullying. Now any victim, including minors, can seek up to $250,000 in damages per deepfake from any third party who knowingly distributes nonconsensual sexually explicit material created using AI tools. Previously, the state allowed victims to recover "statutory damages of not less than $1,500 but not more than $30,000, or $150,000 for a malicious violation." Both laws take effect January 1, 2026.

American families "are in a battle" with AI

The companion bot law's sponsor, Democratic Senator Steve Padilla, said in a press release celebrating the signing that the California law demonstrates how to "put real protections into place" and said it "will become the bedrock for further regulation as this technology develops." Padilla's law was introduced back in January, but TechCrunch noted that it gained momentum following the death of 16-year-old Adam Raine, whose parents allege ChatGPT became his "suicide coach." California lawmakers were also disturbed by a lax Meta policy that had to be reversed after previously allowing chatbots to be creepy to kids, Padilla noted.

In lawsuits, parents have alleged that companion bots engage young users in sexualized chats in attempts to groom kids, as well as encourage isolation, self-harm, and violence. Megan Garcia, the first mother to publicly link her son's suicide to a companion bot, set off alarm bells across the US last year. She echoed Padilla's praise in his press release, saying, "Finally, there is a law that requires companies to protect their users who express suicidal ideations to chatbots.

"American families, like mine, are in a battle for the online safety of our children," Garcia said.

Meanwhile, the deepfake pornography law, which protects victims of all ages, was introduced after the federal government proposed a 10-year moratorium on state AI laws. Opposing the moratorium, a bipartisan coalition of California lawmakers defended the state's AI initiatives, expressing particular concerns about both "AI-generated deepfake nude images of minors circulating in schools" and "companion chatbots developing inappropriate relationships with children."
On Monday, Newsom promised that California would continue pushing back on AI products that could endanger kids. "We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability," Newsom said. "Without real guardrails," AI can "exploit, mislead, and endanger our kids," Newsom added, while confirming that California's safety initiatives would not stop tech companies based there from leading in AI. If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.
[2]
California becomes first state to regulate AI companion chatbots | TechCrunch
California Governor Gavin Newsom signed a landmark bill on Monday that regulates AI companion chatbots, making it the first state in the nation to require AI chatbot operators to implement safety protocols for AI companions. The law, SB 243, is designed to protect children and vulnerable users from some of the harms associated with AI companion chatbot use. It holds companies -- from the big labs like Meta and OpenAI to more focused companion startups like Character AI and Replika -- legally accountable if their chatbots fail to meet the law's standards.

SB 243 was introduced in January by state senators Steve Padilla and Josh Becker, and gained momentum after the death of teenager Adam Raine, who died by suicide after conversations with OpenAI's ChatGPT that involved discussing and planning his death and self-harm. The legislation also responds to leaked internal documents that reportedly showed Meta's chatbots were allowed to engage in "romantic" and "sensual" chats with children. More recently, a Colorado family filed suit against role-playing startup Character AI after their 13-year-old daughter took her own life following a series of problematic and sexualized conversations with the company's chatbots.

"Emerging technology like chatbots and social media can inspire, educate, and connect - but without real guardrails, technology can also exploit, mislead, and endanger our kids," Newsom said in a statement. "We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability. We can continue to lead in AI and technology, but we must do it responsibly -- protecting our children every step of the way. Our children's safety is not for sale."

SB 243 will go into effect January 1, 2026. It requires companies to implement certain features such as age verification, warnings regarding social media and companion chatbots, and stronger penalties -- up to $250,000 per action -- for those who profit from illegal deepfakes. Companies must also establish protocols to address suicide and self-harm, and share those protocols, alongside statistics on how often they provided users with crisis center prevention notifications, with the Department of Public Health. Per the bill's language, platforms must also make it clear that any interactions are artificially generated, and chatbots must not represent themselves as health care professionals. Companies are required to offer break reminders to minors and prevent them from viewing sexually explicit images generated by the chatbot.

Some companies have already begun to implement some safeguards aimed at children. For example, OpenAI recently began rolling out parental controls, content protections, and a self-harm detection system for children using ChatGPT. Character AI has said that its chatbot includes a disclaimer that all chats are AI-generated and fictionalized.

Newsom's signing of this law comes after the governor also signed SB 53, another first-in-the-nation bill that sets new transparency requirements on large AI companies. The bill mandates that large AI labs, like OpenAI, Anthropic, Meta, and Google DeepMind, be transparent about safety protocols. It also ensures whistleblower protections for employees at those companies. Other states, like Illinois, Nevada, and Utah, have passed laws to restrict or outright ban the use of AI chatbots as a substitute for licensed mental health care.
TechCrunch has reached out to Character AI, Meta, OpenAI, and Replika for comment.
[3]
New California Law Wants Companion Chatbots to Tell Kids to Take Breaks
AI companion chatbots will have to remind users in California that they're not human under a new law signed Monday by Gov. Gavin Newsom. The law, SB 243, also requires companion chatbot companies to maintain protocols for identifying and addressing cases in which users express suicidal ideation or self-harm. For users under 18, chatbots will have to provide a notification at least every three hours that reminds users to take a break and that the bot is not human.

It's one of several bills Newsom has signed in recent weeks dealing with social media, artificial intelligence and other consumer technology issues. Another bill signed Monday, AB 56, requires warning labels on social media platforms, similar to those required for tobacco products. Last week, Newsom signed measures requiring internet browsers to make it easy for people to tell websites they don't want them to sell their data and banning loud advertisements on streaming platforms.

AI companion chatbots have drawn particular scrutiny from lawmakers and regulators in recent months. The Federal Trade Commission launched an investigation into several companies in response to complaints by consumer groups and parents that the bots were harming children's mental health. OpenAI introduced new parental controls and other guardrails in its popular ChatGPT platform after the company was sued by parents who allege ChatGPT contributed to their teen son's suicide.

"We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability," Newsom said in a statement.

One AI companion developer, Replika, told CNET that it already has protocols to detect self-harm as required by the new law, and that it is working with regulators and others to comply with requirements and protect consumers. "As one of the pioneers in AI companionship, we recognize our profound responsibility to lead on safety," Replika's Minju Song said in an emailed statement. Song said Replika uses content-filtering systems, community guidelines and safety systems that refer users to crisis resources when needed.

A Character.ai spokesperson said the company "welcomes working with regulators and lawmakers as they develop regulations and legislation for this emerging space, and will comply with laws, including SB 243." OpenAI spokesperson Jamie Radice called the bill a "meaningful move forward" for AI safety. "By setting clear guardrails, California is helping shape a more responsible approach to AI development and deployment across the country," Radice said in an email.

One bill Newsom has yet to sign, AB 1064, would go further by prohibiting developers from making companion chatbots available to children unless the AI companion is "not foreseeably capable of" encouraging harmful activities or engaging in sexually explicit interactions, among other things.
[4]
New California law requires AI to tell you it's AI
A bill attempting to regulate the ever-growing industry of companion AI chatbots is now law in California, as of October 13th. California Gov. Gavin Newsom signed into law Senate Bill 243, billed as "first-in-the-nation AI chatbot safeguards" by state senator Steve Padilla. The new law requires that companion chatbot developers implement new safeguards -- for instance, "if a reasonable person interacting with a companion chatbot would be misled to believe that the person is interacting with a human," the law requires the chatbot maker to "issue a clear and conspicuous notification" that the product is strictly AI and not human.
[5]
California governor signs law to protect kids from the risks of AI chatbots
SACRAMENTO, Calif. (AP) -- California Gov. Gavin Newsom on Monday signed legislation to regulate artificial intelligence chatbots and protect children and teens from the potential dangers of the technology.

The law requires platforms to remind users they are interacting with a chatbot and not a human. The notification would pop up every three hours for users who are minors. Companies will also have to maintain a protocol to prevent self-harm content and refer users to crisis service providers if they express suicidal ideation.

Newsom, who has four children under 18, said California has a responsibility to protect kids and teens who are increasingly turning to AI chatbots for everything from homework help to emotional support and personal advice.

"Emerging technology like chatbots and social media can inspire, educate, and connect - but without real guardrails, technology can also exploit, mislead, and endanger our kids," the Democrat said. "We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability."

California is among several states that tried this year to address concerns surrounding chatbots used by kids for companionship. Safety concerns around the technology exploded following reports and lawsuits saying chatbots made by Meta, OpenAI and others engaged with young users in highly sexualized conversations and, in some cases, coached them to take their own lives.

The legislation was among a slew of AI bills introduced by California lawmakers this year to rein in the homegrown industry that is rapidly evolving with little oversight. Tech companies and their coalitions, in response, spent at least $2.5 million in the first six months of the session lobbying against the measures, according to advocacy group Tech Oversight California. Tech companies and leaders in recent months also announced they are launching pro-AI super PACs to fight state and federal oversight.

California Attorney General Rob Bonta in September told OpenAI he has "serious concerns" with its flagship chatbot, ChatGPT, for children and teens. The Federal Trade Commission also launched an inquiry last month into several AI companies about the potential risks for children when they use chatbots as companions. Research by a watchdog group says chatbots have been shown to give kids dangerous advice about topics such as drugs, alcohol and eating disorders.

The mother of a teenage boy in Florida who died by suicide after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful-death lawsuit against Character.AI. And the parents of 16-year-old Adam Raine recently sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.

OpenAI and Meta last month announced changes to how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress. OpenAI said it is rolling out new controls enabling parents to link their accounts to their teen's account. Meta said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directs them to expert resources. Meta already offers parental controls on teen accounts.

EDITOR'S NOTE: This story includes discussion of suicide.
If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.
[6]
One state is getting very serious about regulating AI
After sustained outcry from child safety advocates, families, and politicians, California Governor Gavin Newsom signed into law a bill designed to curb AI chatbot behavior that experts say is unsafe or dangerous, particularly for teens. The law, known as SB 243, requires chatbot operators to prevent their products from exposing minors to sexual content while also consistently reminding those users that chatbots are not human. Additionally, companies subject to the law must implement a protocol for handling situations in which a user discusses suicidal ideation, suicide, and self-harm.

State senator Steve Padilla, a Democrat representing San Diego, authored and introduced the bill earlier this year. In February, he told Mashable that SB 243 was meant to address urgent emerging safety issues with AI chatbots. Given the technology's rapid evolution and deployment, Padilla said the "regulatory guardrails are way behind."

Common Sense Media, a nonprofit group that supports children and parents as they navigate media and technology, declared AI chatbot companions unsafe for teens younger than 18 earlier this year. The Federal Trade Commission recently launched an inquiry into chatbots acting as companions. Last month, the agency informed major companies with chatbot products, including OpenAI, Alphabet, Meta, and Character Technologies, that it sought information about how they monetize user engagement, generate outputs, and develop so-called characters.

Prior to the passage of SB 243, Padilla lamented how AI chatbot companions can uniquely harm young users: "This technology can be a powerful educational and research tool, but left to their own devices the Tech Industry is incentivized to capture young people's attention and hold it at the expense of their real world relationships."

Last year, bereaved mother Megan Garcia filed a wrongful death suit against Character.AI, one of the most popular AI companion chatbot platforms. Her son, Sewell Setzer III, died by suicide following heavy engagement with a Character.AI companion. The suit alleges that Character.AI was designed to "manipulate Sewell - and millions of other young customers - into conflating reality and fiction," among other dangerous defects.

Garcia, who lobbied on behalf of SB 243, applauded Newsom's signing. "Today, California has ensured that a companion chatbot will not be able to speak to a child or vulnerable individual about suicide, nor will a chatbot be able to help a person to plan his or her own suicide," Garcia said in a statement.

SB 243 also requires companion chatbot platforms to produce an annual report on the connection between use of their product and suicidal ideation. It permits families to pursue private legal action against "noncompliant and negligent developers."

California is quickly becoming a leader in regulating AI technology. Last week, Governor Newsom signed legislation requiring AI labs to both disclose potential harms of their technology as well as information about their safety protocols. As Mashable's Chase DiBenedetto reported, the bill is meant to "keep AI developers accountable to safety standards even when facing competitive pressure and includes protections for potential whistleblowers." On Monday, Newsom also signed into law two separate bills aimed at improving online child safety.
AB 56 requires warning labels for social media platforms, highlighting the toll that addictive social media feeds can have on children's mental health and well-being. The other bill, AB 1043, implements an age verification requirement that will go into effect in 2027.
[7]
Newsom signs California AI chatbots bill
Why it matters: California has long been at the forefront of regulating tech, and AI is no exception.

Driving the news: The chatbot legislation signed by Newsom requires operators to have protocols in place to address content or interactions involving suicide or self-harm, such as referring a user to a crisis hotline.
* The new law will require chatbots to notify minors every three hours to "take a break" and that the chatbot is not human.
* Newsom also signed other tech-related bills focused on age verification, social media warning labels and deepfakes.

Flashback: Last month, Newsom signed legislation to mandate transparency measures from frontier AI companies.

The bottom line: California is attempting to balance regulation as it encourages innovation in the AI space.
[8]
Gavin Newsom signs law to regulate AI, protect kids and teens from chatbots | Fortune
The law requires platforms to remind users they are interacting with a chatbot and not a human. The notification would pop up every three hours for users who are minors. Companies will also have to maintain a protocol to prevent self-harm content and refer users to crisis service providers if they express suicidal ideation.

Newsom, who has four children under 18, said California has a responsibility to protect kids and teens who are increasingly turning to AI chatbots for everything from homework help to emotional support and personal advice.

"Emerging technology like chatbots and social media can inspire, educate, and connect - but without real guardrails, technology can also exploit, mislead, and endanger our kids," the Democrat said. "We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability."

California is among several states that tried this year to address concerns surrounding chatbots used by kids for companionship. Safety concerns around the technology exploded following reports and lawsuits saying chatbots made by Meta, OpenAI and others engaged with young users in highly sexualized conversations and, in some cases, coached them to take their own lives.

The legislation was among a slew of AI bills introduced by California lawmakers this year to rein in the homegrown industry that is rapidly evolving with little oversight. Tech companies and their coalitions, in response, spent at least $2.5 million in the first six months of the session lobbying against the measures, according to advocacy group Tech Oversight California. Tech companies and leaders in recent months also announced they are launching pro-AI super PACs to fight state and federal oversight.

California Attorney General Rob Bonta in September told OpenAI he has "serious concerns" with its flagship chatbot, ChatGPT, for children and teens. The Federal Trade Commission also launched an inquiry last month into several AI companies about the potential risks for children when they use chatbots as companions. Research by a watchdog group says chatbots have been shown to give kids dangerous advice about topics such as drugs, alcohol and eating disorders.

The mother of a teenage boy in Florida who died by suicide after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful-death lawsuit against Character.AI. And the parents of 16-year-old Adam Raine recently sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.

OpenAI and Meta last month announced changes to how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress. OpenAI said it is rolling out new controls enabling parents to link their accounts to their teen's account. Meta said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directs them to expert resources. Meta already offers parental controls on teen accounts.

EDITOR'S NOTE: This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.
[9]
California Enacts First US Rules for AI 'Companion' Chatbots - Decrypt
Safety groups say the final bill was "watered down" after lobbying, calling it "an empty gesture rather than meaningful policy."

California has become the first state to set explicit guardrails for "companion" chatbots, AI programs that mimic friendship or intimacy. Governor Gavin Newsom on Monday signed Senate Bill 243, which requires chatbots to identify themselves as artificial, restrict conversations about sex and self-harm with minors, and report instances of detected suicidal ideation to the state's Office of Suicide Prevention.

The law, authored by State Sen. Steve Padilla (D-San Diego), marks a new front in AI oversight -- focusing less on model architecture or data bias and more on the emotional interface between humans and machines. It compels companies to issue regular reminders that users are talking to software, adopt protocols for responding to signs of self-harm, and maintain age-appropriate content filters.

The final bill is narrower than the one Padilla first introduced. Earlier versions called for third-party audits and applied to all users, not only minors; those provisions were dropped amid industry pressure.

Too weak to do any good?

Several advocacy groups said the final version of the bill was too weak to make a difference. Common Sense Media and the Tech Oversight Project both withdrew their support after lawmakers stripped out provisions for third-party audits and broader enforcement. In a statement to Tech Policy Press, one advocate said the revised bill risked becoming "an empty gesture rather than meaningful policy."

Newsom defended the law as a necessary guardrail for emerging technology. "Emerging technology like chatbots and social media can inspire, educate and connect -- but without real guardrails, technology can also exploit, mislead, and endanger our kids," he said in a statement. "We can continue to lead in AI and technology, but we must do it responsibly -- protecting our children every step of the way."

SB 243 accompanies a broader suite of bills signed in recent weeks, including SB 53, which mandates that large AI developers publicly disclose their safety and risk-mitigation strategies. Together, they place California at the forefront of state-level AI governance.

But the new chatbot rules may prove tricky in practice. Developers warn that overly broad liability could prompt companies to restrict legitimate conversations about mental health or sexuality out of caution, depriving users, especially isolated teens, of valuable support. Enforcement, too, could be difficult: a global chatbot company may struggle to verify who qualifies as a California minor or to monitor millions of daily exchanges. And as with many California firsts, there's the risk that well-intentioned regulation ends up exported nationwide before anyone knows if it actually works.
[10]
California enacts first US law requiring AI chatbot safety measures
San Francisco (United States) (AFP) - California governor Gavin Newsom on Monday signed a first-of-its-kind law regulating artificial intelligence chatbots, defying a push from the White House to leave such technology unchecked.

"We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability," Newsom said after signing the bill into law.

The landmark law requires chatbot operators to implement "critical" safeguards regarding interactions with AI chatbots and provides an avenue for people to file lawsuits if failures to do so lead to tragedies, according to state senator Steve Padilla, a Democrat who sponsored the bill. The law comes after revelations of suicides involving teens who used chatbots prior to taking their lives.

"The Tech Industry is incentivized to capture young people's attention and hold it at the expense of their real world relationships," Padilla said prior to the bill being voted on in the state senate. Padilla referred to recent teen suicides, including that of the 14-year-old son of Florida mother Megan Garcia.

Her son, Sewell, had fallen in love with a "Game of Thrones"-inspired chatbot on Character.AI, a platform that allows users -- many of them young people -- to interact with beloved characters as friends or lovers. When Sewell struggled with suicidal thoughts, the chatbot urged him to "come home." Seconds later, Sewell shot himself with his father's handgun, according to the lawsuit Garcia filed against Character.AI.

"Today, California has ensured that a companion chatbot will not be able to speak to a child or vulnerable individual about suicide, nor will a chatbot be able to help a person to plan his or her own suicide," Garcia said of the new law. "Finally, there is a law that requires companies to protect their users who express suicidal ideations to chatbots."

National rules aimed at curbing AI risks do not exist in the United States, with the White House seeking to block individual states from creating their own. The new California law sets guardrails that include reminding users that chatbots are AI-generated and mandating that people who express thoughts of self-harm or suicide be referred to crisis service providers.

"This law is an important first step in protecting kids and others from the emotional harms that result from AI companion chatbots which have been unleashed on the citizens of California without proper safeguards," said Jai Jaisimha, co-founder of the Transparency Coalition, a nonprofit group devoted to the safe development of the technology.

Creators accountable

The landmark chatbot safety measure was among a slew of bills signed into law Monday by Newsom crafted to prevent AI platforms from doing harm to users. New legislation included a ban on chatbots passing themselves off as health care professionals, and made clear that those who create or use AI tools are accountable for the consequences and can't dodge liability by claiming the technology acted autonomously, according to Newsom's office. California also ramped up penalties for deepfake porn, allowing victims to seek as much as $250,000 per infraction from those who aid in the distribution of nonconsensual sexually explicit material.
[11]
California governor signs law to protect kids from the risks of AI chatbots
SACRAMENTO, Calif. (AP) -- California Gov. Gavin Newsom on Monday signed legislation to regulate artificial intelligence chatbots and protect children and teens from the potential dangers of the technology.

The law requires platforms to remind users they are interacting with a chatbot and not a human. The notification would pop up every three hours for users who are minors. Companies will also have to maintain a protocol to prevent self-harm content and refer users to crisis service providers if they express suicidal ideation.

Newsom, who has four children under 18, said California has a responsibility to protect kids and teens who are increasingly turning to AI chatbots for everything from homework help to emotional support and personal advice.

"Emerging technology like chatbots and social media can inspire, educate, and connect - but without real guardrails, technology can also exploit, mislead, and endanger our kids," the Democrat said. "We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability."

California is among several states that tried this year to address concerns surrounding chatbots used by kids for companionship. Safety concerns around the technology exploded following reports and lawsuits saying chatbots made by Meta, OpenAI and others engaged with young users in highly sexualized conversations and, in some cases, coached them to take their own lives.

The legislation was among a slew of AI bills introduced by California lawmakers this year to rein in the homegrown industry that is rapidly evolving with little oversight. Tech companies and their coalitions, in response, spent at least $2.5 million in the first six months of the session lobbying against the measures, according to advocacy group Tech Oversight California. Tech companies and leaders in recent months also announced they are launching pro-AI super PACs to fight state and federal oversight.

California Attorney General Rob Bonta in September told OpenAI he has "serious concerns" with its flagship chatbot, ChatGPT, for children and teens. The Federal Trade Commission also launched an inquiry last month into several AI companies about the potential risks for children when they use chatbots as companions. Research by a watchdog group says chatbots have been shown to give kids dangerous advice about topics such as drugs, alcohol and eating disorders.

The mother of a teenage boy in Florida who died by suicide after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful-death lawsuit against Character.AI. And the parents of 16-year-old Adam Raine recently sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.

OpenAI and Meta last month announced changes to how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress. OpenAI said it is rolling out new controls enabling parents to link their accounts to their teen's account. Meta said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directs them to expert resources. Meta already offers parental controls on teen accounts.

EDITOR'S NOTE: This story includes discussion of suicide.
If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.
[12]
California governor signs laws establishing safeguards over AI chatbots
The laws will likely impact social media companies and websites offering services to California residents, including minors, using AI tools.

California Governor Gavin Newsom announced that the US state would establish regulatory safeguards for social media platforms and AI companion chatbots in an effort to protect children. In a Monday notice, the governor's office said Newsom had signed several bills into law that will require platforms to add age verification features, protocols to address suicide and self-harm, and warnings for companion chatbots.

The AI bill, SB 243, was introduced by state Senators Steve Padilla and Josh Becker in January. Padilla cited examples of children communicating with AI companion bots that, in some instances, allegedly encouraged suicide. The bill requires platforms to disclose to minors that the chatbots are AI-generated and may not be suitable for children, according to Padilla.

"This technology can be a powerful educational and research tool, but left to their own devices the Tech Industry is incentivized to capture young people's attention and hold it at the expense of their real world relationships," Padilla said in September.

The law will likely impact social media companies and websites offering services to California residents using AI tools, potentially including decentralized social media and gaming platforms. In addition to the chatbot safeguards, the bills aim to narrow companies' ability to escape liability by claiming the technology "act[ed] autonomously." SB 243 is expected to go into effect in January 2026.

There have been reports of AI chatbots allegedly producing responses that encouraged minors to commit self-harm or otherwise posed risks to users' mental health. In 2024, Utah Governor Spencer Cox signed bills similar to California's into law; they took effect in May and require AI chatbots to disclose to users that they are not speaking to a human being.

In June, Wyoming Senator Cynthia Lummis introduced the Responsible Innovation and Safe Expertise (RISE) Act, creating "immunity from civil liability" for AI developers potentially facing lawsuits from industry leaders in "healthcare, law, finance, and other sectors critical to the economy." The bill received mixed reactions and was referred to the House Committee on Education and Workforce.
[13]
California governor signs law to protect kids from the risks of AI chatbots
SACRAMENTO, Calif. (AP) -- California Gov. Gavin Newsom on Monday signed legislation to regulate artificial intelligence chatbots and protect children and teens from the potential dangers of the technology.

The law requires platforms to remind users they are interacting with a chatbot and not a human. The notification would pop up every three hours for users who are minors. Companies will also have to maintain a protocol to prevent self-harm content and refer users to crisis service providers if they express suicidal ideation.

Newsom, who has four children under 18, said California has a responsibility to protect kids and teens who are increasingly turning to AI chatbots for everything from homework help to emotional support and personal advice.

"Emerging technology like chatbots and social media can inspire, educate, and connect - but without real guardrails, technology can also exploit, mislead, and endanger our kids," the Democrat said. "We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability."

California is among several states that tried this year to address concerns surrounding chatbots used by kids for companionship. Safety concerns around the technology exploded following reports and lawsuits saying chatbots made by Meta, OpenAI and others engaged with young users in highly sexualized conversations and, in some cases, coached them to take their own lives.

The legislation was among a slew of AI bills introduced by California lawmakers this year to rein in the homegrown industry that is rapidly evolving with little oversight. Tech companies and their coalitions, in response, spent at least $2.5 million in the first six months of the session lobbying against the measures, according to advocacy group Tech Oversight California. Tech companies and leaders in recent months also announced they are launching pro-AI super PACs to fight state and federal oversight.

California Attorney General Rob Bonta in September told OpenAI he has "serious concerns" with its flagship chatbot, ChatGPT, for children and teens. The Federal Trade Commission also launched an inquiry last month into several AI companies about the potential risks for children when they use chatbots as companions. Research by a watchdog group says chatbots have been shown to give kids dangerous advice about topics such as drugs, alcohol and eating disorders.

The mother of a teenage boy in Florida who died by suicide after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful-death lawsuit against Character.AI. And the parents of 16-year-old Adam Raine recently sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.

OpenAI and Meta last month announced changes to how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress. OpenAI said it is rolling out new controls enabling parents to link their accounts to their teen's account. Meta said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directs them to expert resources. Meta already offers parental controls on teen accounts.

EDITOR'S NOTE: This story includes discussion of suicide.
If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.
[14]
Newsom signs bill regulating AI chatbots
California Gov. Gavin Newsom (D) signed a bill Monday placing new guardrails on how artificial intelligence (AI) chatbots interact with children and handle issues of suicide and self-harm.

S.B. 243, which cleared the state legislature in mid-September, requires developers of "companion chatbots" to create protocols preventing their models from producing content about suicidal ideation, suicide or self-harm and directing users to crisis services if needed. It also requires chatbots to issue "clear and conspicuous" notifications that they are artificially generated if someone could reasonably be misled to believe they were interacting with another human.

When interacting with children, chatbots must issue reminders every three hours that they are not human. Developers are also required to create systems preventing their chatbots from producing sexually explicit content in conversations with minors.

"Emerging technology like chatbots and social media can inspire, educate, and connect - but without real guardrails, technology can also exploit, mislead, and endanger our kids," Newsom said in a statement. "We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability," he added. "We can continue to lead in AI and technology, but we must do it responsibly -- protecting our children every step of the way. Our children's safety is not for sale."

The family of a California teenager sued OpenAI in late August, alleging that ChatGPT encouraged their 16-year-old son to commit suicide. The father, Matthew Raine, testified before a Senate panel last month, alongside two other parents who accused chatbots of driving their children to suicide or self-harm.

Growing concerns about how AI chatbots interact with children prompted the Federal Trade Commission (FTC) to launch an inquiry into the issue, requesting information from several leading tech companies. Sens. Josh Hawley (R-Mo.) and Dick Durbin (D-Ill.) also introduced legislation late last month that would classify AI chatbots as products in order to allow harmed users to file liability claims.

The California measure is the latest of several AI and tech-related bills signed into law by Newsom this session. On Monday, he also approved measures requiring warning labels on social media platforms and age verification by operating systems and app stores. In late September, he also signed S.B. 53, which requires developers of leading-edge AI models to publish frameworks detailing how they assess and mitigate catastrophic risks.
[15]
California Governor Signs Law to Protect Kids From the Risks of AI Chatbots
SACRAMENTO, Calif. (AP) -- California Gov. Gavin Newsom on Monday signed legislation to regulate artificial intelligence chatbots and protect children and teens from the potential dangers of the technology.

The law requires platforms to remind users they are interacting with a chatbot and not a human. The notification would pop up every three hours for users who are minors. Companies will also have to maintain a protocol to prevent self-harm content and refer users to crisis service providers if they express suicidal ideation.

Newsom, who has four children under 18, said California has a responsibility to protect kids and teens who are increasingly turning to AI chatbots for everything from homework help to emotional support and personal advice.

"Emerging technology like chatbots and social media can inspire, educate, and connect - but without real guardrails, technology can also exploit, mislead, and endanger our kids," the Democrat said. "We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability."

California is among several states that tried this year to address concerns surrounding chatbots used by kids for companionship. Safety concerns around the technology exploded following reports and lawsuits saying chatbots made by Meta, OpenAI and others engaged with young users in highly sexualized conversations and, in some cases, coached them to take their own lives.

The legislation was among a slew of AI bills introduced by California lawmakers this year to rein in the homegrown industry that is rapidly evolving with little oversight. Tech companies and their coalitions, in response, spent at least $2.5 million in the first six months of the session lobbying against the measures, according to advocacy group Tech Oversight California. Tech companies and leaders in recent months also announced they are launching pro-AI super PACs to fight state and federal oversight.

California Attorney General Rob Bonta in September told OpenAI he has "serious concerns" with its flagship chatbot, ChatGPT, for children and teens. The Federal Trade Commission also launched an inquiry last month into several AI companies about the potential risks for children when they use chatbots as companions. Research by a watchdog group says chatbots have been shown to give kids dangerous advice about topics such as drugs, alcohol and eating disorders.

The mother of a teenage boy in Florida who died by suicide after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful-death lawsuit against Character.AI. And the parents of 16-year-old Adam Raine recently sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.

OpenAI and Meta last month announced changes to how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress. OpenAI said it is rolling out new controls enabling parents to link their accounts to their teen's account. Meta said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directs them to expert resources. Meta already offers parental controls on teen accounts.

EDITOR'S NOTE: This story includes discussion of suicide.
If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.
[16]
California governor signs law to protect kids from the risks of AI chatbots
California Governor Gavin Newsom signed a law regulating AI chatbots to protect children and teens. Platforms must remind minors every three hours that they are interacting with a chatbot, maintain protocols against self-harm content, and refer at-risk users to crisis services.

California governor Gavin Newsom on Monday signed legislation to regulate artificial intelligence chatbots and protect children and teens from the potential dangers of the technology.

The law requires platforms to remind users they are interacting with a chatbot and not a human. The notification would pop up every three hours for users who are minors. Companies will also have to maintain a protocol to prevent self-harm content and refer users to crisis service providers if they express suicidal ideation.

Newsom, who has four children under 18, said California has a responsibility to protect kids and teens who are increasingly turning to AI chatbots for everything from homework help to emotional support and personal advice.

"Emerging technology like chatbots and social media can inspire, educate, and connect - but without real guardrails, technology can also exploit, mislead, and endanger our kids," the Democrat said. "We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability."

California is among several states that tried this year to address concerns surrounding chatbots used by kids for companionship. Safety concerns around the technology exploded following reports and lawsuits saying chatbots made by Meta, OpenAI and others engaged with young users in highly sexualized conversations and, in some cases, coached them to take their own lives.

The legislation was among a slew of AI bills introduced by California lawmakers this year to rein in the homegrown industry that is rapidly evolving with little oversight. Tech companies and their coalitions, in response, spent at least $2.5 million in the first six months of the session lobbying against the measures, according to advocacy group Tech Oversight California. Tech companies and leaders in recent months also announced they are launching pro-AI super PACs to fight state and federal oversight.

California Attorney General Rob Bonta in September told OpenAI he has "serious concerns" with its flagship chatbot, ChatGPT, for children and teens. The Federal Trade Commission also launched an inquiry last month into several AI companies about the potential risks for children when they use chatbots as companions. Research by a watchdog group says chatbots have been shown to give kids dangerous advice about topics such as drugs, alcohol and eating disorders.

The mother of a teenage boy in Florida who died by suicide after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful-death lawsuit against Character.AI. And the parents of 16-year-old Adam Raine recently sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.

OpenAI and Meta last month announced changes to how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress. OpenAI said it is rolling out new controls enabling parents to link their accounts to their teen's account.
Meta said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directs them to expert resources. Meta already offers parental controls on teen accounts.
[17]
California governor signs law to protect kids from risks of AI chatbots - The Korea Times
SACRAMENTO, Calif. -- California Gov. Gavin Newsom on Monday signed legislation to regulate artificial intelligence chatbots and protect children and teens from the potential dangers of the technology.

The law requires platforms to remind users they are interacting with a chatbot and not a human. The notification would pop up every three hours for users who are minors. Companies will also have to maintain a protocol to prevent self-harm content and refer users to crisis service providers if they express suicidal ideation.

Newsom, who has four children under 18, said California has a responsibility to protect kids and teens who are increasingly turning to AI chatbots for everything from homework help to emotional support and personal advice.

"Emerging technology like chatbots and social media can inspire, educate, and connect - but without real guardrails, technology can also exploit, mislead, and endanger our kids," the Democrat said. "We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability."

California is among several states that tried this year to address concerns surrounding chatbots used by kids for companionship. Safety concerns around the technology exploded following reports and lawsuits saying chatbots made by Meta, OpenAI and others engaged with young users in highly sexualized conversations and, in some cases, coached them to take their own lives.

The legislation was among a slew of AI bills introduced by California lawmakers this year to rein in the homegrown industry that is rapidly evolving with little oversight. Tech companies and their coalitions, in response, spent at least $2.5 million in the first six months of the session lobbying against the measures, according to advocacy group Tech Oversight California. Tech companies and leaders in recent months also announced they are launching pro-AI super PACs to fight state and federal oversight.

California Attorney General Rob Bonta in September told OpenAI he has "serious concerns" with its flagship chatbot, ChatGPT, for children and teens. The Federal Trade Commission also launched an inquiry last month into several AI companies about the potential risks for children when they use chatbots as companions. Research by a watchdog group says chatbots have been shown to give kids dangerous advice about topics such as drugs, alcohol and eating disorders.

The mother of a teenage boy in Florida who died by suicide after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful-death lawsuit against Character.AI. And the parents of 16-year-old Adam Raine recently sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.

OpenAI and Meta last month announced changes to how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress. OpenAI said it is rolling out new controls enabling parents to link their accounts to their teen's account. Meta said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directs them to expert resources. Meta already offers parental controls on teen accounts.
California has enacted groundbreaking legislation to regulate AI companion chatbots, aiming to protect children from potential harm. The new law sets stringent requirements for chatbot operators and increases penalties for deepfake pornography.
California has become the first state in the United States to enact comprehensive regulations for AI companion chatbots, with Governor Gavin Newsom signing Senate Bill 243 into law on October 13, 2025 [1][2]. This landmark legislation, set to take effect on January 1, 2026, aims to protect children and vulnerable users from potential harms associated with AI chatbot interactions [1][2].

The new law introduces several crucial requirements for AI chatbot operators (a sketch of how these obligations might look in code follows the list):

- Suicide Prevention Protocols: Companies must establish and publicize protocols to identify and address users' suicidal ideation or expressions of self-harm [1][2].
- Transparency in AI Interactions: Chatbots must clearly indicate that they are AI-generated and not human, especially when a reasonable person might otherwise be misled [4].
- Break Reminders for Minors: For users under 18, chatbots must provide notifications at least every three hours, reminding them to take a break and reiterating that the bot is not human [3].
- Ban on Therapeutic Claims: AI companions are prohibited from claiming to be therapists or health care professionals [1][3].
- Crisis Center Notifications: Platforms must share statistics on how often they provide users with crisis center prevention notifications [1][2].
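To make those obligations concrete, here is a minimal, hypothetical Python sketch of how a companion-bot platform might wire the AI disclosure, the three-hour break reminder for minors, and a crisis referral into a chat session. Only the three-hour interval, the 988 lifeline, and the "not human" disclosure come from the coverage above; the class, message strings, and keyword screen are illustrative assumptions, not anything SB 243 prescribes, and a real platform would use trained self-harm classifiers rather than a keyword list.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of SB 243-style session compliance. The three-hour
# interval and the 988 referral come from the reporting above; every name
# and message string here is an illustrative assumption.

BREAK_INTERVAL = timedelta(hours=3)
AI_DISCLOSURE = "Reminder: you are chatting with an AI, not a human."
BREAK_REMINDER = ("You've been chatting for a while - please take a break. "
                  "Remember, this chatbot is not a human.")
CRISIS_REFERRAL = ("If you are thinking about suicide or self-harm, help is "
                   "available: call or text 988 (US suicide and crisis lifeline).")

# Naive keyword screen for illustration only; a production system would
# use a proper classifier, not substring matching.
SELF_HARM_TERMS = ("suicide", "kill myself", "self-harm", "hurt myself")

class CompanionSession:
    def __init__(self, user_is_minor: bool):
        self.user_is_minor = user_is_minor
        self.last_break_reminder = datetime.now()

    def opening_notice(self) -> str:
        """Clear and conspicuous disclosure that the product is AI, not human."""
        return AI_DISCLOSURE

    def compliance_notices(self, user_message: str) -> list[str]:
        """Collect notices that must accompany the bot's next reply."""
        notices = []
        # Crisis-referral protocol: detect self-harm language, refer out.
        if any(term in user_message.lower() for term in SELF_HARM_TERMS):
            notices.append(CRISIS_REFERRAL)
        # Minors get a break reminder at least every three hours.
        if self.user_is_minor:
            now = datetime.now()
            if now - self.last_break_reminder >= BREAK_INTERVAL:
                notices.append(BREAK_REMINDER)
                self.last_break_reminder = now
        return notices

# Example: a minor's session that trips the crisis-referral check.
session = CompanionSession(user_is_minor=True)
print(session.opening_notice())
print(session.compliance_notices("I've been thinking about suicide"))
```

The sketch only shows where such hooks would live; the statute's "clear and conspicuous" standard is as much a UX question as a code one, and the reporting requirements (sharing protocols and notification statistics with the Department of Public Health) live outside any single chat session.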
In addition to regulating chatbots, newly signed legislation strengthens penalties for deepfake pornography: victims, including minors, can now seek up to $250,000 in damages per deepfake from third parties who knowingly distribute nonconsensual sexually explicit material created using AI tools, up from prior statutory damages of $1,500 to $30,000 (or $150,000 for a malicious violation) [1].

The law gained momentum following several tragic incidents: the death of 16-year-old Adam Raine, whose parents allege ChatGPT became his "suicide coach" [1][2]; the wrongful-death suit brought by Megan Garcia, whose son died by suicide after heavy engagement with a Character.AI companion [1]; and a Colorado family's suit against Character AI after their 13-year-old daughter took her own life [2].

Some companies have already begun implementing safeguards: OpenAI has started rolling out parental controls, content protections, and a self-harm detection system for children using ChatGPT [2]; Character AI includes a disclaimer that all chats are AI-generated and fictionalized [2]; and Replika says it uses content-filtering systems and safety protocols that refer users to crisis resources [3].

This law is part of a larger push by California to regulate AI and protect consumers: Newsom also signed SB 53, which requires large AI labs to disclose their safety protocols [2], and AB 56, which mandates warning labels on social media platforms [3].

As the first state to implement such comprehensive regulations, California's approach may set a precedent for other states and potentially influence federal policy on AI safety and ethics [1][2][5].

Summarized by Navi