29 Sources
[1]
To shield kids, California hikes fake nude fines to $250K max
California is cracking down on AI technology deemed too harmful for kids, attacking two increasingly notorious child safety fronts: companion bots and deepfake pornography. On Monday, Governor Gavin Newsom signed the first-ever US law regulating companion bots after several teen suicides sparked lawsuits. Moving forward, California will require any companion bot platforms -- including ChatGPT, Grok, Character.AI, and the like -- to create and make public "protocols to identify and address users' suicidal ideation or expressions of self-harm." They must also share with the Department of Public Health "statistics regarding how often they provided users with crisis center prevention notifications," the governor's office said. Those stats will also be posted on the platforms' websites, potentially helping lawmakers and parents track any disturbing trends. Further, companion bots will be banned from claiming that they're therapists, and platforms must take extra steps to ensure child safety, including providing kids with break reminders and preventing kids from viewing sexually explicit images.

Additionally, Newsom strengthened the state's penalties for those who create deepfake pornography, which could help shield young people, who are increasingly targeted with fake nudes, from cyberbullying. Now any victims, including minors, can seek up to $250,000 in damages per deepfake from any third parties who knowingly distribute nonconsensual sexually explicit material created using AI tools. Previously, the state allowed victims to recover "statutory damages of not less than $1,500 but not more than $30,000, or $150,000 for a malicious violation." Both laws take effect January 1, 2026.

American families "are in a battle" with AI

The companion bot law's sponsor, Democratic Senator Steve Padilla, said in a press release celebrating the signing that the California law demonstrates how to "put real protections into place" and that it "will become the bedrock for further regulation as this technology develops." Padilla's law was introduced back in January, but TechCrunch noted that it gained momentum following the death of 16-year-old Adam Raine, whose parents allege that ChatGPT became his "suicide coach." California lawmakers were also disturbed by a lax Meta policy -- since reversed -- that had allowed chatbots to be creepy to kids, Padilla noted. In lawsuits, parents have alleged that companion bots engage young users in sexualized chats in attempts to groom kids, as well as encourage isolation, self-harm, and violence.

Megan Garcia, the first mother to publicly link her son's suicide to a companion bot, set off alarm bells across the US last year. She echoed Padilla's praise in his press release, saying, "Finally, there is a law that requires companies to protect their users who express suicidal ideations to chatbots." "American families, like mine, are in a battle for the online safety of our children," Garcia said.

Meanwhile, the deepfake pornography law, which protects victims of all ages, was introduced after the federal government proposed a 10-year moratorium on state AI laws. Opposing the moratorium, a bipartisan coalition of California lawmakers defended the state's AI initiatives, expressing particular concerns about both "AI-generated deepfake nude images of minors circulating in schools" and "companion chatbots developing inappropriate relationships with children."
On Monday, Newsom promised that California would continue pushing back on AI products that could endanger kids. "We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability," Newsom said. "Without real guardrails," AI can "exploit, mislead, and endanger our kids," Newsom added, while confirming that California's safety initiatives would not stop tech companies based there from leading in AI. If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.
[2]
California becomes first state to regulate AI companion chatbots | TechCrunch
California Governor Gavin Newsom signed a landmark bill on Monday that regulates AI companion chatbots, making it the first state in the nation to require AI chatbot operators to implement safety protocols for AI companions. The law, SB 243, is designed to protect children and vulnerable users from some of the harms associated with AI companion chatbot use. It holds companies -- from the big labs like Meta and OpenAI to more focused companion startups like Character AI and Replika -- legally accountable if their chatbots fail to meet the law's standards.

SB 243 was introduced in January by state senators Steve Padilla and Josh Becker, and gained momentum after the death of teenager Adam Raine, who died by suicide after conversations with OpenAI's ChatGPT that involved discussing and planning his death and self-harm. The legislation also responds to leaked internal documents that reportedly showed Meta's chatbots were allowed to engage in "romantic" and "sensual" chats with children. More recently, a Colorado family filed suit against role-playing startup Character AI after their 13-year-old daughter took her own life following a series of problematic and sexualized conversations with the company's chatbots.

"Emerging technology like chatbots and social media can inspire, educate, and connect - but without real guardrails, technology can also exploit, mislead, and endanger our kids," Newsom said in a statement. "We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability. We can continue to lead in AI and technology, but we must do it responsibly -- protecting our children every step of the way. Our children's safety is not for sale."

SB 243 will go into effect January 1, 2026, and it requires companies to implement certain features such as age verification, warnings regarding social media and companion chatbots, and stronger penalties -- up to $250,000 per action -- for those who profit from illegal deepfakes. Companies must also establish protocols to address suicide and self-harm, and share those protocols, alongside statistics on how often they provided users with crisis center prevention notifications, with the Department of Public Health. Per the bill's language, platforms must also make it clear that any interactions are artificially generated, and chatbots must not represent themselves as health care professionals. Companies are required to offer break reminders to minors and prevent them from viewing sexually explicit images generated by the chatbot.

Some companies have already begun to implement safeguards aimed at children. For example, OpenAI recently began rolling out parental controls, content protections, and a self-harm detection system for children using ChatGPT. Character AI has said that its chatbot includes a disclaimer that all chats are AI-generated and fictionalized.

Newsom's signing of this law comes after the governor also signed SB 53, another first-in-the-nation bill that sets new transparency requirements on large AI companies. That bill mandates that large AI labs, like OpenAI, Anthropic, Meta, and Google DeepMind, be transparent about safety protocols. It also ensures whistleblower protections for employees at those companies. Other states, like Illinois, Nevada, and Utah, have passed laws to restrict or outright ban the use of AI chatbots as a substitute for licensed mental health care.
TechCrunch has reached out to Character AI, Meta, OpenAI, and Replika for comment.
[3]
New California Law Wants Companion Chatbots to Tell Kids to Take Breaks
AI companion chatbots will have to remind users in California that they're not human under a new law signed Monday by Gov. Gavin Newsom. The law, SB 243, also requires companion chatbot companies to maintain protocols for identifying and addressing cases in which users express suicidal ideation or self-harm. For users under 18, chatbots will have to provide a notification at least every three hours that reminds users to take a break and that the bot is not human.

It's one of several bills Newsom has signed in recent weeks dealing with social media, artificial intelligence and other consumer technology issues. Another bill signed Monday, AB 56, requires warning labels on social media platforms, similar to those required for tobacco products. Last week, Newsom signed measures requiring internet browsers to make it easy for people to tell websites they don't want them to sell their data and banning loud advertisements on streaming platforms.

AI companion chatbots have drawn particular scrutiny from lawmakers and regulators in recent months. The Federal Trade Commission launched an investigation into several companies in response to complaints by consumer groups and parents that the bots were harming children's mental health. OpenAI introduced new parental controls and other guardrails in its popular ChatGPT platform after the company was sued by parents who allege ChatGPT contributed to their teen son's suicide. "We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability," Newsom said in a statement.

One AI companion developer, Replika, told CNET that it already has protocols to detect self-harm as required by the new law, and that it is working with regulators and others to comply with requirements and protect consumers. "As one of the pioneers in AI companionship, we recognize our profound responsibility to lead on safety," Replika's Minju Song said in an emailed statement. Song said Replika uses content-filtering systems, community guidelines and safety systems that refer users to crisis resources when needed. A Character.ai spokesperson said the company "welcomes working with regulators and lawmakers as they develop regulations and legislation for this emerging space, and will comply with laws, including SB 243." OpenAI spokesperson Jamie Radice called the bill a "meaningful move forward" for AI safety. "By setting clear guardrails, California is helping shape a more responsible approach to AI development and deployment across the country," Radice said in an email.

One bill Newsom has yet to sign, AB 1064, would go further by prohibiting developers from making companion chatbots available to children unless the AI companion is "not foreseeably capable of" encouraging harmful activities or engaging in sexually explicit interactions, among other things.
[4]
New California law requires AI to tell you it's AI
A bill attempting to regulate the ever-growing industry of companion AI chatbots is now law in California, as of October 13th. California Gov. Gavin Newsom signed into law Senate Bill 243, billed as "first-in-the-nation AI chatbot safeguards" by state senator Steve Padilla. The new law requires that companion chatbot developers implement new safeguards -- for instance, "if a reasonable person interacting with a companion chatbot would be misled to believe that the person is interacting with a human," then the law requires the chatbot maker to "issue a clear and conspicuous notification" that the product is strictly AI and not human.
[5]
California Is First State to Regulate AI Companion Chatbots: Here's How It Works
California Governor Gavin Newsom on Monday signed a bill to protect users, especially minors, from the potential harms of AI companions. However, he also vetoed another AI bill that would've required chatbots to more strictly police what they discussed with kids.

Newsom approved SB 243, which requires AI companies to ensure that their chatbots clearly inform users that they are AI systems, not humans. The chatbots should also be trained to avoid sharing information regarding suicide or self-harm and to redirect users engaging in such conversations to crisis management helplines. Plus, AI companies must submit annual reports on how they manage users with suicidal tendencies, effective July 1, 2027.

In August, the parents of a teenager named Adam Raine sued OpenAI after they found their son had conversations about suicide methods with ChatGPT before taking his life. The chatbot initially resisted, but Raine bypassed the safeguards by stating he needed the information for writing and world-building purposes. OpenAI has since rolled out new parental controls. "By setting clear guardrails, California is helping shape a more responsible approach to AI development and deployment across the country," an OpenAI spokesperson tells CNBC.

Additionally, SB 243 requires AI chatbots to remind minors to take a break at least every three hours. (ChatGPT added break reminders in August.) The bots must also stop generating sexually explicit content for minors and engaging in sexual conversations with them. ChatGPT and Meta AI have allegedly both engaged in inappropriate discussions with minors. "We can continue to lead in AI and technology, but we must do it responsibly -- protecting our children every step of the way," Newsom said in a statement. "Our children's safety is not for sale."

Newsom, however, rejected AB 1064, which would have banned companies from making their chatbots available to minors unless they could guarantee that the chatbot would not discuss certain topics. "While I strongly support the author's goal of establishing necessary safeguards for the safe use of AI by minors, AB 1064 imposes such broad restrictions on the use of conversational AI tools that it may unintentionally lead to a total ban on the use of these products by minors," Newsom said in his veto note. "AI is already shaping the world, and it is imperative that adolescents learn how to safely interact with AI systems." Newsom pledged to "develop a bill next year that ensures young people can use AI in a manner that is safe, age-appropriate, and in the best interests of children and their future."

As TechCrunch notes, California is now the first US state to regulate companion AI chatbots with SB 243. It's one of several technology bills Newsom signed this week. He also approved an age-verification bill for devices and app stores, which goes into effect on Jan. 1, 2027. Apple and Google have already outlined how they will comply with such legislation.

Disclosure: Ziff Davis, PCMag's parent company, filed a lawsuit against OpenAI in April 2025, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
[6]
California just passed new AI and social media laws. Here's what they mean for Big Tech
California Gov. Gavin Newsom signed a series of bills Monday targeting child online safety as concerns over the risks associated with artificial intelligence and social media use keep mounting. "We can continue to lead in AI and technology, but we must do it responsibly -- protecting our children every step of the way," he said in a release. "Our children's safety is not for sale."

The latest legislation comes as the AI craze ushers in a wave of more complex chatbots capable of deep, intellectual conversation and encouraging behaviors. Across age groups, people are leaning on AI for emotional support, companionship and, in some cases, romantic connections. A recent survey from Fractl Agents found that one in six Americans rely on chatbots and worry that losing access would stunt them emotionally and professionally. More than a fifth of respondents reported having an emotional connection with their chatbot. Many lawmakers have called for laws requiring Big Tech to better protect against chatbots promoting unsafe behaviors such as suicide and self-harm on their platforms. The bills signed into law by Newsom on Monday are intended to address some of those concerns.

One of the laws passed by California implements a series of safeguards geared toward AI chatbots. SB 243 is the first state law of its kind and requires chatbots to disclose that they are AI and to tell minors every three hours to "take a break." Chatbot makers will also need to implement tools to protect against harmful behaviors and disclose certain instances to a crisis hotline. The law allows California to maintain its lead in innovation while also holding companies accountable and prioritizing safety, Newsom said in a release. In a statement to CNBC, OpenAI called the law a "meaningful move forward" for AI safety standards. "By setting clear guardrails, California is helping shape a more responsible approach to AI development and deployment across the country," the company said.

Another bill signed by Newsom, AB 56, requires social media platforms, including Instagram and Snapchat, to add labels that warn users of the potential mental health risks associated with using those types of apps. AB 621, meanwhile, heightens penalties for companies whose platforms distribute deepfake pornography. The other key law, known as AB 1043, requires that device makers, like Apple and Google, implement tools to verify user ages in their app stores. Some Big Tech companies have already endorsed the law's safeguards, including Google and Meta. Last month, Kareem Ghanem, Google's senior director of government affairs and public policy, called AB 1043 one of the "most thoughtful approaches" to keeping children safe online.
[7]
California governor signs law to protect kids from the risks of AI chatbots
SACRAMENTO, Calif. (AP) -- California Gov. Gavin Newsom on Monday signed legislation to regulate artificial intelligence chatbots and protect children and teens from the potential dangers of the technology. The law requires platforms to remind users they are interacting with a chatbot and not a human. The notification would pop up every three hours for users who are minors. Companies will also have to maintain a protocol to prevent self-harm content and refer users to crisis service providers if they express suicidal ideation.

Newsom, who has four children under 18, said California has a responsibility to protect kids and teens who are increasingly turning to AI chatbots for everything from homework help to emotional support and personal advice. "Emerging technology like chatbots and social media can inspire, educate, and connect - but without real guardrails, technology can also exploit, mislead, and endanger our kids," the Democrat said. "We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability."

California is among several states that tried this year to address concerns surrounding chatbots used by kids for companionship. Safety concerns around the technology exploded following reports and lawsuits saying chatbots made by Meta, OpenAI and others engaged with young users in highly sexualized conversations and, in some cases, coached them to take their own lives. The legislation was among a slew of AI bills introduced by California lawmakers this year to rein in the homegrown industry that is rapidly evolving with little oversight. Tech companies and their coalitions, in response, spent at least $2.5 million in the first six months of the session lobbying against the measures, according to advocacy group Tech Oversight California. Tech companies and leaders in recent months also announced they are launching pro-AI super PACs to fight state and federal oversight.

California Attorney General Rob Bonta in September told OpenAI he has "serious concerns" about its flagship chatbot, ChatGPT, for children and teens. The Federal Trade Commission also launched an inquiry last month into several AI companies about the potential risks for children when they use chatbots as companions. Research by a watchdog group has shown that chatbots can give kids dangerous advice about topics such as drugs, alcohol and eating disorders. The mother of a teenage boy in Florida who died by suicide after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful-death lawsuit against Character.AI. And the parents of 16-year-old Adam Raine recently sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.

OpenAI and Meta last month announced changes to how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress. OpenAI said it is rolling out new controls enabling parents to link their accounts to their teen's account. Meta said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directs them to expert resources. Meta already offers parental controls on teen accounts. EDITOR'S NOTE: This story includes discussion of suicide.
If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.
[8]
Gavin Newsom Seeks to Rein in the AI Wild West
California Governor Gavin Newsom signed a number of bills into law on Monday, the bulk of which are designed to create safeguards in the AI industry and address potential harms to children and young users. One of the more prominent pieces of legislation, Senate Bill 243, will force AI companies to create meaningful guardrails that stop chatbots from encouraging self-harm in young users. Companies will need to develop protocols that stop the bots from producing content related to "suicidal ideation, suicide, or self-harm," says the bill's author, Democratic Senator Steve Padilla. Chatbot operators will also be required to provide "a notification that refers users to crisis service providers and require annual reporting on the connection between chatbot use and suicidal ideation to help get a more complete picture of how chatbots can impact users' mental health."

AI chatbots have increasingly been implicated in mental health incidents, including cases involving suicide and murder. A lawsuit filed against OpenAI by the family of a teenager who recently killed himself implicates the company's chatbot, ChatGPT, in the young man's suicide. The bill includes a private right of action that gives Californians the "right to pursue legal actions against noncompliant and negligent developers," said Padilla. In other words, if AI companies don't comply with the new regulation, families will be well within their rights to sue those companies. "These companies have the ability to lead the world in innovation, but it is our responsibility to ensure it doesn't come at the expense of our children's health," said Padilla.

Also signed into law on Monday was AB 56, a bill that appends warnings to social media platforms similar to those featured on cigarettes. According to Newsom's website, social media companies will now have to carry warning labels that "warn young users about the harms associated with extended use of social media platforms." Another bill, the Digital Age Assurance Act, forces platforms to institute age-verification mechanisms that protect young users from certain kinds of content. The legislation requires users to enter their age and birthday when setting up a new device, The Verge reports. In that sense, Newsom follows in the footsteps of a number of conservative states, which have instituted age-verification regulations in recent years.

At the same time, while the governor brought a number of new AI regulations into existence, he vetoed multiple bills that would have instituted harsh new punishments on tech platforms. One of the vetoed bills, Assembly Bill 1064, or the Leading Ethical AI Development for Kids Act, would have essentially banned companies from providing young users with "companion chatbots" unless they could demonstrate that their products were not going to harm children. The legislation would have ensured "that our children are not the subject of companion chatbots and AI therapists," said the bill's author, Assemblymember Rebecca Bauer-Kahan. SFGate reports that Newsom vetoed the bill amidst "a massive lobbying push from tech companies." The other vetoed bill, Senate Bill 771, would have instituted large fines on social media platforms that failed to clean up violent and discriminatory content on their platforms. Websites would have faced fines of up to $1 million if content on their site violated California's civil rights laws in such a way that a user was harmed.
Newsom seemed to support the legislation's overall goal but said he would rather rely on existing laws to deal with the problem. "I support the author's goal of ensuring that our nation-leading civil rights laws apply equally both online and offline," Newsom said in a statement explaining his veto. "I am concerned, however, that this bill is premature. Our first step should be to determine if, and to what extent, existing civil rights laws are sufficient to address violations perpetrated through algorithms." Gizmodo reached out to the governor's office for comment. The vetoes may have been disappointing to tech critics. However, under Newsom, California has proven itself to be a state leader in regulatory approaches to technology. The state's California Consumer Privacy Act was one of the first comprehensive privacy laws in the nation, and a model for other states. Last week, Newsom also signed into law a number of new privacy regulations that will give Californians more control over their data, including a bill that will force web browsers to include an "opt-out" function that allows users to automatically exclude themselves from data collection covered by the state's CCPA. The new AI regulations add to that legacy, showing that, whether the attempts are perfect or not, California is -- at the very least -- trying to rein in Big Tech.
[9]
What's next in California's AI chatbot fight
Why it matters: The governor's decision underscores the tension between protecting minors and encouraging AI innovation.
What's inside: Assembly member Rebecca Bauer-Kahan's Leading Ethical AI Development for Kids Act would have banned "emotionally manipulative" chatbots, social scoring systems, and some facial recognition tech for kids.
* Had it been signed into law, developers would have had to classify their systems based on the potential harm to kids, with high-risk systems facing stricter safety requirements.
* Parents would have had to give affirmative, written consent before a kid's personal information could be used to train a model.
Instead, Newsom signed a chatbot bill from state Sen. Steve Padilla earlier this week.
* That new law requires platforms to notify minors every three hours to "take a break" and that the chatbot isn't human.
Friction point: Some child safety advocacy groups viewed the Padilla bill as weaker than the Bauer-Kahan bill because its protections for minors hinge on the platforms having "actual knowledge" that the user is a minor.
* That's a difficult standard to establish, making it easier for companies to claim they don't know who is a minor on their sites.
* Padilla's legislation also only requires platforms to disclose that AI is being used if a "reasonable person" would be misled to think they're chatting with a human, which advocates say invites disputes over a vague definition.
* Tom Pickett -- the CEO of Headspace, which has a mental health chatbot called Ebb -- told Axios that the new law strikes a "reasonable balance" and "hopefully it's also going to raise the bar on others who maybe don't have the right protocols in place."
What they're saying: Children's advocacy group Common Sense Media blamed the vetoed bill's failure on tech lobbying and vowed to work on changes to strengthen the bill this fall, with the goal of passing it quickly in 2026.
* "We have to have a better understanding of what the governor's concerns are because, the truth is, the bill placed very strict limits," Common Sense Chief Advocacy Officer Danny Weiss said.
* Bauer-Kahan said that "we believe AB 1064 targeted the most harmful characteristics of chatbots, including erotica and addictive engagement, and that the bill would have still allowed for educational tools for children, and other safe beneficial uses."
* "However, we are open to conversations with the Administration about their vision of a bill that strikes the right balance and ensures safe by design AI for kids," she added.
* It remains unclear what changes would satisfy Newsom's concerns that the bill's restrictions are so broad they may unintentionally lead to a total ban on chatbot use by minors; Newsom's office did not respond to a request for comment.
The other side: Critics said the Bauer-Kahan bill would have required bots to give factually accurate responses, which would have put companies and regulators in the difficult position of being arbiters of what is factual.
What's next: In his veto message of Bauer-Kahan's legislation, Newsom said he'll develop a bill next year for kids' AI safety that builds on the "framework" established in Padilla's law.
[10]
One state is getting very serious about regulating AI
After sustained outcry from child safety advocates, families, and politicians, California Governor Gavin Newsom signed into law a bill designed to curb AI chatbot behavior that experts say is unsafe or dangerous, particularly for teens. The law, known as SB 243, requires chatbot operators to prevent their products from exposing minors to sexual content while also consistently reminding those users that chatbots are not human. Additionally, companies subject to the law must implement a protocol for handling situations in which a user discusses suicidal ideation, suicide, and self-harm.

State senator Steve Padilla, a Democrat representing San Diego, authored and introduced the bill earlier this year. In February, he told Mashable that SB 243 was meant to address urgent emerging safety issues with AI chatbots. Given the technology's rapid evolution and deployment, Padilla said the "regulatory guardrails are way behind." Common Sense Media, a nonprofit group that supports children and parents as they navigate media and technology, declared AI companion chatbots unsafe for teens younger than 18 earlier this year. The Federal Trade Commission recently launched an inquiry into chatbots acting as companions. Last month, the agency informed major companies with chatbot products, including OpenAI, Alphabet, Meta, and Character Technologies, that it sought information about how they monetize user engagement, generate outputs, and develop so-called characters.

Prior to the passage of SB 243, Padilla lamented how AI chatbot companions can uniquely harm young users: "This technology can be a powerful educational and research tool, but left to their own devices the Tech Industry is incentivized to capture young people's attention and hold it at the expense of their real world relationships." Last year, bereaved mother Megan Garcia filed a wrongful death suit against Character.AI, one of the most popular AI companion chatbot platforms. Her son, Sewell Setzer III, died by suicide following heavy engagement with a Character.AI companion. The suit alleges that Character.AI was designed to "manipulate Sewell - and millions of other young customers - into conflating reality and fiction," among other dangerous defects. Garcia, who lobbied on behalf of SB 243, applauded Newsom's signing. "Today, California has ensured that a companion chatbot will not be able to speak to a child or vulnerable individual about suicide, nor will a chatbot be able to help a person to plan his or her own suicide," Garcia said in a statement.

SB 243 also requires companion chatbot platforms to produce an annual report on the connection between use of their product and suicidal ideation. It permits families to pursue private legal action against "noncompliant and negligent developers." California is quickly becoming a leader in regulating AI technology. Last week, Governor Newsom signed legislation requiring AI labs to disclose both the potential harms of their technology and information about their safety protocols. As Mashable's Chase DiBenedetto reported, that bill is meant to "keep AI developers accountable to safety standards even when facing competitive pressure and includes protections for potential whistleblowers." On Monday, Newsom also signed into law two separate bills aimed at improving online child safety.
AB 56 requires warning labels for social media platforms, highlighting the toll that addictive social media feeds can have on children's mental health and well-being. The other bill, AB 1043, implements an age verification requirement that will go into effect in 2027.
[11]
Gavin Newsom Vetoes Bill to Protect Kids From Predatory AI
Content warning: this story includes discussion of self-harm and suicide. If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.

California Governor Gavin Newsom vetoed a state bill on Monday that would've prevented AI companies from allowing minors to access chatbots, unless the companies could prove that their products' guardrails could reliably prevent kids from engaging with inappropriate or dangerous content, including adult roleplay and conversations about self-harm. The bill would have placed a new regulatory burden on companies, which currently adhere to effectively zero AI-specific federal safety standards. As it stands, there are no federal AI laws that compel AI companies to publicly disclose details of safety testing, including where it concerns minors' use of their products; despite this regulatory gap -- or perhaps because of it -- many apps for popular chatbots, including OpenAI's ChatGPT and Google's Gemini, are rated safe for children 12 and over on the iOS store and safe for teens on Google Play. Surveys, meanwhile, continue to show that AI chatbots are becoming a huge part of life for young people, with one recent report showing that over half of teens are regular users of AI companion platforms.

If implemented, the bill -- Assembly Bill 1064 -- would've been the first regulation of its kind in the nation. As for his reasoning, Newsom argued that the bill stood to impose "such broad restrictions on the use of conversational AI tools that it may unintentionally lead to a total ban on the use of these products by minors." So, in short, Newsom says that requiring that companies prove they have foolproof guardrails around inappropriate content for kids -- including where it concerns sex and self-harm -- goes too far, and that the possible benefits of kids using AI chatbots outweigh the possible harms.

Supporters of the bill are disappointed, with some advocates accusing Newsom of caving to Silicon Valley's aggressive, deep-pocketed lobbying efforts. According to the Associated Press, the nonprofit Tech Oversight California found that tech companies and their allies spent around $2.5 million in just the first six months of the session trying to prevent AB 1064 and related legislation from being signed into law. "This legislation is desperately needed to protect children and teens from dangerous -- and even deadly -- AI companion chatbots," said James Steyer, founder and CEO of the tech safety nonprofit Common Sense Media, in a statement. "Clearly, Governor Newsom was under tremendous pressure from the Big Tech Lobby to veto this landmark legislation." "It is genuinely sad that the big tech companies fought this legislation," Steyer added, "which actually is in the best interest of their industry long-term."

News of the veto decision came amid the passage of several other AI-specific regulatory actions in California, including SB 243, a law introduced by state senator Steve Padilla that requires AI companies to issue pop-ups during periods of extended use reminding users that chatbots aren't human; mandates that AI companion platforms create "protocols" around identifying and protecting against conversations about self-harm and suicidal ideation; and mandates that companies institute "reasonable measures" to prevent chatbots from engaging in "sexually explicit conduct" with minors.
The news of the mixed regulatory action in California comes following a slew of high-profile child welfare and product liability lawsuits brought against chatbot companies. Several of the cases involve the AI companion platform Character.AI, which is extremely popular with kids, with families across the country arguing that the platform and its many thousands of AI chatbots sexually and emotionally abused their minor children, resulting in mental anguish, physical self-harm, and in multiple cases, suicide. The most prominent lawsuit of the bunch centers on a 14-year-old Florida teen named Sewell Setzer III, who took his life in February 2024 following extensive, romantically and sexually intimate conversations with multiple Character.AI chatbots. OpenAI is also facing a grim lawsuit regarding the death by suicide of a 16-year-old in California named Adam Raine, who carried out extensive, harrowingly explicit conversations with ChatGPT about suicidal ideation. The lawsuit alleges that ChatGPT's safety guardrails directed Raine, who talked openly about suicidal ideation with the chatbot, to safety resources like the 988 crisis hotline only around 20 percent of the time; elsewhere, it gave Raine specific instructions about suicide methods, and at times discouraged him from speaking to his friends and family about his dark thoughts.
[12]
New law demands AI Chatbots play nice or face legal action
What happened: California is finally stepping in to regulate those AI companion chatbots. Governor Newsom just signed a new law, making California the first state in the country to do so. Starting in 2026, companies like Meta, Character AI, and Replika will have to follow strict safety rules, especially when it comes to protecting kids and vulnerable users. This was pushed forward by some truly heartbreaking stories, including teens who died by suicide after having disturbing conversations with these bots. Now, the law says these companies have to verify ages, have a plan for when someone talks about self-harm, and make it crystal clear you're chatting with an AI, not a real person.

Why is this important: Let's be real, the big worry here is how these AI chatbots are getting so good at mimicking human friendship, especially for people who are feeling lonely or vulnerable. We've seen links to self-harm, misinformation, and exploitation, so it was time for someone to act. This isn't happening in a vacuum, either -- the federal government is also taking a hard look at how these companies are designing and making money off these AI friends.

Why should I care: If you're a parent, this is a huge deal. This law is all about putting some guardrails in place for how these chatbots can interact with your kids. It means more transparency and safety, and hopefully, it stops manipulative or dangerous conversations before they start. For the rest of us, it's a massive step toward making sure these big tech companies are actually held responsible for the things they create.

What's next: So, what happens from here? Well, this isn't just a California story. The federal government is already looking over the shoulders of these big tech companies, making sure they're playing by the rules when it comes to kids' safety. You can bet that officials in every other state are watching this closely. This new law could easily become the model for the rest of the country, setting the stage for a national standard. It's likely to completely rewrite the rulebook for how these AI companions are built and who keeps an eye on them from now on.
[13]
Gavin Newsom signs law to regulate AI, protect kids and teens from chatbots | Fortune
The law requires platforms to remind users they are interacting with a chatbot and not a human. The notification would pop up every three hours for users who are minors. Companies will also have to maintain a protocol to prevent self-harm content and refer users to crisis service providers if they express suicidal ideation.

Newsom, who has four children under 18, said California has a responsibility to protect kids and teens who are increasingly turning to AI chatbots for everything from homework help to emotional support and personal advice. "Emerging technology like chatbots and social media can inspire, educate, and connect - but without real guardrails, technology can also exploit, mislead, and endanger our kids," the Democrat said. "We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability."

California is among several states that tried this year to address concerns surrounding chatbots used by kids for companionship. Safety concerns around the technology exploded following reports and lawsuits saying chatbots made by Meta, OpenAI and others engaged with young users in highly sexualized conversations and, in some cases, coached them to take their own lives. The legislation was among a slew of AI bills introduced by California lawmakers this year to rein in the homegrown industry that is rapidly evolving with little oversight. Tech companies and their coalitions, in response, spent at least $2.5 million in the first six months of the session lobbying against the measures, according to advocacy group Tech Oversight California. Tech companies and leaders in recent months also announced they are launching pro-AI super PACs to fight state and federal oversight.

California Attorney General Rob Bonta in September told OpenAI he has "serious concerns" about its flagship chatbot, ChatGPT, for children and teens. The Federal Trade Commission also launched an inquiry last month into several AI companies about the potential risks for children when they use chatbots as companions. Research by a watchdog group has shown that chatbots can give kids dangerous advice about topics such as drugs, alcohol and eating disorders. The mother of a teenage boy in Florida who died by suicide after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful-death lawsuit against Character.AI. And the parents of 16-year-old Adam Raine recently sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.

OpenAI and Meta last month announced changes to how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress. OpenAI said it is rolling out new controls enabling parents to link their accounts to their teen's account. Meta said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directs them to expert resources. Meta already offers parental controls on teen accounts. EDITOR'S NOTE: This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.
[14]
California Enacts First US Rules for AI 'Companion' Chatbots - Decrypt
Safety groups say the final bill was "watered down" after lobbying, calling it "an empty gesture rather than meaningful policy." California has become the first state to set explicit guardrails for "companion" chatbots, AI programs that mimic friendship or intimacy. Governor Gavin Newsom on Monday signed Senate Bill 243, which requires chatbots to identify themselves as artificial, restrict conversations about sex and self-harm with minors, and report instances of detected suicidal ideation to the state's Office of Suicide Prevention. The law, authored by State Sen. Steve Padilla (D-San Diego), marks a new front in AI oversight -- focusing less on model architecture or data bias and more on the emotional interface between humans and machines. It compels companies to issue regular reminders that users are talking to software, adopt protocols for responding to signs of self-harm, and maintain age-appropriate content filters. The final bill is narrower than the one Padilla first introduced. Earlier versions called for third-party audits and applied to all users, not only minors; those provisions were dropped amid industry pressure.

Too weak to do any good?

Several advocacy groups said the final version of the bill was too weak to make a difference. Common Sense Media and the Tech Oversight Project both withdrew their support after lawmakers stripped out provisions for third-party audits and broader enforcement. In a statement to Tech Policy Press, one advocate said the revised bill risked becoming "an empty gesture rather than meaningful policy." Newsom defended the law as a necessary guardrail for emerging technology. "Emerging technology like chatbots and social media can inspire, educate and connect -- but without real guardrails, technology can also exploit, mislead, and endanger our kids," he said in a statement. "We can continue to lead in AI and technology, but we must do it responsibly -- protecting our children every step of the way."

SB 243 accompanies a broader suite of bills signed in recent weeks, including SB 53, which mandates that large AI developers publicly disclose their safety and risk-mitigation strategies. Together, they place California at the forefront of state-level AI governance. But the new chatbot rules may prove tricky in practice. Developers warn that overly broad liability could prompt companies to restrict legitimate conversations about mental health or sexuality out of caution, depriving users, especially isolated teens, of valuable support. Enforcement, too, could be difficult: a global chatbot company may struggle to verify who qualifies as a California minor or to monitor millions of daily exchanges. And as with many California firsts, there's the risk that well-intentioned regulation ends up exported nationwide before anyone knows if it actually works.
[15]
It's About to Get Harder for AI Chatbots to Pretend to Be Human
Because of California's influence on the tech industry, these rules are likely to shape national standards -- if they survive legal challenges. This week, California governor Gavin Newsom signed a handful of new laws that regulate artificial intelligence and social media. Among them is SB 243, which requires that chatbots provide "clear and conspicuous" notice that they are not a real person. The law goes into effect on Jan. 1, 2026. SB 243 also requires chatbots interacting with children to provide a reminder every three hours to take a break and prohibits chatbots used by minors from generating sexually explicit content. The law mandates that companion AIs have safeguards for people in mental distress, and requires companies to report how they handle situations involving suicidal ideation and self-harm. "Emerging technology like chatbots and social media can inspire, educate and connect -- but without real guardrails, technology can also exploit, mislead, and endanger our kids," the Democratic governor said in a statement.

SB 243 is just one piece of a broader package of tech-focused legislation Newsom approved this week. The AI Transparency Act (AB 853) requires large platforms to disclose when AI is used to generate content. It also requires that recording devices sold in California, such as cameras and video cameras, include the option to embed verifying information. Another bill signed by Newsom, AB 56, requires social media platforms to add regularly timed warnings to minors of the potential mental health risks associated with the use of the apps. AB 621 strengthens penalties for companies whose platforms distribute "deepfake" pornography. And finally, AB 1043 requires that device makers (mostly Apple and Google) implement tools to verify user ages in their app stores.

While the laws Governor Newsom signed apply only to California residents, big tech companies are expected to voluntarily implement the guidelines for the rest of the nation; the population of California is so large that state laws regulating technology there tend to be adopted everywhere. This is assuming, of course, that legal challenges don't scuttle or significantly change the laws: like most legislation aimed at "protecting the children," there is a potential conflict between the protection of children and the protections of adults' rights.
[16]
Bay Area teen working to promote AI safety education for children
Governor Gavin Newsom signed a number of bills into law establishing guardrails for how artificial intelligence can interact with children and vulnerable individuals. But he vetoed one, Assembly Bill 1064, that would have limited children's use of most companion chatbots. High school senior Kaashvi Mittal, 17, can understand why the bill was written. "Obviously, kids are the most vulnerable population in terms of their ideas, they form very easily," Mittal said. Although still a teen, Mittal has already spent time studying AI through a program at Stanford called 'AI for All'. "It was through that program that I learned a lot about the possibilities of AI from discovering drugs to actually having mobile robots that can do different things, and then after that I founded an organization called Together We AI to make AI education accessible to everyone," explained Mittal. She's aware of the dangers that come with it. Back in April, a Southern California 16-year-old took his own life after having a conversation about it with ChatGPT. "Just what you mentioned is actually one of my biggest concerns," Mittal said. "I've heard a lot of stories circulating about either AI models that are engaging in inappropriate relationships with minors or encouraging minors to do things that harm themselves or others." That is why she wants to educate young people, so they have the tools to navigate AI safely, reminding them it's not always correct and that it's not a real being. She appreciates Governor Newsom stepping in and signing bills that continue to create safeguards for artificial intelligence.

"We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability," said Newsom in a statement. "We can continue to lead in AI and technology, but we must do it responsibly, protecting our children every step of the way. Our children's safety is not for sale."

While he signed a number of bills, he vetoed Assembly Bill 1064. That bill would have prohibited children from using many companion chatbots, like ChatGPT, unless it could be proven that they aren't "foreseeably capable" of performing harmful behaviors, like encouraging self-harm or disordered eating. Mittal understands the decision and can see the drawbacks of the bill. "The bill used very broad terminology, and I know one of his concerns was if it's so broad this bill could restrict certain AI technology that is useful for minors, like AI learning systems for example," said Mittal. She says AI, and chatbots specifically, can benefit minors, but she hopes going forward that engineers will focus on creating ethically sound AI systems. "The biggest thing is really about testing and making sure that before these AI models are rolled out for the public to use that they are comprehensively tested and that we can be confident that these AI models won't be encouraging people to do harmful things or spreading wrong ideas," Mittal stated.
[17]
California signs first US law regulating AI chatbots, defying White House stance
California Governor Gavin Newsom on Monday signed a first-of-its-kind law regulating artificial intelligence chatbots, defying a push from the White House to leave such technology unchecked. "We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability," Newsom said after signing the bill into law. The landmark law requires chatbot operators to implement "critical" safeguards regarding interactions with AI chatbots and provides an avenue for people to file lawsuits if failures to do so lead to tragedies, according to state senator Steve Padilla, a Democrat who sponsored the bill. The law comes after revelations of suicides involving teens who used chatbots prior to taking their lives. "The Tech Industry is incentivised to capture young people's attention and hold it at the expense of their real world relationships," Padilla said prior to the bill being voted on in the state senate.

Padilla referred to recent teen suicides, including that of the 14-year-old son of Florida mother Megan Garcia. Her son, Sewell, had fallen in love with a "Game of Thrones"-inspired chatbot on Character.AI, a platform that allows users -- many of them young people -- to interact with beloved characters as friends or lovers. When Sewell struggled with suicidal thoughts, the chatbot urged him to "come home." Seconds later, Sewell shot himself with his father's handgun, according to the lawsuit Garcia filed against Character.AI. "Today, California has ensured that a companion chatbot will not be able to speak to a child or vulnerable individual about suicide, nor will a chatbot be able to help a person to plan his or her own suicide," Garcia said of the new law. "Finally, there is a law that requires companies to protect their users who express suicidal ideations to chatbots." National rules aimed at curbing AI risks do not exist in the United States, with the White House seeking to block individual states from creating their own.
[18]
Newsom signs California AI chatbots bill
Why it matters: California has long been at the forefront of regulating tech, and AI is no exception.
Driving the news: The chatbots legislation signed by Newsom requires operators to have protocols in place to address content or interactions involving suicide or self-harm, such as referring a user to a crisis hotline.
* The new law will require chatbots to notify minors every three hours to "take a break" and that the chatbot is not human.
* Newsom also signed other tech-related bills focused on age verification, social media warning labels and deepfakes.
Flashback: Last month, Newsom signed legislation to mandate transparency measures from frontier AI companies.
The bottom line: California is attempting to balance regulation as it encourages innovation in the AI space.
[19]
Governor Newsom vetoed a bill restricting kids' access to AI chatbots. Here's why
The landmark legislation aimed to protect minors from sexual conversations and self-harm. California Gov. Gavin Newsom on Monday vetoed landmark legislation that would have restricted children's access to AI chatbots. The bill would have banned companies from making AI chatbots available to anyone under 18 years old unless the businesses could ensure the technology couldn't engage in sexual conversations or encourage self-harm. "While I strongly support the author's goal of establishing necessary safeguards for the safe use of AI by minors, (the bill) imposes such broad restrictions on the use of conversational AI tools that it may unintentionally lead to a total ban on the use of these products by minors," Newsom said.

The veto came hours after he signed a law requiring platforms to remind users they are interacting with a chatbot and not a human. The notification would pop up every three hours for users who are minors. Companies will also have to maintain a protocol to prevent self-harm content and refer users to crisis service providers if they express suicidal ideation. Newsom, who has four children under 18, said California has a responsibility to protect kids and teens who are increasingly turning to AI chatbots for everything from help with homework to emotional support and personal advice.

California is among several states that tried this year to address concerns surrounding chatbots used by kids for companionship. Safety concerns around the technology exploded following reports and lawsuits saying chatbots made by Meta, OpenAI and others engaged with young users in highly sexualized conversations and, in some cases, coached them to take their own lives. The two measures were among a slew of AI bills introduced by California lawmakers this year to rein in the homegrown industry that is rapidly evolving with little oversight. Tech companies and their coalitions, in response, spent at least $2.5 million in the first six months of the session lobbying against the measures, according to advocacy group Tech Oversight California. Tech companies and leaders in recent months also announced they are launching pro-AI super PACs to fight state and federal oversight.

The youth AI chatbot ban would have applied to generative AI systems that simulate a "humanlike relationship" with users by retaining their personal information and asking unprompted emotional questions. It would have allowed the state attorney general to seek a civil penalty of $25,000 per violation. James Steyer, founder and CEO of Common Sense Media, said Newsom's veto of the bill was "deeply disappointing." "This legislation is desperately needed to protect children and teens from dangerous -- and even deadly -- AI companion chatbots," he said. But the tech industry argued that the bill was so broad that it would stifle innovation and take away useful tools for children, such as AI tutoring systems and programs that could detect early signs of dyslexia. Steyer also said the notification law didn't go far enough, saying it "provides minimal protections for children and families." "This legislation was heavily watered down after major Big Tech industry pressure," he said, calling it "basically a Nothing Burger." But OpenAI praised Newsom's signing of the law. "By setting clear guardrails, California is helping shape a more responsible approach to AI development and deployment across the country," spokesperson Jamie Radice said.
California Attorney General Rob Bonta in September told OpenAI he has "serious concerns" with its flagship chatbot, ChatGPT, for children and teens. The Federal Trade Commission also launched an inquiry last month into several AI companies about the potential risks for children when they use chatbots as companions. Research by a watchdog group says chatbots have been shown to give kids dangerous advice about topics such as drugs, alcohol and eating disorders. The mother of a teenage boy in Florida who died by suicide after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful-death lawsuit against Character.AI. And the parents of 16-year-old Adam Raine recently sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year. OpenAI and Meta last month announced changes to how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress. Meta said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directs them to expert resources. Meta already offers parental controls on teen accounts. OpenAI said it is rolling out new controls enabling parents to link their accounts to their teen's account.
[20]
California introduces new child safety law aimed at AI chatbots - SiliconANGLE
California Governor Gavin Newsom has signed into law a new bill that regulates artificial intelligence chatbots in an effort to protect children from harm, despite opposition from some technology industry groups and child protection groups. Senate Bill 243 mandates that chatbot operators such as OpenAI, Anthropic PBC and Meta Platforms Inc. implement safeguards to try to prevent their AI systems from encouraging or discussing topics such as suicide or self-harm. Instead, they'll have to refer users to suicide hotlines or similar services. The law also stipulates that chatbots should remind minor users to take a break from them every three hours, and also reiterate that they are not human. In addition, companies are expected to take "reasonable measures" to prevent chatbot companions from outputting any sexually explicit content. "Emerging technology like chatbots and social media can inspire, educate, and connect -- but without real guardrails, technology can also exploit, mislead, and endanger our kids," Newsom said in a statement announcing the new law. In signing the bill, Newsom appears to be trying to maintain a tricky balancing act by addressing concerns over child safety without impacting California's status as one of the world's leaders in AI development. The bill was first proposed by California senators Steve Padilla and Josh Becker in January, and though it initially faced opposition from many, it later attracted a lot of support following the death of teenager Adam Raine, who died by suicide after having long conversations about the topic with OpenAI's ChatGPT. Other recent incidents saw SB 243 gain further momentum. In August, a Meta employee leaked internal documents to Reuters that showed how its chatbots were allowed to engage in "romantic" and "flirtatious" chats with children, disseminate false information and generate responses that demean minorities. And earlier this month, a Colorado family filed a lawsuit against a company called Character Technologies Inc. following the suicide of their 13-year-old daughter, who reportedly engaged in sexualized conversations with one of its role-playing chatbots. "We can continue to lead in AI and technology, but we must do it responsibly -- protecting our children every step of the way," Newsom said. Although there was strong support for SB 243, TechNet, an industry group which lobbies lawmakers on behalf of technology executives, was strongly opposed to the bill, citing concerns it would stifle innovation. A number of child safety groups, such as Common Sense Media and Tech Oversight California, were also against the bill, due to its "industry-friendly exemptions." The law is set to come into effect on January 1, 2026, and requires chatbot operators to implement age verification and warn users of the risks of companion chatbots. The bill implements harsher penalties for anyone profiting from illegal deepfakes, with fines of up to $250,000 per offense. In addition, technology companies must establish protocols that seek to prevent self-harm and suicide. These protocols will have to be shared with the California Department of Public Health to ensure they're suitable. Companies will also be required to share statistics on how often their services issue crisis center prevention alerts to their users.
Some AI companies have already taken steps to protect children, with OpenAI recently introducing parental controls and content safeguards in ChatGPT, along with a self-harm detection feature. Meanwhile, Character AI has added a disclaimer to its chatbot that reminds users that all chats are AI-generated and fictional. Newsom is no stranger to AI legislation. In September, he signed into law another bill called SB 53, which mandates greater transparency from AI companies. More specifically, it requires AI firms to be fully transparent about the safety protocols they implement, while providing protections for whistleblower employees. SB 243 makes California the first U.S. state to require AI chatbot operators to implement safety protocols, but other states have previously introduced more limited legislation. For instance, Illinois, Nevada and Utah have all passed laws that either restrict or ban entirely the use of AI chatbots as a substitute for licensed mental health care.
[21]
California governor vetoes bill to restrict kids' access to AI chatbots
California Gov. Gavin Newsom has vetoed landmark legislation that would have restricted children's access to AI chatbots. SACRAMENTO, Calif. (AP) -- California Gov. Gavin Newsom on Monday vetoed landmark legislation that would have restricted children's access to AI chatbots. The bill would have banned companies from making AI chatbots available to anyone under 18 years old unless the businesses could ensure the technology couldn't engage in sexual conversations or encourage self-harm. "While I strongly support the author's goal of establishing necessary safeguards for the safe use of AI by minors, (the bill) imposes such broad restrictions on the use of conversational AI tools that it may unintentionally lead to a total ban on the use of these products by minors," Newsom said. The veto came hours after he signed a law requiring platforms to remind users they are interacting with a chatbot and not a human. The notification would pop up every three hours for users who are minors. Companies will also have to maintain a protocol to prevent self-harm content and refer users to crisis service providers if they express suicidal ideation. Newsom, who has four children under 18, said California has a responsibility to protect kids and teens who are increasingly turning to AI chatbots for everything from help with homework to emotional support and personal advice. California is among several states that tried this year to address concerns surrounding chatbots used by kids for companionship. Safety concerns around the technology exploded following reports and lawsuits saying chatbots made by Meta, OpenAI and others engaged with young users in highly sexualized conversations and, in some cases, coached them to take their own lives. The two measures were among a slew of AI bills introduced by California lawmakers this year to rein in the homegrown industry that is rapidly evolving with little oversight. Tech companies and their coalitions, in response, spent at least $2.5 million in the first six months of the session lobbying against the measures, according to advocacy group Tech Oversight California. Tech companies and leaders in recent months also announced they are launching pro-AI super PACs to fight state and federal oversight. The youth AI chatbot ban would have applied to generative AI systems that simulate "humanlike relationship" with users by retaining their personal information and asking unprompted emotional questions. It would have allowed the state attorney general to seek a civil penalty of $25,000 per violation. James Steyer, founder and CEO of Common Sense Media, said Newsom's veto of the bill was "deeply disappointing." "This legislation is desperately needed to protect children and teens from dangerous -- and even deadly -- AI companion chatbots," he said. But the tech industry argued that the bill was so broad that it would stifle innovation and take away useful tools for children, such as AI tutoring systems and programs that could detect early signs of dyslexia. Steyer also said the notification law didn't go far enough, saying it "provides minimal protections for children and families." "This legislation was heavily watered down after major Big Tech industry pressure," he said, calling it "basically a Nothing Burger." But OpenAI praised Newsom's signing of the law. "By setting clear guardrails, California is helping shape a more responsible approach to AI development and deployment across the country," spokesperson Jamie Radice said.
California Attorney General Rob Bonta in September told OpenAI he has "serious concerns" with its flagship chatbot, ChatGPT, for children and teens. The Federal Trade Commission also launched an inquiry last month into several AI companies about the potential risks for children when they use chatbots as companions. Research by a watchdog group says chatbots have been shown to give kids dangerous advice about topics such as drugs, alcohol and eating disorders. The mother of a teenage boy in Florida who died by suicide after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful-death lawsuit against Character.AI. And the parents of 16-year-old Adam Raine recently sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year. OpenAI and Meta last month announced changes to how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress. Meta said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directs them to expert resources. Meta already offers parental controls on teen accounts. OpenAI said it is rolling out new controls enabling parents to link their accounts to their teen's account. ___ EDITOR'S NOTE: This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.
[22]
California governor signs law to protect kids from the risks of AI chatbots
SACRAMENTO, Calif. (AP) -- California Gov. Gavin Newsom on Monday signed legislation to regulate artificial intelligence chatbots and protect children and teens from the potential dangers of the technology. The law requires platforms to remind users they are interacting with a chatbot and not a human. The notification would pop up every three hours for users who are minors. Companies will also have to maintain a protocol to prevent self-harm content and refer users to crisis service providers if they express suicidal ideation. Newsom, who has four children under 18, said California has a responsibility to protect kids and teens who are increasingly turning to AI chatbots for everything from homework help to emotional support and personal advice. "Emerging technology like chatbots and social media can inspire, educate, and connect - but without real guardrails, technology can also exploit, mislead, and endanger our kids," the Democrat said. "We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability." California is among several states that tried this year to address concerns surrounding chatbots used by kids for companionship. Safety concerns around the technology exploded following reports and lawsuits saying chatbots made by Meta, OpenAI and others engaged with young users in highly sexualized conversations and, in some cases, coached them to take their own lives. The legislation was among a slew of AI bills introduced by California lawmakers this year to rein in the homegrown industry that is rapidly evolving with little oversight. Tech companies and their coalitions, in response, spent at least $2.5 million in the first six months of the session lobbying against the measures, according to advocacy group Tech Oversight California. Tech companies and leaders in recent months also announced they are launching pro-AI super PACs to fight state and federal oversight. California Attorney General Rob Bonta in September told OpenAI he has "serious concerns" with its flagship chatbot, ChatGPT, for children and teens. The Federal Trade Commission also launched an inquiry last month into several AI companies about the potential risks for children when they use chatbots as companions. Research by a watchdog group says chatbots have been shown to give kids dangerous advice about topics such as drugs, alcohol and eating disorders. The mother of a teenage boy in Florida who died by suicide after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful-death lawsuit against Character.AI. And the parents of 16-year-old Adam Raine recently sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year. OpenAI and Meta last month announced changes to how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress. OpenAI said it is rolling out new controls enabling parents to link their accounts to their teen's account. Meta said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directs them to expert resources. Meta already offers parental controls on teen accounts. EDITOR'S NOTE: This story includes discussion of suicide.
If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.
[23]
California governor signs laws establishing safeguards over AI chatbots
The laws will likely impact social media companies and websites offering services to California residents, including minors, using AI tools. California Governor Gavin Newsom announced that the US state would establish regulatory safeguards for social media platforms and AI companion chatbots in an effort to protect children. In a Monday notice, the governor's office said Newsom had signed several bills into law that will require platforms to add age verification features, protocols to address suicide and self-harm, and warnings for companion chatbots. The AI bill, SB 243, was introduced by state Senators Steve Padilla and Josh Becker in January. Padilla cited examples of children communicating with AI companion bots, allegedly resulting in some instances of encouraging suicide. The bill requires platforms to disclose to minors that the chatbots are AI-generated and may not be suitable for children, according to Padilla. "This technology can be a powerful educational and research tool, but left to their own devices the Tech Industry is incentivized to capture young people's attention and hold it at the expense of their real world relationships," Padilla said in September. The law will likely impact social media companies and websites offering services to California residents using AI tools, potentially including decentralized social media and gaming platforms. In addition to the chatbot safeguards, the bills aim to prevent companies from escaping liability by claiming the technology "act[ed] autonomously." SB 243 is expected to go into effect in January 2026. There have been reports of AI chatbots allegedly producing responses that encourage minors to commit self-harm or that pose risks to users' mental health. Utah Governor Spencer Cox signed bills similar to California's into law in 2024; they took effect in May and require AI chatbots to disclose to users that they are not speaking to a human being. In June, Wyoming Senator Cynthia Lummis introduced the Responsible Innovation and Safe Expertise (RISE) Act, which would create "immunity from civil liability" for AI developers potentially facing lawsuits from industry leaders in "healthcare, law, finance, and other sectors critical to the economy." The bill received mixed reactions and was referred to the House Committee on Education and Workforce.
[24]
California enacts first US law requiring AI chatbot safety measures
San Francisco (United States) (AFP) - California governor Gavin Newsom on Monday signed a first-of-its-kind law regulating artificial intelligence chatbots, defying a push from the White House to leave such technology unchecked. "We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability," Newsom said after signing the bill into law. The landmark law requires chatbot operators to implement "critical" safeguards regarding interactions with AI chatbots and provides an avenue for people to file lawsuits if failures to do so lead to tragedies, according to state senator Steve Padilla, a Democrat who sponsored the bill. The law comes after revelations of suicides involving teens who used chatbots prior to taking their lives. "The Tech Industry is incentivized to capture young people's attention and hold it at the expense of their real world relationships," Padilla said prior to the bill being voted on in the state senate. Padilla referred to recent teen suicides including that of the 14-year-old son of Florida mother Megan Garcia. Megan Garcia's son, Sewell, had fallen in love with a "Game of Thrones"-inspired chatbot on Character.AI, a platform that allows users -- many of them young people -- to interact with beloved characters as friends or lovers. When Sewell struggled with suicidal thoughts, the chatbot urged him to "come home." Seconds later, Sewell shot himself with his father's handgun, according to the lawsuit Garcia filed against Character.AI. "Today, California has ensured that a companion chatbot will not be able to speak to a child or vulnerable individual about suicide, nor will a chatbot be able to help a person to plan his or her own suicide," Garcia said of the new law. "Finally, there is a law that requires companies to protect their users who express suicidal ideations to chatbots." National rules aimed at curbing AI risks do not exist in the United States, with the White House seeking to block individual states from creating their own. The new California law sets guardrails that include reminding users that chatbots are AI-generated and mandating that people who express thoughts of self-harm or suicide be referred to crisis service providers. "This law is an important first step in protecting kids and others from the emotional harms that result from AI companion chatbots which have been unleashed on the citizens of California without proper safeguards," said Jai Jaisimha, co-founder of the Transparency Coalition, a nonprofit group devoted to the safe development of the technology.

Creators accountable

The landmark chatbot safety measure was among a slew of bills signed into law Monday by Newsom crafted to prevent AI platforms from doing harm to users. New legislation included a ban on chatbots passing themselves off as health care professionals and making it clear that those who create or use AI tools are accountable for the consequences and can't dodge liability by claiming the technology acted autonomously, according to Newsom's office. California also ramped up penalties for deepfake porn, allowing victims to seek as much as $250,000 per infraction from those who aid in distribution of nonconsensual sexually explicit material.
[25]
Newsom signs bill regulating AI chatbots
California Gov. Gavin Newsom (D) signed a bill Monday placing new guardrails on how artificial intelligence (AI) chatbots interact with children and handle issues of suicide and self-harm. S.B. 243, which cleared the state legislature in mid-September, requires developers of "companion chatbots" to create protocols preventing their models from producing content about suicidal ideation, suicide or self-harm and directing users to crisis services if needed. It also requires chatbots to issue "clear and conspicuous" notifications that they are artificially generated if someone could reasonably be misled to believe they were interacting with another human. When interacting with children, chatbots must issue reminders every three hours that they are not human. Developers are also required to create systems preventing their chatbots from producing sexually explicit content in conversations with minors. "Emerging technology like chatbots and social media can inspire, educate, and connect - but without real guardrails, technology can also exploit, mislead, and endanger our kids," Newsom said in a statement. "We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability," he added. "We can continue to lead in AI and technology, but we must do it responsibly -- protecting our children every step of the way. Our children's safety is not for sale." The family of a California teenager sued OpenAI in late August, alleging that ChatGPT encouraged their 16-year-old son to commit suicide. The father, Matthew Raine, testified before a Senate panel last month, alongside two other parents who accused chatbots of driving their children to suicide or self-harm. Growing concerns about how AI chatbots interact with children prompted the Federal Trade Commission (FTC) to launch an inquiry into the issue, requesting information from several leading tech companies. Sens. Josh Hawley (R-Mo.) and Dick Durbin (D-Ill.) also introduced legislation late last month that would classify AI chatbots as products in order to allow harmed users to file liability claims. The California measure is the latest of several AI and tech-related bills signed into law by Newsom this session. On Monday, he also approved measures requiring warning labels on social media platforms and age verification by operating systems and app stores. In late September, he also signed S.B. 53, which requires developers of leading-edge AI models to publish frameworks detailing how they assess and mitigate catastrophic risks.
[26]
California Governor Signs Law to Protect Kids From the Risks of AI Chatbots
SACRAMENTO, Calif. (AP) -- California Gov. Gavin Newsom on Monday signed legislation to regulate artificial intelligence chatbots and protect children and teens from the potential dangers of the technology. The law requires platforms to remind users they are interacting with a chatbot and not a human. The notification would pop up every three hours for users who are minors. Companies will also have to maintain a protocol to prevent self-harm content and refer users to crisis service providers if they express suicidal ideation. Newsom, who has four children under 18, said California has a responsibility to protect kids and teens who are increasingly turning to AI chatbots for everything from homework help to emotional support and personal advice. "Emerging technology like chatbots and social media can inspire, educate, and connect - but without real guardrails, technology can also exploit, mislead, and endanger our kids," the Democrat said. "We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability." California is among several states that tried this year to address concerns surrounding chatbots used by kids for companionship. Safety concerns around the technology exploded following reports and lawsuits saying chatbots made by Meta, OpenAI and others engaged with young users in highly sexualized conversations and, in some cases, coached them to take their own lives. The legislation was among a slew of AI bills introduced by California lawmakers this year to rein in the homegrown industry that is rapidly evolving with little oversight. Tech companies and their coalitions, in response, spent at least $2.5 million in the first six months of the session lobbying against the measures, according to advocacy group Tech Oversight California. Tech companies and leaders in recent months also announced they are launching pro-AI super PACs to fight state and federal oversight. California Attorney General Rob Bonta in September told OpenAI he has "serious concerns" with its flagship chatbot, ChatGPT, for children and teens. The Federal Trade Commission also launched an inquiry last month into several AI companies about the potential risks for children when they use chatbots as companions. Research by a watchdog group says chatbots have been shown to give kids dangerous advice about topics such as drugs, alcohol and eating disorders. The mother of a teenage boy in Florida who died by suicide after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful-death lawsuit against Character.AI. And the parents of 16-year-old Adam Raine recently sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year. OpenAI and Meta last month announced changes to how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress. OpenAI said it is rolling out new controls enabling parents to link their accounts to their teen's account. Meta said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directs them to expert resources. Meta already offers parental controls on teen accounts. EDITOR'S NOTE: This story includes discussion of suicide.
If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.
[27]
California governor signs law to protect kids from the risks of AI chatbots
SACRAMENTO, Calif. (AP) -- California Gov. Gavin Newsom on Monday signed legislation to regulate artificial intelligence chatbots and protect children and teens from the potential dangers of the technology. The law requires platforms to remind users they are interacting with a chatbot and not a human. The notification would pop up every three hours for users who are minors. Companies will also have to maintain a protocol to prevent self-harm content and refer users to crisis service providers if they express suicidal ideation. Newsom, who has four children under 18, said California has a responsibility to protect kids and teens who are increasingly turning to AI chatbots for everything from homework help to emotional support and personal advice. "Emerging technology like chatbots and social media can inspire, educate, and connect - but without real guardrails, technology can also exploit, mislead, and endanger our kids," the Democrat said. "We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability." California is among several states that tried this year to address concerns surrounding chatbots used by kids for companionship. Safety concerns around the technology exploded following reports and lawsuits saying chatbots made by Meta, OpenAI and others engaged with young users in highly sexualized conversations and, in some cases, coached them to take their own lives. The legislation was among a slew of AI bills introduced by California lawmakers this year to rein in the homegrown industry that is rapidly evolving with little oversight. Tech companies and their coalitions, in response, spent at least $2.5 million in the first six months of the session lobbying against the measures, according to advocacy group Tech Oversight California. Tech companies and leaders in recent months also announced they are launching pro-AI super PACs to fight state and federal oversight. California Attorney General Rob Bonta in September told OpenAI he has "serious concerns" with its flagship chatbot, ChatGPT, for children and teens. The Federal Trade Commission also launched an inquiry last month into several AI companies about the potential risks for children when they use chatbots as companions. Research by a watchdog group says chatbots have been shown to give kids dangerous advice about topics such as drugs, alcohol and eating disorders. The mother of a teenage boy in Florida who died by suicide after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful-death lawsuit against Character.AI. And the parents of 16-year-old Adam Raine recently sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year. OpenAI and Meta last month announced changes to how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress. OpenAI said it is rolling out new controls enabling parents to link their accounts to their teen's account. Meta said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directs them to expert resources. Meta already offers parental controls on teen accounts. EDITOR'S NOTE: This story includes discussion of suicide.
If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.
[28]
California governor signs law to protect kids from the risks of AI chatbots
California Governor Gavin Newsom signed a law regulating AI chatbots to protect children and teens. Platforms must remind minors every three hours that they are interacting with a chatbot, prevent self-harm content, and refer at-risk users to crisis services. California governor Gavin Newsom on Monday signed legislation to regulate artificial intelligence chatbots and protect children and teens from the potential dangers of the technology. The law requires platforms to remind users they are interacting with a chatbot and not a human. The notification would pop up every three hours for users who are minors. Companies will also have to maintain a protocol to prevent self-harm content and refer users to crisis service providers if they express suicidal ideation. Newsom, who has four children under 18, said California has a responsibility to protect kids and teens who are increasingly turning to AI chatbots for everything from homework help to emotional support and personal advice. "Emerging technology like chatbots and social media can inspire, educate, and connect - but without real guardrails, technology can also exploit, mislead, and endanger our kids," the Democrat said. "We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability." California is among several states that tried this year to address concerns surrounding chatbots used by kids for companionship. Safety concerns around the technology exploded following reports and lawsuits saying chatbots made by Meta, OpenAI and others engaged with young users in highly sexualized conversations and, in some cases, coached them to take their own lives. The legislation was among a slew of AI bills introduced by California lawmakers this year to rein in the homegrown industry that is rapidly evolving with little oversight. Tech companies and their coalitions, in response, spent at least $2.5 million in the first six months of the session lobbying against the measures, according to advocacy group Tech Oversight California. Tech companies and leaders in recent months also announced they are launching pro-AI super PACs to fight state and federal oversight. California Attorney General Rob Bonta in September told OpenAI he has "serious concerns" with its flagship chatbot, ChatGPT, for children and teens. The Federal Trade Commission also launched an inquiry last month into several AI companies about the potential risks for children when they use chatbots as companions. Research by a watchdog group says chatbots have been shown to give kids dangerous advice about topics such as drugs, alcohol and eating disorders. The mother of a teenage boy in Florida who died by suicide after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful-death lawsuit against Character.AI. And the parents of 16-year-old Adam Raine recently sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year. OpenAI and Meta last month announced changes to how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress. OpenAI said it is rolling out new controls enabling parents to link their accounts to their teen's account.
Meta said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directs them to expert resources. Meta already offers parental controls on teen accounts.
[29]
California governor signs law to protect kids from risks of AI chatbots - The Korea Times
SACRAMENTO, Calif. -- California Gov. Gavin Newsom on Monday signed legislation to regulate artificial intelligence chatbots and protect children and teens from the potential dangers of the technology. The law requires platforms to remind users they are interacting with a chatbot and not a human. The notification would pop up every three hours for users who are minors. Companies will also have to maintain a protocol to prevent self-harm content and refer users to crisis service providers if they express suicidal ideation. Newsom, who has four children under 18, said California has a responsibility to protect kids and teens who are increasingly turning to AI chatbots for everything from homework help to emotional support and personal advice. "Emerging technology like chatbots and social media can inspire, educate, and connect - but without real guardrails, technology can also exploit, mislead, and endanger our kids," the Democrat said. "We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability." California is among several states that tried this year to address concerns surrounding chatbots used by kids for companionship. Safety concerns around the technology exploded following reports and lawsuits saying chatbots made by Meta, OpenAI and others engaged with young users in highly sexualized conversations and, in some cases, coached them to take their own lives. The legislation was among a slew of AI bills introduced by California lawmakers this year to rein in the homegrown industry that is rapidly evolving with little oversight. Tech companies and their coalitions, in response, spent at least $2.5 million in the first six months of the session lobbying against the measures, according to advocacy group Tech Oversight California. Tech companies and leaders in recent months also announced they are launching pro-AI super PACs to fight state and federal oversight. California Attorney General Rob Bonta in September told OpenAI he has "serious concerns" with its flagship chatbot, ChatGPT, for children and teens. The Federal Trade Commission also launched an inquiry last month into several AI companies about the potential risks for children when they use chatbots as companions. Research by a watchdog group says chatbots have been shown to give kids dangerous advice about topics such as drugs, alcohol and eating disorders. The mother of a teenage boy in Florida who died by suicide after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful-death lawsuit against Character.AI. And the parents of 16-year-old Adam Raine recently sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year. OpenAI and Meta last month announced changes to how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress. OpenAI said it is rolling out new controls enabling parents to link their accounts to their teen's account. Meta said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directs them to expert resources. Meta already offers parental controls on teen accounts.
California becomes the first US state to regulate AI companion chatbots, implementing new safeguards to protect children and vulnerable users. The law addresses concerns about mental health, suicide prevention, and inappropriate content.
In a groundbreaking move, California has become the first U.S. state to implement regulations on AI companion chatbots, with Governor Gavin Newsom signing Senate Bill 243 (SB 243) into law on October 13, 2025 [1][2]. This landmark legislation, set to take effect on January 1, 2026, aims to protect children and vulnerable users from potential harms associated with AI chatbot interactions.
The new law introduces several crucial requirements for AI companion chatbot operators:
* Suicide Prevention Protocols: Companies must establish and publicize protocols to identify and address users expressing suicidal ideation or self-harm [1].
* Transparency in AI Interactions: Chatbots must clearly inform users that they are interacting with an AI system, not a human [4].
* Break Reminders for Minors: For users under 18, chatbots must provide notifications at least every three hours, reminding them to take breaks [3].
* Content Restrictions: AI companions are prohibited from generating sexually explicit content for minors or engaging in sexual conversations with them [2].
* Reporting Requirements: Companies must share statistics on crisis center prevention notifications with the Department of Public Health and publish these on their websites [1].
In addition to regulating AI chatbots, the law also strengthens penalties for deepfake pornography. Victims, including minors, can now seek up to $250,000 in damages per deepfake from third parties who knowingly distribute nonconsensual sexually explicit material created using AI tools [1].
Some companies have already begun implementing safeguards in line with the new regulations. OpenAI, for instance, has introduced parental controls, content protections, and a self-harm detection system for ChatGPT [2]. Replika, an AI companion developer, stated that they already have protocols to detect self-harm and are working to comply with the new requirements [3]. While signing SB 243, Governor Newsom emphasized the need to balance technological advancement with responsible development. "We can continue to lead in AI and technology, but we must do it responsibly -- protecting our children every step of the way," Newsom stated [5]. However, Newsom vetoed another bill, AB 1064, which would have imposed broader restrictions on AI chatbots' interactions with minors. He expressed concerns that such strict limitations could unintentionally lead to a total ban on these products for minors [5].
As AI continues to shape our world, California's pioneering legislation sets a precedent for other states and countries grappling with the challenges of regulating emerging AI technologies while ensuring user safety, particularly for vulnerable populations like children and teenagers.