2 Sources
[1]
California Wants AI Chatbots to Remind Users They Aren't People
The proposed law is designed to protect kids from becoming socially isolated. Even if chatbots successfully pass the Turing test, they'll have to give up the game if they're operating in California. A new bill proposed by California Senator Steve Padilla would require chatbots that interact with children to offer occasional reminders that they are, in fact, a machine and not a real person.

The bill, SB 243, was introduced as part of an effort to regulate the safeguards that companies operating chatbots must put in place to protect children. Among the requirements the bill would establish: it would ban companies from "providing rewards" to users to increase engagement or usage, require companies to report to the State Department of Health Care Services how frequently minors are displaying signs of suicidal ideation, and require periodic reminders that chatbots are AI-generated and not human.

That last bit is particularly germane to the current moment, as kids have been shown to be quite vulnerable to these systems. Last year, a 14-year-old tragically took his own life after developing an emotional connection with a chatbot made accessible by Character.AI, a service for creating chatbots modeled after different pop culture characters. The parents of the child have sued Character.AI over the death, accusing the platform of being "unreasonably dangerous" and lacking sufficient safety guardrails despite being marketed to children.

Researchers at the University of Cambridge have found that children are more likely than adults to view AI chatbots as trustworthy, even viewing them as quasi-human. That can put children at significant risk when chatbots respond to their prompting without any sort of protection in place. It's how, for instance, researchers were able to get Snapchat's built-in AI to provide instructions to a hypothetical 13-year-old user on how to lie to her parents to meet up with a 30-year-old and lose her virginity.

There are potential benefits to kids feeling free to share their feelings with a bot if it allows them to express themselves in a place where they feel safe. But the risk of isolation is real. Little reminders that there is not a person on the other end of your conversation may be helpful, and intervening in the cycle of addiction that tech platforms are so adept at trapping kids in through repeated dopamine hits is a good starting point. Failing to provide those types of interventions as social media started to take over is part of how we got here in the first place.

But these protections won't address the root issues that lead kids to seek out the support of chatbots in the first place. There is a severe lack of resources available to facilitate real-life relationships for kids. Classrooms are overcrowded and underfunded, after-school programs are on the decline, "third places" continue to disappear, and there is a shortage of child psychologists to help kids process everything they are dealing with. It's good to remind kids that chatbots aren't real, but it'd be better to put them in situations where they don't feel like they need to talk to the bots in the first place.
[2]
California bill would make AI companies remind kids that chatbots aren't people
In addition to barring companies from using "addictive engagement patterns," the bill would require AI companies to provide annual reports to the State Department of Health Care Services outlining how many times they detected suicidal ideation among kids using the platform, as well as the number of times a chatbot brought up the topic. It would also make companies tell users that their chatbots might not be appropriate for some kids.
California Senator Steve Padilla introduces a bill to regulate AI chatbots, requiring them to remind children they are not real people and implementing measures to protect minors from potential harm.
California State Senator Steve Padilla has proposed a new bill, SB 243, aimed at regulating AI chatbots to protect children from potential harm and social isolation. The bill introduces several key measures to ensure the responsible use of AI technology when interacting with minors [1].

Reminders of AI Nature: The bill would require chatbots interacting with children to provide periodic reminders that they are AI-generated and not human [1].

Ban on Engagement Rewards: Companies would be prohibited from offering rewards to users to increase engagement or usage [1].

Reporting Requirements: AI companies would need to submit annual reports to the State Department of Health Care Services, detailing the frequency of suicidal ideation detected among minor users and instances where chatbots initiated discussions about suicide [2].

User Warnings: Companies would be required to inform users that their chatbots may not be suitable for some children [2].

The proposed bill addresses growing concerns about children's vulnerability to AI systems. Research from the University of Cambridge has shown that children are more likely than adults to view AI chatbots as trustworthy and quasi-human [1].

A tragic incident involving a 14-year-old who took his own life after developing an emotional connection with a chatbot on Character.AI has highlighted the potential dangers of unregulated AI interactions with minors. The child's parents have since filed a lawsuit against the company [1].
While the bill aims to protect children from harmful AI interactions, it also recognizes the potential benefits of chatbots as a safe space for self-expression. However, critics argue that the proposed measures do not address the root causes that lead children to seek support from AI chatbots, such as a lack of real-life resources and relationships [1].

The legislation represents a significant step towards regulating AI technologies in the interest of child safety. As AI continues to advance and integrate into various aspects of daily life, such regulatory efforts may become increasingly important in ensuring responsible development and use of these technologies.
Summarized by Navi