2 Sources
[1]
California Wants AI Chatbots to Remind Users They Aren't People
The proposed law is designed to protect kids from becoming socially isolated. Even if chatbots successfully pass the Turing test, they'll have to give up the game if they're operating in California. A new bill proposed by California Senator Steve Padilla would require chatbots that interact with children to offer occasional reminders that they are, in fact, a machine and not a real person.

The bill, SB 243, was introduced as part of an effort to regulate the safeguards that companies operating chatbots must put in place to protect children. Among the requirements the bill would establish: it would ban companies from "providing rewards" to users to increase engagement or usage, require companies to report to the State Department of Health Care Services how frequently minors display signs of suicidal ideation, and require chatbots to provide periodic reminders that they are AI-generated and not human.

That last bit is particularly germane to the current moment, as kids have been shown to be quite vulnerable to these systems. Last year, a 14-year-old tragically took his own life after developing an emotional connection with a chatbot made accessible by Character.AI, a service for creating chatbots modeled after different pop culture characters. The parents of the child have sued Character.AI over the death, accusing the platform of being "unreasonably dangerous" and lacking sufficient safety guardrails despite being marketed to children.

Researchers at the University of Cambridge have found that children are more likely than adults to view AI chatbots as trustworthy, even viewing them as quasi-human. That can put children at significant risk when chatbots respond to their prompting without any sort of protection in place. It's how, for instance, researchers were able to get Snapchat's built-in AI to provide instructions to a hypothetical 13-year-old user on how to lie to her parents to meet up with a 30-year-old and lose her virginity.

There are potential benefits to kids feeling free to share their feelings with a bot if it allows them to express themselves in a place where they feel safe. But the risk of isolation is real. Little reminders that there is not a person on the other end of the conversation may be helpful, and intervening in the cycle of addiction that tech platforms are so adept at trapping kids in through repeated dopamine hits is a good starting point. Failing to provide those kinds of interventions as social media took over is part of how we got here in the first place.

But these protections won't address the root issues that lead kids to seek out the support of chatbots in the first place. There is a severe lack of resources available to facilitate real-life relationships for kids. Classrooms are overstuffed and underfunded, after-school programs are on the decline, "third places" continue to disappear, and there is a shortage of child psychologists to help kids process everything they are dealing with. It's good to remind kids that chatbots aren't real, but it'd be better to put them in situations where they don't feel like they need to talk to the bots in the first place.
[2]
California bill would make AI companies remind kids that chatbots aren't people
In addition to barring companies from using "addictive engagement patterns," the bill would require AI companies to submit annual reports to the State Department of Health Care Services outlining how many times they detected suicidal ideation among kids using their platforms, as well as the number of times a chatbot brought up the topic. It would also require companies to tell users that their chatbots might not be appropriate for some kids.
California Senator Steve Padilla introduces a bill to regulate AI chatbots, requiring them to remind children that they are not real people and mandating measures to protect minors from potential harm.
California State Senator Steve Padilla has proposed a new bill, SB 243, aimed at regulating AI chatbots to protect children from potential harm and social isolation. The bill introduces several key measures to ensure the responsible use of AI technology when interacting with minors [1].
Reminders of AI Nature: The bill would require chatbots interacting with children to provide periodic reminders that they are AI-generated and not human [1].
Ban on Engagement Rewards: Companies would be prohibited from offering rewards to users to increase engagement or usage [1].
Reporting Requirements: AI companies would need to submit annual reports to the State Department of Health Care Services, detailing the frequency of suicidal ideation detected among minor users and instances where chatbots initiated discussions about suicide [2].
User Warnings: Companies would be required to inform users that their chatbots may not be suitable for some children [2].
The proposed bill addresses growing concerns about children's vulnerability to AI systems. Research from the University of Cambridge has shown that children are more likely than adults to view AI chatbots as trustworthy and quasi-human [1].
A tragic incident involving a 14-year-old who took his own life after developing an emotional connection with a chatbot on Character.AI has highlighted the potential dangers of unregulated AI interactions with minors. The child's parents have since filed a lawsuit against the company [1].
While the bill aims to protect children from harmful AI interactions, it also recognizes the potential benefits of chatbots as a safe space for self-expression. However, critics argue that the proposed measures do not address the root causes that lead children to seek support from AI chatbots, such as a lack of real-life resources and relationships [1].
The legislation represents a significant step towards regulating AI technologies in the interest of child safety. As AI continues to advance and integrate into various aspects of daily life, such regulatory efforts may become increasingly important in ensuring responsible development and use of these technologies.