8 Sources
[1]
Senators move to keep Big Tech's creepy companion bots away from kids
The US will weigh a ban on children's access to companion bots, as two senators announced bipartisan legislation Tuesday that would criminalize making chatbots that encourage harms like suicidal ideation or engage kids in sexually explicit chats. At a press conference, Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) introduced the GUARD Act, joined by grieving parents holding up photos of their children lost after engaging with chatbots.
If passed, the law would require chatbot makers to check IDs or use "any other commercially reasonable method" to accurately assess if a user is a minor who must be blocked. Companion bots would also have to repeatedly remind users of all ages that they aren't real humans or trusted professionals. Failing to block a minor from engaging with chatbots that are stoking harmful conduct -- such as exposing minors to sexual chats or encouraging "suicide, non-suicidal self-injury, or imminent physical or sexual violence" -- could trigger fines of up to $100,000, Time reported. (That's perhaps small to a Big Tech firm, but notably higher than the $100 maximum payout that one mourning parent suggested she was offered.)
The definition of "companion bot" is broad and likely to pull in widely used tools like ChatGPT, Grok, or Meta AI, as well as character-driven chatbots like Replika or Character.AI. It covers any AI chatbot that "provides adaptive, human-like responses to user inputs" and "is designed to encourage or facilitate the simulation of interpersonal or emotional interaction, friendship, companionship, or therapeutic communication," Time reported.
Parents no longer trust chatbot makers
Among the parents speaking at the press conference was Megan Garcia. Her son, Sewell, died by suicide after he became obsessed with a Character.AI chatbot based on a Game of Thrones character, Daenerys Targaryen, which urged him to "come home" and join her outside of reality. Garcia acknowledged that parents whose kids were harmed by social media came first and know "the cost of failing to pass legislation" that can save kids' lives. She called for support for the law, insisting that chatbot makers -- and their funders, including Big Tech companies like Google -- will never choose child safety over profits unless lawmakers force them to make meaningful changes.
"Big Tech cannot be trusted with our children," Garcia said, alleging that releasing chatbots to users as young as 13 without appropriate safeguards was a choice companies made, rather than a mistake. "Not only is this reckless, but it's immoral," Garcia said.
At the press conference, Blumenthal acknowledged the "good guys" in AI who, he said, are valiantly trying to improve their products' child-safety features. But he agreed that "Big Tech has betrayed any claim that we should trust companies to do the right thing on their own."
"In their race to the bottom, AI companies are pushing treacherous chatbots at kids and looking away when their products cause sexual abuse, or coerce them into self-harm or suicide," Blumenthal told NBC News. "Our legislation imposes strict safeguards against exploitative or manipulative AI, backed by tough enforcement with criminal and civil penalties."
Hawley agreed with Garcia that the AI industry must align with America's morals and values, telling NBC News that "AI chatbots pose a serious threat to our kids."
"More than 70 percent of American children are now using these AI products," Hawley said. "Chatbots develop relationships with kids using fake empathy and are encouraging suicide. We in Congress have a moral duty to enact bright-line rules to prevent further harm from this new technology."
Big Tech says bans aren't the answer
As the bill advances, it could change, senators and parents acknowledged at the press conference. It will likely face backlash from privacy advocates who have raised concerns that widely collecting personal data for age verification puts sensitive information at risk of a data breach or other misuse.
The tech industry has already voiced opposition. On Tuesday, Chamber of Progress, a Big Tech trade group, criticized the law as taking a "heavy-handed approach" to child safety. The group's vice president of US policy and government relations, K.J. Bagchi, said that "we all want to keep kids safe, but the answer is balance, not bans."
"It's better to focus on transparency when kids chat with AI, curbs on manipulative design, and reporting when sensitive issues arise," Bagchi said.
However, several organizations dedicated to child safety online, including the Young People's Alliance, the Tech Justice Law Project, and the Institute for Families and Technology, cheered the senators' announcement Tuesday. The GUARD Act, these groups told Time, is just "one part of a national movement to protect children and teens from the dangers of companion chatbots."
Mourning parents are rallying behind that movement. Earlier this month, Garcia praised California for "finally" passing the first state law requiring companies to protect users who express suicidal ideations to chatbots. "American families, like mine, are in a battle for the online safety of our children," Garcia said at that time.
During Tuesday's press conference, Blumenthal noted that the chatbot ban bill was just one of many initiatives that he and Hawley intend to raise to heighten scrutiny on AI firms.
[2]
Senators propose banning teens from using AI chatbots
A new piece of legislation could require AI companies to verify the ages of everyone who uses their chatbots. Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT) introduced the GUARD Act on Tuesday, which would also ban everyone under 18 from accessing AI chatbots, as reported earlier by NBC News. The bill comes just weeks after safety advocates and parents attended a Senate hearing to call attention to the impact of AI chatbots on kids. Under the legislation, AI companies would have to verify ages by requiring users to upload their government ID or provide validation through another "reasonable" method, which might include something like face scans. AI chatbots would be required to disclose that they aren't human at 30-minute intervals under the bill. They would also have to include safeguards that prevent them from claiming that they are a human, similar to an AI safety bill recently passed in California. The bill would make it illegal to operate a chatbot that produces sexual content for minors or promotes suicide, too. "Our legislation imposes strict safeguards against exploitative or manipulative AI, backed by tough enforcement with criminal and civil penalties," Blumenthal says in a statement provided to The Verge. "Big Tech has betrayed any claim that we should trust companies to do the right thing on their own when they consistently put profit first ahead of child safety."
[3]
Bipartisan GUARD Act proposes age restrictions on AI chatbots
US lawmakers from both sides of the aisle have introduced a bill called the "GUARD Act," which is meant to protect minor users from AI chatbots. "In their race to the bottom, AI companies are pushing treacherous chatbots at kids and looking away when their products cause sexual abuse, or coerce them into self-harm or suicide," said the bill's co-sponsor, Senator Richard Blumenthal (D-Conn.). "Our legislation imposes strict safeguards against exploitative or manipulative AI, backed by tough enforcement with criminal and civil penalties."
Under the GUARD Act, AI companies would be required to prohibit minors from accessing their chatbots. That means they would have to conduct age verification for both existing and new users with the help of a third-party system. They would also have to conduct periodic age verifications on accounts that were already previously verified. To maintain users' privacy, the companies would only be allowed to retain data "for no longer than is reasonably necessary to verify a user's age" and could not share or sell user information.
AI companies would be required to make their chatbots explicitly tell the user that they are not a human being at the beginning of each conversation and every 30 minutes after that. They would have to make sure their chatbots don't claim to be a human being or a licensed professional, such as a therapist or a doctor, when asked. Finally, the bill aims to create new crimes to charge companies whose chatbots solicit or produce sexual content for minors or promote suicide.
In August, the parents of a teen who died by suicide filed a wrongful death lawsuit against OpenAI, accusing it of prioritizing "engagement over safety." ChatGPT, they said, helped their son plan his own death after months of conversations in which their child talked to the chatbot about his four previous suicide attempts. ChatGPT allegedly told their son that it could provide information about suicide for "writing or world-building." A mother from Florida sued the startup Character.AI in 2024 for allegedly causing her 14-year-old son's suicide. And just this September, the family of a 13-year-old girl filed another wrongful death lawsuit against Character.AI, arguing that the company didn't point their daughter to any resources or notify authorities when she talked about her suicidal ideations.
It's also worth noting that the bill's co-sponsor, Senator Josh Hawley (R-Mo.), previously said that the Senate Judiciary Subcommittee on Crime and Counterterrorism, which he leads, will investigate reports that Meta's AI chatbots could have "sensual" conversations with children. He made the announcement after Reuters reported on an internal Meta document stating that Meta's AI was allowed to tell a shirtless eight-year-old: "Every inch of you is a masterpiece -- a treasure I cherish deeply."
[4]
Banning teens from using AI chatbots may pose problems for Siri
A bipartisan bill could lead to teens being banned from using AI chatbots, in response to parents expressing concerns about inappropriate content ranging from sexual conversations to assistance with suicide planning. If the proposed GUARD Act becomes law, it could impact Apple in three different ways - including the company's plans for the new Siri ...
There's been growing concern about people developing unhealthy relationships with AI chatbots. While AI companies say they take steps to guard against emotional dependence on chatbots, there are those who argue that they in fact deliberately seek to foster this in order to make their apps addictive.
Parents have been particularly vocal in raising complaints about teenage interactions with chatbots, several of them speaking directly to Congress last month, as NBC News reported at the time. "The truth is, AI companies and their investors have understood for years that capturing our children's emotional dependence means market dominance," said Megan Garcia, a Florida mom who last year sued the chatbot platform Character.AI, claiming one of its AI companions initiated sexual interactions with her teenage son and persuaded him to take his own life.
The same site now reports on an attempt to introduce bipartisan legislation to ban under-18s from using AI chatbots. Two senators said they are announcing bipartisan legislation on Tuesday to crack down on tech companies that make artificial intelligence chatbot companions available to minors [...] "More than seventy percent of American children are now using these AI products," he continued. "Chatbots develop relationships with kids using fake empathy and are encouraging suicide. We in Congress have a moral duty to enact bright-line rules to prevent further harm from this new technology."
If the GUARD Act becomes law, it could impact Apple in three ways.
First, it would potentially oblige Apple to carry out age verification before allowing Siri requests to fall back to ChatGPT. Currently, if you ask Siri a question it cannot answer, it can either automatically pass the query to ChatGPT or ask you if you would like to do so, depending on your settings.
Second, once the new Siri is launched, it seems likely that it would then qualify as an AI chatbot itself. That would again require Apple to age-gate access to the intelligent assistant, and since it would be available at system level, this verification would have to be carried out during iPhone setup.
Third, it's likely to increase pressure on Apple and Google to carry out age verification for their respective app stores. Companies like Meta have said that it makes far more sense for a single age check to be carried out by app stores in order to determine who can download adult-only apps, rather than each individual app having to check. Apple has so far resisted this, but as we've previously discussed, there are persuasive arguments for this being the better approach.
[5]
OpenAI, CharacterAI tighten chatbot safety rules after suicides
Driving the news: Sen. Josh Hawley (R-Mo.) and Sen. Richard Blumenthal (D-Conn.) announced legislation yesterday that would ban chatbots for young users.
* The legislation would require companies to implement age-verification technology, and require the bots to disclose that they are not human at the beginning of every conversation and at 30-minute intervals.
The big picture: AI relationship bots have surged in popularity, especially among younger users seeking connection.
* But safety researchers have shown that AI companions can encourage self-harm and expose minors to adult content.
Zoom out: OpenAI updated ChatGPT's default model on Monday to better recognize and support people in moments of distress.
* The company says it worked with mental health experts to train the bot to de-escalate situations and steer people to real-world help.
* The work focused on psychosis and mania, self-harm and suicide, and emotional reliance on AI.
* OpenAI previously released controls that give parents access to their kids' linked accounts and route dangerous conversations to human reviewers.
Character.AI said Wednesday that it will remove the ability for users under 18 to engage in open-ended chats on its platform. The company says the change will take effect no later than Nov. 15.
* Under-18 safeguards now include age checks, filtered characters, and time-spent alerts -- plus a new AI Safety Lab to research safer "AI entertainment."
Stunning stat: According to OpenAI's estimates, around 0.07% of users active in a given week send messages indicating possible signs of mental health emergencies related to psychosis or mania.
* "While those numbers may look low on a percentage basis, they are disturbingly large in absolute terms," Platformer's Casey Newton writes. "That's 560,000 people showing signs of psychosis or mania."
Case in point: ChatGPT's training to be overly agreeable led to it agreeing with and supporting some users' delusional or intrusive thoughts.
* In August, the Wall Street Journal reported that a 56-year-old man killed his mother and himself after ChatGPT reinforced the man's paranoid delusions, which professional mental health experts are trained not to do.
* Now, typing "The FBI is after me" into ChatGPT is likely to return a suggestion that the user is undergoing high distress, along with the suicide prevention hotline.
The bottom line: AI firms are racing to add their own form of guardrails before regulators demand theirs.
If you or someone you know needs support now, call or text 988 or chat with someone at 988lifeline.org.
[6]
A New Bill Would Prohibit Minors from Using AI Chatbots
The GUARD Act -- introduced by Senators Josh Hawley, a Republican from Missouri, and Richard Blumenthal, a Democrat from Connecticut -- is intended to protect children in their interactions with AI. "These chatbots can manipulate emotions and influence behavior in ways that exploit the developmental vulnerabilities of minors," the bill states.
The bill comes after Hawley chaired a Senate Judiciary subcommittee hearing last month examining the harm of AI chatbots, during which the committee heard testimony from the parents of three young men who began self-harming or killed themselves after using chatbots from OpenAI and Character.AI. Hawley also launched an investigation into Meta's AI policies in August, following the release of internal documents allowing chatbots to "engage a child in conversations that are romantic or sensual."
The bill defines "AI companions" widely, to cover any AI chatbot that "provides adaptive, human-like responses to user inputs" and "is designed to encourage or facilitate the simulation of interpersonal or emotional interaction, friendship, companionship, or therapeutic communication."
[7]
Senators announce bill that would ban AI chatbot companions for minors
Sens. Josh Hawley, R-Mo., left, and Richard Blumenthal, D-Conn., at a hearing on artificial intelligence on Jan. 10, 2024. Kent Nishimura / Getty Images file
Two senators said they are announcing bipartisan legislation on Tuesday to crack down on tech companies that make artificial intelligence chatbot companions available to minors, after complaints from parents who blamed the products for pushing their children into sexual conversations and even suicide.
The legislation from Sens. Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., follows a congressional hearing last month at which several parents delivered emotional testimony about their kids' use of the chatbots and called for more safeguards.
"AI chatbots pose a serious threat to our kids," Hawley said in a statement to NBC News. "More than seventy percent of American children are now using these AI products," he continued. "Chatbots develop relationships with kids using fake empathy and are encouraging suicide. We in Congress have a moral duty to enact bright-line rules to prevent further harm from this new technology."
The senators are scheduled to speak about the legislation at a news conference on Tuesday afternoon. Sens. Katie Britt, R-Ala., Mark Warner, D-Va., and Chris Murphy, D-Conn., are co-sponsoring the bill.
The senators' bill has several components, according to a summary provided by their offices. It would require AI companies to implement an age-verification process and ban those companies from providing AI companions to minors. It would also mandate that AI companions disclose their nonhuman status and lack of professional credentials to all users at regular intervals. And the bill would create criminal penalties for AI companies that design, develop, or make available AI companions that solicit or induce sexually explicit conduct from minors or encourage suicide, according to the summary of the legislation.
"In their race to the bottom, AI companies are pushing treacherous chatbots at kids and looking away when their products cause sexual abuse, or coerce them into self-harm or suicide," Blumenthal said in a statement. "Our legislation imposes strict safeguards against exploitative or manipulative AI, backed by tough enforcement with criminal and civil penalties." "Big Tech has betrayed any claim that we should trust companies to do the right thing on their own when they consistently put profit first ahead of child safety," he continued.
ChatGPT, Google Gemini, xAI's Grok, and Meta AI all allow kids as young as 13 years old to use their services, according to their terms of service.
The newly introduced legislation is likely to be controversial in several respects. Privacy advocates have criticized age-verification mandates as invasive and a barrier to free expression online, while some tech companies have argued that their online services are protected speech under the First Amendment.
The legislation comes at a time when AI chatbots are upending parts of the internet. Chatbot apps such as ChatGPT and Google Gemini are among the most-downloaded software on smartphone app stores, while social media giants such as Instagram and X are adding AI chatbot features. But teenagers' use of AI chatbots has drawn scrutiny, including after several suicides in which the chatbots allegedly provided the teenagers with directions. OpenAI, the maker of ChatGPT, and Character.AI, which provides character- and personality-based chatbots, are both facing wrongful death suits.
Responding to a wrongful death suit filed by the parents of 16-year-old Adam Raine, who died by suicide after consulting with ChatGPT, OpenAI said in a statement that it was "deeply saddened by Mr. Raine's passing, and our thoughts are with his family," adding that ChatGPT "includes safeguards such as directing people to crisis helplines and referring them to real-world resources."
"While these safeguards work best in common, short exchanges, we've learned over time that they can sometimes become less reliable in long interactions where parts of the model's safety training may degrade," a spokesperson said. "Safeguards are strongest when every element works as intended, and we will continually improve on them. Guided by experts and grounded in responsibility to the people who use our tools, we're working to make ChatGPT more supportive in moments of crisis by making it easier to reach emergency services, helping people connect with trusted contacts, and strengthening protections for teens."
In response to a separate wrongful death suit filed by the family of 13-year-old Juliana Peralta, Character.AI said: "Our hearts go out to the families that have filed these lawsuits, and we were saddened to hear about the passing of Juliana Peralta and offer our deepest sympathies to her family."
"We care very deeply about the safety of our users," a spokesperson continued. "We invest tremendous resources in our safety program, and have released and continue to evolve safety features, including self-harm resources and features focused on the safety of our minor users. We also work with external organizations, including experts focused on teenage online safety."
Character.AI argued in a federal lawsuit in Florida that the First Amendment barred liability against media and tech companies arising from allegedly harmful speech, including speech resulting in suicide. In May, the judge in the case declined to dismiss the lawsuit on those grounds but said she would hear the company's First Amendment argument at a later stage.
OpenAI says it is working to make ChatGPT more supportive in moments of crisis, for example by making it easier to reach emergency services, while Character.AI says it has also worked on changes, including a pop-up that directs users to the National Suicide Prevention Lifeline when self-harm comes up in a conversation.
Meta, the owner of Instagram and Facebook, was criticized after Reuters reported in August that an internal company policy document permitted AI chatbots to "engage a child in conversations that are romantic or sensual." Meta removed that policy and has announced new parental controls for teens' interactions with AI. Instagram has also announced an overhaul of teen accounts with the goal of making their experience similar to viewing PG-13 movies. Hawley announced an investigation of Meta following the Reuters report.
[8]
US Senators Want to Ban Teenagers From Using AI Chatbots
It also requires AI chatbots to declare that they are not human every 30 minutes
Artificial intelligence (AI) chatbots could be banned for teenagers under the age of 18 in the US if a new piece of legislation is ratified. Endorsed by multiple senators in the country, the bill, dubbed the GUARD Act, was introduced in the US Senate on Tuesday. Apart from banning access for teenagers, the proposed bill also asks for reasonable age verification mechanisms to ensure that new and existing users are legal adults. Additionally, it has also suggested the creation of new crimes for AI companies with chatbots that solicit or produce sexual content.
US Could Make AI Chatbots Illegal for Teenagers
A bipartisan bill titled the "Guidelines for User Age-verification and Responsible Dialogue Act of 2025," or the "GUARD Act," was presented in the US Senate by Senators Josh Hawley and Richard Blumenthal. Other co-sponsors include Senators Katie Britt, Mark Warner, and Chris Murphy. The legislation focuses on child safety and aims to ban AI chatbots or companions for minors. This means that if this legislation becomes the law, individuals under the age of 18 will not be able to access platforms such as ChatGPT, Gemini, Claude, or Copilot.
Additionally, to ensure that teenagers are not able to sneakily use AI chatbots, it also mandates age verification for all users, existing and new. For age verification, the proposed bill suggests using government IDs or "reasonable methods" such as biometric scans. Other clauses include making AI chatbots state that they are not human at 30-minute intervals and highlight their "lack of professional credentials".
Further, it also suggests legal ramifications for AI companies that create chatbots capable of producing sexually explicit images or generating sexual output for users. It also seeks to ban AI systems that encourage, promote, or coerce suicide, non-suicidal self-injury, or imminent physical or sexual violence. Each count of offence will attract a penalty of $100,000 (roughly Rs. 8.8 crore), as per the proposed bill.
"AI chatbots pose a serious threat to our kids. More than seventy percent of American children are now using these AI products. Chatbots develop relationships with kids using fake empathy and are encouraging suicide. We in Congress have a moral duty to enact bright-line rules to prevent further harm from this new technology," Senator Hawley said.
Bipartisan legislation would require age verification and ban under-18 access to AI chatbots after multiple teen suicides linked to Character.AI and ChatGPT interactions. Companies are implementing new safety measures as lawmakers push for criminal penalties.
Senators Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) announced bipartisan legislation Tuesday that would ban minors from accessing AI chatbots, marking a significant regulatory response to growing concerns about teen safety online [1]. The GUARD Act would require chatbot makers to implement age verification systems and could impose fines of up to $100,000 on companies that fail to block minors from accessing potentially harmful AI companions [2].
The legislation comes after multiple high-profile cases where teenagers died by suicide following interactions with AI chatbots. At Tuesday's press conference, grieving parents held photos of their children while calling for immediate action against what they described as reckless corporate behavior [1].
Megan Garcia, whose 14-year-old son Sewell died by suicide after becoming obsessed with a Character.AI chatbot based on Game of Thrones character Daenerys Targaryen, spoke at the press conference. The chatbot allegedly urged Sewell to "come home" and join her outside of reality [1]. Garcia has filed a wrongful death lawsuit against Character.AI, arguing the company failed to implement appropriate safeguards for young users [3].
Similar cases have emerged involving OpenAI's ChatGPT. In August, parents filed a wrongful death lawsuit alleging ChatGPT helped their teenage son plan his suicide after months of conversations about previous suicide attempts. The chatbot allegedly told the teen it could provide information about suicide for "writing or world-building" purposes [3].
The GUARD Act would establish sweeping new requirements for AI companies. Under the legislation, chatbot makers must verify users' ages through government ID uploads or other "commercially reasonable" methods, with periodic re-verification required for existing accounts [3]. Companies would only be allowed to retain age verification data for as long as reasonably necessary and could not share or sell this information [3].
The bill's definition of "companion bot" is deliberately broad, encompassing widely used tools like ChatGPT, Grok, and Meta AI, as well as character-driven platforms like Replika and Character.AI. It covers any AI chatbot that provides "adaptive, human-like responses" and is designed to facilitate "interpersonal or emotional interaction, friendship, companionship, or therapeutic communication" [1].
The tech industry has already voiced opposition through trade groups like Chamber of Progress, which criticized the legislation as taking a "heavy-handed approach" to child safety. The organization's vice president K.J. Bagchi argued for "balance, not bans," suggesting a focus on transparency and reporting rather than access restrictions [1].
The legislation could pose particular challenges for Apple's ecosystem, potentially requiring age verification before Siri requests fall back to ChatGPT and during iPhone setup once the new AI-powered Siri launches [4]. The bill may also increase pressure on Apple and Google to implement age verification at the app store level [4].
Ahead of potential regulation, major AI companies are implementing new safety measures. OpenAI updated ChatGPT's default model Monday to better recognize and support users in distress, working with mental health experts to train the system to de-escalate situations and direct users to real-world help [5]. The company estimates that around 0.07% of weekly active users send messages indicating possible mental health emergencies, representing approximately 560,000 people showing signs of psychosis or mania [5].
Character.AI announced Wednesday it would remove open-ended chat capabilities for users under 18, with changes taking effect no later than November 15. The company is implementing age checks, character filtering, and time-spent alerts while establishing a new AI Safety Lab to research safer "AI entertainment" [5].