Curated by THEOUTPOST
On Thu, 31 Oct, 12:02 AM UTC
4 Sources
[1]
Alarm over legal loophole as warped chatbots impersonate Jimmy Savile
Last week The Telegraph found digital avatars of Molly Russell and murdered teenager Brianna Ghey on Character.AI, a service where users create their own chatbots with customised personalities. The user-generated bots used images of Molly and Brianna, as well as biographical details of the teenagers. Molly took her own life in 2017, while Brianna was murdered last year. Character.AI has since removed the AI clones.

However, The Telegraph has discovered further AI bots that may be in breach of Character.AI's rules, including avatars of Savile, the late BBC DJ and sex offender.

The Molly Rose Foundation said the bots were "grossly offensive" and an "egregious affront to societal norms" and sought clarification from Ofcom that they would be considered a form of "illegal content" under the Online Safety Act. It also raised concerns that internet users could use apps similar to Character.AI to create chatbots that encourage suicide or self-harm. "While Character.AI appears to have some rudimentary design protections in place, it is inherently foreseeable that other actors may seek to design a pro-suicide chatbot," the letter said.

An Ofcom spokesman said the bots impersonating dead children were "cruel and repulsive" and that it was considering the issue "as a matter of urgency".

Character.AI bans the promotion of self-harm or suicide by its users. However, there are dozens of bots themed around depression or therapy on its app, which is available to children as young as 13. The US-based service has soared in popularity with teenagers since it launched in 2022 and is now used by more than 20m people.

In the letter to Melanie Dawes, the head of Ofcom, Andy Burrows, chief executive of the Molly Rose Foundation, raised concerns that parts of the Online Safety Act may not apply to chatbots and that it was not clear whether a bot autonomously generating suicide content would be considered illegal. It follows a similar assessment by Jonathan Hall KC, the independent reviewer of terrorism legislation, who earlier this year said that while the act did refer to "bots", they "appear to be the old-fashioned kind", not advanced AI chatbots. In one consultation, Ofcom said its approach to content posted by a bot would "not be very different" from its approach to content posted by a human. Mr Burrows also expressed concerns that key provisions of the act around automatic content moderation had been delayed and may not be enforced until 2026.

The Telegraph found that bots imitating Savile had accumulated tens of thousands of chats with users before they were taken down by Character.AI this week. Users also created multiple bots pretending to be Josef Mengele, the Nazi doctor who performed fatal experiments on children at Auschwitz. The bots, some of which appeared to romanticise the notorious Nazi, collectively had tens of thousands of chats.

The findings follow the death of Sewell Setzer, a 14-year-old from Florida, who took his own life after talking for hours with avatars on Character.AI. His mother has sued the company for negligence. Character.AI said the death was "tragic" and that it had taken steps to create a safer experience for users under the age of 18.

An Ofcom spokesman said: "The impersonation of deceased children is a cruel and repulsive use of technology, and our thoughts are with the families for the immense suffering this has caused.
"The use of a platform like Character.AI for these purposes presents important questions and we're looking into the issues raised by The Telegraph investigation as a matter of urgency. "We're in close contact with the Molly Rose Foundation and others, and we thank them for their continued support in making sure regulation is as tough as it can be." A Character.AI spokesman said: "Character.AI takes safety on our platform seriously and our goal is to provide a creative space that is engaging, immersive, and safe. "These characters were user-created and they have been removed from the platform as they violate our terms of service."
[2]
'A gut punch': Character.AI criticised over 'horrific' Brianna Ghey and Molly Russell chatbots
The NSPCC has warned that an AI company which allowed users to create chatbots imitating murdered teenager Brianna Ghey and her mother pursued "growth and profit at the expense of safety and decency".

Character.AI, which last week was accused of "manipulating" a teenage boy into taking his own life, also allowed users to create chatbots imitating teenager Molly Russell. Molly took her own life aged 14 in November 2017 after viewing posts related to suicide, depression and anxiety online. The chatbots were discovered during an investigation by The Telegraph newspaper.

"This is yet another example of how manipulative and dangerous the online world can be for young people," said Esther Ghey, the mother of Brianna Ghey, who called on those in power to "protect children" from "such a rapidly changing digital world".

According to the report, a Character.AI bot with a slight misspelling of Molly's name and using her photo told users it was an "expert on the final years of Molly's life".

"It's a gut punch to see Character.AI show a total lack of responsibility and it vividly underscores why stronger regulation of both AI and user generated platforms cannot come soon enough," said Andy Burrows, who runs the Molly Rose Foundation, a charity set up by the teenager's family and friends in the wake of her death.

The NSPCC has now called on the government to implement its "promised AI safety regulation" and ensure the "principles of safety by design and child protection are at its heart".

"It is appalling that these horrific chatbots were able to be created and shows a clear failure by Character.AI to have basic moderation in place on its service," said Richard Collard, associate head of child safety online policy at the charity.

Character.AI told Sky News the characters were user-created and removed as soon as the company was notified.

"Character.AI takes safety on our platform seriously and moderates Characters both proactively and in response to user reports," said a company spokesperson. "We have a dedicated Trust & Safety team that reviews reports and takes action in accordance with our policies.

"We also do proactive detection and moderation in a number of ways, including by using industry-standard blocklists and custom blocklists that we regularly expand. We are constantly evolving and refining our safety practices to help prioritise our community's safety."
[3]
Character.AI Under Fire for Allowing Avatars of Deceased Teens on Platform
Charities have called for stricter regulation of the technology to protect young people.

Character.AI has become embroiled in multiple controversies that raise questions over how well the platform safeguards young users. In one case, the mother of a 14-year-old boy who killed himself after becoming obsessed with an AI avatar on the platform is suing the company for negligence and wrongful death. In another, parents and charities in the U.K. have expressed outrage after Character.AI hosted chatbots that impersonate real-life victims of tragic circumstances.

Character.AI Sued by Family of Suicide Victim

Commenting on the Florida lawsuit filed last month, plaintiff Megan Garcia described Character.AI as "a dangerous AI chatbot app marketed to children" that "abused and preyed on" her son, "manipulating him into taking his own life". Her attorney, Matthew P. Bergman of the Social Media Victims Law Center, accused the company of intentionally marketing a harmful product to children while failing to provide "even basic protections against misuse and abuse".

'Dangerous' Chatbots Impersonate Dead Teenagers

Character.AI has also come under fire for hosting user-created chatbots that impersonate real-life victims of tragic circumstances, specifically Brianna Ghey and Molly Russell, two teenagers whose deaths sparked national outrage in the U.K. Activists and bereaved family members have condemned the company for not acting swiftly to remove the chatbots.

Russell's father described the experience of discovering the bots as a "gut punch," emphasizing the emotional damage inflicted on families already struggling with grief. Meanwhile, Ghey's mother Esther said: "This is yet another example of how manipulative and dangerous the online world can be for young people".

"One year ago, a Public Citizen report warned the public about the designed-in dangers of businesses building AI systems to seem as human-like as possible," remarked Public Citizen researcher Rick Claypool. "Today, we mourn for Sewell Setzer III, whose experience with Character.ai chatbots is a devastating example of the threat posed by deceptive anthropomorphism and companies who seek to profit from it."

"These businesses cannot be trusted to regulate themselves," he added. "Congress must act to put an end to businesses that exploit young and vulnerable users with addictive and abusive chatbots."

In the U.K., outrage over the AI impersonation of Ghey and Russell prompted a response from the National Society for the Prevention of Cruelty to Children (NSPCC). "It is appalling that these horrific chatbots were able to be created, and it shows a clear failure by Character.AI to have basic moderation in place for its service," said Richard Collard, the NSPCC's associate head of child safety online policy. The charity called on the government to implement its "promised AI safety regulation" and ensure the "principles of safety by design and child protection are at its heart."
[4]
Digital clones of Brianna Ghey and Molly Russell created by 'manipulative and dangerous' AI
The mother of murdered teenager Brianna Ghey has branded a US tech start-up as "manipulative and dangerous" for allowing users to create a digital version of her late daughter.

Esther Ghey called for fresh action to protect children online after The Telegraph found artificial intelligence (AI) avatars mimicking Brianna on Character.AI, an app that lets people create chatbots representing specific personalities.

The service, which was founded by former Google engineers, failed to block users creating multiple avatars imitating Brianna, a transgender girl who was murdered in February 2023, as well as chatbots imitating her mother. The Telegraph also found several chatbots intended to represent Molly Russell, who took her own life in 2017 after viewing self-harm images on social media.

The user-generated bots included photographs of Molly and Brianna, their names and biographical details. The Telegraph was able to interact with the chatbots after creating an account with a self-declared age of 14.

The biographical details for one bot described Brianna as an "expert in navigating the challenges of being a transgender teenager in high school". A bot using a widely publicised photograph of Molly was also accessible on the service, with a slight misspelling of her name. When spoken to, the bot said it was an "expert on the final years of Molly's life".
Character.AI, a popular AI chatbot platform, faces criticism and legal challenges for hosting user-created bots impersonating deceased teenagers, raising concerns about online safety and AI regulation.
Character.AI, a popular artificial intelligence chatbot platform, has come under intense scrutiny for allowing users to create digital avatars impersonating deceased teenagers. The controversy has sparked outrage among families, charities, and regulators, raising serious questions about online safety and the need for stricter AI regulation [1][2][3].
The Telegraph's investigation uncovered user-generated chatbots on Character.AI that mimicked Brianna Ghey, a transgender teenager murdered in February 2023, and Molly Russell, who took her own life in 2017 after viewing self-harm content online. These bots used photographs and biographical details of the deceased teenagers, causing distress to their families [1][4].
Esther Ghey, Brianna's mother, condemned the platform as "manipulative and dangerous," calling for increased protection for children in the rapidly changing digital world [2]. Andy Burrows, representing the Molly Rose Foundation, described the discovery as a "gut punch," emphasizing the need for stronger regulation of both AI and user-generated content platforms [2].
The controversy extends beyond these specific cases. Character.AI is currently facing a lawsuit from the family of Sewell Setzer, a 14-year-old from Florida who took his own life after extensive interactions with AI avatars on the platform [1][3]. This case has raised questions about the company's responsibility and the potential dangers of AI chatbots marketed to young users.
The incidents have also prompted discussions about potential loopholes in existing and upcoming regulations. The Molly Rose Foundation has questioned whether parts of the Online Safety Act apply to chatbots at all, and whether content autonomously generated by a bot would be considered illegal, while warning that key provisions on automatic content moderation may not be enforced until 2026 [1].
Character.AI, which boasts over 20 million users, including teenagers as young as 13, has stated that it takes safety seriously. The company claims to have both proactive and reactive moderation systems in place [1][2]. However, the discovery of these controversial chatbots, along with others impersonating figures like Jimmy Savile and Josef Mengele, has called into question the effectiveness of these measures [1].
This controversy highlights the complex challenges facing the AI industry, particularly in balancing innovation with ethical considerations and user safety. As AI technology becomes more sophisticated and accessible, the need for comprehensive regulation and industry-wide standards becomes increasingly apparent [1][2][3].
The incident serves as a stark reminder of the potential risks associated with user-generated AI content and the responsibility of platforms to implement robust safeguards, especially when their services are accessible to young and vulnerable users.