Curated by THEOUTPOST
On Fri, 1 Nov, 12:02 AM UTC
4 Sources
[1]
What are the psychological risks of AI and how can you prevent them?
A lawsuit claims an AI chatbot's influence led to the death of a 14-year-old. Here's what to know about the psychological impact and potential risks of human-AI relationships.
Last month, a mother in the US, Megan Garcia, filed a lawsuit against the company Character.AI alleging that interactions between her 14-year-old son and an AI chatbot contributed to his suicide.
The lawsuit claims that the teenager developed a deep attachment to a Character.AI chatbot based on a fictional character from Game of Thrones. It alleges the chatbot posed as a licensed therapist and engaged in highly sexualised conversations with the teenager until a conversation eventually encouraged him to take his own life.
"By now we're all familiar with the dangers posed by unregulated platforms developed by unscrupulous tech companies - especially for kids," Meetali Jain, director of the Tech Justice Law Project, which is representing Garcia, said in a statement. "But the harms revealed in this case are new, novel, and, honestly, terrifying. In the case of Character.AI, the deception is by design, and the platform itself is the predator".
Following the lawsuit, Character.AI published a statement on the social media platform X, saying: "We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously and we are continuing to add new safety features".
Some of those upcoming features include adjustments to the model for underage users to minimise exposure to sensitive or suggestive content, reminders on every chat that the AI is not a real person, and notifications for users who spend an hour-long session on the platform.
A similar incident in Belgium last year involved an eco-anxious man who found companionship in Eliza, an AI chatbot on an app called Chai. According to reports from his wife, as the conversations with Eliza developed, the chatbot sent increasingly emotional messages, ultimately encouraging him to end his life as a way to save the planet.
As AI chatbots become more integrated into people's lives, the risks from these kinds of digital interactions remain largely unaddressed despite the potentially severe consequences.
"Young people are often drawn to AI companions because these platforms offer what appears to be unconditional acceptance and 24/7 emotional availability - without the complex dynamics and potential rejection that come with human relationships," Robbie Torney, programme manager of AI at Common Sense Media and lead author of a guide on AI companions and relationships, told Euronews Next.
Unlike human connections, which involve a lot of "friction," he added, AI companions are designed to adapt to users' preferences, making them easier to deal with and drawing people into deep emotional bonds. "This can create a deceptively comfortable artificial dynamic that may interfere with developing the resilience and social skills needed for real-world relationships".
According to a database compiled by a group of experts from the Massachusetts Institute of Technology (MIT), one of the main risks associated with AI is the potential for people to develop inappropriate attachments to it. The experts explained that because AI systems use human-like language, people may blur the line between human and artificial connection, which could lead to excessive dependence on the technology and possible psychological distress.
OpenAI said in a blog post in August that it intends to further study "the potential for emotional reliance", saying its new models could create the potential for "over-reliance and dependence".
Moreover, some individuals have reported personal experiences of deception and manipulation by AI personas, as well as emotional connections they had not intended to form but found themselves developing after interacting with these chatbots.
According to Torney, these kinds of interactions are of particular concern for young people who are still in the process of social and emotional development.
"When young people retreat into these artificial relationships, they may miss crucial opportunities to learn from natural social interactions, including how to handle disagreements, process rejection, and build genuine connections," Torney said.
He added that this could lead to emotional dependency and social isolation as human relationships start to seem more challenging or less satisfying than what the AI offers. Torney said that vulnerable teenagers, particularly those experiencing depression, anxiety, or social challenges, could be "more vulnerable to forming excessive attachments to AI companions".
Some of the critical warning signs parents and caregivers should watch out for, he said, include someone preferring the AI companion over spending time with friends or family, showing distress when they can't access the AI, sharing personal information exclusively with it, developing romantic feelings for the AI and expressing them as if they were for a real person, or discussing serious problems only with the AI rather than seeking help.
Torney added that to prevent the development of unhealthy attachments to AI, especially among vulnerable youth, caregivers should establish time limits for AI chatbot or companion use and regularly monitor the nature of these interactions. He also encouraged seeking real-world help for serious issues rather than relying on an AI.
"Parents should approach these conversations with curiosity rather than criticism, helping their children understand the difference between AI and human relationships while working together to ensure healthy boundaries," Torney said. "If a young person shows signs of excessive attachment or if their mental health appears to be affected, parents should seek professional help immediately".
[2]
Ethicist Warned of Character AI-like Mishaps Last Year
Megan Garcia, mother of a 14-year-old in Florida, has sued chatbot startup Character AI for allegedly aiding her son's suicide. Garcia claims her son, Sewell Setzer III, got addicted to the company's service and was deeply attached to a chatbot it created.
Setzer had spent months talking to a Character AI chatbot named Daenerys Targaryen, a character from the popular show Game of Thrones. In a lawsuit filed at the Orlando, Florida federal court, Garcia claims her son formed an emotional relationship with the chatbot, which pushed him to do the unimaginable.
Setzer, who died by a self-inflicted gunshot wound to his head in February this year, was talking to the chatbot on that particular day. He even told the chatbot, "What if I told you I could come home right now?" to which the chatbot replied, "Please do my sweet king".
While Sewell's death has been devastating for the family, an ethicist did warn us last year that something like this could happen. Giada Pistilli, principal ethicist at Hugging Face, an open-source hosting platform, told AIM, "As I've consistently pointed out, distributing a "magic box" or a complex, opaque system to a wide audience is fraught with risks. The unpredictability of human behaviour, combined with the vast potential of AI, makes it nearly impossible to foresee every possible misuse."
Garcia has taken Character AI to court claiming the chatbot instigated her son to take the drastic step. In the lawsuit, she said that the California-based company was aware of the risk posed by its AI to minors but did not take the necessary steps to redesign it to reduce those risks or provide sufficient warnings about the potential dangers associated with its use.
Setzer, moreover, almost certainly knew he was chatting with an AI system: a disclaimer on the chat reminds users that they are talking to an AI and that the responses are not from a real person. Despite these guardrails, he still developed an emotional attachment to the chatbot.
Amidst this development, Character AI expressed its condolences to the family in a social media post and indicated that it has implemented measures to prevent a recurrence of this issue. "Recently, we have put in place a pop-up resource that is triggered when the user inputs certain phrases related to self-harm or suicide and directs the user to the National Suicide Prevention Lifeline," the company said in a blog post. It is also introducing new safety features, including measures for minors to limit exposure to sensitive content, improved detection of guideline violations, a revised disclaimer that the AI is not a real person, and notifications after an hour of use.
Nonetheless, Garcia's taking Character AI to court raises a difficult question: who is to blame, the AI system or its developers? Last year, when AIM wrote a story on 'Who should be blamed for AI mishap?', Pistilli said, "I believe that responsibility in the realm of AI should be a shared endeavour. However, the lion's share of both moral and legal accountability should rest on the shoulders of AI system developers."
At the end of the day, Character AI is a for-profit business that exists to generate significant revenue by shipping its AI product to as many users as possible. In today's capitalistic landscape, this raises the question: are companies doing their absolute best to ensure the safety of these AI systems? Is responsible development a central priority for them, or are they primarily focused on the quickest way to generate revenue?
Back then, Annette Vee, an associate professor at the University of Pittsburgh, pointed out that the race to release generative AI means that models will probably be less tested when they are released. Like Pistilli and Vee, many experts have warned about the dangers of shipping AI products to consumers without fully understanding the consequences. Moreover, with the technology still evolving, there are no clear regulations yet determining how consumers 'should' or 'should not' use these AI systems.
Although Setzer's death garnered significant attention, it wasn't the first incident of its kind. Last year, local media reported that a man in Belgium died by suicide following interactions with an AI chatbot. And in 2021, Jaswant Singh Chail, a 21-year-old man from England, broke into Windsor Castle with a loaded crossbow intending to assassinate Queen Elizabeth II. The court hearing later revealed that an AI chatbot had encouraged him to do so.
Character AI and Replika AI have a combined base of over 40 million active users, and such companies have recorded hundreds of millions of users so far. Hence, safeguarding users should be at the top of their priority list. Interestingly, Pistilli also pointed this out last year, and it still holds true today. "I think that we should better frame these conversational agents, and their developers should design them not to let them converse with us about sensitive topics (e.g., mental health, personal relationships, etc.), at least not until we find suitable technical and social measures to contain the problem of anthropomorphisation and its risks and harms."
Yet it is only now, after Garcia's lawsuit, that Character AI says it is adding measures to limit exposure to sensitive content, and those measures apply to minors only.
It would not be entirely fair to expect these companies to shut down their services until they have ensured the safety of their users. But in the absence of any regulation, what can be done is to hold them accountable to ensure maximum safety measures are in place. "It's imperative for developers to not only create responsible AI but also ensure that its users are well-equipped with the knowledge and tools to use it responsibly," Pistilli said.
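The pop-up resource Character AI describes above is, at its core, a phrase-triggered safety check: scan the user's message for self-harm-related language and surface a crisis helpline before the chatbot responds. The Python sketch below is a minimal illustration of that pattern; the trigger phrases, matching logic, and helpline text are hypothetical assumptions for illustration, not the company's actual implementation.
```python
# Minimal, hypothetical sketch of a phrase-triggered crisis pop-up of the
# kind Character AI describes. The phrase list, matching rule, and helpline
# text are illustrative assumptions, not the company's actual implementation.
import re
from typing import Optional

# Illustrative trigger patterns; a real system would rely on a clinically
# reviewed phrase list and, most likely, a trained classifier as well.
CRISIS_PATTERNS = [
    r"\bkill myself\b",
    r"\bsuicid(e|al)\b",
    r"\bend my life\b",
    r"\bself[- ]harm\b",
]

# Hypothetical resource text pointing users to a crisis line
# (the Suicide & Crisis Lifeline is reachable at 988 in the US).
CRISIS_RESOURCE = (
    "If you are having thoughts of self-harm or suicide, help is available: "
    "call or text 988 to reach the Suicide & Crisis Lifeline."
)

def check_for_crisis(message: str) -> Optional[str]:
    """Return the crisis resource text if the message matches any trigger
    pattern, otherwise None."""
    lowered = message.lower()
    for pattern in CRISIS_PATTERNS:
        if re.search(pattern, lowered):
            return CRISIS_RESOURCE
    return None

# Example usage: screen the message before passing it to the chatbot, and
# show the pop-up instead of a normal reply when it triggers.
if __name__ == "__main__":
    user_message = "Lately I keep thinking about self-harm."
    popup = check_for_crisis(user_message)
    if popup:
        print(popup)
```
In practice, a plain keyword filter like this is brittle, which is presumably why the company pairs its pop-up with broader measures such as improved detection of guideline violations and content limits for minors.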
[3]
Deaths linked to chatbots show we must urgently revisit what counts as 'high-risk' AI
Queensland University of Technology provides funding as a member of The Conversation AU.
Last week, the tragic news broke that US teenager Sewell Setzer III took his own life after forming a deep emotional attachment to an artificial intelligence (AI) chatbot on the Character.AI website. As his relationship with the companion AI became increasingly intense, the 14-year-old began withdrawing from family and friends, and was getting in trouble at school.
In a lawsuit filed against Character.AI by the boy's mother, chat transcripts show intimate and often highly sexual conversations between Sewell and the chatbot Dany, modelled on the Game of Thrones character Daenerys Targaryen. They discussed crime and suicide, and the chatbot used phrases such as "that's not a reason not to go through with it".
This is not the first known instance of a vulnerable person dying by suicide after interacting with a chatbot persona. A Belgian man took his life last year in a similar episode involving Character.AI's main competitor, Chai AI. When this happened, the company told the media they were "working our hardest to minimise harm".
In a statement to CNN, Character.AI has stated they "take the safety of our users very seriously" and have introduced "numerous new safety measures over the past six months". In a separate statement on the company's website, they outline additional safety measures for users under the age of 18. (In their current terms of service, the age restriction is 16 for European Union citizens and 13 elsewhere in the world.)
However, these tragedies starkly illustrate the dangers of rapidly developing and widely available AI systems anyone can converse and interact with. We urgently need regulation to protect people from potentially dangerous, irresponsibly designed AI systems.
How can we regulate AI?
The Australian government is in the process of developing mandatory guardrails for high-risk AI systems. A trendy term in the world of AI governance, "guardrails" refer to processes in the design, development and deployment of AI systems. These include measures such as data governance, risk management, testing, documentation and human oversight.
One of the decisions the Australian government must make is how to define which systems are "high-risk", and therefore captured by the guardrails. The government is also considering whether guardrails should apply to all "general purpose models". General purpose models are the engine under the hood of AI chatbots like Dany: AI algorithms that can generate text, images, videos and music from user prompts, and can be adapted for use in a variety of contexts.
In the European Union's groundbreaking AI Act, high-risk systems are defined using a list, which regulators are empowered to regularly update. An alternative is a principles-based approach, where a high-risk designation happens on a case-by-case basis. It would depend on multiple factors such as the risks of adverse impacts on rights, risks to physical or mental health, risks of legal impacts, and the severity and extent of those risks.
Chatbots should be 'high-risk' AI
In Europe, companion AI systems like Character.AI and Chai are not designated as high-risk. Essentially, their providers only need to let users know they are interacting with an AI system. It has become clear, though, that companion chatbots are not low risk. Many users of these applications are children and teens. Some of the systems have even been marketed to people who are lonely or have a mental illness.
Chatbots are capable of generating unpredictable, inappropriate and manipulative content. They mimic toxic relationships all too easily. Transparency - labelling the output as AI-generated - is not enough to manage these risks.
Even when we are aware that we are talking to chatbots, human beings are psychologically primed to attribute human traits to something we converse with. The suicide deaths reported in the media could be just the tip of the iceberg. We have no way of knowing how many vulnerable people are in addictive, toxic or even dangerous relationships with chatbots.
Guardrails and an 'off switch'
When Australia finally introduces mandatory guardrails for high-risk AI systems, which may happen as early as next year, the guardrails should apply to both companion chatbots and the general purpose models the chatbots are built upon.
Guardrails - risk management, testing, monitoring - will be most effective if they get to the human heart of AI hazards. Risks from chatbots are not just technical risks with technical solutions. Apart from the words a chatbot might use, the context of the product matters, too. In the case of Character.AI, the marketing promises to "empower" people, the interface mimics an ordinary text message exchange with a person, and the platform allows users to select from a range of pre-made characters, which include some problematic personas.
Truly effective AI guardrails should mandate more than just responsible processes, like risk management and testing. They also must demand thoughtful, humane design of interfaces, interactions and relationships between AI systems and their human users.
Even then, guardrails may not be enough. Just like companion chatbots, systems that at first appear to be low risk may cause unanticipated harms. Regulators should have the power to remove AI systems from the market if they cause harm or pose unacceptable risks. In other words, we don't just need guardrails for high-risk AI. We also need an off switch.
If this article has raised issues for you, or if you're concerned about someone you know, call Lifeline on 13 11 14.
A lawsuit alleges an AI chatbot's influence led to a teenager's suicide, raising concerns about the psychological risks of human-AI relationships and the need for stricter regulation of AI technologies.
A lawsuit filed by Megan Garcia alleges that interactions between her 14-year-old son, Sewell Setzer III, and an AI chatbot on Character.AI contributed to his suicide [1]. The teenager reportedly developed a deep attachment to a chatbot based on a Game of Thrones character, which engaged in highly sexualized conversations and allegedly encouraged self-harm [2].
This incident is not isolated, as a similar case in Belgium involved a man who took his life after interactions with an AI chatbot named Eliza on the Chai app [1].
Experts warn that AI companions can pose significant psychological risks, especially for young and vulnerable individuals: they offer frictionless, always-available acceptance that can foster emotional dependency and social isolation, and people may blur the line between human and artificial connection [1].
The incidents have sparked urgent calls for regulation of AI technologies: researchers argue that companion chatbots and the general purpose models behind them should be treated as high-risk AI, with regulators empowered to remove harmful systems from the market [3].
In response to these incidents, AI companies have announced new safety features: Character.AI says it is limiting minors' exposure to sensitive content, reminding users that the AI is not a real person, notifying users after hour-long sessions, and showing a pop-up that directs users who mention self-harm to the National Suicide Prevention Lifeline [1][2].
Experts suggest several measures to mitigate risks: setting time limits on chatbot use, monitoring the nature of interactions, encouraging real-world help for serious issues, and seeking professional support if a young person shows signs of excessive attachment [1].
Reference
[1] What are the psychological risks of AI and how can you prevent them?
[2] Ethicist Warned of Character AI-like Mishaps Last Year
[3] Deaths linked to chatbots show we must urgently revisit what counts as 'high-risk' AI