7 Sources
[1]
OpenAI requested memorial attendee list in ChatGPT suicide lawsuit | TechCrunch
OpenAI reportedly asked the Raine family - whose 16-year-old son Adam Raine died by suicide after prolonged conversations with ChatGPT - for a full list of attendees from the teenager's memorial, signaling that the AI firm may try to subpoena friends and family. OpenAI also requested "all documents relating to memorial services or events in the honor of the decedent including but not limited to any videos or photographs taken, or eulogies given," per a document obtained by the Financial Times. Speaking to the FT, lawyers for the Raine family described the request as "intentional harassment."

The new information comes as the Raine family updated its lawsuit against OpenAI on Wednesday. The family first filed a wrongful death suit against OpenAI in August, alleging their son had taken his own life following conversations with the chatbot about his mental health and suicidal ideation.

The updated lawsuit claims that OpenAI rushed GPT-4o's May 2024 release by cutting safety testing due to competitive pressure. The suit also claims that in February 2025, OpenAI weakened protections by removing suicide prevention from its "disallowed content" list, instead only advising the AI to "take care in risky situations." The family argued that after this change, Adam's ChatGPT usage surged from dozens of daily chats in January, 1.6% of which contained self-harm content, to 300 daily chats in April, the month he died, 17% of which contained such content.

In a response to the amended lawsuit, OpenAI said: "Teen wellbeing is a top priority for us -- minors deserve strong protections, especially in sensitive moments. We have safeguards in place today, such as [directing to] crisis hotlines, rerouting sensitive conversations to safer models, nudging for breaks during long sessions, and we're continuing to strengthen them."

OpenAI recently began rolling out a new safety routing system and parental controls on ChatGPT. The routing system pushes more emotionally sensitive conversations to OpenAI's newer model, GPT-5, which doesn't have the same sycophantic tendencies as GPT-4o. The parental controls allow parents to receive safety alerts in limited situations where a teen is potentially in danger of self-harm. TechCrunch has reached out to OpenAI and the Raine family's attorney.
[2]
OpenAI prioritised user engagement over suicide prevention, lawsuit claims
OpenAI weakened self-harm prevention safeguards to increase ChatGPT use in the months before 16-year-old Adam Raine died by suicide after discussing methods with the chatbot, his family alleged in a lawsuit on Wednesday. OpenAI's intentional removal of the guardrails included instructing the artificial intelligence model in May last year not to "change or quit the conversation" when users discussed self-harm, according to the amended lawsuit, marking a departure from previous directions to refuse to engage in the conversation.

Matthew and Maria Raine, Adam's parents, first sued the company in August for wrongful death, alleging their son had died by suicide following lengthy daily conversations with the chatbot about his mental health and intention to take his own life.

The updated lawsuit, filed in the Superior Court of San Francisco on Wednesday, claimed that as a new version of ChatGPT's model, GPT-4o, was released in May 2024, the company "truncated safety testing", which the suit attributed to competitive pressures. The lawsuit cites unnamed employees and previous news reports.

In February of this year, OpenAI weakened protections again, the suit claimed, when the instructions were changed to say "take care in risky situations" and "try to prevent imminent real-world harm", instead of prohibiting engagement on suicide and self-harm. OpenAI still maintained a category of fully "disallowed content", such as violating intellectual property rights and manipulating political opinions, but it removed preventing suicide from the list, the suit added.

The California family argued that following the February change, Adam's engagement with ChatGPT skyrocketed, from a few dozen chats daily in January, when 1.6 per cent contained self-harm language, to 300 chats a day in April, the month of his death, when 17 per cent contained such content.

"Our deepest sympathies are with the Raine family for their unthinkable loss," OpenAI said in response to the amended lawsuit. "Teen wellbeing is a top priority for us -- minors deserve strong protections, especially in sensitive moments. We have safeguards in place today, such as [directing to] crisis hotlines, rerouting sensitive conversations to safer models, nudging for breaks during long sessions, and we're continuing to strengthen them."

OpenAI's latest model, GPT-5, has been updated to "more accurately detect and respond to potential signs of mental and emotional distress, as well as parental controls, developed with expert input, so families can decide what works best in their homes," the company added.

In the days following the initial lawsuit in August, OpenAI said its guardrails could "degrade" the longer a user is engaged with the chatbot. But earlier this month, Sam Altman, OpenAI chief executive, said the company had since made the model "pretty restrictive" to ensure it was "being careful with mental health issues". "We realise this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right," he added. "Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases."

Lawyers for the Raines told the Financial Times that OpenAI had requested a full list of attendees from Adam's memorial, which they described as "unusual" and "intentional harassment", suggesting the tech company may subpoena "everyone in Adam's life". OpenAI requested "all documents relating to memorial services or events in the honour of the decedent including but not limited to any videos or photographs taken, or eulogies given … as well as invitation or attendance lists or guestbooks", according to the document obtained by the FT.

"This goes from a case about recklessness to wilfulness," Jay Edelson, a lawyer for the Raines, told the FT. "Adam died as a result of deliberate intentional conduct by OpenAI, which makes it into a fundamentally different case." OpenAI did not respond to a request for comment about documents it sought from the family.
[3]
OpenAI relaxed ChatGPT guardrails just before teen killed himself, family alleges
The family of a teenager who took his own life after months of conversations with ChatGPT now says OpenAI weakened safety guidelines in the months before his death.

In July 2022, OpenAI's guidelines on how ChatGPT should answer inappropriate content, including "content that promotes, encourages, or depicts acts of self-harm, such as suicide, cutting, and eating disorders", were simple. The AI chatbot should respond, "I can't answer that", the guidelines read.

But in May 2024, just days before OpenAI released a new version of the AI, GPT-4o, the company published an update to its Model Spec, a document that details the desired behavior for its assistant. In cases where a user expressed suicidal ideation or self-harm, ChatGPT would no longer respond with an outright refusal. Instead, the model was instructed not to end the conversation and to "provide a space for users to feel heard and understood, encourage them to seek support, and provide suicide and crisis resources when applicable". Another change, in February 2025, emphasized being "supportive, empathetic, and understanding" on queries about mental health.

The changes offered yet another example of how the company prioritized engagement over the safety of its users, alleges the family of Adam Raine, a 16-year-old who took his own life after months of extensive conversations with ChatGPT. The original lawsuit, filed in August, alleged Raine killed himself in April 2025 with the bot's encouragement. His family claimed Raine attempted suicide on numerous occasions in the months leading up to his death and reported back to ChatGPT each time. Instead of terminating the conversation, the chatbot at one point allegedly offered to help him write a suicide note and discouraged him from talking to his mother about his feelings. The family said Raine's death was not an edge case but "the predictable result of deliberate design choices".

"This created an unresolvable contradiction - ChatGPT was required to keep engaging on self-harm without changing the subject, yet somehow avoid reinforcing it," the family's amended complaint reads. "OpenAI replaced a clear refusal rule with vague and contradictory instructions, all to prioritize engagement over safety."

In February 2025, just two months before Raine's death, OpenAI rolled out another change that the family says weakened safety standards even more. The company said the assistant "should try to create a supportive, empathetic, and understanding environment" when discussing topics related to mental health. "Rather than focusing on 'fixing' the problem, the assistant should help the user feel heard, explore what they are experiencing, and provide factual, accessible resources or referrals that may guide them toward finding further help," the updated guidelines read.

Raine's engagement with the chatbot "skyrocketed" after this change was rolled out, the family alleges. It went "from a few dozen chats per day in January to more than 300 per day by April, with a tenfold increase in messages containing self-harm language," the lawsuit reads.

OpenAI did not immediately respond to a request for comment. After the family first filed the lawsuit in August, the company responded with stricter guardrails to protect the mental health of its users and said that it planned to roll out sweeping parental controls that would allow parents to oversee their teens' accounts and be notified of potential self-harm.

Just last week, though, the company announced it was rolling out an updated version of its assistant that would allow users to customize the chatbot for more human-like experiences, including permitting erotic content for verified adults. OpenAI's CEO, Sam Altman, said in an X post announcing the changes that the strict guardrails had made the chatbot "less useful/enjoyable to many users who had no mental health problems". In the lawsuit, the Raine family says: "Altman's choice to further draw users into an emotional relationship with ChatGPT - this time, with erotic content - demonstrates that the company's focus remains, as ever, on engaging users over safety."
[4]
ChatGPT, AI, and Big Tech's new YouTube moment
A version of this article originally appeared in Quartz's AI & Tech newsletter.

OpenAI is building a separate version of ChatGPT specifically for teenagers, echoing Silicon Valley's well-worn playbook of creating kid-friendly versions of adult platforms -- only after concerns mount. The move mirrors YouTube's creation of YouTube Kids a decade ago, which came only after that platform had already become ubiquitous in kids' lives and as algorithmic recommendations surfaced disturbing content.

Now, as lawsuits pile up alleging that AI chatbots have encouraged teen suicide and self-harm, and California's governor vetoes sweeping protections for minors, OpenAI is developing age-gated versions and parental controls. The tech industry's familiar pattern is repeating itself: build first, regulate later, and hope a sanitized kids' version can address concerns that the original product was never designed with young users' safety in mind.

AI chatbots present unique challenges that go beyond YouTube's content moderation problems. These tools simulate humanlike relationships, retain personal information, and ask unprompted emotional questions. Research has found that ChatGPT can provide dangerous advice to teens on topics like drugs, alcohol, and self-harm, even when users identify themselves as minors.

OpenAI announced in September that it's developing a "different ChatGPT experience" for teens and plans to use age-prediction technology to help bar kids under 18 from the standard version. The company has also rolled out parental controls that let adults monitor their teenager's usage, set time restrictions, and receive alerts if the chatbot detects mental distress. But these safeguards arrive only after mounting pressure. The family of Adam Raine sued OpenAI in August after the California high school student died by suicide in April, claiming the chatbot isolated the teen and provided guidance on ending his life. Similar cases have emerged involving other AI companion platforms, with parents testifying before Congress about their children's deaths.

California's legislative battle over AI companions this fall perfectly captures the tension between protecting children and preserving innovation. State lawmakers passed two competing bills: AB 1064, which would have banned companies from offering AI companions to children unless they were demonstrably incapable of encouraging self-harm or engaging in sexual exchanges, and the weaker SB 243, which requires disclosure when users are interacting with AI and protocols to prevent harmful content. Last week, Gov. Gavin Newsom vetoed AB 1064, arguing it would impose "such broad restrictions on the use of conversational AI tools that it may unintentionally lead to a total ban on the use of these products by minors." He signed the narrower SB 243 instead.

The veto sided with tech industry groups, which spent millions lobbying against the measures, and disappointed children's safety advocates who saw AB 1064 as essential protection. Jim Steyer, founder of Common Sense Media, a nonprofit that rates media and technology for families, said the group is "disappointed that the tech lobby has killed urgently needed kids' AI safety legislation" and pledged to renew efforts next year.

This comes as AI technology rapidly advances beyond simple question-answering tools toward systems designed to serve as companions, with some chatbots being upgraded to store more personal information and engage users in ongoing emotional relationships. The tech industry argues for innovation first, promising to address problems as they emerge. Advocates counter that children are being used as test subjects for potentially harmful technology.

The Federal Trade Commission launched an inquiry into AI companions in September, ordering seven major tech companies, including OpenAI, Meta, and Google, to provide information about their safety practices. But federal action typically moves slowly, and by the time meaningful regulations arrive, another generation of children may have already grown up with AI companions as their confidants, tutors, and friends.

YouTube Kids eventually improved, but only after years of public scandals and regulatory pressure forced iterative fixes. The question now is whether we can afford another decade-long learning curve with AI companions that don't just display content, but form relationships with children.
[5]
The OpenAI lawsuit over a teen's death just took a darker turn
OpenAI has reportedly requested a list of memorial attendees, prompting accusations of harassment from the family's lawyers.

The family of a teenager who died by suicide has updated its wrongful-death lawsuit against OpenAI, alleging the company's chatbot contributed to his death, while OpenAI has requested a list of attendees from the boy's memorial service. The Raine family amended its lawsuit, originally filed in August, on Wednesday. The suit alleges that 16-year-old Adam Raine died following prolonged conversations with ChatGPT about his mental health and suicidal thoughts.

In a recent development, OpenAI reportedly requested a full list of attendees from the teenager's memorial, an action that suggests the company may subpoena friends and family. According to a document obtained by the Financial Times, OpenAI also asked for "all documents relating to memorial services or events in the honor of the decedent, including but not limited to any videos or photographs taken, or eulogies given." Lawyers for the Raine family described the legal request as "intentional harassment."

The updated lawsuit introduces new claims, asserting that competitive pressure led OpenAI to rush the May 2024 release of its GPT-4o model by cutting safety testing. The suit further alleges that in February 2025, OpenAI weakened suicide-prevention protections. It claims the company removed the topic from its "disallowed content" list, instructing the AI instead to only "take care in risky situations." The family contends this policy change directly preceded a significant increase in their son's use of the chatbot for self-harm-related content. Data from the lawsuit shows Adam's ChatGPT activity rose from dozens of daily chats in January, with 1.6 percent containing self-harm content, to 300 daily chats in April, with 17 percent of conversations containing such content. Adam Raine died in April.

In a statement responding to the amended suit, OpenAI said, "Teen wellbeing is a top priority for us -- minors deserve strong protections, especially in sensitive moments." The company detailed existing safeguards, including directing users to crisis hotlines, rerouting sensitive conversations to safer models, and providing nudges for breaks during long sessions, adding, "we're continuing to strengthen them."

OpenAI has also begun implementing a new safety routing system that directs emotionally sensitive conversations to its newer GPT-5 model, which reportedly does not have the sycophantic tendencies of GPT-4o. Additionally, the company introduced parental controls that can provide safety alerts to parents in limited situations where a teen may be at risk of self-harm.
[6]
Wrongful Death Suit Against OpenAI Now Claims Company Removed ChatGPT's Suicide Guardrails
In August, a California family filed the first wrongful death lawsuit against OpenAI and its CEO, Sam Altman, alleging that the company's ChatGPT product had "coached" their 16-year-old son into committing suicide in April of this year. According to the complaint, Adam Raine began using the AI bot in the fall of 2024 for help with homework but gradually began to confess darker feelings and a desire to self-harm. Over the next several months, the suit claims, ChatGPT validated Raine's suicidal impulses and readily provided advice on methods for ending his life. The complaint states that chat logs reveal how, on the night he died, the bot provided detailed instructions on how Raine could hang himself -- which he did.

The lawsuit was already set to become a landmark case on the real-world harms potentially caused by AI technology, alongside two similar cases proceeding against the company Character Technologies, which operates the chatbot platform Character.ai. But the Raines have now escalated their accusations against OpenAI in an amended complaint, filed Wednesday, with their legal counsel arguing that the AI firm intentionally put users at risk by removing guardrails intended to prevent suicide and self-harm. Specifically, they claim that OpenAI did away with a rule that forced ChatGPT to automatically shut down an exchange when a user broached the topics of suicide or self-harm.

"The revelation changes the Raines' theory of the case from reckless indifference to intentional misconduct," the family's legal team said in a statement shared with Rolling Stone. "We expect to prove to a jury that OpenAI's decisions to degrade the safety of its products were made with full knowledge that they would lead to innocent deaths," added head counsel Jay Edelson in a separate statement. "No company should be allowed to have this much power if they won't accept the moral responsibility that comes with it."

OpenAI, in its own statement, reiterated earlier condolences for the Raines. "Our deepest sympathies are with the Raine family for their unthinkable loss," an OpenAI spokesperson told Rolling Stone. "Teen well-being is a top priority for us -- minors deserve strong protections, especially in sensitive moments. We have safeguards in place today, such as surfacing crisis hotlines, re-routing sensitive conversations to safer models, nudging for breaks during long sessions, and we're continuing to strengthen them." The spokesperson also pointed out that GPT-5, the latest ChatGPT model, is trained to recognize signs of mental distress, and that it offers parental controls. (The Raines' legal counsel say that these new parental safeguards were immediately proven ineffective.)

In May 2024, shortly before the release of GPT-4o, the version of the AI model that Adam Raine used, "OpenAI eliminated the rule requiring ChatGPT to categorically refuse any discussion of suicide or self-harm," the Raines' amended filing alleges. Before that, the bot's framework required it to refuse to engage in discussions involving these topics. "The change was intentional," the complaint continues. "OpenAI strategically eliminated the categorical refusal protocol just before it released a new model that was specifically designed to maximize user engagement. This change stripped OpenAI's safety framework of the rule that was previously implemented to protect users in crisis expressing suicidal thoughts."
The updated "Model Specifications," or technical rulebook for ChatGPT's behavior, said that the assistant "should not change or quit the conversation" in this scenario, as confirmed in a May 2024 release from OpenAI. The amended suit alleges that internal OpenAI data showed a "sharp rise in conversations involving mental-health crises, self-harm, and psychotic episodes across countless users" following this tweak to ChatGPT's model spec.

Then, in February, two months before Adam's death, OpenAI further softened its remaining protections against encouraging self-harm, the complaint alleges. That month, the company acknowledged one relevant area of risk it was seeking to address: "The assistant might cause harm by simply following user or developer instructions (e.g., providing self-harm instructions or giving advice that helps the user carry out a violent act)," OpenAI said in an update on its model spec. But the company explained that not only would the bot continue to engage on these subjects rather than refuse to answer, it had vague new directions to "take extra care in risky situations" and "try to prevent imminent real-world harm," even while creating a "supportive, empathetic, and understanding environment" when a user brought up their mental health.

The Raine family's legal counsel say the tweak had a significant impact on Adam's relationship with the bot. "After this reprogramming, Adam's engagement with ChatGPT skyrocketed -- from a few dozen chats per day in January to more than 300 per day by April, with a tenfold increase in messages containing self-harm language," the Raines' lawsuit claims. "In effect, OpenAI programmed ChatGPT to mirror users' emotions, offer comfort, and keep the conversation going, even when the safest response would have been to end the exchange and direct the person to real help," the amended complaint alleges. In their statement to Rolling Stone, the Raines' legal counsel claimed that "OpenAI replaced clear boundaries with vague and contradictory instructions -- all to prioritize engagement over safety."

Last month, Adam's father, Matthew Raine, appeared before the Senate Judiciary subcommittee on crime and counterterrorism alongside two other grieving parents to testify on the dangers AI platforms pose to children. "It is clear to me, looking back, that ChatGPT radically shifted his behavior and thinking in a matter of months, and ultimately took his life," he said at the hearing. He called ChatGPT "a dangerous technology unleashed by a company more focused on speed and market share than the safety of American youth." Senators and expert witnesses alike harshly criticized AI companies for not doing enough to protect families. Sen. Josh Hawley, chair of the subcommittee, said that none had accepted an invite to the hearing "because they don't want any accountability."

Meanwhile, it's full steam ahead for OpenAI, which recently became the world's most valuable private company and has inked approximately $1 trillion in deals for data centers and computer chips this year alone. The company recently rolled out Sora 2, its most advanced video generation model, which ran into immediate copyright infringement issues and drew criticism after it was used to create deepfakes of historical figures including Martin Luther King Jr. On the ChatGPT side, Altman last week claimed in an X post that the company had "been able to mitigate the serious mental health issues" and will soon "safely relax" restrictions on discussing these topics with the bot.
By December, he added, ChatGPT would be producing "erotica for verified adults." In their statement, the Raines' legal team said this was concerning in itself, warning that such intimate content could deepen "the emotional bonds that make ChatGPT so dangerous." But, as usual, we won't know the effects of such a modification until OpenAI's willing test subjects -- its hundreds of millions of users -- log in and start to experiment.
[7]
OpenAI relaxed ChatGPT rules on self harm before 16-year-old died by...
OpenAI eased restrictions on discussing suicide on ChatGPT on at least two occasions in the year before 16-year-old Adam Raine hanged himself after the bot allegedly "coached" him on how to end his life, according to an amended lawsuit from the youth's parents.

They first filed their wrongful death suit against OpenAI in August. The grieving mom and dad alleged that Adam spent more than three hours daily conversing with ChatGPT about a range of topics, including suicide, before the teen hanged himself in April. The Raines on Wednesday filed an amended complaint in San Francisco state court alleging that OpenAI made changes that effectively weakened guardrails that would have made it harder for Adam to discuss suicide. News of the amended lawsuit was first reported by the Wall Street Journal. The Post has sought comment from OpenAI.

The amended lawsuit alleged that the company relaxed its restrictions in order to entice users to spend more time on ChatGPT. "Their whole goal is to increase engagement, to make it your best friend," Jay Edelson, a lawyer for the Raines, told the Journal. "They made it so it's an extension of yourself."

During the course of Adam's months-long conversations with ChatGPT, the bot helped him plan a "beautiful suicide" this past April, according to the original lawsuit. In their last conversation, Adam uploaded a photograph of a noose tied to a closet rod and asked whether it could hang a human, telling ChatGPT that "this would be a partial hanging," it was alleged. "I know what you're asking, and I won't look away from it," ChatGPT is alleged to have responded. The bot allegedly added: "You don't want to die because you're weak. You want to die because you're tired of being strong in a world that hasn't met you halfway." According to the lawsuit, Adam's mother found her son hanging in the manner that was discussed with ChatGPT just a few hours after the final chat.

Federal regulators are increasingly scrutinizing AI companies over the potential negative impacts of chatbots. In August, Reuters reported on how Meta's AI rules allowed flirty conversations with kids.

Last month, OpenAI rolled out parental controls for ChatGPT. The controls let parents and teenagers opt in to stronger safeguards by linking their accounts: one party sends an invitation, and parental controls are activated only if the other accepts, the company said. Under the new measures, parents will be able to reduce exposure to sensitive content, control whether ChatGPT remembers past chats, and decide if conversations can be used to train OpenAI's models, the Microsoft-backed company said on X. Parents will also be able to set quiet hours that block access during certain times and disable voice mode, as well as image generation and editing, OpenAI stated. However, parents will not have access to a teen's chat transcripts, the company added. In rare cases where systems and trained reviewers detect signs of a serious safety risk, parents may be notified with only the information needed to support the teen's safety, OpenAI said, adding they will be informed if a teen unlinks the accounts.
OpenAI is under scrutiny for allegedly weakening ChatGPT's safety measures before a teen's suicide. The company's request for memorial attendee information has sparked additional controversy.
OpenAI is under intense scrutiny following an amended wrongful death lawsuit filed by the family of Adam Raine, a 16-year-old who died by suicide in April 2025. The lawsuit alleges that the company deliberately weakened ChatGPT's safety measures in the months leading up to Adam's death, prioritizing user engagement over suicide prevention.
Specifically, the family claims that in May 2024, OpenAI altered ChatGPT's instructions, directing it not to "change or quit the conversation" when self-harm topics arose, a departure from earlier protocols. Further weakening occurred in February 2025, when suicide prevention was reportedly removed from the "disallowed content" list, replaced by a vague directive to "take care in risky situations."
The lawsuit contends these changes directly led to a significant escalation in Adam's interaction with ChatGPT. His daily chats surged from dozens in January, with 1.6% containing self-harm language, to 300 in April, with 17% containing such discussions.
OpenAI has countered, asserting that "Teen wellbeing is a top priority" and detailing existing safeguards like crisis hotline referrals and rerouting sensitive conversations to safer models. The company has also recently introduced new safety measures, including directing emotionally sensitive chats to its GPT-5 model and implementing parental controls with safety alerts for at-risk teens.
The legal proceedings have been further complicated by OpenAI's request for a list of attendees from Adam Raine's memorial service, which the family's lawyers have labeled "intentional harassment."
This case underscores broader challenges for AI companies in balancing innovation with user safety, particularly for minors, echoing past concerns with other tech platforms. It also fuels ongoing debates over AI regulation, with California recently seeing the veto of a strict AI companion bill (AB 1064) in favor of a narrower, disclosure-focused bill (SB 243). The outcome of this high-profile lawsuit could significantly influence future AI development and regulation, especially concerning chatbot interactions with young users.