5 Sources
[1]
OpenAI introduces new 'Trusted Contact' safeguard for cases of possible self-harm | TechCrunch
On Thursday OpenAI announced a new feature called Trusted Contact, designed to alert a trusted third party if ideations of self-harm are expressed within a conversation. The feature allows an adult ChatGPT user to designate another person as a trusted contact within their account, such as a friend or family member. In cases where a conversation may turn to self-harm, OpenAI will now encourage the user to reach out to that contact. It also sends an automated alert to the contact, encouraging them to check in with the user.

OpenAI has faced a wave of lawsuits from the families of people who have died by suicide after talking with its chatbot. In a number of cases, the families say ChatGPT encouraged their loved one to kill themselves, or even helped them plan it out.

OpenAI currently uses a combination of automation and human review to handle potentially harmful incidents. Certain conversational triggers alert the company's system to suicidal ideations, which then relays the information to a human safety team. The company claims that every time it receives this kind of notification, the incident is reviewed by a human. "We strive to review these safety notifications in under one hour," the company says. If OpenAI's internal team decides that the situation represents a serious safety risk, ChatGPT proceeds to send the trusted contact an alert, either by email, text message, or an in-app notification. The alert is designed to be brief and to encourage the contact to check in with the person in question. It does not include detailed information about what was being discussed, as a means of protecting the user's privacy, the company says.

The Trusted Contact feature follows the safeguards the company introduced last September that gave parents some oversight of their teens' accounts, including safety notifications designed to alert the parent if OpenAI's system believes their child is facing a "serious safety risk." For some time now, ChatGPT has also included automated prompts to seek professional help should a conversation trend toward the topic of self-harm.

Crucially, Trusted Contact is optional and, even if the protection is activated on a particular account, any user can have multiple ChatGPT accounts. OpenAI's parental controls are also optional, presenting a similar limitation.

"Trusted Contact is part of OpenAI's broader effort to build AI systems that help people during difficult moments," the company wrote in the announcement post. "We will continue to work with clinicians, researchers, and policymakers to improve how AI systems respond when people may be experiencing distress."
[2]
ChatGPT Adds 'Trusted Contact' Feature to Send Alerts When Conversations Get Dangerous
ChatGPT will still encourage users to contact crisis hotlines or emergency services when necessary. OpenAI announced today that it's rolling out a new mental health-focused safety feature for adult ChatGPT users. Starting today, ChatGPT users can add what the company calls a "trusted contact" who may be notified if the AI's automated systems and trained reviewers determine that the user has engaged in discussions about self-harm.

The new feature arrives amid growing scrutiny over the impact AI and other digital platforms can have on mental health. Last year, OpenAI disclosed that 0.07% of its weekly users displayed signs of "mental health emergencies related to psychosis or mania," while 0.15% expressed risk of "self-harm or suicide," and another 0.15% showed signs of "emotional reliance on AI." Considering the company claims that roughly 10% of the world's population uses ChatGPT weekly, that could amount to nearly three million people.

The trusted contact feature expands on ChatGPT's existing parental safety notifications, which alert parents when a linked teen account shows signs of distress. Instagram introduced similar parental alerts earlier this year. Now, OpenAI is offering these alerts to its adult users. The company said the feature was developed with guidance from mental health and suicide prevention clinicians, researchers, and organizations. "Trusted Contact is designed to encourage connection with someone the user already trusts," the company said in its announcement. "It does not replace professional care or crisis services, and is one of several layers of safeguards to support people in distress." OpenAI added that ChatGPT will still encourage users to contact crisis hotlines or emergency services when necessary.

The feature can be enabled by any user 18 years or older through ChatGPT's settings. From there, users can nominate another adult to serve as their trusted contact by submitting details such as the contact's phone number and email address. The trusted contact will then receive an invitation explaining the feature and will have one week to accept. If they decline, the initial user can nominate another contact instead.

Once the feature is active, OpenAI's automated monitoring systems can flag when a user may be discussing self-harm in a manner that suggests a serious safety concern. The system will then notify the user that their trusted contact may be alerted and encourage them to reach out directly. It will even provide some recommended conversation starters. The company said a small team of specially trained reviewers will then assess the situation and determine whether notifying the trusted contact is appropriate.

If OpenAI decides to send an alert, the trusted contact could receive it through email, text message, or an in-app notification. The alert will only explain the general reason self-harm was mentioned and encourage the trusted contact to check in. It will also include guidance on how to navigate those conversations. OpenAI noted that the notifications will not include specific details or chat transcripts to protect user privacy.
[3]
ChatGPT 'Trusted Contact' feature now available
OpenAI has been under intense legal and public pressure to improve the way its flagship AI product ChatGPT responds when a user expresses suicidal feelings. On Thursday, the company launched a feature called Trusted Contact, which allows users to designate an adult to notify should the user talk about self-harm or suicide in a serious or concerning way. The optional feature only encourages the trusted contact to reach out to the user. It does not share chat transcripts or conversation details.

"Our goal is to ensure that AI systems do not exist in isolation," the company said in a blog post announcing the feature. "Instead they should help connect people to the real-world care, relationships, and resources that matter most."

OpenAI has been sued multiple times for wrongful death by family members of ChatGPT users who died by suicide after ChatGPT allegedly coached them to end their lives or didn't respond appropriately to their discussions of psychological distress. OpenAI has denied the allegations in the first of those lawsuits. The state of Florida is also investigating ChatGPT's links to "criminal behavior," including the "encouragement of suicide and self-harm."

Trusted Contact was developed with feedback from experts, including OpenAI's Expert Council on Well-Being and AI and the American Psychological Association. "Helping people identify a trusted person in advance, while preserving their choice and autonomy, can make it easier to reach out to real-world support when it matters most," Dr. Arthur Evans, chief executive officer of the American Psychological Association, said in a statement.

Disclosure: Ziff Davis, Mashable's parent company, in April 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
[4]
ChatGPT now lets you name someone to check in if things get dark
OpenAI is building a human safety net into ChatGPT for crisis moments. AI chatbots have made it surprisingly easy to talk about anything, and that includes some of the heaviest topics imaginable. That openness has always been a double-edged sword. OpenAI is now taking a step to address that, with a new feature that brings a trusted person into the picture when things get serious. The company is rolling out a new feature called Trusted Contact, and it is starting to appear in ChatGPT settings for adult users. It lets users name one person who can be alerted if ChatGPT detects a serious self-harm concern.

How does Trusted Contact work? Setting up a Trusted Contact is optional, but if you do decide to set it up, the contact you are nominating must be at least 18 years old, or 19 in South Korea. Once you name someone, they get an invitation explaining what the role actually means, and they have one week to accept it before the feature goes live. If they decline, you can pick someone else.

The alert system itself is not automatic. If ChatGPT's systems flag a conversation as potentially concerning, the chatbot first tells the user that their Trusted Contact may be notified, and it also nudges the user to reach out directly with some suggested conversation starters. A small team of specially trained human reviewers then steps in to assess the situation. Only if they confirm a serious risk does the contact actually get notified, via email, text, or in-app notification. The alert does not share chat transcripts or conversation details. It simply says that self-harm came up in a potentially concerning way and asks the contact to check in. OpenAI says it aims to complete that human review in under one hour.

Why is OpenAI adding this now? Trusted Contact is part of a broader set of safety features on the platform. Previously, OpenAI added features that let parents receive alerts when a linked teen account shows signs of distress. Trusted Contact is the adult-facing extension of that same approach. It was reportedly developed with input from clinicians, researchers, and mental health organizations, including the American Psychological Association.

All that said, it is worth mentioning that Trusted Contact does not replace crisis hotlines, emergency services, or professional mental health care. ChatGPT will still direct users toward those resources when needed. Users can remove or change their Trusted Contact at any time, and contacts can remove themselves whenever they want. The reality of the matter is that ChatGPT is being used for some deeply personal conversations, whether OpenAI planned for that or not. Adding a feature like Trusted Contact is a move in the right direction, and also an admission that a chatbot can only do so much.
[5]
ChatGPT Can Now Reach Out to a 'Trusted Contact' After Conversations Concerning Self-Harm
Following a human review, ChatGPT may reach out to the Trusted Contact with a general message about the situation. Despite expert advice against relying on chatbots for mental health questions and concerns, people are turning to AI programs like ChatGPT for help. The company has faced criticism for how its products have handled certain mental health issues, including episodes where users died by suicide following conversations with ChatGPT. As part of a campaign to address these problems, OpenAI is now rolling out a voluntary safety check system for users who might be concerned about their thoughts.

As reported by Mashable, OpenAI just launched "Trusted Contact," a new feature that lets you choose a trusted person in your life to connect to your ChatGPT account. The idea isn't to share your conversations or collaborate on projects within ChatGPT; rather, if the chatbot thinks your personal chats are veering in a concerning direction with regards to self-harm, ChatGPT will reach out to your Trusted Contact, letting them know to check in on you.

To set up the feature, choose someone in your life who is 18 years old or older. (The contact must be 19 or older in South Korea.) ChatGPT will send that person an invitation to become your Trusted Contact: They have one week to respond before the invite expires. Of course, they can also decline the invitation if they don't want to participate. If the contact agrees, the feature kicks in.

In the future, if OpenAI's automated system thinks you're discussing harming yourself "in a way that indicates a serious safety concern," ChatGPT will let you know that it may reach out to the Trusted Contact, but it also encourages you to reach out to that contact yourself, with "conversation starters" to break the ice. While that's happening, OpenAI has a team of "specially trained people" analyze the situation. (It's not all automated, it seems.) If this team concludes that the situation is serious, ChatGPT will then alert your Trusted Contact via email, text, or through an in-app notification in ChatGPT if they have an account.

OpenAI says the notification itself is quite limited: It only shares general information about the self-harm concern and advises the contact to reach out to you. It won't send any chat transcripts or summaries either, so your general privacy should be preserved, all things considered. OpenAI says that it's working to review safety notifications in under one hour, and that it developed the feature with guidance from clinicians, researchers, and mental health and suicide prevention organizations.

The feature is, of course, entirely voluntary, so users will need to enroll themselves (and a contact) if they feel it would help them. As long as they do, however, this could be a helpful way for friends and family to check in on people when they're struggling, assuming they're sharing those thoughts with ChatGPT.
OpenAI introduced a new ChatGPT safety feature called Trusted Contact that alerts a designated person when users express serious self-harm concerns. The optional feature involves human review and was developed with mental health experts amid growing legal pressure over suicide-related incidents involving the chatbot.
OpenAI announced on Thursday a new ChatGPT safety feature called Trusted Contact, designed to alert a designated third party when users express serious self-harm concerns during conversations[1]. The feature allows adult ChatGPT users to nominate another person, such as a friend or family member, who can receive safety notifications if the chatbot detects concerning discussions about self-harm[2]. This move comes as OpenAI faces multiple lawsuits from families whose loved ones died by suicide after conversations with ChatGPT, with allegations that the chatbot encouraged or helped plan those deaths[1]. The state of Florida is also investigating ChatGPT's links to criminal behavior, including the encouragement of suicide and self-harm[3].
The ChatGPT Trusted Contact setup process requires users to be 18 years old or older, with the designated contact also meeting this age requirement (19 in South Korea)[4]. Once a user nominates someone by providing their phone number and email address, that person receives an invitation explaining the feature and has one week to accept[2]. If they decline, users can nominate another contact instead[2]. When OpenAI's automated monitoring systems flag a chatbot conversation as potentially concerning, the system first notifies the user that their trusted contact may be alerted and encourages them to reach out directly, even providing conversation starters[4].
OpenAI currently uses a combination of automation and human review to handle potentially harmful incidents[1]. A small team of specially trained reviewers assesses each situation to determine whether notifying the trusted contact is appropriate[5]. The company claims that every safety notification is reviewed by a human, with OpenAI striving to complete these reviews in under one hour[1]. If the internal team decides the situation represents a serious safety risk, ChatGPT sends the trusted contact an alert via email, text message, or in-app notification[1]. The alert is designed to be brief and encourages the contact to check in with the person in question, without including chat transcripts or detailed conversation information, to protect user privacy[5].
The Trusted Contact feature was developed with guidance from clinicians, researchers, and mental health organizations, including OpenAI's Expert Council on Well-Being and AI and the American Psychological Association[3]. Dr. Arthur Evans, chief executive officer of the American Psychological Association, stated that "helping people identify a trusted person in advance, while preserving their choice and autonomy, can make it easier to reach out to real-world support when it matters most"[3]. Last year, OpenAI disclosed that 0.07% of its weekly users displayed signs of mental health emergencies related to psychosis or mania, while 0.15% expressed risk of self-harm or suicide, and another 0.15% showed signs of emotional reliance on AI[2]. Considering the company claims roughly 10% of the world's population, or about 800 million people, uses ChatGPT weekly, those figures could together amount to nearly three million people showing suicidal thoughts or related concerns[2].

The Trusted Contact feature follows safeguards OpenAI introduced last September that gave parents oversight of their teens' accounts, including safety notifications designed to alert parents if the system believes their child faces a serious safety risk[1]. However, both the Trusted Contact feature and parental controls are optional, presenting a significant limitation since any user can have multiple ChatGPT accounts[1]. The feature does not replace professional care or crisis resources, and ChatGPT will still encourage adult users to contact crisis hotlines or emergency services when necessary[2]. OpenAI stated that "Trusted Contact is part of OpenAI's broader effort to build AI systems that help people during difficult moments" and that the company will continue working with clinicians, researchers, and policymakers to improve how AI systems respond when people may be experiencing distress[1].