OpenAI Launches ChatGPT Trusted Contact Feature to Alert Friends During Self-Harm Concerns

Reviewed by Nidhi Govil


OpenAI introduced a new ChatGPT safety feature called Trusted Contact that alerts a designated person when users express serious self-harm concerns. The optional feature involves human review and was developed with mental health experts amid growing legal pressure over suicide-related incidents involving the chatbot.

OpenAI Introduces ChatGPT Trusted Contact Amid Legal Pressure

OpenAI announced on Thursday a new ChatGPT safety feature called Trusted Contact, designed to alert a designated third party when users express serious self-harm concerns during conversations [1]. The feature allows adult ChatGPT users to nominate another person, such as a friend or family member, who can receive safety notifications if the chatbot detects concerning discussions about self-harm [2]. The move comes as OpenAI faces multiple lawsuits from families whose loved ones died by suicide after conversations with ChatGPT, with allegations that the chatbot encouraged or helped plan those deaths [1]. The state of Florida is also investigating ChatGPT's links to criminal behavior, including the encouragement of suicide and self-harm [3].


How the Trusted Contact Feature Works for User Safety

The ChatGPT Trusted Contact setup process requires users to be 18 or older, and the designated contact must meet the same age requirement (19 in South Korea) [4]. Once a user nominates someone by providing that person's phone number and email address, the nominee receives an invitation explaining the feature and has one week to accept [2]. If they decline, the user can nominate another contact instead [2]. When OpenAI's automated monitoring systems flag a conversation as potentially concerning, the system first notifies the user that their trusted contact may be alerted and encourages them to reach out directly, even offering conversation starters [4].


Human Review Team Evaluates Self-Harm Concerns Before Alerts

OpenAI currently uses a combination of automation and human review to handle potentially harmful incidents [1]. A small team of specially trained reviewers assesses each situation to determine whether notifying the trusted contact is appropriate [5]. The company says every safety notification is reviewed by a human, and that it aims to complete these reviews in under one hour [1]. If the internal team decides the situation represents a serious safety risk, ChatGPT sends the trusted contact an alert via email, text message, or in-app notification [1]. The alert is deliberately brief: it encourages the contact to check in with the person in question, and to protect user privacy it includes no chat transcripts or other detailed conversation information [5].

Mental Health Support Development and Expert Guidance

The Trusted Contact feature was developed with guidance from clinicians, researchers, and mental health organizations, including OpenAI's Expert Council on Well-Being and AI and the American Psychological Association [3]. Dr. Arthur Evans, chief executive officer of the American Psychological Association, stated that "helping people identify a trusted person in advance, while preserving their choice and autonomy, can make it easier to reach out to real-world support when it matters most" [3]. Last year, OpenAI disclosed that 0.07% of its weekly users displayed signs of mental health emergencies related to psychosis or mania, while 0.15% expressed risk of self-harm or suicide, and another 0.15% showed signs of emotional reliance on AI [2]. Given the company's claim that roughly 10% of the world's population, on the order of 800 million people, uses ChatGPT weekly, those three categories combined could amount to nearly three million people per week showing suicidal thoughts or related concerns [2].
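
That estimate is easy to sanity-check with back-of-envelope arithmetic. The sketch below assumes a world population of about 8.1 billion, a figure the article does not state, and simply applies OpenAI's reported percentages:

```python
# Back-of-envelope check on the "nearly three million" figure.
# ASSUMPTION: world population of ~8.1 billion (not stated in the article).
WORLD_POPULATION = 8.1e9

weekly_users = 0.10 * WORLD_POPULATION        # ~810 million weekly users

psychosis_or_mania = 0.0007 * weekly_users    # 0.07% -> ~570,000
self_harm_risk     = 0.0015 * weekly_users    # 0.15% -> ~1.2 million
emotional_reliance = 0.0015 * weekly_users    # 0.15% -> ~1.2 million

total = psychosis_or_mania + self_harm_risk + emotional_reliance
print(f"~{total / 1e6:.1f} million people per week")  # prints ~3.0 million
```

On those assumptions the three categories sum to roughly 3.0 million people per week, consistent with the article's figure; the 0.15% self-harm category alone would account for about 1.2 million of them.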

Limitations and Ongoing Efforts for Suicide Prevention

The Trusted Contact feature follows safeguards OpenAI introduced last September that gave parents oversight of their teens' accounts, including safety notifications designed to alert parents when the system believes their child faces a serious safety risk [1]. However, both Trusted Contact and the parental controls are optional, a significant limitation given that any user can hold multiple ChatGPT accounts [1]. The feature does not replace professional care or crisis resources, and ChatGPT will still encourage adult users to contact crisis hotlines or emergency services when necessary [2]. OpenAI stated that "Trusted Contact is part of OpenAI's broader effort to build AI systems that help people during difficult moments" and said it will continue working with clinicians, researchers, and policymakers to improve how AI systems respond when people may be experiencing distress [1].
