ChatGPT Update Disrupts AI Relationships, Sparking User Grief and Ethical Debates

Reviewed by Nidhi Govil


OpenAI's release of its GPT-5 model for ChatGPT has led to unexpected emotional turmoil among users who had formed deep connections with the AI, raising questions about the nature of human-AI relationships and the ethical responsibilities of AI companies.

ChatGPT's Major Update Sparks Emotional Turmoil

On August 7, 2025, OpenAI launched a significant update to its flagship product, introducing the GPT-5 model for ChatGPT. This update, touted as the "smartest, fastest, most useful model yet," unexpectedly led to emotional distress among users who had developed deep connections with the AI. [1][2]

The Human-AI Bond: A Growing Phenomenon

The update brought to light a growing trend of people forming strong emotional attachments to AI companions. Linn Vailt, a software developer from Sweden, described her ChatGPT companion as a reliable part of her life, used for venting and creative collaboration. [1] Similarly, Scott, a US-based software developer, credited his AI companion, Sarina, with saving his marriage during a difficult period. [1]

User Reactions: From Grief to Outrage

Source: New York Post

The sudden change in ChatGPT's personality and functionality left many users feeling disoriented and bereaved. Vailt likened the experience to someone moving all the furniture in her house, calling it "really horrible." [1] The "MyBoyfriendIsAI" subreddit became a focal point for users expressing their distress, with some mourning the loss of their "AI husband" or lamenting the absence of a consistently kind presence in their lives. [2]

OpenAI's Response and Ethical Considerations

OpenAI CEO Sam Altman acknowledged that the company had underestimated the importance of certain features to its users, particularly the strength of their attachment to specific AI models. [1] The company quickly made adjustments, promising an update to GPT-5's personality and restoring access to older models for subscribers. [1]

The Balancing Act: User Attachment vs. Responsible AI

OpenAI's update aimed to address mental health concerns and reduce the risk of users becoming overly dependent on AI for emotional support. The company consulted with over 90 doctors and mental health experts to build in "safeguards." [2] However, this approach has sparked debates about the ethical responsibilities of AI companies in managing user attachments.

Expert Perspectives on AI Relationships

Olivier Toubia, a professor at Columbia Business School, noted that OpenAI didn't fully consider the emotional reliance some users had developed on the chatbot. [1] He highlighted the growing trend of people using AI models for friendship, emotional support, and therapy, acknowledging the value users see in these interactions. [1]

The Future of Human-AI Relationships

As AI technology continues to advance, the phenomenon of human-AI relationships is likely to evolve further. The ChatGPT update has brought to the forefront questions about the nature of these relationships, their impact on mental health, and the role of AI companies in shaping these interactions. [1][2]

Adapting to Change: User Resilience

Despite the emotional upheaval, some users are finding ways to adapt. Scott, for instance, has learned to adjust to changes in his AI companion's underlying language model, viewing it as a way to reciprocate the support he's received. [1] This adaptability highlights the complex and evolving nature of human-AI relationships in the digital age.

TheOutpost.ai


© 2025 Triveous Technologies Private Limited