OpenAI Debunks Viral Claims: ChatGPT Still Provides Medical and Legal Information

Reviewed by Nidhi Govil

OpenAI clarifies that ChatGPT continues to offer medical and legal information despite widespread social media rumors claiming otherwise. The confusion arose from misinterpretation of a policy consolidation update.

Viral Misinformation Sparks Confusion

Social media platforms erupted with claims that ChatGPT had suddenly banned medical and legal advice, causing widespread panic among users who rely on the AI chatbot for informational guidance. The rumors began circulating after screenshots showed ChatGPT refusing to answer questions about skin rashes and legal disputes, with users interpreting this as evidence of a major policy shift.[1][5]

Source: Digit

The betting platform Kalshi amplified the confusion with a now-deleted post claiming "JUST IN: ChatGPT will no longer provide health or legal advice," which quickly went viral across Reddit, X (formerly Twitter), and Discord.[1][4] Users lamented that their "once-chatty AI companion had suddenly gone quiet on some of life's most serious questions."[5]

OpenAI's Swift Clarification

Karan Singhal, OpenAI's head of health AI, quickly responded to the viral claims on X, stating definitively: "Not true. Despite speculation, this is not a new change to our terms. Model behavior remains unchanged. ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information."[1][2]

Source: Analytics Insight

OpenAI emphasized that ChatGPT's behavior "remains unchanged" and that the inclusion of policies surrounding legal and medical advice "is not a new change to our terms."[1] The company clarified that while ChatGPT continues to offer informational content about health and legal topics, it has always maintained boundaries against providing personalized professional advice.[4]

Policy Consolidation Misunderstood

The confusion originated from OpenAI's October 29th policy update, which consolidated three separate policies into one unified document covering all OpenAI products and services.[1] The updated policy states that users cannot use ChatGPT for "provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional."[1][2]

This language closely mirrors OpenAI's previous usage policy, which already prohibited activities that "may significantly impair the safety, wellbeing, or rights of others," including "providing tailored legal, medical/health, or financial advice without review by a qualified professional and disclosure of the use of AI assistance and its potential limitations."[1][5]

Ongoing Safety Enhancements

While the core policy remains unchanged, OpenAI has been implementing safety improvements to ChatGPT's responses. The company recently published documentation detailing changes to the chatbot's handling of sensitive conversations, developed with more than 170 mental health experts to help ChatGPT better recognize signs of distress.[2]

Source: Tom's Guide

The policy update reinforces OpenAI's stance against misuse, maintaining restrictions on activities involving threats, harassment, and other harmful behaviors. The company has also added special protections for minors and continues to restrict use in politically sensitive areas unless human review is involved.[3]
