5 Sources
[1]
No, ChatGPT hasn't added a ban on giving legal and health advice
OpenAI says ChatGPT's behavior "remains unchanged" after reports across social media falsely claimed that new updates to its usage policy prevent the chatbot from offering legal and medical advice. Karan Singhal, OpenAI's head of health AI, writes on X that the claims are "not true."

"ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information," Singhal says, replying to a now-deleted post from the betting platform Kalshi that had claimed "JUST IN: ChatGPT will no longer provide health or legal advice." According to Singhal, the inclusion of policies surrounding legal and medical advice "is not a new change to our terms."

The policy update published on October 29th lists things you can't use ChatGPT for, one of which is "provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional." That remains similar to OpenAI's previous ChatGPT usage policy, which said users shouldn't perform activities that "may significantly impair the safety, wellbeing, or rights of others," including "providing tailored legal, medical/health, or financial advice without review by a qualified professional and disclosure of the use of AI assistance and its potential limitations."

OpenAI previously had three separate policies: a "universal" one, plus ones for ChatGPT and API usage. With the new update, the company has one unified list of rules that its changelog says "reflect a universal set of policies across OpenAI products and services," but the rules themselves are the same.
[2]
ChatGPT will still offer medical and legal advice -- despite what rumors suggest
OpenAI made it clear with the release of GPT-5 that ChatGPT would now be a place to get medical advice and assistance with health queries. However, a recent change to the chatbot's terms and conditions left many users questioning whether this is still the case. Likewise, ChatGPT users took to X in droves to claim the chatbot would no longer give legal advice.

The confusion stems from a line in ChatGPT's terms and conditions, included in a section on platform violations: "Provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional." Many users took this to mean that ChatGPT would no longer be able to offer legal or medical advice.

After this rumor began circulating, Karan Singhal, the head of Health AI at OpenAI, posted on X: "Not true. Despite speculation, this is not a new change to our terms. Model behavior remains unchanged. ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information."

So if ChatGPT continues to offer advice in these areas, what does the change in the terms of service actually mean? While ChatGPT will continue to offer this advice, the updated wording signals that users shouldn't perform activities that may harm others based on the advice given without consulting a legitimate professional. In other words, because ChatGPT isn't a medical or legal professional itself, don't apply its advice to someone else who could be affected by the outcome. This is likely meant to stop users from presenting themselves as lawyers or medical professionals while using ChatGPT as their source of information. It is similar to the company's previous usage policy, which said that users shouldn't perform activities that "may significantly impair the safety, well-being, or rights of others."

While ChatGPT does still offer medical advice, OpenAI is becoming more cautious about the advice it gives and the way the chatbot interacts with certain users. Last week, OpenAI published a long document detailing major changes to ChatGPT's responses in sensitive conversations, claiming it worked with more than 170 mental health experts to help ChatGPT more reliably recognize signs of distress. This comes after Sam Altman, OpenAI's CEO, recently said the company would be relaxing guardrails around mental health to make the model more accessible to everyone. The mental health update re-routes sensitive conversations and suggests taking breaks if users seem distressed. While that update is separate from offering medical advice, it does detail the changes OpenAI made to how ChatGPT responds to psychosis, mania, and other severe mental health symptoms.
[3]
ChatGPT Will No Longer Offer Medical or Legal Advice | AIM
OpenAI has updated its Usage Policies to prohibit ChatGPT from providing health or legal advice, as part of a broader effort to strengthen responsible AI use. The company said the changes aim to protect users and ensure that its systems are not used to deliver guidance that requires licensed professionals. "We empower users to innovate with AI," OpenAI said in a statement. "We build AI products that maximize helpfulness and freedom, while ensuring safety."

Under the new rules, OpenAI bars the use of its services for "tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional." The policy update is part of a wider set of restrictions covering activities that could cause harm or misuse of AI technology.

The company outlined four main principles behind its Usage Policies: protecting people, respecting privacy, keeping minors safe, and empowering users. It said responsible use is a "shared priority," and violations may result in users losing access to its services.

OpenAI also reinforced its stance against misuse, banning activities involving threats, harassment, weapons development, illicit transactions, and promotion of self-harm or violence. Special protections have been added for minors, banning the creation or sharing of child sexual abuse material (CSAM), grooming, or exposing minors to explicit or harmful content. The company said it reports apparent child exploitation to the National Center for Missing and Exploited Children.

OpenAI also restricts the use of its models in politically sensitive and high-stakes areas, including campaigning, education, healthcare, finance, and law enforcement, unless human review is involved. "We build with safety first," the company said. "We monitor and enforce policies with privacy safeguards in place and clear review processes." OpenAI said it regularly updates its rules to keep them fair and keep users safe, and that it can block access to its services if needed to protect users or its systems.
[4]
OpenAI Clears the Air, No ChatGPT Health or Legal Information Ban in Effect
OpenAI Confirms ChatGPT Will Still Provide General Health and Legal Information Safely

OpenAI has denied widespread social media rumors claiming that ChatGPT is now banned from providing health or legal information. The confusion arose after users misinterpreted a recent policy update, which consolidated several older policy documents into a single document.

Responding to the viral claim, Karan Singhal, OpenAI's Head of Health AI, wrote on Twitter: "Not true. Despite speculation, this is not a new change to our terms. Model behavior remains unchanged. ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information."

Singhal's clarification came in response to a now-deleted post by betting platform Kalshi, which falsely claimed: "JUST IN: ChatGPT will no longer provide health or legal advice."
[5]
ChatGPT won't give medical or legal advice? False, says OpenAI: Here's why
Policy update clarified existing rules, not new restrictions

For a few hours, the internet spiraled into panic over news that turned out to be false. Screenshots flooded Reddit, X (formerly Twitter), and Discord, all carrying the same uneasy message: "ChatGPT can no longer give medical or legal advice." Users lamented that their once-chatty AI companion had suddenly gone quiet on some of life's most serious questions. To many, it felt like the chatbot had "gone corporate," replacing empathy with legalese. For a tool that millions rely on for everything from writing contracts to decoding blood test reports, this supposed silence felt oddly personal, like a friend who suddenly stopped picking up the phone.

But as it turns out, ChatGPT hadn't taken a vow of silence at all. The truth, OpenAI says, is far less dramatic and far more about how humans interpret the tone of a machine.

The confusion began, fittingly, with a handful of cropped screenshots. One user posted a conversation in which ChatGPT refused to answer a question about a skin rash. Another asked about a legal dispute and got the same line: "I can't provide medical or legal advice." Within hours, social media labeled it a "policy change," and the usual corners of the internet began buzzing with theories. Some claimed OpenAI had bowed to legal pressure or regulatory oversight. Others saw it as a sign of tightening censorship, one more example of Silicon Valley "playing it safe."

But the reality was simpler. ChatGPT has always been designed to walk a careful line between being informative and being responsible. It can explain how the law works or how medical diagnoses happen, but it will never tell you which pill to take or which clause to invoke. Those boundaries have always existed; they just became more visible.

OpenAI quickly defused the situation. "There's been no new change to our terms," said Karan Singhal, the head of health AI at OpenAI, emphasizing that ChatGPT continues to discuss legal and medical topics in an informational capacity. The shift users noticed might stem from ongoing fine-tuning following the October 29th update, which included a few minor safety changes meant to make the model's responses more consistent and cautious.

That language in the update, while new in phrasing, closely mirrors the company's previous policy, which already discouraged activities that could "impair the safety, wellbeing, or rights of others," such as "providing tailored legal, medical/health, or financial advice without review by a qualified professional." The updated policy doesn't mean ChatGPT will stop talking about law or medicine. It simply means that the chatbot will couch such information in stronger disclaimers, reminding users to consult licensed professionals for any personal or high-stakes advice.

The brief uproar revealed less about ChatGPT's rules and more about ours: our instinct to humanize machines. People expect their AI to sound familiar, predictable, even understanding. So when it suddenly changes its tone, we interpret that as intent: censorship, compliance, or betrayal. But these models don't have intent, at least not yet. The "advice ban" is a small but telling episode in the larger story of how humans and AI learn to coexist. As these systems become increasingly integrated into our daily lives, even the slightest silence, or the wrong tone, can echo loudly across the internet.
OpenAI clarifies that ChatGPT continues to offer medical and legal information despite widespread social media rumors claiming otherwise. The confusion arose from misinterpretation of a policy consolidation update.
Social media platforms erupted with claims that ChatGPT had suddenly banned medical and legal advice, causing widespread panic among users who rely on the AI chatbot for informational guidance. The rumors began circulating after screenshots showed ChatGPT refusing to answer questions about skin rashes and legal disputes, with users interpreting this as evidence of a major policy shift.[1][5]
Source: Digit
The betting platform Kalshi amplified the confusion with a now-deleted post claiming "JUST IN: ChatGPT will no longer provide health or legal advice," which quickly went viral across Reddit, X (formerly Twitter), and Discord.[1][4] Users lamented that their "once-chatty AI companion had suddenly gone quiet on some of life's most serious questions."[5]

Karan Singhal, OpenAI's head of health AI, quickly responded to the viral claims on X, stating definitively: "Not true. Despite speculation, this is not a new change to our terms. Model behavior remains unchanged. ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information."[1][2]
Source: Analytics Insight
OpenAI emphasized that ChatGPT's behavior "remains unchanged" and that the inclusion of policies surrounding legal and medical advice "is not a new change to our terms."[1] The company clarified that while ChatGPT continues to offer informational content about health and legal topics, it has always maintained boundaries against providing personalized professional advice.[4]

The confusion originated from OpenAI's October 29th policy update, which consolidated three separate policies into one unified document covering all OpenAI products and services.[1] The updated policy states that users cannot use ChatGPT for "provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional."[1][2]

This language closely mirrors OpenAI's previous usage policy, which already prohibited activities that "may significantly impair the safety, wellbeing, or rights of others," including "providing tailored legal, medical/health, or financial advice without review by a qualified professional and disclosure of the use of AI assistance and its potential limitations."[1][5]
While the core policy remains unchanged, OpenAI has been implementing safety improvements to ChatGPT's responses. The company recently published documentation detailing changes made to the chatbot's handling of sensitive conversations, working with over 170 mental health experts to help ChatGPT better recognize signs of distress.[2]
Source: Tom's Guide
The policy update reinforces OpenAI's stance against misuse, maintaining restrictions on activities involving threats, harassment, and other harmful behaviors. The company has also added special protections for minors and continues to restrict use in politically sensitive areas unless human review is involved.[3]