Man Develops Psychosis After Following ChatGPT's Dietary Advice to Replace Salt with Sodium Bromide

Reviewed by Nidhi Govil



A 60-year-old man experienced bromide poisoning and psychosis after following ChatGPT's suggestion to replace table salt with sodium bromide in his diet, highlighting potential risks of relying on AI for health advice.


A 60-year-old man in the United States experienced a rare case of bromide poisoning, or bromism, after following dietary advice allegedly provided by ChatGPT. The incident, detailed in a case report published in the Annals of Internal Medicine, highlights the potential dangers of relying on artificial intelligence for health-related information [1].

Source: HuffPost

The Incident

The patient, with no prior psychiatric or medical history, arrived at a hospital emergency room claiming his neighbor had poisoned him. After being admitted, he exhibited increasing paranoia and hallucinations, leading to an involuntary psychiatric hold [2].

The Root Cause

As the patient's condition stabilized, doctors uncovered the source of his symptoms. The man had been researching ways to eliminate chloride from his diet after reading about the potential negative health effects of salt (sodium chloride). He consulted ChatGPT, which allegedly suggested swapping chloride for bromide [3].

Acting on this advice, the patient purchased sodium bromide online and consumed it for three months, replacing all table salt in his diet. This led to a buildup of bromide in his system, resulting in bromism [4].

Historical Context and Modern Implications

Bromide compounds were commonly used in the early 20th century to treat various ailments, including anxiety and insomnia. However, their toxicity at high or chronic doses was eventually recognized, leading to their removal from most drugs by the 1980s [5].

This case draws attention to the risks of using AI for health advice. While AI can help bridge the gap between scientific knowledge and the general public, it also carries the risk of providing decontextualized information [1].

AI's Response and Limitations

Source: Gizmodo

The patient's doctors attempted to recreate the scenario by asking ChatGPT 3.5 what chloride could be replaced with. The AI's response included bromide but offered no specific health warning and did not ask why the question was being posed [2].

OpenAI, the developer of ChatGPT, states in its terms of service that its products are not intended for use in diagnosing or treating health conditions. The company emphasizes that users should not rely on its output as a sole source of truth or as a substitute for professional advice [4].

Recovery and Lessons Learned

Source: Economic Times

The patient's condition improved after a three-week hospital stay, during which he received intravenous fluids, electrolyte repletion, and antipsychotic medication. At a two-week follow-up appointment, he remained in stable condition [3].

This case serves as a cautionary tale about the limitations of AI in providing health advice. It underscores the importance of consulting healthcare professionals for medical guidance and of critically evaluating information obtained from AI sources [5].
