12 Sources
[1]
Man develops psychosis following ChatGPT's salt-free diet
Reducing salt intake is often a solid way to improve your overall health. However, swapping out classic sodium chloride for sodium bromide is a solid way to give yourself acne, involuntary muscle spasms, and paranoid psychosis. Knowing this, it's probably best to avoid that chemical compound entirely -- even if ChatGPT tells you otherwise. In a recent case, one patient who was allegedly following the generative AI's nutritional suggestion was placed under a hospital's involuntary psychiatric hold for three weeks. During the early 20th century, bromide salts were available in an array of over-the-counter medications aimed at issues like anxiety, insomnia, and hysteria. As a result, historical records indicate 5 to 10 percent of psychiatric institution admissions at that time were attributable to bromide poisoning, or bromism. While it isn't nearly as big of a medical issue today, ingesting too much of the compound often leads to serious problems including pustular rashes, nausea, and vomiting, as well as neurological conditions like confusion and hallucinatory behavior. Cases of bromide poisoning largely disappeared once the US Food and Drug Administration started prohibiting its use in 1975, but cases have ticked upward in recent years thanks to the compound's reintroduction into unregulated dietary supplements and sedatives. Couple that with reports of generative AI programs repeatedly providing inaccurate (if not outright dangerous) suggestions, and it was probably only a matter of time before a situation arose like the one detailed in a case report published in the Annals of Internal Medicine. According to physicians, a 60-year-old man with no prior psychiatric or medical history arrived at their hospital's emergency room claiming a neighbor had poisoned him. He initially didn't disclose any medications or dietary supplements, and received "normal" evaluations for his vital signs and physical examination. Things started going downhill soon after hospital staff admitted him to a room. Once there, he admitted to multiple dietary restrictions and stated that he distilled his own water at home. Although he told the staff he was extremely thirsty, he became paranoid about the water they offered him. It only got worse from there. "In the first 24 hours of admission, he expressed increasing paranoia and auditory and visual hallucinations, which, after attempting to escape, resulted in an involuntary psychiatric hold for grave disability," the physicians recounted. Subsequent consultations with poison control experts, along with additional lab tests, led the medical team to conclude that bromism was the likeliest explanation for their patient's erratic behavior. After a multi-day regimen of intravenous fluids, electrolyte repletion, and the antipsychotic risperidone, doctors were finally able to get the full story. According to the patient, he began researching ways to cut salt from his diet after recently reading about its potential negative health effects. But the man wasn't trying to simply decrease his sodium intake. He was allegedly trying to eliminate it entirely. "He was surprised that he could only find literature related to reducing sodium from one's diet," the doctors wrote. "Inspired by his history of studying nutrition in college, he decided to conduct a personal experiment to eliminate chloride from his diet." When he consulted ChatGPT, the bot allegedly suggested swapping chloride for bromide. He then proceeded to do just that -- for three months. 
The case report's authors note it's possible their patient misinterpreted ChatGPT's suggestion due to how he phrased his prompt. The program may not have registered it as a medical query, and offered bromide "likely for other purposes, such as cleaning." The team unfortunately never received access to the man's chat logs, so they cautioned this theory was only speculation. Still, their own experiment with ChatGPT 3.5 indicated the hypothesis is plausible. "[W]hen we asked ChatGPT 3.5 what chloride can be replaced with, we also produced a response that included bromide," they wrote. "Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do." The patient's physical condition eventually normalized and his psychotic symptoms subsided during his three-week hospital stint. He received an all-clear at his post-discharge check-in two weeks later. It's as good a reminder as any that while AI may be decent at some things, it's in your best interest to leave the medical consultations to the human professionals.
[2]
Man sought diet advice from ChatGPT and ended up with 'bromide intoxication'
A man looking to cut chloride out of his diet switched to using a substance that slowly accumulated in his body and caused psychiatric symptoms. He got the idea from ChatGPT. A man consulted ChatGPT prior to changing his diet. Three months later, after consistently sticking with that dietary change, he ended up in the emergency department with concerning new psychiatric symptoms, including paranoia and hallucinations. It turned out that the 60-year-old had bromism, a syndrome brought about by chronic overexposure to the chemical compound bromide or its close cousin bromine. In this case, the man had been consuming sodium bromide that he had purchased online. A report of the man's case was published Tuesday (Aug. 5) in the journal Annals of Internal Medicine Clinical Cases. Live Science contacted OpenAI, the developer of ChatGPT, about this case. A spokesperson directed the reporter to the company's service terms, which state that its services are not intended for use in the diagnosis or treatment of any health condition, and its terms of use, which state, "You should not rely on Output from our Services as a sole source of truth or factual information, or as a substitute for professional advice." The spokesperson added that OpenAI's safety teams aim to reduce the risk of using the company's services and to train the products to prompt users to seek professional advice. In the 19th and 20th centuries, bromide was widely used in prescription and over-the-counter (OTC) drugs, including sedatives, anticonvulsants and sleep aids. Over time, though, it became clear that chronic exposure, such as through the abuse of these medicines, caused bromism. This "toxidrome" -- a syndrome triggered by an accumulation of toxins -- can cause neuropsychiatric symptoms, including psychosis, agitation, mania and delusions, as well as issues with memory, thinking and muscle coordination. Bromide can trigger these symptoms because, with long-term exposure, it builds up in the body and impairs the function of neurons. In the 1970s and 1980s, U.S. regulators removed several forms of bromide from OTC medicines, including sodium bromide. Bromism rates fell significantly thereafter, and the condition remains relatively rare today. However, occasional cases still occur, with some recent ones being tied to bromide-containing dietary supplements that people purchased online. Before his hospitalization, the man had been reading about the negative health effects of consuming too much table salt, also called sodium chloride. "He was surprised that he could only find literature related to reducing sodium from one's diet," as opposed to reducing chloride, the report noted. "Inspired by his history of studying nutrition in college, he decided to conduct a personal experiment to eliminate chloride from his diet." (Note that chloride is important for maintaining healthy blood volume and blood pressure, and health issues can emerge if chloride levels in the blood become too low or too high.) The patient consulted ChatGPT -- either ChatGPT 3.5 or 4.0, based on the timeline of the case. The report authors didn't get access to the patient's conversation log, so the exact wording that the large language model (LLM) generated is unknown. But the man reported that ChatGPT said chloride can be swapped for bromide, so he swapped all the sodium chloride in his diet with sodium bromide. 
The authors noted that this swap likely works in the context of using sodium bromide for cleaning, rather than dietary use. In an attempt to simulate what might have happened with their patient, the man's doctors tried asking ChatGPT 3.5 what chloride can be replaced with, and they also got a response that included bromide. The LLM did note that "context matters," but it neither provided a specific health warning nor sought more context about why the question was being asked, "as we presume a medical professional would do," the authors wrote. After three months of consuming sodium bromide instead of table salt, the man reported to the emergency department with concerns that his neighbor was poisoning him. His labs at the time showed a buildup of carbon dioxide in his blood, as well as a rise in alkalinity (the opposite of acidity). He also appeared to have elevated levels of chloride in his blood but normal sodium levels. Upon further investigation, this turned out to be a case of "pseudohyperchloremia," meaning the lab test for chloride gave a false result because other compounds in the blood -- namely, large amounts of bromide -- had interfered with the measurement. After consulting the medical literature and Poison Control, the man's doctors determined the most likely diagnosis was bromism. After being admitted for electrolyte monitoring and repletion, the man said he was very thirsty but was paranoid about the water he was offered. After a full day in the hospital, his paranoia intensified and he began experiencing hallucinations. He then tried to escape the hospital, which resulted in an involuntary psychiatric hold, during which he started receiving an antipsychotic. The man's vitals stabilized after he was given fluids and electrolytes, and as his mental state improved on the antipsychotic, he was able to inform the doctors about his use of ChatGPT. He also noted additional symptoms he'd noticed recently, such as facial acne and small red growths on his skin, which could be a hypersensitivity reaction to the bromide. He also noted insomnia, fatigue, muscle coordination issues and excessive thirst, "further suggesting bromism," his doctors wrote. He was tapered off the antipsychotic medication over the course of three weeks and then discharged from the hospital. He remained stable at a check-in two weeks later. "While it is a tool with much potential to provide a bridge between scientists and the nonacademic population, AI also carries the risk for promulgating decontextualized information," the report authors concluded. "It is highly unlikely that a medical expert would have mentioned sodium bromide when faced with a patient looking for a viable substitute for sodium chloride." They emphasized that, "as the use of AI tools increases, providers will need to consider this when screening for where their patients are consuming health information." Adding to the concerns raised by the case report, a different group of scientists recently tested six LLMs, including ChatGPT, by having the models interpret clinical notes written by doctors. They found that LLMs are "highly susceptible to adversarial hallucination attacks," meaning they often generate "false clinical details that pose risks when used without safeguards." Applying engineering fixes can reduce the rate of errors but does not eliminate them, the researchers found. This highlights another way in which LLMs could introduce risks into medical decision-making.
[3]
Man Follows Diet Advice From ChatGPT, Ends Up With Psychosis
A case study out this month offers a cautionary tale ripe for our modern times. Doctors detail how a man experienced poison-caused psychosis after he followed AI-guided dietary advice. Doctors at the University of Washington documented the real-life Black Mirror episode in the Annals of Internal Medicine: Clinical Cases. The man reportedly developed poisoning from the bromide he had ingested for three months on ChatGPT's recommendation. Thankfully, his condition improved with treatment, and he successfully recovered. Bromide compounds were once commonly used in the early 20th century to treat various health problems, from insomnia to anxiety. Eventually, though, people realized bromide could be toxic in high or chronic doses and, ironically, cause neuropsychiatric issues. By the 1980s, bromide had been removed from most drugs, and cases of bromide poisoning, or bromism, dropped along with it. Still, the ingredient remains in some veterinary medications and other consumer products, including dietary supplements, and the occasional case of bromism does happen even today. This incident, however, might be the first-ever bromide poisoning fueled by AI. According to the report, the man visited a local emergency room and told staff that he was possibly being poisoned by his neighbor. Though parts of his physical exam were fine, the man grew agitated and paranoid, refusing to drink water given to him even though he was thirsty. He also experienced visual and auditory hallucinations and soon developed a full-blown psychotic episode. In the midst of his psychosis, he tried to escape, after which doctors placed him in an "involuntary psychiatric hold for grave disability." Doctors administered intravenous fluids and an antipsychotic, and he began to stabilize. They suspected early on that bromism was to blame for the man's illness, and once he was well enough to speak coherently, they found out exactly how it ended up in his system. The man told the doctors that he started taking sodium bromide intentionally three months earlier. He had read about the negative health effects of having too much table salt (sodium chloride) in your diet. When he looked into the literature, though, he only came across advice on how to reduce sodium intake. "Inspired by his history of studying nutrition in college," the doctors wrote, the man instead decided to try removing chloride from his diet. He consulted ChatGPT for help and was apparently told that chloride could be safely swapped with bromide. With the all-clear from the AI, he began consuming sodium bromide bought online. Given the timeline of the case, the man had likely been using ChatGPT 3.5 or 4.0. The doctors didn't have access to the man's chat logs, so we'll never know exactly how his fateful consultation unfolded. But when they asked ChatGPT 3.5 what chloride can be replaced with, it came back with a response that included bromide. It's possible, even likely, that the man's AI was referring to examples of bromide replacement that had nothing to do with diet, such as for cleaning. The doctors' ChatGPT notably did state in its reply that the context of this replacement mattered, they wrote. But the AI also never provided a warning about the dangers of consuming bromide, nor did it ask why the person was interested in this question in the first place. As for the man himself, he did slowly recover from his ordeal. He was eventually taken off antipsychotic medication and discharged from the hospital three weeks after admission. 
And at a two-week follow-up, he remained in stable condition. The doctors wrote that while tools like ChatGPT can "provide a bridge between scientists and the nonacademic population, AI also carries the risk for promulgating decontextualized information." With some admirable restraint, they added that a human medical expert probably wouldn't have recommended switching to bromide to someone worried about their table salt consumption. Honestly, I'm not sure any living human today would give that advice. And that's why having a decent friend to bounce our random ideas off should remain an essential part of life, no matter what the latest version of ChatGPT is.
[4]
Man develops rare condition after ChatGPT query over stopping eating salt
US medical journal article about 60-year-old with bromism warns against using AI app for health information A US medical journal has warned against using ChatGPT for health information after a man developed a rare condition following an interaction with the chatbot about removing table salt from his diet. An article in the Annals of Internal Medicine reported a case in which a 60-year-old man developed bromism, also known as bromide toxicity, after consulting ChatGPT. The article described bromism as a "well-recognised" syndrome in the early 20th century that was thought to have contributed to almost one in 10 psychiatric admissions at the time. The patient told doctors that after reading about the negative effects of sodium chloride, or table salt, he consulted ChatGPT about eliminating chloride from his diet and started taking sodium bromide over a three-month period. This was despite reading that "chloride can be swapped with bromide, though likely for other purposes, such as cleaning". Sodium bromide was used as a sedative in the early 20th century. The article's authors, from the University of Washington in Seattle, said the case highlighted "how the use of artificial intelligence can potentially contribute to the development of preventable adverse health outcomes". They added that because they could not access the patient's ChatGPT conversation log, it was not possible to determine the advice the man had received. Nonetheless, when the authors consulted ChatGPT themselves about what chloride could be replaced with, the response also included bromide, did not provide a specific health warning and did not ask why the authors were seeking such information -- "as we presume a medical professional would do", they wrote. The authors warned that ChatGPT and other AI apps could "generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation". ChatGPT's developer, OpenAI, has been approached for comment. The company announced an upgrade of the chatbot last week and claimed one of its biggest strengths was in health. It said ChatGPT -- now powered by the GPT-5 model -- would be better at answering health-related questions and would also be more proactive at "flagging potential concerns", such as serious physical or mental illness. However, it stressed that the chatbot was not a replacement for professional help. The journal's article, which was published last week before the launch of GPT-5, said the patient appeared to have used an earlier version of ChatGPT. While acknowledging that AI could be a bridge between scientists and the public, the article said the technology also carried the risk of promoting "decontextualised information" and that it was highly unlikely a medical professional would have suggested sodium bromide when a patient asked for a replacement for table salt. As a result, the authors said, doctors would need to consider the use of AI when checking where patients obtained their information. The authors said the bromism patient presented himself at a hospital and claimed his neighbour might be poisoning him. He also said he had multiple dietary restrictions. Despite being thirsty, he was noted as being paranoid about the water he was offered. He tried to escape the hospital within 24 hours of being admitted and, after being sectioned, was treated for psychosis. Once the patient stabilised, he reported having several other symptoms that indicated bromism, such as facial acne, excessive thirst and insomnia.
[5]
Man poisoned himself after taking medical advice from ChatGPT
A man accidentally poisoned himself and spent three weeks in hospital after turning to ChatGPT for health advice. A US medical journal reported that a 60-year-old man developed a rare condition after he removed table salt from his diet and replaced it with sodium bromide. The man "decided to conduct the personal experiment" after consulting ChatGPT on how to reduce his salt intake, according to a paper in the Annals of Internal Medicine. The experiment led to him developing bromism, a condition that can cause psychosis, hallucinations, anxiety, nausea and skin problems such as acne. The condition was common in the 19th century and early 20th century, when bromide tablets were routinely prescribed as a sedative, for headaches, and to control epilepsy. The tablets were believed to contribute to up to 8pc of psychiatric admissions. Today, the condition is practically unheard of, with sodium bromide commonly used as a pool cleaner. No previous mental health problems According to the medical paper, the man arrived at an emergency department "expressing concern that his neighbour was poisoning him". He later attempted to flee the hospital before he was sectioned and placed on a course of anti-psychotic drugs. The man, who had no previous record of mental health problems, spent three weeks in hospital. Doctors later discovered the patient had consulted ChatGPT for advice on cutting salt out of his diet, although they were not able to access his original chat history. They tested ChatGPT to see if it returned a similar result. The bot continued to suggest replacing salt with sodium bromide and "did not provide a specific health warning". They said the "case highlights how the use of artificial intelligence (AI) can potentially contribute to the development of preventable adverse health outcomes". AI chatbots have long suffered from a problem known as hallucinations, which means they make up facts. They can also provide inaccurate responses to health questions, sometimes based on the reams of information harvested from the internet. Last year, a Google chatbot suggested users should "eat rocks" to stay healthy. The comments appeared to be based on satirical comments gathered from Reddit and the website The Onion. OpenAI said last week that a new update to its ChatGPT bot, GPT-5, was able to provide more accurate responses to health questions. The Silicon Valley business said it had tested its new tool using a series of 5,000 health questions designed to simulate common conversations with doctors. A spokesman for OpenAI said: "You should not rely on output from our services as a sole source of truth or factual information, or as a substitute for professional advice."
[6]
ChatGPT advice pushed a man into psychosis seen in the 20th century
Earlier this year, an uplifting story detailed how a mother turned to ChatGPT and discovered that her son was suffering from a rare neurological disorder, after more than a dozen doctors had failed to identify the real problem. Thanks to the AI chatbot, the family was able to access the required treatment and save a life. Not every case of ChatGPT medical evaluation leads to a miraculous outcome. The latest case of ChatGPT doling out misleading medical advice ended up giving a person a rare condition called bromide intoxication, or bromism, which leads to various neuropsychiatric issues such as psychosis and hallucinations. Trust ChatGPT to give you a disease from a century ago. A report published in the Annals of Internal Medicine describes a case involving a person who landed himself in a hospital due to bromism after seeking medical advice from ChatGPT regarding their health. The case is pretty interesting because the 60-year-old individual expressed suspicion that their neighbour was discreetly poisoning them. The whole episode began when the person came across reports detailing the negative impact of sodium chloride (aka common salt). After consulting with ChatGPT, the individual replaced the salt with sodium bromide, which eventually led to bromide toxicity. "He was noted to be very thirsty but paranoid about water he was offered," says the research report, adding that the patient distilled their own water and put multiple restrictions on what they consumed. The situation, however, soon worsened after he was admitted to a hospital, and evaluations were conducted. "In the first 24 hours of admission, he expressed increasing paranoia and auditory and visual hallucinations, which, after attempting to escape, resulted in an involuntary psychiatric hold for grave disability," adds the report. Don't forget the friendly human doctor The latest case of ChatGPT landing a person in a pickle is quite astounding, particularly due to the sheer rarity of the situation. "Bromism, the chronic intoxication with bromide is rare and has been almost forgotten," says a research paper. The use of bromine-based salts dates back to the 19th century, when it was recommended for curing mental and neurological diseases, especially in cases of epilepsy. In the 20th century, bromism (or bromide toxicity) was a fairly well-known problem. The consumption of bromide salts has also been documented as a form of sleep medication. Over time, it was discovered that the consumption of bromide salts leads to nervous system issues such as delusions, lack of muscle coordination, and fatigue, though severe cases are characterized by psychosis, tremors, or even coma. In 1975, the US government restricted the use of bromides in over-the-counter medicines. Now, the medical team that handled the case could not access the individual's ChatGPT conversations, but they were able to obtain similar worryingly misleading answers in their test. OpenAI, on the other hand, thinks that AI bots are the future of healthcare. "When we asked ChatGPT 3.5 what chloride can be replaced with, we also produced a response that included bromide. Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do," the team reported. 
Yes, there are definitely cases where ChatGPT has helped a person with health issues, but we can only expect positive results when the AI is provided detailed context and comprehensive information. Even so, experts suggest that one should exercise extreme caution. "The ability of ChatGPT (GPT-4.5 and GPT-4) to detect the correct diagnosis was very weak for rare disorders," says a research paper published in the Genes journal, adding that ChatGPT consultation can't be taken as a replacement for proper evaluation by a doctor. The biggest hurdle, obviously, is that the AI assistant can't reliably investigate the clinical features of a patient. Only when AI is deployed in a medical environment by certified health professionals can it yield trusted results.
[7]
Man who asked ChatGPT about cutting out salt from his diet was hospitalized with hallucinations
A 60-year-old man spent three weeks being treated at a hospital after replacing table salt with sodium bromide following consultation with a popular artificial intelligence chatbot. Three physicians published a case report on the matter in the Annals of Internal Medicine earlier this month. According to the report, the man had no prior psychiatric history when he arrived at the hospital "expressing concern that his neighbor was poisoning him." The man shared that he had been distilling his own water at home and the report noted he seemed "paranoid" about water he was offered. Bromism, or high levels of bromide, was considered after a lab report and consultation with poison control, the report said. "In the first 24 hours of admission, he expressed increasing paranoia and auditory and visual hallucinations, which, after attempting to escape, resulted in an involuntary psychiatric hold for grave disability," the case report said. Once his condition improved, the man shared that he had taken it upon himself to conduct a "personal experiment" to eliminate table salt from his diet after reading about its negative health effects. The report said he did this after consulting with ChatGPT, an artificial intelligence bot. He self-reported that the replacement went on for three months. The three physicians, all from the University of Washington, noted in the report that they did not have access to the patient's conversation logs with ChatGPT. However, they themselves asked ChatGPT 3.5 what chloride could be replaced with. According to the report, the response they received included bromide. "Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do," the report said. A representative for OpenAI, the company that created ChatGPT, did not immediately respond to a request for comment. The company noted in a statement to Fox News that its terms of service state that the bot is not to be used in the treatment of any health condition. "We have safety teams working on reducing risks and have trained our AI systems to encourage people to seek professional guidance," the statement said. Bromide toxicity was a more common toxic syndrome in the early 1900s, the report said, as it was present in a number of over-the-counter medications. It was believed to contribute to 8% of psychiatric admissions at the time, according to the report. It's a rare syndrome but cases have re-emerged recently "as bromide-containing substances have become more readily available with widespread use of the internet," the report said.
[8]
Man poisons himself after taking ChatGPT's dietary advice
A 60-year-old man wound up in the hospital after seeking dietary advice from ChatGPT and accidentally poisoning himself. According to a report published in the Annals of Internal Medicine, the man wanted to eliminate salt from his diet and asked ChatGPT for a replacement. The artificial intelligence (AI) platform recommended sodium bromide, a chemical often used in pesticides, as a substitute. The man then purchased the sodium bromide online and used it in place of salt for three months. The man eventually went to the hospital, fearing his neighbor was trying to poison him. There, doctors discovered he was suffering from bromide toxicity, which caused paranoia and hallucinations. Bromide toxicity was more common in the 20th century when bromide salts were used in various over-the-counter medications. Cases declined sharply after the Food and Drug Administration (FDA) phased out bromide between 1975 and 1989. The case highlights the dangers of relying on ChatGPT for complex health decisions without sufficient understanding or proper AI literacy.
[9]
Man Ends up in the Hospital After Seeking Health Advice from ChatGPT
It is unclear which version of ChatGPT was used. ChatGPT's health advice was the reason behind a man's trip to the hospital, as per a new case study. The study highlights that a 60-year-old person was suffering from a rare form of poisoning, which resulted in a range of symptoms, including psychosis. The study also mentions that the poisoning, identified as being caused by long-term sodium bromide consumption, occurred because the patient took advice from ChatGPT about dietary changes. Interestingly, with GPT-5, OpenAI is now focusing on health-related responses from the artificial intelligence (AI) chatbot, promoting it as a key feature. According to an Annals of Internal Medicine Clinical Cases report titled "A Case of Bromism Influenced by Use of Artificial Intelligence," a person developed bromism after consulting the AI chatbot ChatGPT for health information. The patient, a 60-year-old man with no past psychiatric or medical history, was admitted to the emergency room, concerned that he was being poisoned by his neighbour, the case study stated. He suffered from paranoia, hallucinations and suspicion of water despite being thirsty, insomnia, fatigue, issues with muscle coordination (ataxia), and skin changes, including acne and cherry angiomas. After immediate sedation and running a series of tests, including consultation with the Poison Control Department, the medical professionals were able to diagnose the condition as bromism. This syndrome occurs after long-term consumption of sodium bromide (or any bromide salt). According to the case study, the patient reported consulting ChatGPT to replace sodium chloride in his diet, and after receiving sodium bromide as an alternative, he began consuming it regularly for three months. The study claims, based on the timeline of the case, that either GPT-3.5 or GPT-4 was used for the consultation. However, the researchers note that they did not have access to the conversation log, so it is not possible to assess the prompt and response from the AI. It is likely that the man took ChatGPT's answer out of context. "However, when we asked ChatGPT 3.5 what chloride can be replaced with, we also produced a response that included bromide. Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do," the study added. Live Science reached out to OpenAI for a comment. A company spokesperson directed the publication to the company's terms of use, which state that one should not rely on output from ChatGPT as a "sole source of truth or factual information, or as a substitute for professional advice." After prompt treatment that lasted three weeks, the study said, the person began displaying improvements. "It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation," the researchers said.
[10]
60-Year-Old Gave Himself Early 20th Century Psychosis After He Went To ChatGPT For Diet Advice
The man, inspired by his nutrition studies in college, sought to eliminate a common ingredient found in food. A 60-year-old man gave himself an uncommon psychiatric disorder after asking ChatGPT for diet advice in a case published Tuesday by the American College of Physicians Journals. The man, who remained anonymous in the case study, told doctors he had eliminated sodium chloride, commonly known as table salt, from his diet after reading about its negative health effects. He said he could only find sources telling him how to reduce salt, but not eliminate it completely. Inspired by his nutrition studies in college, the man decided to completely eliminate sodium chloride from his diet as a personal experiment, with consultation from ChatGPT, researchers wrote. He maintained multiple dietary restrictions and even distilled his own water at home. "For 3 months, he had replaced sodium chloride with sodium bromide obtained from the internet after consultation with ChatGPT, in which he had read that chloride can be swapped with bromide, though likely for other purposes, such as cleaning," the case study read. While excess sodium can raise blood pressure and increase the risk of health issues, it is still necessary to consume a healthy amount of it. The man, who had no psychiatric history, eventually ended up at the hospital, worried that his neighbor was poisoning him. He told doctors he was very thirsty, but paranoid about the water he was offered. "In the first 24 hours of admission, he expressed increasing paranoia and auditory and visual hallucinations, which, after attempting to escape, resulted in an involuntary psychiatric hold for grave disability," the study read. Doctors concluded that the man was suffering from bromism, or bromide toxicity, a condition that is rare today but was more common in the early 20th century. The research noted that bromide was found in several over-the-counter medicines back then and contributed to up to 8% of psychiatric admissions at that time. The hospital treated the man for psychosis and discharged him weeks later. His case highlights the potential pitfalls of using AI to seek medical tips. Dr. Margaret Lozovatsky, a pediatrician, warned last year that AI often misses crucial context.
[11]
60-year-old man turns to ChatGPT for diet tips, ends up with a rare 19th-century illness
A 60-year-old man's quest to replace table salt, guided by ChatGPT's suggestion of sodium bromide, led to a severe case of bromism. He experienced hallucinations and paranoia, requiring hospitalization. The case highlights the dangers of relying on AI for health advice without critical evaluation, as the chatbot failed to provide adequate safety warnings. What began as a simple health experiment for a 60-year-old man looking to cut down on table salt spiralled into a three-week hospital stay, hallucinations, and a diagnosis of bromism -- a condition so rare today it is more likely to be found in Victorian medical textbooks than in modern clinics. According to a case report published on 5 August 2025 in the Annals of Internal Medicine, the man had turned to ChatGPT for advice on replacing sodium chloride in his diet. The AI chatbot reportedly suggested sodium bromide -- a chemical more commonly associated with swimming pool maintenance than seasoning vegetables. The man, who had no prior psychiatric or major medical history, followed the AI's recommendation for three months, sourcing sodium bromide online. His aim was to remove chloride entirely from his meals, inspired by past studies he had read on sodium intake and health risks. When he arrived at the emergency department, he complained that his neighbour was poisoning him. Lab results revealed abnormal electrolyte levels, including hyperchloremia and a negative anion gap, prompting doctors to suspect bromism. Over the next 24 hours, his condition worsened -- paranoia intensified, hallucinations became both visual and auditory, and he required an involuntary psychiatric hold. Physicians later learned he had also been experiencing fatigue, insomnia, facial acne, subtle ataxia, and excessive thirst, all consistent with bromide toxicity. Bromism was once common in the late 1800s and early 1900s when bromide salts were prescribed for ailments ranging from headaches to anxiety. At its peak, it accounted for up to 8% of psychiatric hospital admissions. The U.S. Food and Drug Administration phased out bromide in ingestible products between 1975 and 1989, making modern cases rare. Bromide builds up in the body over time, leading to neurological, psychiatric, and dermatological symptoms. In this case, the patient's bromide levels were a staggering 1700 mg/L -- more than 200 times the upper limit of the reference range. The Annals of Internal Medicine report notes that when researchers attempted similar queries on ChatGPT 3.5, the chatbot also suggested bromide as a chloride substitute. While it did mention that context mattered, it did not issue a clear toxicity warning or ask why the user was seeking this information -- a step most healthcare professionals would consider essential. The authors warn that while AI tools like ChatGPT can be valuable for disseminating health knowledge, they can also produce decontextualised or unsafe advice. "AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation," the case report states. After aggressive intravenous fluid therapy and electrolyte correction, the man's mental state and lab results gradually returned to normal. He was discharged after three weeks, off antipsychotic medication, and stable at a follow-up two weeks later. The case serves as a cautionary tale in the age of AI-assisted self-care: not all answers generated by chatbots are safe, and replacing table salt with pool chemicals is never a good idea.
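The "negative anion gap" mentioned above is the laboratory clue that, together with the apparently high chloride, pointed doctors toward bromism, and a quick calculation shows why. The sketch below uses the standard anion gap formula; the figures are typical reference values plus one illustrative, assumed chloride reading, not the patient's actual results:

\[ \text{Anion gap} = [\mathrm{Na^{+}}] - \left([\mathrm{Cl^{-}}] + [\mathrm{HCO_{3}^{-}}]\right) \]

With roughly normal values of 140 mEq/L sodium, 104 mEq/L chloride, and 24 mEq/L bicarbonate, the gap works out to 140 - (104 + 24) = 12 mEq/L. Many laboratory analyzers count circulating bromide as if it were chloride, so a heavy bromide load inflates the reported chloride; if the analyzer returned, say, 130 mEq/L of "chloride," the computed gap would be 140 - (130 + 24) = -14 mEq/L. That combination of a negative gap with normal sodium reflects the false chloride reading, or pseudohyperchloremia, described in the Live Science report above.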
[12]
The dangerous ChatGPT advice that landed a 60-year-old man in the hospital
Consulting AI for medical advice can have deadly consequences. A 60-year-old man was hospitalized with severe psychiatric symptoms -- plus some physical ones too, including intense thirst and coordination issues -- after asking ChatGPT for tips on how to improve his diet. What he thought was a healthy swap ended in a toxic reaction so severe that doctors put him on an involuntary psychiatric hold. After reading about the adverse health effects of table salt -- which has the chemical name sodium chloride -- the unidentified man consulted ChatGPT and was told that it could be swapped with sodium bromide. Sodium bromide looks similar to table salt, but it's an entirely different compound. While it's occasionally used in medicine, it's most commonly used for industrial and cleaning purposes -- which is what experts believe ChatGPT was referring to. Having studied nutrition in college, the man was inspired to conduct an experiment in which he eliminated sodium chloride from his diet and replaced it with sodium bromide he purchased online. He was admitted to the hospital after three months of the diet swap, amid concerns that his neighbor was poisoning him. The patient told doctors that he distilled his own water and adhered to multiple dietary restrictions. He complained of thirst but was suspicious when water was offered to him. Though he had no previous psychiatric history, after 24 hours of hospitalization, he became increasingly paranoid and reported both auditory and visual hallucinations. He was treated with fluids, electrolytes and antipsychotics and -- after attempting escape -- was eventually admitted to the hospital's inpatient psychiatry unit. Publishing the case study last week in the journal Annals of Internal Medicine Clinical Cases, the authors explained that the man was suffering from bromism, a toxic syndrome triggered by overexposure to the chemical compound bromide or its close cousin bromine. When his condition improved, he was able to report other symptoms like acne, cherry angiomas, fatigue, insomnia, ataxia (a neurological condition that causes a lack of muscle coordination), and polydipsia (extreme thirst), all of which are in keeping with bromide toxicity. "It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation," study authors warned. OpenAI, the developer of ChatGPT, states in its terms of use that the AI is "not intended for use in the diagnosis or treatment of any health condition" -- but that doesn't seem to be deterring Americans on the hunt for accessible healthcare. According to a 2025 survey, a little more than a third (35%) of Americans already use AI to learn about and manage aspects of their health and wellness. Though relatively new, trust in AI is fairly high, with 63% finding it trustworthy for health information and guidance -- scoring higher in this area than social media (43%) and influencers (41%), but lower than doctors (93%) and even friends (82%). Americans also find that it's easier to ask AI specific questions versus going to a search engine (31%) and that it's more accessible than speaking to a health professional (27%). Recently, mental health experts have sounded the alarm about a growing phenomenon known as "ChatGPT psychosis" or "AI psychosis," where deep engagement with chatbots fuels severe psychological distress. 
Reports of dangerous behavior stemming from interactions with chatbots have prompted companies like OpenAI to implement mental health protections for users. "While it is a tool with much potential to provide a bridge between scientists and the nonacademic population, AI also carries the risk for promulgating decontextualized information," the report authors concluded. "It is highly unlikely that a medical expert would have mentioned sodium bromide when faced with a patient looking for a viable substitute for sodium chloride."
A 60-year-old man experienced bromide poisoning and psychosis after following ChatGPT's suggestion to replace table salt with sodium bromide in his diet, highlighting potential risks of relying on AI for health advice.
A 60-year-old man in the United States experienced a rare case of bromide poisoning, or bromism, after following dietary advice allegedly provided by ChatGPT. The incident, detailed in a case report published in the Annals of Internal Medicine, highlights the potential dangers of relying on artificial intelligence for health-related information 1.
The patient, with no prior psychiatric or medical history, arrived at a hospital emergency room claiming his neighbor had poisoned him. After being admitted, he exhibited increasing paranoia and hallucinations, leading to an involuntary psychiatric hold 2.
As the patient's condition stabilized, doctors uncovered the source of his symptoms. The man had been researching ways to eliminate chloride from his diet after reading about the potential negative health effects of salt (sodium chloride). He consulted ChatGPT, which allegedly suggested swapping chloride for bromide 3.
Acting on this advice, the patient purchased sodium bromide online and consumed it for three months, replacing all table salt in his diet. This led to a buildup of bromide in his system, resulting in bromism 4.
Bromide compounds were commonly used in the early 20th century to treat various health issues, including anxiety and insomnia. However, their toxicity in high or chronic doses was eventually recognized, leading to their removal from most drugs by the 1980s 5.
This case brings attention to the potential risks of using AI for health advice. While AI can bridge the gap between scientific knowledge and the general public, it also carries the risk of providing decontextualized information 1.
The patient's doctors attempted to recreate the scenario by asking ChatGPT 3.5 what chloride could be replaced with. The AI's response included bromide but did not provide specific health warnings or inquire about the context of the question 2.
OpenAI, the developer of ChatGPT, states in its terms of service that its products are not intended for use in diagnosing or treating health conditions. The company emphasizes that users should not rely on its output as a sole source of truth or as a substitute for professional advice 4.
The patient's condition improved after a three-week hospital stay, during which he received intravenous fluids, electrolyte repletion, and antipsychotic medication. At a two-week follow-up appointment, he remained in stable condition 3.
This case serves as a cautionary tale about the limitations of AI in providing health advice. It underscores the importance of consulting healthcare professionals for medical guidance and highlights the need for critical evaluation of information obtained from AI sources 5.