Curated by THEOUTPOST
On Tue, 19 Nov, 8:02 AM UTC
6 Sources
[1]
Experts Alarmed by People Uploading Their Medical Scans to Elon Musk's Grok AI
To celebrate its new image-understanding capabilities, Elon Musk has encouraged his followers to share medical documents like MRI scans and x-rays with Grok, his AI chatbot integrated into X-formerly-Twitter. "This is still early stage, but it is already quite accurate and will become extremely good," Musk wrote in a tweet at the end of last month. "Let us know where Grok gets it right or needs work." Despite the many baffling privacy implications, many of his fans have done just that. In some cases, they've even shared their results publicly.

Now, experts are warning against sharing such information with Grok -- echoing security concerns with chatbots at large, but also emphasizing the lack of transparency around Musk's companies. "This is very personal information, and you don't exactly know what Grok is going to do with it," Bradley Malin, a professor of biomedical informatics at Vanderbilt University, told The New York Times.

People sharing their medical information with Musk's chatbot may be under the impression that it's protected by the Health Insurance Portability and Accountability Act, or HIPAA. But the protections enshrined by the federal law, which prevent your doctor from sharing your private health info, do not extend beyond the medical purview, the NYT notes. Once you put it out in the open, like on a social media site, it's fair game. This is in stark contrast to when tech companies have official partnerships with hospitals to obtain data, Malin said, which are stipulated with detailed agreements on how that information is stored, shared, and used. "Posting personal information to Grok is more like, 'Wheee! Let's throw this data out there, and hope the company is going to do what I want them to do,'" Malin told the NYT.

The risks of inaccurate answers may also put patients in danger. Grok, for instance, misidentified a broken clavicle as a dislocated shoulder, according to the report. Doctors responding to Musk's tweet also found that the chatbot failed to recognize a "textbook case" of tuberculosis, and in another case mistook a benign cyst for testicles.

Then there are concerns with how the chatbots themselves use the information, because their underlying large language models rely on the conversations they have to fine-tune their capabilities. That means potentially anything you tell one could be used to train the chatbot, and considering their proclivity for hallucinating, the risk of one inadvertently blurting out sensitive information is not unfounded.

To that end, the privacy policies of X and Grok developer xAI are unsatisfying. xAI's, for example, claims that it will not sell user data to third parties, though it does share data with "related companies," per the NYT. There's reason enough to doubt how faithfully these policies are enforced in practice, however, because Musk brazenly encouraged people to submit medical documents even though xAI's policy states it "does not aim to collect sensitive personal information," including health and biometric data.

Still, it's possible that Musk's companies have explicit guardrails around health information shared with Grok that haven't been disclosed publicly, according to Matthew McCoy, an assistant professor of medical ethics and health policy at the University of Pennsylvania. "But as an individual user, would I feel comfortable contributing health data? Absolutely not," McCoy told the NYT.
[2]
Elon Musk asked people to upload their health data, X users obliged
Over the past few weeks, users on X have been submitting X-rays, MRIs, CT scans and other medical images to Grok, the platform's artificial intelligence chatbot, asking for diagnoses. The reason: Elon Musk, X's owner, suggested it.

"This is still early stage, but it is already quite accurate and will become extremely good," Musk said in a post. The hope is that if enough users feed the AI their scans, it will eventually get good at interpreting them accurately. Patients could get faster results without waiting for a portal message, or use Grok as a second opinion.

Some users have shared Grok's misses, like a broken clavicle that was misidentified as a dislocated shoulder. Others praised it: "Had it check out my brain tumor, not bad at all," one user wrote alongside a brain scan. Some doctors have even played along, curious to test whether a chatbot could confirm their own findings. Although there's been no similar public callout from Google's Gemini or OpenAI's ChatGPT, people can submit medical images to those tools, too.

The decision to share information as sensitive as your colonoscopy results with an AI chatbot has alarmed some medical privacy experts. "This is very personal information, and you don't exactly know what Grok is going to do with it," said Bradley Malin, a professor of biomedical informatics at Vanderbilt University who has studied machine learning in health care.

Potential consequences

When you share your medical information with doctors or on a patient portal, it is guarded by the Health Insurance Portability and Accountability Act, or HIPAA, the federal law that protects your personal health information from being shared without your consent. But it only applies to certain entities, like doctors' offices, hospitals and health insurers, as well as some companies they work with. In other words, what you post on a social media account or elsewhere isn't bound by HIPAA. It's like telling your lawyer that you committed a crime versus telling your dog-walker; one is bound by attorney-client privilege and the other can inform the whole neighborhood.

When tech companies partner with a hospital to get data, by contrast, there are detailed agreements on how it is stored, shared and used, said Malin. "Posting personal information to Grok is more like, 'Wheee! Let's throw this data out there, and hope the company is going to do what I want them to do,'" Malin said.

X did not respond to a request for comment. In its privacy policy, the company has said it will not sell user data to a third party, but it does share it with "related companies." (Despite Musk's invitation to share medical images, the policy also says X does not aim to collect sensitive personal information, including health data.)

Matthew McCoy, assistant professor of medical ethics and health policy at the University of Pennsylvania, noted that there may be very clear guardrails around health information uploaded to Grok that the company hasn't described publicly. "But as an individual user, would I feel comfortable contributing health data? Absolutely not."

It's important to remember that bits of your online footprint get shared and sold -- which books you buy, for example, or how long you spend on a website. These are all pieces of a puzzle, fleshing out a picture of you that companies can use for various purposes, such as targeted marketing.
Consider a PET scan that shows early signs of Alzheimer's disease becoming part of your online footprint, where future employers, insurance companies or even a homeowner's association could find it. Laws like the Americans with Disabilities Act and the Genetic Information Nondiscrimination Act can offer protection against discrimination based on certain health factors, but there are carve-outs for some entities, like long-term care insurance and life insurance plans. And experts noted that other forms of health-related discrimination still happen, even if they're not legal.

The risk of inaccurate results

Imperfect answers might be OK for people purely experimenting with the tool. But getting faulty health information could lead to tests or other costly care you don't actually need, said Suchi Saria, director of the machine learning and health care lab at Johns Hopkins University. Training an AI model to produce accurate results about a person's health takes high-quality and diverse data, and deep expertise in medicine, technology, product design and more, said Saria, who is also the founder of Bayesian Health, a company that develops AI tools for health care settings. Anything less than that, she said, "is a bit like a hobbyist chemist mixing ingredients in the kitchen sink."

Still, AI holds promise when it comes to improving patient experiences and outcomes in health care. AI models are already able to read mammograms and analyze patient data to find candidates for clinical trials. Some curious people may know the privacy risks and still feel comfortable uploading their data to support that mission. Malin calls the practice "information altruism." "If you strongly believe the information should be out there, even if you have no protections, go ahead," he said. "But buyer beware."
[3]
Elon Musk asked people to upload their medical data to X so his AI company could learn to interpret MRIs and CT scans
In Elon Musk's world, AI is the new MD. The X CEO is encouraging users to upload their medical information -- such as CT and bone scans -- to the platform so that Grok, X's artificial intelligence chatbot, can learn how to interpret them efficiently. "Try submitting x-ray, PET, MRI or other medical images to Grok for analysis," Musk wrote on X last month. "This is still early stage, but it is already quite accurate and will become extremely good. Let us know where Grok gets it right or needs work."

It turns out, Grok needs work. The AI successfully analyzed blood test results and identified breast cancer, according to some users. But it also grossly misinterpreted other pieces of information, according to physicians who responded to Musk's post. In one instance, Grok mistook a "textbook case" of tuberculosis for a herniated disk or spinal stenosis. In another, the bot mistook a mammogram of a benign breast cyst for an image of testicles.

Musk has been interested in the relationship between healthcare and AI for years, having launched the brain chip startup Neuralink in 2016. The company successfully implanted an electrode that allows a user to move a computer mouse with their mind, Musk claimed in February. xAI, the Musk startup that developed Grok, announced in May that it had raised a $6 billion funding round, giving Musk plenty of capital to invest in healthcare technologies, though it's uncertain how Grok will be further developed to address medical needs.

"We know they have the technical capability," Dr. Laura Heacock, associate professor at the New York University Langone Health Department of Radiology, wrote on X. "Whether or not they want to put in the time, data and [graphics processing units] to include medical imaging is up to them. For now, non-generative AI methods continue to outperform in medical imaging." X did not respond to Fortune's request for comment.

Musk's lofty goal of training his AI to make medical diagnoses is also a risky one, experts said. While AI has increasingly been used as a means to make complicated science more accessible and create assistive technologies, training Grok on data from a social media platform presents concerns for both Grok's accuracy and user privacy. Ryan Tarzy, CEO of health technology firm Avandra Imaging, said in an interview with Fast Company that asking users to directly input data, rather than sourcing it from secure databases of de-identified patient data, is Musk's way of trying to accelerate Grok's development. The information also comes from a limited sample of whoever is willing to upload their images and tests -- meaning the AI is not gathering data from sources representative of the broader and more diverse medical landscape.

Medical information shared on social media isn't bound by the Health Insurance Portability and Accountability Act (HIPAA), the federal law that protects patients' private information from being shared without their consent. That means there's less control over where the information goes after a user chooses to share it. "This approach has myriad risks, including the accidental sharing of patient identities," Tarzy said. "Personal health information is 'burned in' to many images, such as CT scans, and would inevitably be released in this plan."

The privacy dangers Grok may present aren't fully known, because X may have privacy protections that haven't been made public, according to Matthew McCoy, assistant professor of medical ethics and health policy at the University of Pennsylvania. He said users share medical information at their own risk.
[4]
Elon Musk Asked People to Upload Their Health Data. X Users Obliged
Over the past few weeks, users on X have been submitting X-rays, MRIs, CT scans and other medical images to Grok, the platform's artificial intelligence chatbot, asking for diagnoses. The reason: Elon Musk, X's owner, suggested it.

"This is still early stage, but it is already quite accurate and will become extremely good," Musk said in a post. The hope is that if enough users feed the A.I. their scans, it will eventually get good at interpreting them accurately. Patients could get faster results without waiting for a portal message, or use Grok as a second opinion.

Some users have shared Grok's misses, like a broken clavicle that was misidentified as a dislocated shoulder. Others praised it: "Had it check out my brain tumor, not bad at all," one user wrote alongside a brain scan. Some doctors have even played along, curious to test whether a chatbot could confirm their own findings. Although there's been no similar public callout from Google's Gemini or OpenAI's ChatGPT, people can submit medical images to those tools, too.

The decision to share information as sensitive as your colonoscopy results with an A.I. chatbot has alarmed some medical privacy experts. "This is very personal information, and you don't exactly know what Grok is going to do with it," said Bradley Malin, a professor of biomedical informatics at Vanderbilt University who has studied machine learning in health care.

The Potential Consequences of Sharing Health Information

When you share your medical information with doctors or on a patient portal, it is guarded by the Health Insurance Portability and Accountability Act, or HIPAA, the federal law that protects your personal health information from being shared without your consent. But it only applies to certain entities, like doctors' offices, hospitals and health insurers, as well as some companies they work with. In other words, what you post on a social media account or elsewhere isn't bound by HIPAA. It's like telling your lawyer that you committed a crime versus telling your dog walker; one is bound by attorney-client privilege and the other can inform the whole neighborhood.

When tech companies partner with a hospital to get data, by contrast, there are detailed agreements on how it is stored, shared and used, said Dr. Malin. "Posting personal information to Grok is more like, 'Wheee! Let's throw this data out there, and hope the company is going to do what I want them to do,'" Dr. Malin said.

X did not respond to a request for comment. In its privacy policy, the company has said it will not sell user data to a third party, but it does share it with "related companies." (Despite Musk's invitation to share medical images, the policy also says X does not aim to collect sensitive personal information, including health data.)

Matthew McCoy, assistant professor of medical ethics and health policy at the University of Pennsylvania, noted that there may be very clear guardrails around health information uploaded to Grok that the company hasn't described publicly. "But as an individual user, would I feel comfortable contributing health data? Absolutely not."

It's important to remember that bits of your online footprint get shared and sold -- which books you buy, for example, or how long you spend on a website. These are all pieces of a puzzle, fleshing out a picture of you that companies can use for various purposes, such as targeted marketing.
Consider a PET scan that shows early signs of Alzheimer's disease becoming part of your online footprint, where future employers, insurance companies or even a homeowner's association could find it. Laws like the Americans with Disabilities Act and the Genetic Information Nondiscrimination Act can offer protection against discrimination based on certain health factors, but there are carve-outs for some entities, like long-term care insurance and life insurance plans. And experts noted that other forms of health-related discrimination still happen, even if they're not legal.

The Risk of Inaccurate Results

Imperfect answers might be OK for people purely experimenting with the tool. But getting faulty health information could lead to tests or other costly care you don't actually need, said Suchi Saria, director of the machine learning and health care lab at Johns Hopkins University. Training an A.I. model to produce accurate results about a person's health takes high-quality and diverse data, and deep expertise in medicine, technology, product design and more, said Dr. Saria, who is also the founder of Bayesian Health, a company that develops A.I. tools for health care settings. Anything less than that, she said, "is a bit like a hobbyist chemist mixing ingredients in the kitchen sink."

Still, A.I. holds promise when it comes to improving patient experiences and outcomes in health care. A.I. models are already able to read mammograms and analyze patient data to find candidates for clinical trials. Some curious people may know the privacy risks and still feel comfortable uploading their data to support that mission. Dr. Malin calls the practice "information altruism." "If you strongly believe the information should be out there, even if you have no protections, go ahead," he said. "But buyer beware."
[5]
Is getting faster medical test results with Elon Musk's AI bot Grok safe? Doctors warn 'buyer beware'
Elon Musk's AI chatbot Grok has sparked debates as users submit sensitive medical images for analysis. While some view it as a leap forward in healthcare, experts are raising concerns about privacy and diagnostic reliability. Unlike traditional health data systems governed by laws like HIPAA, Grok lacks strict safeguards, risking exposure of personal information. Despite promises of innovation, experts warn against trusting such tools without accountability. Grok's future depends on balancing its potential with ethical data handling and accuracy.

Elon Musk's AI chatbot, Grok, has gained attention as users upload medical scans, such as MRIs and X-rays, for analysis. Musk, via his platform X (formerly Twitter), encouraged users to test Grok's abilities, claiming the tool is in its early stages but showing promise. While some users report useful insights, others cite inaccurate diagnoses, highlighting the risks of relying on experimental AI. The initiative has sparked discussions about the balance between technological innovation, accuracy, and user privacy.

Musk urged users to "try submitting x-ray, PET, MRI, or other medical images to Grok for analysis," adding that the tool "is already quite accurate and will become extremely good." Many users responded, sharing Grok's feedback on brain scans, fractures, and more. "Had it check out my brain tumor, not bad at all," one user posted. However, not all experiences were positive. In one case, Grok misdiagnosed a fractured clavicle as a dislocated shoulder; in another, it mistook a benign breast cyst for testicles. Such mixed results underline the complexity of using general-purpose AI for medical diagnoses.

Medical professionals like Suchi Saria, director of the machine learning and healthcare lab at Johns Hopkins University, stress that accurate AI in healthcare requires robust, high-quality, and diverse datasets. "Anything less," she warned, "is a bit like a hobbyist chemist mixing ingredients in the kitchen sink."

A significant concern is the privacy implications of uploading sensitive health information to an AI chatbot. Unlike healthcare providers governed by laws like the Health Insurance Portability and Accountability Act (HIPAA), platforms like X operate without such safeguards. "This is very personal information, and you don't exactly know what Grok is going to do with it," said Bradley Malin, professor of biomedical informatics at Vanderbilt University. X's privacy policy states that while it doesn't sell user data to third parties, it shares information with "related companies." Even xAI, the company behind Grok, advises users against submitting personal or sensitive information in prompts. Yet Musk's call to share medical scans contrasts with these warnings. "Posting personal information to Grok is more like, 'Wheee! Let's throw this data out there and hope the company is going to do what I want them to do,'" Malin added. Matthew McCoy, assistant professor of medical ethics at the University of Pennsylvania, echoed these concerns, saying, "As an individual user, would I feel comfortable contributing health data? Absolutely not."

Grok is part of xAI, Musk's AI-focused venture launched in 2023, which describes its mission as advancing "our collective understanding of the universe." The platform positions itself as a conversational AI with fewer guardrails than competitors like OpenAI's ChatGPT, enabling broader applications but also raising ethical questions.
In healthcare, AI is already transforming areas like radiology and patient data analysis. Specialized tools are used to detect cancer in mammograms and match patients with clinical trials. Musk's approach with Grok, however, bypasses traditional data collection methods, relying on user contributions without de-identification or structured safeguards. Ryan Tarzy, CEO of health tech startup Avandra Imaging, called this method risky, warning that "personal health information is 'burned in' to many images, such as CT scans, and would inevitably be released in this plan."

Experts caution that inaccuracies in Grok's results could lead to unnecessary tests or missed critical conditions. One doctor testing the chatbot noted that it failed to identify a "textbook case" of spinal tuberculosis, while another found that Grok misinterpreted breast scans, missing clear signs of cancer. "Imperfect answers might be okay for people purely experimenting with the tool," said Saria, "but getting faulty health information could lead to tests or other costly care you don't actually need."

Some users may knowingly share their medical data, believing in the potential benefits of advancing AI healthcare capabilities. Malin referred to this as "information altruism," where individuals contribute data to support a greater cause. However, he added, "If you strongly believe the information should be out there, even if you have no protections, go ahead. But buyer beware."

Despite Musk's optimistic vision, experts urge caution, emphasizing the importance of secure systems and ethical implementation in medical AI. Laws like the Americans with Disabilities Act and the Genetic Information Nondiscrimination Act offer some protections, but loopholes exist. For example, certain insurance providers are exempt from these laws, leaving room for potential misuse of health data.

Grok exemplifies the growing intersection of AI and healthcare, but its current implementation raises critical questions about privacy, ethics, and reliability. While the technology holds promise, users must weigh the risks of sharing sensitive medical information on public platforms. Experts recommend exercising extreme caution and prioritizing tools with clear safeguards and accountability. The success of AI in healthcare depends not just on innovation but on ensuring trust and transparency in its application.
[6]
PSA: You shouldn't upload your medical images to AI chatbots | TechCrunch
Here's a quick reminder before you get on with your day: Think twice before you upload your private medical data to an AI chatbot.

Folks are frequently turning to generative AI chatbots, like OpenAI's ChatGPT and Google's Gemini, to ask questions about their medical concerns and to better understand their health. Some have relied on questionable apps that use AI to decide if someone's genitals are free of disease, for example. And most recently, since October, users on social media site X have been encouraged to upload their X-rays, MRIs, and PET scans to the platform's AI chatbot Grok to help interpret their results.

Medical data is a special category with federal protections that, for the most part, only you can choose to circumvent. But just because you can doesn't mean you should. Security and privacy advocates have long warned that any uploaded sensitive data can then be used to train AI models, and risks exposing your private and sensitive information down the line.

Generative AI models are often trained on the data that they receive, under the premise that the uploaded data helps to build out the information and accuracy of the model's outputs. But it's not always clear how and for what purposes the uploaded data is being used, or whom the data is shared with -- and companies can change their minds. You must trust the companies largely at their word. People have found their own private medical records in AI training data sets -- and that means anybody else can, too, including healthcare providers, potential future employers, or government agencies. And most consumer apps aren't covered under the U.S. healthcare privacy law HIPAA, offering no protections for your uploaded data.

X owner Elon Musk, who in a post encouraged users to upload their medical imagery to Grok, conceded that the results from Grok are "still early stage," but said that the AI model "will become extremely good." The aim of asking users to submit their medical imagery is for the AI model to improve over time and become capable of interpreting medical scans with consistent accuracy. As for who has access to this Grok data, that isn't clear; as noted elsewhere, Grok's privacy policy says that X shares some users' personal information with an unspecified number of "related" companies.

It's good to remember that what goes on the internet never leaves the internet.
Elon Musk encourages X users to share medical scans with Grok AI, sparking debates on privacy, accuracy, and ethical implications in healthcare AI.
Elon Musk, CEO of X (formerly Twitter), has sparked controversy by encouraging users to upload medical scans to Grok, the platform's AI chatbot, for analysis [1]. This move, aimed at improving Grok's image interpretation capabilities, has raised significant concerns among medical privacy experts and healthcare professionals.
Many X users have obliged, sharing various medical images including X-rays, MRIs, and CT scans [2]. While some users reported positive experiences, others highlighted concerning inaccuracies in Grok's interpretations. For instance, the AI misidentified a broken clavicle as a dislocated shoulder and failed to recognize a "textbook case" of tuberculosis [3].
Medical privacy experts have expressed alarm over this practice. Bradley Malin, a professor at Vanderbilt University, emphasized the personal nature of the information being shared and the uncertainty surrounding Grok's data handling [4]. Unlike traditional healthcare settings protected by HIPAA, information shared on social media platforms lacks such safeguards.
Sharing sensitive medical data on public platforms poses several risks:
Privacy breaches: Personal health information could become part of users' online footprints, potentially accessible to employers, insurers, or other entities [4].
Misdiagnosis: Inaccurate AI interpretations could lead to unnecessary tests or missed critical conditions [5].
Data misuse: The lack of clear guidelines on how Grok will use or share this information raises concerns about potential misuse [1].
Healthcare and AI experts urge caution. Suchi Saria from Johns Hopkins University likened the approach to "a hobbyist chemist mixing ingredients in the kitchen sink" [4]. Matthew McCoy from the University of Pennsylvania advised against contributing personal health data to such platforms [3].
While AI shows promise in improving healthcare outcomes, experts stress the need for high-quality, diverse datasets and deep expertise in both medicine and technology for accurate results [5]. The controversy surrounding Grok highlights the delicate balance between innovation and ethical considerations in AI-driven healthcare solutions.
As the debate continues, users are advised to exercise extreme caution when sharing medical information online, prioritizing tools with clear safeguards and accountability [5]. The incident underscores the need for robust regulations and ethical guidelines in the rapidly evolving field of AI in healthcare.