2 Sources
2 Sources
[1]
Meta's New AI Asked for My Raw Health Data -- and Gave Me Terrible Advice
Meta's Superintelligence Labs launched its first generative AI model, called Muse Spark, earlier this week. It is currently available through the Meta AI app, but the company plans to integrate Muse Spark across all of its platforms -- including Facebook, Instagram, and WhatsApp -- in the coming weeks. Meta claims that Muse Spark was designed, in part, to be better at answering questions people have about their health. The company even worked with "over 1,000 physicians to curate training data that enables more factual and comprehensive responses," according to Meta's announcement blog. As the new model rolls out to millions of users, I tested Muse Spark to see how it would respond to health-related questions. When I asked how it could help me, the bot listed off a few basic uses, like building a workout routine or generating questions to ask my doctor, but a direct request for my health data stood out: "Paste your numbers from a fitness tracker, glucose monitor, or a lab report. I'll calculate trends, flag patterns, and visualize them," read the Meta AI output. "Example: 'Here are my last 10 blood pressure readings -- is there a pattern?'" Nudging users to upload their health data is not unique to Meta. OpenAI's ChatGPT and Anthropic's Claude both have chatbot modes designed specifically for helping users understand their health and make decisions. For example, you can open Claude and connect it to your Apple or Android health data with just the flip of an in-app toggle. Then, Claude will use that information as part of its answers. Google also lets you upload medical data to Fitbit for its AI health coach to parse. Handing over this kind of data to any AI tool is a risky decision, even if users are able to generate personalized advice. "Usage of these models can be really tricky," says Monica Agrawal, an assistant professor at Duke University and cofounder of Layer Health, an AI platform for hospitals to examine medical charts. "The more information you give it, the more context it has about you and, potentially, it can provide better responses. But on the flip side, there are major privacy concerns to sharing your health data without protections." Agrawal is concerned about users uploading sensitive data to chatbots since these commonly used AI tools are not compliant with HIPAA protections, the landmark US law that guards patients from having their sensitive health information exposed. Layer Health is HIPAA compliant. It's a high standard of privacy that people are used to experiencing during doctor visits. The information someone shares with a bot is much more loosely regulated, even if it's their clinical lab result. Anything you share in a chat with Meta AI may be stored and used to train future AI models. "We keep training data for as long as we need it on a case-by-case basis to ensure an AI model is operating appropriately, safely, and efficiently," reads Meta's privacy policy about generative AI. Meta has also stated it may tailor advertisements for users based on their interactions with the AI features.
[2]
You don't want to trust Meta's new Muse Spark AI with health advice
Meta's new Muse Spark may be pitched as a smarter AI model, but based on early testing, it sounds like the kind of AI you really do not want anywhere near serious medical decisions. The recent WIRED report talked about the experience with Muse Spark. Meta's health-focused AI model inside the Meta AI app did not show promising results. The chatbot reportedly encouraged users to upload raw medical information like lab reports, glucose monitor readings, and blood pressure logs, then offered to help analyze patterns and trends. Recommended Videos All of this sounds pretty useful till you realize two immediate concerns. You're handing over very sensitive data, and whether the AI is even remotely trustworthy enough to interpret it. What went wrong in the early tests? The first problem is kind of hard to ignore. In a day and age where your life already feels too transparent, Muse Spark is prying even further. It isn't unexpected to give out the necessary information for an accurate diagnosis, but handing over your personal health records to a chatbot for advice doesn't sound like a privacy risk. Unlike data shared with a doctor or hospital, information entered into a chatbot does not automatically come with the same expectations or protections people may assume are in place. This isn't a professionally vetted opinion, and that's what makes the idea shaky. The AI is being presented as a helpful tool, but the environment around it still looks much closer to a consumer product than a proper medical one. This isn't even the worst part Aside from the typical privacy risks involved when sharing personal data with any tech giant, you'd at least expect to get a serviceable answer. But the more serious problem appeared to be with the quality of the advice. In WIRED's testing, the chatbot reportedly generated an extremely low-calorie meal plan after being asked about weight loss and aggressive intermittent fasting. While the bot did flag some of the risks along this route, a warning does not mean much if the model then goes on to help the user do the dangerous thing anyway. This is where the real issue lies with a lot of AI health tools right now. They can sound cautious, informed, and seem balanced right up until the moment they start reinforcing bad assumptions. That polished tone can offer the wrong advice with confidence, which makes failure more dangerous.
Share
Share
Copy Link
Meta launched Muse Spark, a generative AI model designed to answer health questions, but early testing reveals serious concerns. The AI actively requests sensitive medical data like lab reports and fitness tracker readings, yet isn't HIPAA compliant. WIRED's investigation found the chatbot provided potentially dangerous advice, including extremely low-calorie meal plans, while Meta may use shared health data to train future models and target advertisements.

Meta's Superintelligence Labs unveiled its first generative AI model, Muse Spark, this week through the Meta AI app. The company plans to integrate the technology across Facebook, Instagram, and WhatsApp in coming weeks
1
. Meta claims Muse Spark was specifically designed to answer health questions, collaborating with over 1,000 physicians to curate training data for more factual responses1
.When users inquire how the consumer chatbot can assist them, it suggests basic functions like building workout routines or generating doctor questions. But one feature stands out: Muse Spark directly asks users to upload raw health data. "Paste your numbers from a fitness tracker, glucose monitor, or a lab report. I'll calculate trends, flag patterns, and visualize them," the bot prompts
1
.Requesting users to upload sensitive medical data isn't unique to Meta AI. OpenAI's ChatGPT and Anthropic's Claude offer similar health-focused modes where users can connect Apple or Android health information with a simple toggle
1
. Google allows medical data uploads to Fitbit for AI health coaching. However, the practice raises substantial health data privacy issues.Monica Agrawal, assistant professor at Duke University and cofounder of Layer Health, warns that these tools are not HIPAA compliant. "Usage of these models can be really tricky," Agrawal explains. "The more information you give it, the more context it has about you and, potentially, it can provide better responses. But on the flip side, there are major privacy concerns to sharing your health data without protections"
1
.Unlike doctor-patient interactions protected by HIPAA, information shared with Muse Spark lacks the same data protections. Meta's privacy policy states that anything shared in chats may be stored and used to train future AI models, keeping training data "for as long as we need it on a case-by-case basis." The company also disclosed it may use AI interactions for targeted advertisements
1
.Beyond privacy risks, WIRED's testing uncovered troubling issues with the quality of reliable health advice from Muse Spark. When asked about weight loss, the chatbot generated an extremely low-calorie meal plan and recommendations for aggressive intermittent fasting
2
. While the bot flagged some risks, it proceeded to help users pursue potentially dangerous approaches anyway."A warning does not mean much if the model then goes on to help the user do the dangerous thing anyway," notes the investigation
2
. This pattern represents a critical flaw in current AI health tools: they sound cautious and informed while simultaneously reinforcing harmful assumptions. The polished tone delivers wrong advice with confidence, making failures more dangerous than obvious errors.Related Stories
As Muse Spark rolls out to millions across Meta's platforms, users face a choice between personalized assistance and substantial privacy risks. The environment surrounding these tools resembles consumer products more than medical services, lacking professional oversight despite handling lab reports and sensitive health information. Short-term, users seeking quick health guidance may find the convenience appealing without understanding the trade-offs. Long-term implications include normalized sharing of private medical data with tech companies and potential reliance on AI systems that may provide harmful advice despite physician-curated datasets.
Summarized by
Navi
[2]
20 Jan 2026•Technology

19 Nov 2024•Technology

11 Jan 2026•Technology
