Meta's Muse Spark AI requests raw health data but delivers questionable medical advice

Reviewed by Nidhi Govil


Meta launched Muse Spark, a generative AI model designed to answer health questions, but early testing reveals serious concerns. The AI actively requests sensitive medical data like lab reports and fitness tracker readings, yet isn't HIPAA compliant. WIRED's investigation found the chatbot provided potentially dangerous advice, including extremely low-calorie meal plans. Meanwhile, Meta may use shared health data to train future models and target advertisements.


Meta AI Launches Muse Spark with Health-Focused Capabilities

Meta's Superintelligence Labs unveiled its first generative AI model, Muse Spark, this week through the Meta AI app, and the company plans to integrate the technology across Facebook, Instagram, and WhatsApp in the coming weeks [1]. Meta claims Muse Spark was specifically designed to answer health questions and says it collaborated with over 1,000 physicians to curate training data for more factual responses [1].

When users ask how the consumer chatbot can assist them, it suggests basic functions like building workout routines or generating questions to bring to a doctor. But one feature stands out: Muse Spark directly asks users to upload raw health data. "Paste your numbers from a fitness tracker, glucose monitor, or a lab report. I'll calculate trends, flag patterns, and visualize them," the bot prompts [1].

Significant Privacy Concerns Around Uploading Sensitive Medical Data

Asking users to upload sensitive medical data isn't unique to Meta AI. OpenAI's ChatGPT and Anthropic's Claude offer similar health-focused modes where users can connect Apple or Android health information with a simple toggle [1], and Google allows medical data uploads to Fitbit for AI health coaching. The practice, however, raises substantial health data privacy issues.

Monica Agrawal, assistant professor at Duke University and cofounder of Layer Health, warns that these tools are not HIPAA compliant. "Usage of these models can be really tricky," Agrawal explains. "The more information you give it, the more context it has about you and, potentially, it can provide better responses. But on the flip side, there are major privacy concerns to sharing your health data without protections" [1].

Unlike doctor-patient interactions protected by HIPAA, information shared with Muse Spark lacks the same data protections. Meta's privacy policy states that anything shared in chats may be stored and used to train future AI models, with training data kept "for as long as we need it on a case-by-case basis." The company has also disclosed that it may use AI interactions for targeted advertisements [1].

Testing Reveals Harmful AI Advice and Questionable Recommendations

Beyond privacy risks, WIRED's testing uncovered troubling issues with the reliability of Muse Spark's health advice. When asked about weight loss, the chatbot generated an extremely low-calorie meal plan and recommendations for aggressive intermittent fasting [2]. And while the bot flagged some risks, it proceeded to help users pursue the potentially dangerous approaches anyway.

"A warning does not mean much if the model then goes on to help the user do the dangerous thing anyway," notes the investigation

2

. This pattern represents a critical flaw in current AI health tools: they sound cautious and informed while simultaneously reinforcing harmful assumptions. The polished tone delivers wrong advice with confidence, making failures more dangerous than obvious errors.

What Users Should Watch For as Generative AI Expands

As Muse Spark rolls out to millions of users across Meta's platforms, people face a choice between personalized assistance and substantial privacy risks. These tools are built and marketed like consumer products rather than medical services, lacking professional oversight despite handling lab reports and other sensitive health information. In the short term, users seeking quick health guidance may find the convenience appealing without understanding the trade-offs. Longer-term implications include the normalized sharing of private medical data with tech companies and growing reliance on AI systems that can provide harmful advice despite physician-curated datasets.
