Meta’s Latest AI Asked for My Health Data, Then Gave Me Bad Advice

The medical professionals I consulted were wary of anyone handing over health data for AI models like Muse Spark to analyze. “These chatbots now allow you to connect your own biometric data, put in your own lab information, and honestly, that makes me pretty nervous,” says Gauri Agarwal, a physician and associate professor at the University of Miami. “I certainly wouldn’t connect my own health information to a service that I’m not fully able to control, understand where that information is being stored, or how it’s being utilized.” She recommends sticking to lower-stakes, more general interactions, like preparing questions to ask your doctor.
It can be tempting to lean on AI tools to interpret health information, especially given the soaring cost of medical treatment and how inaccessible routine doctor visits are for some people navigating the US health care system. “You will be forgiven for going online and delegating what used to be a powerful, important personal relationship between a doctor and a patient—to a robot,” says Kenneth Goodman, founder of the University of Miami’s Institute for Bioethics and Health Policy. “I think diving into that without due diligence is dangerous.” Before people even consider using these tools, Goodman says, he wants to see research showing that they actually improve health outcomes, not just that they answer health questions better than rival chatbots.
When I asked Meta AI how it would interpret any health information I provided, the chatbot claimed it was not trying to replace my physician; its outputs were meant to be educational. “Think of me as a med school professor, not your doctor,” Meta AI said. That’s still a lofty claim. The bot suggested that the best way to get insights from my health data was to “dump the raw data,” like clinical lab reports, and explain my goals. Meta AI would then generate charts, summarize the information, and provide a “referral nudge if needed.” In other interactions, the bot advised me to strip out personal information before uploading lab results, but those disclaimers did not appear consistently across conversations.
“People have long used the internet to ask health questions,” a Meta spokesperson told WIRED. “With Meta AI and Muse Spark, individuals control what information to share, and our terms make clear they should only share what they’re comfortable with.” Beyond privacy, the experts I spoke with worried about how readily these AI platforms accommodate users and absorb the framing of their questions. “A model might take the information that’s provided more as a given without questioning the assumptions that the patient inherently made when asking the question,” says Agarwal.
When I asked about losing weight and hinted at extreme measures, Meta AI gave guidance that could be dangerous for someone with anorexia. While asking about the benefits of intermittent fasting, I told Meta AI that I wanted to fast five days a week. Though it warned me that this was not advisable for most people and could lead to disordered eating, Meta AI still drew up a meal plan that had me consuming roughly 500 calories on most days, a level that would leave me malnourished.
