AI for Automation
2026-04-13 · Meta AI · AI health privacy · OpenAI liability bill · Muse Spark · health data security · AI regulation · Anthropic Mythos · AI accountability

Meta AI Health Data: Muse Spark Gave Bad Medical Advice

Meta's AI health app collects raw lab data and gives dangerous medical advice. OpenAI is backing legal immunity from AI harm — even in 'critical harm' cases.


Meta's newest AI, Muse Spark, is asking users to hand over their raw blood test results, then offering medical guidance that Wired's investigation found to be unreliable. That same week, OpenAI testified before the Illinois state legislature in support of legislation that would shield AI companies from harm lawsuits, including, explicitly, cases the bill itself labels "critical harm."

Two separate stories. One shared pattern: the companies building the most powerful consumer AI are simultaneously expanding what they can extract from you and shrinking what you can do when things go wrong.

When Meta's AI Asks for Your Health Data and Blood Work

Muse Spark is Meta's health and wellness AI companion. Unlike a standard chatbot (a text-based AI program that answers questions in conversation), Muse Spark actively requests raw lab data — the unprocessed numerical outputs from blood panels, hormone tests, and metabolic screenings. These are numbers your doctor typically interprets in clinical context, cross-referenced against your full medical history.

Wired reporter Reece Rogers tested the product firsthand. His verdict: the advice Muse Spark delivered was "terrible." Meta's own disclaimers acknowledge the app is not a substitute for actual physicians. But the product design steers users toward sharing intimate health numbers and trusting AI interpretations of those numbers.

[Image: Meta's Muse Spark AI health app collecting raw blood test lab results and health data from users]

The data risk profile is serious. Raw health lab data is:

  • Permanent — unlike a compromised password, your HbA1c reading (a blood sugar marker used to diagnose and monitor diabetes) cannot be changed or reset
  • Uniquely identifying — health biomarkers can fingerprint individuals even in datasets labeled "anonymous," per established privacy research
  • Commercially valuable — health data commands premium prices in data brokerage markets, where it is sold to insurers, advertisers, and in some cases employers
  • Legally ambiguous — HIPAA (the U.S. health data privacy law) covers hospitals, clinics, and insurers — not necessarily AI apps sold by tech companies as wellness tools

Feeding this data to Muse Spark means it enters Meta's data infrastructure — the same infrastructure powering one of the world's largest advertising targeting systems. The service is available 24/7, making it attractive for users wanting health guidance outside doctor's office hours. What happens to your data after you share it is governed by Meta's terms of service, which most users never read before clicking "agree."

The Same Week: OpenAI Sought Legal Immunity From AI Liability

OpenAI representatives appeared before the Illinois state legislature to back a proposed bill that would limit liability (the legal responsibility companies bear when their products cause harm to users) for AI companies. The bill's language is notable: it would protect firms even in cases the legislation itself characterizes as involving "critical harm."

Legal liability has historically been the primary market mechanism forcing companies to prioritize safety. Product liability law (the body of law requiring companies to compensate users harmed by defective or dangerous products) is why car seatbelts became standard equipment, why pharmaceutical companies conduct clinical trials before market release, and why software firms issue security patches for discovered vulnerabilities. Remove that liability and you remove the financial incentive to prevent harm before it reaches users.

[Image: OpenAI at the Illinois AI liability bill hearing, where it sought legal immunity for AI companies in critical harm cases]

The Illinois bill represents a textbook attempt at regulatory capture (the process by which industries gain control over the regulatory systems designed to govern them). AI liability law is being written right now — not in Washington, not after broad public debate — in state legislatures, with minimal consumer representation and sparse media coverage outside specialist outlets.

OpenAI's stated argument: excessive liability exposure would stifle AI innovation. The counterargument: liability law does not stifle innovation — it requires that innovation not harm people. That distinction becomes critical when the product is used for medical guidance, mental health support, and high-stakes personal decisions by millions of users who have no way to audit its accuracy.

Anthropic's Mythos: A Cybersecurity AI Model With Two Edges

Anthropic's new Mythos model is purpose-designed for cybersecurity research. The concern, raised by security experts cited in Wired's coverage, is that a highly capable security AI is inherently a dual-use tool (any technology that can serve both legitimate and harmful purposes — encryption software and vulnerability scanners are classic examples of tools that simultaneously protect defenders and empower attackers).

Mythos is designed to identify vulnerabilities (weak points in software that attackers can exploit), analyze exploit code (programs written to take advantage of software flaws), and reason about attack vectors (the specific pathways through which attackers gain unauthorized system access). These are precisely the capabilities a sophisticated attacker — or a state-sponsored hacking team — would most want to accelerate and automate.

Wired frames the release as "a wake-up call for developers who have long made security an afterthought." The irony is deliberate: a model built to strengthen defenses could dramatically lower the barrier to sophisticated attacks. A security team deploying Mythos to audit their own systems and a criminal group deploying the same model to probe others' systems are using identical tooling for opposite purposes — and Mythos cannot tell the difference. Our AI automation guides explain how dual-use AI tools are reshaping the cybersecurity landscape.

The Wayback Machine: The Accountability Layer Nobody Noticed Disappearing

Running parallel to the AI accountability stories is a quieter crisis: the Internet Archive's Wayback Machine — a tool that saves historical snapshots of websites, allowing anyone to see how a webpage looked years or decades ago — is losing access as multiple major news outlets cut off their data feeds.

This matters for AI accountability specifically because the Wayback Machine is how journalists, researchers, and courts verify what companies claimed in the past. AI companies have been documented quietly revising policy pages, updating training data disclosures mid-deployment, and editing terms of service between controversies. Without a functioning historical record, retroactive revision becomes essentially undetectable by the public.
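
The verification workflow itself is simple enough to show. The Internet Archive exposes a public availability endpoint (https://archive.org/wayback/available) that returns the capture closest to a given date. The sketch below queries it from Python; the page URL in the usage line is purely illustrative.

```python
import json
import urllib.parse
import urllib.request

def closest_snapshot(page_url: str, timestamp: str):
    """Return the Wayback Machine snapshot of `page_url` closest to
    `timestamp` (YYYYMMDD), or None if no capture exists."""
    query = urllib.parse.urlencode({"url": page_url, "timestamp": timestamp})
    endpoint = f"https://archive.org/wayback/available?{query}"
    with urllib.request.urlopen(endpoint) as resp:
        data = json.load(resp)
    closest = data.get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest and closest.get("available") else None

# Example: how did a policy page read at the start of 2024?
# The URL is illustrative; substitute the page you want to verify.
print(closest_snapshot("https://example.com/privacy", "20240101"))
```

This is exactly the kind of check that stops working for a given outlet once its pages are withdrawn from the Archive.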

Journalists, including four credited Wired security writers, are among those documenting the push to preserve the Archive's access. No victories have been announced as of this writing. The organizations cutting off access cite copyright law: a legally defensible position with devastating consequences for public accountability infrastructure that the entire internet relies on.

Four AI Accountability Stories, One Direction

Zoom out from any single story and the week of April 10–13 reveals a structural logic connecting all four threads. AI companies are managing legal, technical, and reputational exposure simultaneously — not through coordination with each other, but by responding to the same underlying incentives:

  • Expand data collection — Muse Spark requests health lab data; dating apps deploy AI agents to optimize social matching; Onix launches AI digital twins of health influencers, available 24/7 to dispense advice and sell products
  • Reduce legal exposure — State-level liability caps before federal AI regulation crystallizes into enforceable law
  • Advance capabilities faster than oversight follows — Mythos is available before regulators understand what a security-focused AI can realistically do in adversarial hands
  • Degrade accountability infrastructure — Cutting Wayback Machine access removes the one tool most commonly used to verify what AI companies told users in the past

None of this requires a conspiracy. It requires only that companies respond to financial incentives, and right now the incentives point toward extraction, not protection. The accountability layer is eroding in four directions at once, and the erosion is largely happening below the threshold of mainstream attention.

You can act on this now. Before uploading any health data to an AI app — including Meta's Muse Spark — read the privacy policy section on data sharing and retention. Specifically look for: whether your data is used to train AI models, how long it is retained, and which third parties receive it. If you are an Illinois resident, your state representative's contact information is publicly available and the AI liability bill is actively moving through the legislature. AI safety law is not settled. It is being written right now, in rooms most people are not in — and the version being drafted this week will govern what companies owe you when things go wrong.
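
For readers who want a starting point on that checklist, here is a minimal, hypothetical Python sketch: it scans a saved privacy policy for clauses touching training use, retention, and third-party sharing. The keyword lists are our own rough assumptions, not an official taxonomy, and an empty result means you still need to read the document yourself.

```python
import re

# Rough, illustrative keyword groups for the three questions above;
# these are assumptions, not an official or exhaustive taxonomy.
CHECKS = {
    "Used to train AI models?": ["train", "model improvement", "machine learning"],
    "How long is it retained?": ["retention", "retain", "delete", "deletion"],
    "Which third parties receive it?": ["third party", "third-party", "affiliate", "partner"],
}

def flag_clauses(policy_text: str) -> None:
    """Print every sentence of the policy that touches one of the checks."""
    sentences = re.split(r"(?<=[.!?])\s+", policy_text)
    for question, terms in CHECKS.items():
        print(f"\n{question}")
        hits = [s.strip() for s in sentences
                if any(t in s.lower() for t in terms)]
        if not hits:
            print("  (no matching clause found; read the policy manually)")
        for sentence in hits:
            print(f"  - {sentence}")

# Usage: save the policy text to policy.txt, then run this script.
with open("policy.txt", encoding="utf-8") as f:
    flag_clauses(f.read())
```

A script like this only tells you where to start reading; it does not replace reading the policy.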

