2026-04-05

Hospital CEO wants AI instead of radiologists. The AI can't see the X-rays.

NYC hospital CEO wants AI to replace radiologists. Stanford found these tools ace tests without ever viewing actual images — no safeguard catches it.


At a panel hosted by Crain's New York Business, Mitchell Katz — CEO of NYC Health + Hospitals, the largest public hospital system in the United States — made a statement that left radiologists stunned: "We could replace a great deal of radiologists with AI at this moment, if we are ready to do the regulatory challenge."

The announcement landed just weeks after New York City witnessed the largest nurses strike in its recent history — a labor action driven by demands for better pay and safer staffing ratios. With costs rising and unions pushing back, hospital leadership is looking for a pressure valve. AI, it appears, is the chosen outlet.

[Image: Doctor reviewing medical imaging scans in a hospital reading room]

The Plan That Would Reshape Radiology

Katz's proposed model flips the current standard of care. Today, every scan — whether a chest X-ray or a mammogram (a low-dose X-ray used for breast cancer screening) — gets reviewed by a trained radiologist who bills for each read. Under Katz's plan, an AI system would perform the initial assessment. A human radiologist would only be called in if the AI flagged something as abnormal.
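
In code terms, the proposed workflow reduces to a single branch. The sketch below is illustrative only: the function names, the score threshold, and the scoring stub are hypothetical stand-ins invented here, not any vendor's actual API.

```python
# Illustrative sketch of an AI-first triage workflow like the one Katz
# described. Every name and number here is hypothetical.

THRESHOLD = 0.5  # assumed abnormality-score cutoff, invented for illustration

def ai_screen(scan: bytes) -> float:
    """Stand-in for a vendor model that scores a scan for abnormality."""
    return 0.0  # placeholder; a real system would compute this from the image

def triage(scan: bytes) -> str:
    score = ai_screen(scan)
    if score >= THRESHOLD:
        return "escalate"    # only now does a radiologist see the scan
    return "file_as_normal"  # by design, no human ever reviews this scan

print(triage(b"fake-scan-bytes"))  # -> "file_as_normal"
```

The danger lives in the second branch: a false negative is filed as normal and, by design, never reaches a human reader.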

He specifically cited breast cancer screening as a primary target for automation, promising "major savings" across the 11 public hospitals in the NYC Health + Hospitals network — which collectively serve hundreds of thousands of low-income and uninsured patients every year.

The financial logic appears straightforward on paper. Radiologists earn between $300,000 and $450,000 annually. AI systems charge per scan at a fraction of that cost. Across 11 hospitals processing thousands of images daily, the savings look attractive — unless you check what the AI actually does with those images.
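
To see why the pitch lands with administrators, it helps to run the back-of-envelope arithmetic. In the sketch below, only the salary range comes from this article; the annual read volume and the per-scan AI price are hypothetical placeholders, since neither figure has been disclosed.

```python
# Back-of-envelope cost comparison, for illustration only.
# The salary figure is the midpoint of the $300k-$450k range cited above;
# everything else is an assumption, not a real quote.

radiologist_salary = 375_000         # midpoint of the cited salary range
reads_per_radiologist_year = 20_000  # hypothetical annual read volume
ai_price_per_scan = 3.00             # hypothetical per-scan vendor price

human_cost_per_read = radiologist_salary / reads_per_radiologist_year
print(f"Human read: ${human_cost_per_read:.2f}")  # $18.75 under these assumptions
print(f"AI read:    ${ai_price_per_scan:.2f}")
```

A spreadsheet like this is the whole sales pitch. What it cannot price is a missed cancer on a scan the model never actually looked at.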

Stanford Found the AI Can't Actually See the Images

Here is where Katz's plan collides with research he apparently didn't account for. Stanford researchers recently tested vision-language models: AI systems trained to both "see" images and generate text explanations of what they observe. These models scored impressively on standard medical benchmarks (standardized tests used to rate AI performance on radiology tasks). That's what gets presented to hospital administrators in vendor sales decks.

But when the Stanford team looked closer, they discovered something alarming: the AI models were constructing elaborate, medically fluent explanations for findings on X-rays they had never actually processed. The models weren't hallucinating in the traditional sense. Instead, they were producing a phenomenon the researchers named an AI "mirage."

"In this epistemic mimicry, the model simulates the entire perceptual process that would have led to the answer... the trace may be fluent, coherent, and apparently image-based while being anchored to no image at all."

— Stanford researchers

A hallucination (when an AI invents false information) is often detectable because it sounds incoherent or contradicts known facts. A mirage is far more dangerous: it sounds exactly like a genuine radiology report because the model learned, from training data, precisely what one should look like. It reproduces that structure perfectly — without looking at the actual scan in front of it.

Standard AI quality-checking systems won't catch this. Those systems look for logical inconsistencies or factual errors. A mirage passes every check because it is internally consistent — it simply has nothing to do with the patient on the table.
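
One way to make that failure visible is to test whether the model's report depends on the image at all. The sketch below is not the Stanford team's methodology; it assumes a hypothetical generate_report() wrapper around a vision-language model and simply compares the report for the real scan against the report for a blank image.

```python
# Illustrative image-dependence check. generate_report() is a hypothetical
# stand-in for a vision-language model call; this is not Stanford's method.
from difflib import SequenceMatcher

import numpy as np

def generate_report(image: np.ndarray) -> str:
    """Placeholder for a vision-language model's radiology report output."""
    raise NotImplementedError("wire this to a real model")

def image_dependence(scan: np.ndarray) -> float:
    """Score near 0 means the report barely changes when the image is blanked,
    the signature of a 'mirage': fluent text anchored to no image at all."""
    report_real = generate_report(scan)
    report_blank = generate_report(np.zeros_like(scan))
    similarity = SequenceMatcher(None, report_real, report_blank).ratio()
    return 1.0 - similarity
```

A probe like this only tests whether the output is anchored to the image; it says nothing about whether a grounded report is correct, which is one reason it complements rather than replaces a human read.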

[Image: Medical AI technology interface displaying scan analysis in a clinical environment]

The Radiologists Calling It Dangerous

Mohammed Suhail, a radiologist at North Coast Imaging in San Diego, responded publicly after Katz's remarks were reported. He described the CEO's position as "undeniable proof that confidently uninformed hospital administrators are a danger to patients, and are easily duped by AI companies that are nowhere near capable of providing patient care."

Suhail's critique isn't simple professional defensiveness. He explained that radiology is not a binary "normal / abnormal" toggle. It requires contextual judgment: the patient's medical history, the technical quality of the scan, subtle tissue gradients that differentiate early-stage cancer from benign formations, and sometimes a second opinion from a colleague. An AI system placed at the front of this workflow — deciding alone whether to escalate a case to a human — has already made a clinical decision with no safety net.

"Any attempt to implement AI-only reads would immediately result in patient harm and death, and only someone with zero understanding of radiology would say something so naive."

— Mohammed Suhail, Radiologist, North Coast Imaging

Suhail added a harder observation about what is actually driving these decisions: "Hospitals are happy to cut costs even if it means patient harm, as long as it's legal." That last clause — "as long as it's legal" — is doing significant work. If no regulation explicitly prohibits AI-only radiology reads, hospital administrators have limited legal liability even if clinical outcomes deteriorate. The risk transfers from the institution to the patient.

Labor Pressure, Cost Targets, and a Convenient Narrative

The timing of Katz's remarks is not coincidental. NYC Health + Hospitals — a network serving a disproportionately low-income population — was under intense financial and public scrutiny following the city's largest nurses strike. Union demands for better staffing ratios and higher wages would significantly raise operating costs. AI automation is being positioned as the counterbalance.

Radiology is a natural target for this narrative because it superficially resembles a data-processing pipeline: images go in, reports come out. That framing erases what happens inside a diagnostic read — clinical judgment calibrated by years of training, awareness of each patient's individual history, and the recognition that some findings only matter in combination with others. AI vendors have been selling this simplified version of radiology to hospital procurement teams for years. The gap between what's demonstrated in a controlled benchmark and what happens in a production setting only becomes visible when patients are harmed.

The Regulatory Gap That Could Change Everything

Katz himself acknowledged the "regulatory challenge" — a significant concession. The FDA (U.S. Food and Drug Administration — the government body that approves medical diagnostic tools) has not cleared any AI system for autonomous, unreviewed radiology reads in a live hospital setting. Every AI radiology tool currently in clinical use still requires a licensed radiologist to sign off on the final interpretation.

That means Katz's vision isn't legally implementable tomorrow. But the danger lies in the direction. When CEOs of major hospital systems publicly endorse this model, they shape what gets funded, what lobbying priorities get set, and what regulatory frameworks the industry pushes for. If the FDA faces sustained pressure to fast-track AI radiology approvals — before Stanford's mirage research is peer-reviewed and widely cited in policy circles — patients could be exposed to AI-generated diagnoses that passed every benchmark without ever processing the actual scan.

Prior evidence is not encouraging. ChatGPT Health was previously found to perform "staggeringly badly" at identifying life-threatening medical emergencies. That finding did not slow healthcare AI adoption in procurement conversations, because the people approving purchase orders rarely read the clinical failure reports.

What Patients Can Do Right Now

If you receive imaging at a public hospital — particularly in New York City, but this dynamic is not limited to NYC — you have the right to ask whether a board-certified radiologist personally reviewed your scan. In most U.S. jurisdictions, you also have the right to request a copy of your imaging files and seek an independent second opinion from a radiologist outside the same hospital system.

More broadly: when you see headlines about AI cutting costs in healthcare, ask who bears the downside risk when the AI is wrong. In manufacturing, a wrong AI decision produces a defective part. In radiology, it produces a missed cancer diagnosis — one that may not surface until months later, when treatment options have narrowed.
