New Zealand just threatened hospital workers for using ChatGPT
Health NZ banned all free AI tools for clinical notes and threatened staff with disciplinary action. ECRI ranks AI chatbot misuse as the #1 health tech hazard of 2026.
Hospital workers in New Zealand have been told to stop using ChatGPT, Claude, and Gemini to write patient notes — or face formal disciplinary action. Health NZ issued the warning after discovering staff in mental health services were quietly using free AI tools to draft clinical documentation.
The ban doesn't just cover direct use. Even typing patient information into ChatGPT, copying the AI's output, and rewriting it by hand is now prohibited — even if you remove the patient's name first.
What Happened in Rotorua
A memo circulated this week to Mental Health and Addiction Services staff at Rotorua Lakes warned that "instances" had been identified where staff used AI drafting tools for clinical notes. The organization's response was blunt: this is "strictly prohibited" and violations could result in formal disciplinary action.
Sonny Taite, Health NZ's director of digital innovation and AI, explained that free AI tools present "risks to data security, privacy and accountability." Under the new policy, any AI tool used in healthcare must first be registered with the National Artificial Intelligence and Algorithm Expert Advisory Group (NAIAEAG).
What's banned: ChatGPT, Claude, Gemini, Copilot, and any other free AI chatbot for clinical notes
What's approved: Heidi, a purpose-built AI scribe being rolled out across emergency departments
Consequence: Formal disciplinary action for staff who violate the policy
Why Staff Were Using AI in the First Place
The union response cuts to the heart of the issue. Fleur Fitzsimons from the Public Service Association argued the memo's disciplinary tone is counterproductive:
"It will make staff afraid to ask questions or seek help."
Fitzsimons says healthcare workers are turning to unauthorized AI tools because they're under "enormous pressure" — understaffed, overworked, and drowning in administrative tasks. Rather than threatening punishment, she argues Health NZ should invest in proper training and approved AI alternatives.
It's a pattern playing out globally. When institutions don't provide good tools, workers find their own. And when those workarounds involve patient data flowing through commercial AI servers, the privacy risks are real.
This Isn't Just a New Zealand Problem
Health NZ's crackdown comes as the patient safety organization ECRI ranked AI chatbot misuse as the #1 health technology hazard for 2026, placing it ahead of every other risk on its annual list.
The core concern: ChatGPT and similar tools aren't medical devices. They haven't been validated for clinical use, they're not regulated by health authorities, and when a doctor pastes patient symptoms into a chatbot, that data hits commercial servers with no healthcare-grade privacy guarantees.
ECRI specifically flagged two dangerous scenarios:
Risk 1: Clinicians using chatbots to identify treatments — AI may suggest plausible-sounding options that are outdated, incomplete, or wrong for that specific patient.
Risk 2: AI-drafted clinical notes — Chatbot-generated summaries may omit critical details, use imprecise medical terminology, or introduce errors that go unnoticed.
The Approved Alternative Exists — But It's Not Everywhere Yet
Health NZ isn't anti-AI. The organization is actively rolling out Heidi, a purpose-built AI scribe tool, across emergency departments. Unlike ChatGPT, Heidi is designed specifically for healthcare — it's registered with the advisory group, processes data under healthcare privacy rules, and is trained to handle medical terminology correctly.
The problem is that Heidi isn't available to everyone yet. Mental health workers in Rotorua clearly needed documentation help, and ChatGPT was the tool they could access right away. The gap between what staff need and what's officially provided is where the real risk lives.
What This Means If You Use AI at Work
Healthcare is the canary in the coal mine, but this pattern applies everywhere. If your workplace hasn't set clear AI policies yet, it probably will soon. The questions to ask yourself:
Are you putting sensitive data into free AI tools? Customer information, internal documents, financial data all carry the same kind of risk that patient notes do. Free-tier AI tools typically use your inputs for training unless you specifically opt out, and quick do-it-yourself redaction is weaker than it looks (see the sketch after these questions).
Does your employer know you're using AI? The Rotorua staff didn't think they were doing anything wrong, and neither did anyone else until the memo arrived.
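To make the de-identification point concrete, here's a minimal Python sketch of the pattern-based redaction people often rely on before pasting text into a chatbot. The patterns and the sample note are illustrative assumptions, not anything Health NZ uses, and the output shows the weakness: the patient's name sails straight through.

```python
import re

# Illustrative patterns for obvious identifiers (assumptions for this
# sketch, not Health NZ's rules). Pattern lists like this miss names,
# addresses, dates written as words, and rare clinical details.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
    "DATE": re.compile(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b"),
}

def redact(text: str) -> str:
    """Replace each pattern match with a [LABEL] placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Hypothetical note, invented for this example.
note = "Call Jane Smith on 021 555 0199 or jane@example.com about the 12/03/2026 review."
print(redact(note))
# Prints: Call Jane Smith on [PHONE] or [EMAIL] about the [DATE] review.
# "Jane Smith" passes through untouched: the redaction failed.
```

Even a longer pattern list leaks context, since a rare diagnosis plus a small town can identify a patient on its own, which is why Health NZ's policy treats stripping the name as insufficient.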
New Zealand is also introducing new legislation in 2026 that will classify some AI tools as "software as a medical device" — requiring them to meet safety, quality, and performance standards before use in healthcare. Expect similar regulations worldwide.