AI for Automation
2026-03-21 · ChatGPT · AI privacy · Gemini · data protection · Malwarebytes survey

43% of ChatGPT users just quit — privacy is the reason

A new Malwarebytes survey found that 43% of respondents abandoned ChatGPT and 42% left Gemini over data fears. 90% worry about AI using their information without consent.


A new survey from cybersecurity firm Malwarebytes reveals a striking backlash against AI chatbots: 43% of users have quit ChatGPT and 42% have abandoned Google's Gemini — all because of privacy concerns. The numbers suggest that the rapid spread of AI assistants may be hitting a wall as everyday users start asking a simple question: what happens to my data?

[Image: AI privacy concerns illustration showing data and AI chatbot interaction]

The numbers behind the AI privacy rebellion

Malwarebytes surveyed users about their attitudes toward AI data collection, and the results paint a picture of deep distrust:

🔒 90% worry about AI using their data without consent

🚫 88% refuse to freely share personal information with AI chatbots

🏥 84% won't share health-related information with any AI tool

🛡️ 82% actively opt out of data collection wherever they can

The privacy revolt goes beyond just AI tools. Among the survey respondents, 44% have stopped using Instagram and 37% quit Facebook — likely driven by concerns over Meta using social media posts to train its AI models.

People aren't just worried — they're fighting back

The survey found that users are taking concrete steps to protect their data:

71% use ad blockers — the most common privacy measure
46% use VPNs (tools that hide your internet activity from your provider)
Many enter fake data — deliberately giving AI tools wrong names, emails, or details

As the Malwarebytes report noted: "Many people are unsure of exactly how AI is being used for their benefit and the privacy implications, which lead to distrust and confusion."

Why this matters more than a typical survey

These aren't early adopters getting cold feet — this is a mass departure happening while AI companies are racing to add more users. ChatGPT recently crossed 900 million weekly users, and OpenAI, Google, and Anthropic are all expanding aggressively into enterprise and consumer markets.

But growth numbers don't tell the whole story if nearly half the people who try these tools eventually leave. The survey suggests a growing gap between AI companies racing forward and users pulling back.

The trust problem AI companies need to solve

The core issue isn't whether AI is useful — most people agree it is. The problem is transparency. Users don't know:

  • Whether their conversations are stored permanently
  • Whether their data trains future AI models
  • Who else can access what they've typed
  • Whether opting out actually works

Until AI companies provide clear, simple answers to these questions, the privacy rebellion is likely to grow. For now, the message from users is loud and clear: convenience isn't worth it if you can't trust where your data goes.

What you can do right now

If you're worried about privacy but still want to use AI tools, here are practical steps:

Check your settings — Both ChatGPT and Gemini have options to disable chat history and training data usage. Look for "Data Controls" in ChatGPT settings or "Activity" in Google's settings.
Use local AI instead — Tools like Ollama let you run AI models on your own computer, so your data never leaves your machine.
Never share sensitive info — Don't enter passwords, health details, financial data, or private business information into any cloud-based AI tool.
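As a practical illustration of the last point, here is a minimal sketch of scrubbing a prompt before pasting it into any cloud-based AI tool. The patterns and placeholder labels are illustrative assumptions, not a complete safeguard; real sensitive data takes many more forms than a regex can catch.

```python
import re

# Illustrative patterns only -- a sketch, not a complete privacy filter.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US Social Security format
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact jane.doe@example.com or 555-867-5309 about invoice 42."
print(redact(prompt))  # prints: Contact [EMAIL] or [PHONE] about invoice 42.
```

The idea generalizes: anything you would not email to a stranger should not go into a chatbot prompt, redacted or not.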

