AI for Automation
2026-04-25 · AI trust crisis · Gen Z AI anger · ChatGPT · Sam Altman · Meta layoffs 2026 · DeepSeek V4 · AI automation · OpenAI

Gen Z Anger at AI Up 41%: ChatGPT's 900M-User Trust Crisis

31% of Gen Z are angry at AI — up 41% in one year. ChatGPT hits 900M users as public trust falls. Sam Altman admits the AI trust crisis is real.


ChatGPT crossed 900 million weekly active users this April — roughly one in nine people on Earth. At the same moment, new survey data confirmed that Gen Z anger at AI climbed to 31%, up from 22% the year before. AI automation tools are everywhere. The resentment is growing. And the industry's best answer so far has been to spend more on advertising.

OpenAI CEO Sam Altman finally said what the data already showed: "If AI were a political candidate, it would be the least popular political candidate in history."

When 900 Million Users Isn't a Win for AI Trust

For most industries, 900 million weekly users would represent total market dominance. For AI, it represents a paradox. The same week that figure was reported, separate polling showed that 50%+ of Americans believe AI will cause more harm than good — a number that has held steady despite billions in industry investment and roughly $200 million in advertising spent by OpenAI and its peers trying to shift it.

More than 80% of Americans describe themselves as very or somewhat concerned about AI. Only 35% of the US general population reports feeling excited about where the technology is heading. These numbers don't describe a public that needs more explanation. They describe a public that has formed a view — based on direct, daily experience — and isn't moving.

[Image: ChatGPT AI automation interface, illustrating the platform's 900 million weekly active users amid a growing public trust crisis]

Satya Nadella (CEO of Microsoft, which has invested heavily in OpenAI) acknowledged this week that public approval isn't a given:

"At the end of the day, I think this industry, to which I belong, needs to earn the social permission to consume energy because we're doing good in the world."

"Earn social permission" is not the language of a CEO who believes the case is already made. It's the language of someone who recognizes the work hasn't started yet.

Gen Z: The Most Fluent AI Users, the Most Angry

The sharpest data point belongs to the generation that grew up with smartphones and adopted AI tools faster than any other demographic. Gen Z (people born roughly between 1997 and 2012) leads all age groups in AI adoption — they use ChatGPT for essays and research, image generators for design, and AI assistants as everyday productivity tools. They understand these tools better than most people alive.

They are also the angriest about them. Here is what the 2026 survey data actually shows:

  • 31% of Gen Z report anger at AI — up from 22% the prior year, a 41% increase in a single year
  • Only 18% feel hopeful about AI — down from 27%, a 33% decline
  • Hope and anger are moving in opposite directions simultaneously, and the gap is widening
  • Both shifts accelerated during a year of intense AI expansion into everyday search, social feeds, and workplace tools
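The relative changes behind those bullets are easy to verify. A quick sketch (using only the survey percentages quoted above) shows where the "41% increase" and "33% decline" come from:

```python
def pct_change(old: float, new: float) -> float:
    """Relative change from old to new, in percent."""
    return (new - old) / old * 100

# Gen Z anger at AI: 22% last year -> 31% this year
print(round(pct_change(22, 31)))   # 41 (the "41% increase")

# Gen Z hope about AI: 27% -> 18%
print(round(pct_change(27, 18)))   # -33 (the "33% decline")
```

Note that both headline figures are relative changes (measured against last year's level), not the 9- and 9-point absolute shifts in the raw percentages.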

This is not anger generated by news coverage or policy debates. Gen Z is reacting to lived experience: AI-generated slop content (low-quality, algorithmically produced material that floods social feeds with filler) crowding their platforms, AI-written work polluting academic environments, and watching executives openly discuss which entry-level jobs (the first professional roles new graduates typically fill) will be automated first.

Dario Amodei, CEO of Anthropic — the company behind the Claude AI assistant — made the stakes explicit this week:

"Entry-level jobs in areas like finance, consulting, tech and many other areas — entry-level white-collar work — I worry that those things are going to be first augmented, but before long replaced by AI systems. We may indeed have a serious employment crisis on our hands."

When the CEO of a leading AI safety company warns publicly about a crisis driven by his own products, and the generation most affected reports record resentment, this is no longer a communications gap. It is a structural one.

Why $200 Million in Advertising Won't Close the AI Trust Gap

The standard industry response to declining public trust has been more messaging — more demos, more stage announcements, more ad spend. Nilay Patel (editor-in-chief of The Verge, one of the most-read technology publications globally) published a sharp counter-argument this week:

"AI doesn't have a marketing problem. People experience these tools every single day! ChatGPT has 900 million weekly users, trending to a billion, and everyone has seen AI Overviews in Google Search... You can't advertise people out of reacting to their own experiences."

This is the core insight: the problem is not that people misunderstand AI. The problem is that they understand it quite well — and what they understand generates concern rather than excitement. Messaging that contradicts their own lived experience does not build trust. It deepens skepticism.

Critics have described tech leadership as operating under a "software brain" worldview — the belief (common among engineers and executives who have succeeded by treating systems as optimizable databases) that social problems can be solved with the same tools that solve technical ones: better data, tighter algorithms, cleaner messaging. It works on code. It is not working on people.

Meta's AI Investment Math Problem

No company illustrates this tension more starkly than Meta (the company behind Facebook, Instagram, and WhatsApp). This week, Meta announced it is cutting 10% of its global workforce — approximately 8,000 employees — in May 2026, while simultaneously closing 6,000 open roles. The stated reason: funding the AI buildout.

Meta's planned capital expenditure (the money a company invests in infrastructure and equipment) for 2026 stands at $115–135 billion, up from $72.22 billion in 2025 — a 59% increase in one year at the low end of that range. Human roles are being eliminated to fund an infrastructure that may eventually replace more of them. For a generation watching this play out in real time, no press release softens what the math says.
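For readers who want to sanity-check the capex figure, the arithmetic is straightforward (values taken from the paragraph above; note that the widely reported 59% applies to the bottom of the $115–135 billion range):

```python
prev = 72.22          # Meta's 2025 capex, in $ billions
low, high = 115, 135  # announced 2026 capex range, in $ billions

print(round((low - prev) / prev * 100))   # 59 -> increase at the low end
print(round((high - prev) / prev * 100))  # 87 -> increase at the high end
```

If Meta spends at the top of its announced range, the year-over-year increase is closer to 87%, not 59%.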

[Image: AI data center infrastructure, representing Meta's $115 billion AI automation investment and the 8,000 employee layoffs in 2026]

Four Other Fault Lines Running This Week

The AI trust crisis does not exist in isolation. Several other developments this week add context to why public confidence is under pressure:

Elon Musk vs. OpenAI: The fraud lawsuit Musk filed against OpenAI goes to trial on April 27, 2026, in Oakland, California. Musk departed OpenAI's board in 2018 following a dispute over the company's direction. Whatever the legal outcome, the case is a public reminder that AI's trajectory is largely being shaped by personal power disputes between a small group of billionaires — not by the broader public that will live with the consequences.

DeepSeek V4: China's latest AI model released this week with major coding improvements, explicitly designed for compatibility with hardware from Huawei (the Chinese technology company that designs its own AI chips). This is a direct engineering response to US export controls on chips from NVIDIA (the primary supplier of high-performance AI processors). The model directly challenges Anthropic, Google, and OpenAI on benchmark performance — and it runs on hardware that bypasses American restrictions entirely.

Project Maven on Record: A new book — Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare by journalist Katrina Manson — documents the US military conducting more than 1,000 targeted strikes in Iran using Maven, an autonomous targeting acceleration system (a tool that speeds up military decisions about engaging targets) that began in 2017. Google's original involvement triggered major internal employee protests and a contract cancellation. The documented scale of deployment adds dimensions to the public trust problem that no advertising campaign will reach.

Anthropic's Mythos Breach: Anthropic's Claude Mythos cybersecurity model was under security embargo (restricted from public access while being evaluated for safety risks) ahead of its planned wide release. Reports this week confirmed that a "small group of unauthorized users" accessed it anyway. For a company whose entire market position rests on AI safety credibility, this is a significant and poorly timed public failure.

What to Actually Watch for If You Use AI Automation at Work

The trust gap has practical consequences that extend well beyond sentiment surveys. If you use AI tools for writing, research, analysis, or customer-facing work, the current environment shapes what comes next in direct ways:

  • Public concern creates regulatory pressure. When 80%+ of Americans report concern and 50%+ expect AI to cause net harm, that data is what legislators read before writing bills. Which tools stay available — and under what conditions — will be shaped by these numbers far more than by product announcements.
  • Gen Z skepticism will reshape workplaces within a decade. As the generation that is simultaneously the most AI-literate and the most resentful enters professional environments in large numbers, expect more internal pushback on AI mandates, more demands for transparency in hiring and evaluation processes, and more friction around AI-driven workflows.
  • The capability-trust gap is widening actively. Models improve every month. Public trust is not keeping pace. Organizations that deploy AI automation without addressing the trust dimension will face escalating friction from employees, customers, and regulators — often at the same time.

You can explore how to implement AI automation in ways that build rather than erode trust in our practical AI automation guides — covering real deployment cases where transparency and clear benefit framing made the difference between adoption and rejection.

The practical move right now is to use AI in ways that address the concerns driving backlash: being transparent when AI is involved in outputs, keeping humans in review loops for consequential decisions, and being honest about what these tools can and genuinely cannot do. Building trust is slower than building capability. In 2026, that is where the real work is — and where the competitive gap will open between organizations that get this right and those that don't.

Related Content: Get Started with AI Automation | Guides | More News
