AI for Automation
2026-04-24 | OpenAI, ChatGPT, artificial intelligence, Gen Z, AI trust, AI adoption, public opinion, AI automation

OpenAI $200M Ads: Gen Z AI Trust Drops to 18% in One Year

Gen Z AI trust fell from 27% to 18% in one year. More than half of Americans believe AI does more harm than good. OpenAI's $200M ad strategy is solving the wrong problem.


OpenAI is spending $200 million on podcast advertising to shift public opinion on AI. The strategy assumes the problem is messaging. The data suggests otherwise: 900 million people use ChatGPT every week, and more than half of Americans believe AI will do more harm than good.

New polling data compiled by The Verge reveals a credibility crisis that no advertising campaign can fix — because the people who distrust AI are already using it every single day. The gap between adoption and acceptance is now one of the most consequential stories in technology.

AI Adoption vs. Trust: Numbers Silicon Valley Won't Discuss

The adoption-versus-trust mismatch is now one of the largest in consumer technology history:

  • 900 million weekly users — ChatGPT is one of the most rapidly adopted software products ever built
  • 67% of Americans used ChatGPT or Microsoft Copilot (Microsoft's AI writing and productivity assistant built into Office and Windows) in the past month
  • 50%+ believe AI does more harm than good — a majority verdict, despite or because of active daily usage
  • 80%+ express serious concern about AI technology — the highest tracked category of negative sentiment
  • Only 35% are genuinely excited about AI, even as the industry posts record investment numbers

NBC News polling now places AI's public favorability below ICE (Immigration and Customs Enforcement, the federal agency that consistently ranks among the least popular U.S. government institutions in polls). Sam Altman, OpenAI's CEO, acknowledged this directly: "If AI were a political candidate, it would be the least popular political candidate in history. And given the amazing things AI can do, I think there's got to be better marketing for AI."

That last phrase — "better marketing" — is the tell. His diagnosis determines his treatment. The polling data suggests the diagnosis is wrong.

[Chart: 2026 AI public trust polling, showing ChatGPT and OpenAI favorability below federal agencies]

Gen Z AI Trust Collapse: From Hope to Anger in One Year

If the overall American polling is troubling for the AI industry, the Gen Z (people born roughly 1997–2012, the first cohort to grow up with smartphones from childhood) numbers are structurally worse:

  • Hope about AI: fell from 27% to 18% — a 9 percentage point drop in just 12 months
  • Anger toward AI: climbed from 22% to 31% — a 9 percentage point rise in the same period
  • The swing is symmetrical in magnitude: the nine points of hope lost mirror the nine points of anger gained
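The swing described above is simple enough to check directly. A quick arithmetic sketch, using the poll figures as quoted in this article:

```python
# Gen Z sentiment toward AI, in percent, over 12 months (figures from the article).
hope_before, hope_after = 27, 18
anger_before, anger_after = 22, 31

hope_delta = hope_after - hope_before      # percentage-point change in hope
anger_delta = anger_after - anger_before   # percentage-point change in anger

print(hope_delta)   # -9: nine points of hope lost
print(anger_delta)  # +9: nine points of anger gained

# The magnitudes match exactly, which is what makes the swing symmetrical.
assert abs(hope_delta) == anger_delta == 9
```

Note that matching magnitudes show the two trends moved in lockstep; they do not, by themselves, prove that each lost point of hope converted into a point of anger.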

For context: social media trust among young people declined over roughly 6–8 years following the 2016 election controversies. AI is producing an equivalent sentiment shift in a single year. The pace of deterioration is the real story, not the absolute numbers.

This generation didn't grow up trusting tech companies. Gen Z entered adulthood watching Facebook run internal studies on teen psychological vulnerability without parental consent, watching "move fast and break things" (Silicon Valley's guiding philosophy of deploying products before understanding their social consequences) cause documented harm in real time, and watching platform promises transform into data extraction economics. They were already skeptical before AI arrived. AI isn't reversing that pattern — it's accelerating it on a compressed timeline.

OpenAI's $200 Million Marketing Miscalculation

OpenAI's investment in TBPN (The Big Pod Network, a podcast production company targeting business and technology audiences) reflects a specific theory: reach opinion-forming listeners with compelling narratives about AI capability, and public sentiment will follow. The theory fails on one critical data point — the people most negative about AI are already its most active users.

David Pierce, writing for The Verge, identifies the structural flaw directly:

"You can't advertise people out of reacting to their own experiences. This is a fundamental disconnect between how tech people with software brains see the world and how regular people are living their lives."

Pierce calls this the "Software Brain" (the tendency among tech professionals to model the world as a system of databases and processes that can be optimized through code, rather than as a collection of human experiences that resist automation). Software Brain logic reads 900 million users as a success metric. It structurally cannot process that those same 900 million users encounter AI-generated misinformation in their news feeds, factual errors in AI-written summaries, and automated performance management in their workplaces — and that these lived experiences are generating the 50%+ harm sentiment.

The DOGE project (Elon Musk's 2025 attempt to apply Software Brain logic to federal government operations, treating agencies as databases that could be restructured through code) ended in what analysts described as a spectacular public failure — because government systems and people are not automatable software loops. Every high-profile AI stumble reaches users who are already primed to see it as confirmation.

[Chart: Gen Z AI sentiment in the ChatGPT era, 2026: hope down from 27% to 18%, anger up from 22% to 31% in one year]

The Smart Home Warning Nobody Wants to Revisit

AI's current sentiment crisis has a direct historical parallel. Smart home technology — connected devices (Amazon Echo, Google Nest, Apple HomePod) that automate household tasks via voice commands and environmental sensors — received over a decade of sustained investment from three of the world's most valuable companies. All three failed to achieve mainstream adoption despite massive, repeated marketing campaigns.

The technology worked. People didn't want it, not because of poor messaging but because the product didn't match how regular people wanted to live. The deciding factor was a product-reality gap, not a marketing gap. That precedent matters here.

Dario Amodei, CEO of Anthropic (OpenAI's primary competitor and the company behind the Claude AI assistant), is now publicly validating the fears driving Gen Z anger: "Entry-level jobs in areas like finance, consulting, tech and many other areas — entry-level white-collar work — I worry that those things are going to be first augmented, but before long replaced by AI systems. We may indeed have a serious employment crisis on our hands."

An industry CEO predicting a "serious employment crisis" caused by his own products is not describing a marketing problem. Large consulting firms are now using AI to generate the justification slides for layoff announcements — not to study efficiency, but to automate the documentation of workforce cuts. When users observe this happening, their negative sentiment is not a misunderstanding that better podcast advertising will correct. It is a rational response to a correctly understood phenomenon.

High AI Usage, Low Trust: What the Data Predicts

The most significant finding in the combined polling data isn't negative sentiment alone — it's the combination of high usage with high distrust. Two-thirds of Americans used ChatGPT or Copilot last month. Over half believe AI causes harm. These are the same people. That combination creates compounding risk:

  • When AI errors occur, users primed for distrust interpret them as confirmation of intentional deception rather than technical limitation
  • When companies deploy AI to automate performance management or generate layoff documentation, users already skeptical of tech motives read it as exploitation — because that's what they're watching happen
  • When politicians supporting data center buildouts in their communities are voted out of office — and when opposition moves from online complaints to direct action — the tech industry loses the infrastructural permission it needs to operate
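The claim that the heavy users and the skeptics are "the same people" follows from basic inclusion-exclusion: if 67% of Americans used the tools last month and at least 50% believe AI does harm, the two groups must overlap by at least 17 percentage points. A minimal sketch of that lower bound, using the article's figures:

```python
def min_overlap(pct_a: float, pct_b: float) -> float:
    """Smallest possible share of a population belonging to both groups, in percent.

    If the two percentages sum past 100, the excess is forced overlap;
    otherwise the groups could in principle be disjoint.
    """
    return max(0.0, pct_a + pct_b - 100.0)

used_ai = 67.0        # used ChatGPT or Copilot in the past month
believe_harm = 50.0   # believe AI does more harm than good (a floor; the article says 50%+)

print(min_overlap(used_ai, believe_harm))  # 17.0: at least 17% are in both groups
```

The true overlap is almost certainly far higher than the 17-point floor; the bound just shows the two groups cannot be separate populations.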

Satya Nadella, Microsoft's CEO (the company holding a major stake in OpenAI), captured the stakes in a single phrase during an industry discussion: "This industry needs to earn the social permission to consume energy because we're doing good in the world." Social permission — not enthusiasm, not trust, but basic operating permission — is now the bar being articulated by the CEO of one of the world's most valuable companies. That framing alone shows how far the narrative has shifted since 2023.

The Gen Z anger metric (now at 31%, up 9 percentage points in 12 months) is the number to track for the next year. If it reaches 40%, the industry will face something $200 million in podcast advertising cannot solve: a generation that entered working life already angry at the technology, carrying that anger into its votes, purchasing decisions, and regulatory expectations for the next four decades. For readers who want to use AI tools in ways that actually deliver value rather than extract frustration, practical AI automation guides focus on exactly that distinction. Because the 18% question isn't whether OpenAI can buy better perception. It's whether the industry treats Gen Z anger as product feedback before that number becomes structurally irreversible.

