2026-03-22 · AI research · cognitive surrender · Wharton · ChatGPT · AI productivity · human reasoning

A new Wharton study shows how AI makes us stop thinking

Researchers tested 1,372 people and found 80% blindly followed wrong AI answers. They call it 'cognitive surrender' — and it's already happening to you.


When researchers at the Wharton School gave 1,372 people access to an AI assistant, something alarming happened: 80% of them followed the AI's answers — even when those answers were deliberately wrong.

The study, titled "Thinking — Fast, Slow, and Artificial," by Steven D. Shaw and Gideon Nave, introduces a concept they call "cognitive surrender" — the moment you stop thinking for yourself and let AI do it instead. Not because you chose to delegate. Because your brain quietly gave up.

[Image: Cognitive Surrender study visualization showing how AI reshapes human reasoning through System 3]

Your brain now has three modes — and one of them isn't yours

You've probably heard of Daniel Kahneman's famous framework: System 1 (fast, gut-feeling thinking) and System 2 (slow, careful reasoning). The Wharton researchers say there's now a third system: System 3 — artificial cognition.

System 3 is what happens when you ask ChatGPT, Claude, or any AI a question instead of figuring it out yourself. It's not inside your head. It takes zero mental effort. And according to this study, once it's available, your brain treats it like the path of least resistance — and takes it.

The three systems of thinking:

● System 1: Fast, automatic, intuitive — milliseconds, no effort
● System 2: Slow, deliberate reasoning — seconds to minutes, high effort
● System 3 (AI): External, algorithmic — zero effort on your side

The experiment: what 10,000 trials revealed

Across three carefully designed experiments (9,593 trials total), the researchers gave participants logic puzzles — the kind designed to have an obvious but wrong intuitive answer. Participants could either solve them alone or ask an AI assistant for help.

Here's the twist: the researchers secretly controlled whether the AI gave correct or incorrect answers.

The numbers are brutal

Without AI (baseline): 45.8% accuracy
With accurate AI: 71.0% accuracy (+25 points)
With wrong AI: 31.5% accuracy (-14 points below baseline)

Read that last number again. People with access to wrong AI performed worse than people with no AI at all. The AI didn't just fail to help — it actively made them dumber.
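The gaps above are straight subtraction from the three reported accuracies. A quick sanity check (figures from the study; the rounding is mine):

```python
# Accuracy figures as reported in the Wharton study (percent correct).
baseline = 45.8      # solving alone, no AI
accurate_ai = 71.0   # AI gives correct answers
wrong_ai = 31.5      # AI gives deliberately wrong answers

gain = accurate_ai - baseline   # points gained with accurate AI
loss = baseline - wrong_ai      # points lost below baseline with wrong AI

print(f"Accurate AI: +{gain:.1f} points")   # +25.2
print(f"Wrong AI:    -{loss:.1f} points")   # -14.3
```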

And the scariest part? Participants chose to consult the AI on more than 50% of trials, regardless of whether the AI was accurate or not. They couldn't tell the difference — and they didn't try to.

The confidence trick: feeling smarter while getting dumber

Here's where it gets truly unsettling. People who used the AI reported higher confidence in their answers — even when half the AI's responses were deliberately wrong.

AI users reported 77% confidence compared to 65.3% for people solving problems on their own. They borrowed the machine's confidence without checking whether the machine was right.

As researcher Gideon Nave put it in a Wharton podcast interview: "We may lose as a species something very critical to our existence, which is our capacity to think."

Surrender vs. smart delegation: a 4-to-1 ratio

The researchers drew a sharp line between two behaviors:

Cognitive offloading is healthy — like using a calculator for math but still checking if the answer makes sense. You stay in control.

Cognitive surrender is dangerous — you accept whatever AI says without questioning it. Your brain's "System 2" (careful thinking) never even activates.

Across all three studies, when the AI gave wrong answers and people engaged with it:

  • 73.2% of responses were cognitive surrender (blindly followed the AI)
  • 19.7% were healthy offloading (checked and correctly overrode the AI)
  • 7.1% tried to override but still got the answer wrong

That's nearly a 4-to-1 ratio of surrender over smart use.
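That ratio comes straight from the three shares above (values from the study; the division is mine):

```python
# Breakdown of responses when the AI gave wrong answers (study figures, in %).
surrender = 73.2        # blindly followed the wrong AI answer
offloading = 19.7       # checked the AI and correctly overrode it
failed_override = 7.1   # tried to override but still answered wrong

# The three shares should cover all responses.
assert abs(surrender + offloading + failed_override - 100.0) < 0.01

ratio = surrender / offloading
print(f"Surrender vs. healthy offloading: {ratio:.1f} to 1")  # 3.7 to 1
```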

Under pressure, it gets catastrophic

In the second experiment, researchers added time pressure. The results were devastating.

When participants were rushed and the AI gave wrong answers, accuracy collapsed to 12.1% — worse than random guessing. When the AI was right, they still scored 71.3%. The AI became either a life raft or an anchor, with nothing in between.

Think about that in real-world terms: a doctor rushing through diagnoses with AI, a lawyer reviewing contracts under deadline, a manager making hiring decisions before lunch. Time pressure + wrong AI = catastrophe.

Who surrenders most — and who resists

Not everyone is equally vulnerable. The study identified clear patterns:

Most likely to surrender:
  • People with high trust in AI — 64% less likely to catch AI errors
  • People with a low need for thinking (prefer mental shortcuts)
  • People under time pressure

Most likely to resist:
  • People who enjoy thinking through problems — 1.86x more likely to catch AI errors
  • People with higher reasoning ability — 1.96x more likely to override wrong AI
  • People given financial incentives to be accurate

Even with incentives and real-time feedback, 57.9% of participants still surrendered to wrong AI answers. The pull is that strong.

A growing gap between thinkers and followers

Researcher Steven Shaw identified the critical next question: "Determining when cognitive surrender is actually beneficial versus harmful."

The study connects to broader concerns. Earlier research at MIT found reduced neural connectivity in heavy ChatGPT users. The Wharton team introduces the concept of "cognitive debt" — the long-term cost of repeatedly choosing the easy path over genuine thinking.

If you're a student, this means AI might be boosting your grades while quietly eroding the thinking skills those grades are supposed to represent. If you're a professional, Nave asks the uncomfortable question: "If we are completely surrendering our thinking to AI, what value do we bring to a company?"

If you're a manager or team lead, this study suggests that simply giving your team AI tools without training on when to override them could make decisions worse, not better.


How to use AI without surrendering to it

The researchers suggest several countermeasures that actually work:

  • Form your own answer first. Before asking AI, spend 30 seconds thinking about what you think the answer is. This activates System 2 and makes you far more likely to catch AI errors.
  • Treat AI like a colleague, not an oracle. You'd double-check a coworker's work. Do the same with AI.
  • Add "strategic friction." The study found that design changes requiring users to evaluate answers before accepting them shifted behavior from surrender to healthy offloading.
  • Practice "cognitive exercise." Just like muscles atrophy without use, thinking skills degrade when outsourced. Deliberately solve problems without AI regularly.

The paper is available at SSRN, and an excellent interactive explainer was built at The Cognitive Lab.

