2026-03-18 · AI Research · Yann LeCun · Autonomous Learning · AI Limitations · Artificial Intelligence

Neither ChatGPT Nor Claude Actually 'Learns' Anything — A Diagnosis from Turing Award Winner Yann LeCun

Three leading AI scholars, including Turing Award winner Yann LeCun, argue that 'current AI doesn't truly learn' and propose a three-stage autonomous learning framework modeled after the human brain. They estimate it could take decades to achieve fully autonomous learning.


Ask ChatGPT anything and it fires back an answer. Tell Claude to write code and it delivers on the spot. It's easy to feel that these AIs are 'smart,' but are they actually learning? Three leading AI scholars, including Turing Award winner Yann LeCun, have delivered a striking answer: "Current AI has never once learned anything on its own."

Key Takeaways
  • Once deployed, current AI stops learning anything new — it remains frozen in the state it was trained on using human-curated data
  • A baby decides on its own what to learn, when to observe, and when to act, but AI lacks this ability entirely
  • The researchers proposed an autonomous learning blueprint combining three systems — observation, action, and orchestration — but estimate it will take decades to fully realize

"It Didn't Learn — It Memorized"

On March 16, 2026, Yann LeCun — Meta's Chief AI Scientist and recipient of the 2018 Turing Award (often called the Nobel Prize of computer science) — along with cognitive scientist Emmanuel Dupoux and UC Berkeley computer vision expert Jitendra Malik, published a joint paper titled "Why AI Systems Don't Learn and What to Do About It".

The paper's central argument is straightforward: Current AI merely digests data that humans have pre-selected — it cannot decide what to learn on its own. Once ChatGPT is trained on trillions of text tokens and released, it stops learning anything new from that point on. No matter how many conversations users have with it, the model itself doesn't get any smarter.

Think of it this way: current AI is like a student who has perfectly memorized the textbook for the exam. It scores brilliantly on anything within the exam's scope, but when faced with a new problem outside the textbook, it doesn't know how to go to the library and study on its own. Humans, by contrast, observe their surroundings from the moment they're born, touch things, fail, and continuously adjust their own learning strategies.


▲ Current AI (left) requires humans to curate data and define training methods. The autonomous learning AI proposed by the researchers (right) interacts with and learns from the environment on its own. (Source: Paper, Figure 1)

The Future of AI, Inspired by Babies — Three Systems

The researchers' proposed solution comes from a perhaps surprising place: the human brain — specifically, the way babies learn about the world. Babies instinctively pay attention to faces and voices (observation), pick up and throw objects to grasp the laws of physics (action), and fluidly switch between observing and acting depending on the situation (orchestration). The researchers organized this into three systems.

System A — Learning by Observation

This system identifies patterns by watching videos or listening to sounds. Just as a baby learns the concept of 'face' by repeatedly seeing their mother's face, the AI observes data to understand the rules of the world.

System B — Learning by Action

This system learns by doing. Just as a baby learns about gravity by stacking blocks and watching them topple, the AI interacts with its environment to discover cause and effect.

System M — The Brain's Orchestrator

This is the most critical piece. It acts as a 'conductor' that decides when to observe and when to act. It detects prediction errors (the gap between expectations and reality) and uncertainty, then determines whether now is the time to sit back and watch or to jump in and try something.

▲ The proposed autonomous learning architecture. System M (top left) orchestrates observation (System A), action (System B), and memory (Episodic Memory), all connected to the external world. (Source: Paper, Figure 6)
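To make the division of labor concrete, here is a toy sketch (not code from the paper; the class, thresholds, and mode names are all hypothetical) of how an orchestrator like System M might pick a learning mode from prediction error and uncertainty:

```python
# Hypothetical sketch of System M's role as 'conductor'.
# The thresholds and decision rules are illustrative, not from the paper.

class SystemM:
    """Orchestrator: decides each step whether to observe or act."""

    def __init__(self, error_threshold=0.5, uncertainty_threshold=0.5):
        self.error_threshold = error_threshold
        self.uncertainty_threshold = uncertainty_threshold

    def choose_mode(self, prediction_error, uncertainty):
        # Large gap between expectation and reality: the world model
        # is off, so step back and gather data by watching (System A).
        if prediction_error > self.error_threshold:
            return "observe"
        # Model fits so far but is unsure between possibilities:
        # run an experiment to resolve the ambiguity (System B).
        if uncertainty > self.uncertainty_threshold:
            return "act"
        # Accurate and confident: exploit the model through action.
        return "act"

m = SystemM()
print(m.choose_mode(prediction_error=0.9, uncertainty=0.1))  # observe
print(m.choose_mode(prediction_error=0.1, uncertainty=0.8))  # act
```

The point of the sketch is the switching itself: neither pure observation nor pure action is the default, and the orchestrator's choice depends on how well the current world model is holding up.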

Why Current AI Can't Do This

The paper identifies three fundamental limitations of today's AI.

First, it can't choose its own training data. A baby naturally turns its gaze toward whatever interests it, but AI only processes text that humans have collected from the internet. As the paper notes, "Babies instinctively pay attention to faces and voices — a kind of hardwired data-filtering mechanism that jumpstarts social and language learning."

Second, it can't switch between observation and action. A person reads a recipe (observation), tries cooking it (action), and when it fails, goes back to check the recipe again (returning to observation). Current AI is incapable of this kind of flexible switching.

Third, it doesn't know what it doesn't know. Humans have an internal sense of "I'm not sure about this," which drives them to study more or ask questions. AI lacks this self-assessment ability, which is exactly why it answers confidently even when it's wrong. The recent issue of AI flipping its answers when you simply say "Are you sure?" stems from this same root cause.
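One standard way researchers approximate a sense of "I'm not sure" is to measure the entropy of a model's output distribution and abstain when it is too high. This technique is not from the paper; the function names and threshold below are illustrative:

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def answer_or_abstain(probs, labels, max_entropy=0.5):
    """Answer only when the distribution is peaked; otherwise defer."""
    if entropy(probs) > max_entropy:
        return "I'm not sure"  # high uncertainty: defer to a human
    return labels[probs.index(max(probs))]  # confident: answer

# A peaked distribution yields an answer; a flat one yields deferral.
print(answer_or_abstain([0.95, 0.03, 0.02], ["cat", "dog", "bird"]))  # cat
print(answer_or_abstain([0.4, 0.35, 0.25], ["cat", "dog", "bird"]))   # I'm not sure
```

Today's chat models expose nothing like this to users, which is why a confident tone and a correct answer are two different things.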

"Decades Until Fully Autonomous Learning" — An Honest Outlook

The researchers are refreshingly candid that their proposal is not something that can be built today. In the paper's conclusion, they state: "Fully autonomous, broad-scope learning systems will likely take decades to achieve."

They also lay out specific real-world barriers standing in the way.

  • The simulator problem — Virtual environments where AI can safely learn through trial and error are still not realistic enough
  • No evaluation standards — Current benchmarks (standardized tests for AI) only measure performance on specific tasks; they can't measure learning speed, such as "how quickly can it pick up a new video game"
  • Ethical concerns — AI that learns on its own may be harder to control, and if it develops signals analogous to pain, questions about its moral status could arise

▲ Various ways observation (System A) and action (System B) can combine: self-practice, learning by watching others, mental simulation, and more — applying patterns from human learning to AI. (Source: Paper, Figure 3)

What AI Users Need to Know

This paper won't change how ChatGPT or Claude performs tomorrow. But it offers important takeaways for anyone who uses AI in daily life.

Don't Expect AI to Gain 'Experience'

AI doesn't get smarter through conversations. This is also why it can't remember information you shared yesterday: you need to provide important context fresh every time.

"If AI Sounds Confident, It Must Be Right" Is a Dangerous Assumption

Because AI doesn't know what it doesn't know, it answers in a confident tone even when it's wrong. Always have a human verify before making important decisions.

"AI Will Replace Humans Soon" Is Still a Distant Prospect

One of this paper's authors is Meta's Chief AI Scientist. When the people building AI say "it'll take decades more," that's a powerful reality check against today's AI hype.

Why This Paper Matters

The AI industry has been dominated by the belief that "just add more data and bigger computers" — the so-called 'scaling hypothesis' (the idea that simply increasing model size and data solves everything). But now, one of the people who has built more AI than almost anyone — Yann LeCun — is pushing back head-on, saying "scaling alone won't get us there."

The paper warns that "we are hitting a wall with high-quality text data," arguing there's a fundamental structural problem that no amount of data can fix. This connects to the recent phenomenon of AI scoring 93% on existing tests but only 13% on new ones.

When a Turing Award winner and world-class academics officially declare that "the current direction won't work," it's a signal that could shift the trajectory of the entire AI industry.
