2026-05-11 | Tags: ai-automation, artificial-intelligence, ai-singularity, economic-growth, ai-governance, hardware-automation, ai-research, neural-networks

13% AI Automation Triggers Explosive Growth, Economists Find

New research finds 13% AI automation triggers explosive economic growth. Hardware R&D delivers 5x more impact than software. Singularity scenario: 6 years.


Economists at Forethought, Columbia University, and the University of Virginia have put a number on the AI automation tipping point: automate just 13% of tasks across all sectors of the economy, and growth becomes self-reinforcing — potentially impossible to stop or reverse. That calculation, synthesized in Import AI #456 by Jack Clark (co-founder of Anthropic, the AI safety company behind Claude), arrives alongside two other findings that together suggest the decisions being made right now about chips, regulation, and computing architecture will shape outcomes for decades.

The Number Economists Didn't Expect to Find

The research comes from a working paper titled "When Does Automating AI Research Produce Explosive Growth?" — a study of recursive self-improvement (the process where AI systems improve their own research capabilities, creating better AI, which improves research further in a loop). Economists modeled what happens to output when automation is applied at different scales and to different sectors.

Three critical thresholds emerged:

  • 13% automation across all sectors of the economy — the minimum to push into what the paper calls an "explosive growth regime" (a phase where economic growth accelerates beyond what traditional forecasting models can track)
  • 17% automation when only software and hardware research sectors are automated, leaving the rest of the economy untouched
  • 20% hardware-only — automating chip design research alone crosses the explosive growth threshold even with no other sector automated at all

In the "baseline stylized simulation" (a mathematical model built with idealized assumptions to isolate specific dynamics — not a dated prediction), full automation of software R&D combined with just 5% automation across the broader economy causes a singularity — an economic state where growth accelerates faster than conventional models can project — in roughly 6 years.

Software automation sits at what the authors call a "knife-edge" (a precise boundary condition where tiny changes in inputs produce radically different outputs): automating software research alone barely crosses the explosive growth threshold. Fall slightly short and you get normal growth. Cross it and the feedback loop starts building.

"In our baseline stylized simulation, an 'automation shock' involving full automation of software R&D and just 5% automation across the rest of the economy causes the singularity to arrive in roughly six years." — RSI paper, Forethought / Columbia / University of Virginia

[Chart: the 13% AI automation threshold that triggers explosive economic growth, per the Forethought / Columbia University / University of Virginia research]

Hardware Beats Software by 5x — and Most Coverage Misses This Entirely

The most counterintuitive finding is about where to automate. Conventional AI coverage focuses on software: chatbots, coding tools, content generators. The economic modeling says hardware is where the real leverage lives.

Automating hardware R&D (engineering the physical chips and processors that run all computing — the silicon components inside every server, phone, and AI data center) delivers 5 times the economic return of automating software research. Compared to aggregate total factor productivity improvements (a broad economic measure of how efficiently an entire economy converts inputs into output), hardware automation delivers 10 times the return.

Three implications that follow directly:

  • Companies designing their own chips — Google (TPUs), Apple (M-series), Amazon (Trainium), Meta (MTIA) — are accumulating leverage at the highest-return point of the automation curve, whether or not they modeled it this way
  • Governments funding chip research programs (the US CHIPS Act, EU Chips Act) may be doing more for economic trajectory than any software-focused AI policy
  • AI tools that help engineers design chips have disproportionately large downstream consequences — far more than AI tools that help programmers write code

Jack Clark flagged the model's sensitivity to assumptions: the paper's co-authors, including Anton Korinek (who has collaborated directly with Anthropic), are careful to note results shift when calibration parameters change. The 6-year figure is a scenario tied to specific mathematical choices about research return rates — not a confidence-interval forecast.

One Neural Network. No Operating System. No App Store.

The second major research thread in Import AI #456 comes from a joint Meta and KAIST (Korea Advanced Institute of Science and Technology, one of Asia's most prestigious technical research institutions) team — and it asks a question that sounds like philosophy but is quickly becoming engineering: can a neural network replace the operating system entirely?

Today, computers run in layers. An operating system (software that manages hardware and basic functions — Windows, macOS, Linux) sits underneath applications, which sit underneath user interfaces. AI models currently live on top of this stack. They're just another application, dependent on the traditional OS below.

The Neural Computers (NC) paper proposes inverting this. A single neural network — the paper calls the end-state a Completely Neural Computer (CNC) — would drive pixels on screen, respond to keyboard input, execute system actions, and manage resources directly. One model to do everything, with no traditional OS underneath it.

"Neural computers point toward a machine form in which a single latent runtime state acts as the computer itself, driving pixels, text, and actions while subsuming what operating systems and interfaces handle today." — Neural Computers paper

The team ran a prototype using Wan 2.1 — a generative video model (an AI system trained to produce realistic, frame-by-frame visual output) — on both CLI (command-line interfaces: the text-based terminals developers use to run programs by typing commands) and GUI (graphical user interfaces: the point-and-click desktops most people use every day). Early results were encouraging on simple tasks but broke down at the edges — the system "often stays aligned with terminal buffer" but struggles with unusual or adversarial input.
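
To make the architecture concrete, here is a minimal runnable sketch of that control loop, with a trivial stub standing in for the generative model (in the prototype, that role is played by Wan 2.1 predicting actual screen frames). The stub just echoes keystrokes into a text framebuffer; every name in it is illustrative, and none comes from the paper's code.

    # Minimal sketch of the neural-computer loop. `stub_model` is a
    # trivial stand-in for the generative network; all names here are
    # illustrative, not taken from the paper.

    def stub_model(latent, events):
        """Stand-in network: (latent state, raw input) -> (frame, actions, new state)."""
        buffer = latent + [chr(code) for code in events]  # "render" keystrokes
        frame = "".join(buffer)   # the "screen" is just this text buffer
        actions = []              # a real model could also emit system actions
        return frame, actions, buffer

    def run_neural_computer(model, input_batches):
        latent = []                       # the machine's entire state
        for events in input_batches:      # each batch = one tick of raw input
            frame, actions, latent = model(latent, events)
            for action in actions:        # side effects the model requested,
                action()                  # with no OS mediating them
            print(f"frame: {frame!r}")    # pixels (here: text) to the display

    # Typing "ls" and then Enter, as if into the CLI environment:
    run_neural_computer(stub_model, [[108, 115], [10]])

The point is the shape of the loop: the network's latent state is the machine state, and frames and system actions come out of a single forward pass with nothing underneath.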

Juergen Schmidhuber — a co-author and one of the architects of the LSTM architecture (Long Short-Term Memory: a memory mechanism that gave early AI the ability to process sequences of data, foundational to the first wave of language models) — speculated the mature form of a neural computer would require a substrate of 10T–1000T parameters (a parameter is a single tunable number inside a neural network; GPT-4 is widely estimated, though not confirmed, at roughly 1.7T). That's 6x to 600x larger than today's largest deployed models.
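
The arithmetic behind that range, taking the widely reported (unconfirmed) 1.7T estimate for GPT-4 at face value:

    gpt4_params = 1.7e12                 # public estimate, not a confirmed figure
    for substrate in (10e12, 1000e12):   # Schmidhuber's speculated range
        print(f"{substrate / 1e12:.0f}T parameters -> "
              f"{substrate / gpt4_params:.0f}x GPT-4")
    # prints: 10T parameters -> 6x GPT-4
    #         1000T parameters -> 588x GPT-4

The 600x headline figure is that 588 rounded up.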

Clark's framing: "Wright brothers before takeoff." The principle is demonstrated at small scale. The engineering gap to practical deployment remains enormous. But the direction of travel is now set — and it points toward AI that doesn't just run on your computer, but is your computer.

[Image: computer chips and circuit boards, representing hardware AI automation's 5x economic leverage over software automation]

Governing AI When the Rulebook Hasn't Been Written Yet

The third thread in Import AI #456 confronts a problem that falls directly out of the economic modeling: if AI capability growth could be nonlinear and faster than expected, how do governments write rules for a technology they can't yet fully characterize?

The Institute for Law & AI's proposed answer is a framework called "radical optionality" — designed specifically to preserve governments' ability to decide later, rather than forcing premature commitments to specific regulations that might turn out to be wrong.

"Radical optionality is about preserving democratic governments' ability to make good decisions about how to govern transformative AI systems as circumstances evolve," the paper states.

The proposed policy toolkit includes:

  • Transparency requirements — AI developers must disclose what their systems can do and how widely they're deployed
  • Reporting requirements — mandatory incident reporting and capability disclosure to designated regulatory bodies
  • Auditing regimes — independent technical audits of high-capability systems before deployment, not after incidents occur
  • Whistleblower protections — legal safeguards for employees who report internal safety concerns without facing retaliation
  • Flexible "if-then" commitments — regulatory triggers that automatically activate when specific, measurable conditions are met rather than fixed, static rules (see the sketch after this list)
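
A hedged sketch of how such a trigger could be encoded, as data rather than fixed statute. Both example conditions and both measures below are invented for illustration, not drawn from the paper:

    # An "if-then commitment" as data: a measurable, pre-agreed condition
    # paired with the measure that activates when it is met. Both example
    # triggers below are fictional.
    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class IfThenCommitment:
        condition: Callable[[dict], bool]  # measurable trigger, agreed upfront
        measure: str                       # regulatory step that activates

        def evaluate(self, report: dict) -> Optional[str]:
            return self.measure if self.condition(report) else None

    commitments = [
        IfThenCommitment(lambda r: r.get("autonomous_rd_share", 0) > 0.5,
                         "mandatory pre-deployment audit"),
        IfThenCommitment(lambda r: r.get("deployed_users", 0) > 100_000_000,
                         "quarterly incident reporting to the regulator"),
    ]

    # A (fictional) capability disclosure arrives; triggers fire on their own.
    report = {"autonomous_rd_share": 0.62, "deployed_users": 40_000_000}
    for c in commitments:
        measure = c.evaluate(report)
        if measure:
            print("triggered:", measure)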

The framework recommends governments fund AISI (the UK's AI Safety Institute) and CAISI (the US equivalent) not as advisory think tanks but as genuine technical intelligence agencies — organizations with the engineering expertise to actually understand what they're governing at the model level.

Clark offered a pointed pushback on one claim in the paper: the authors argue their proposed authorities "don't lend themselves to abuse." Clark disagreed directly: sufficiently motivated governments could use the same flexible mechanisms to expand regulatory scope dramatically beyond the original intent. The same optionality that lets good governments respond wisely to new AI capabilities could, in other hands, become a tool for overreach or surveillance.

"The cost of implementing these policies is modest, relative to the potential benefits. The cost of failing to act, by contrast, is potentially catastrophic."

Clark has separately estimated the probability of substantially automated AI R&D at 60%+ by 2029 — meaning more than a coin-flip chance that within 3 years, AI systems are driving most of the research that produces better AI. If that estimate holds, the window for building regulatory institutions isn't measured in decades. It's measured in the next few budget cycles.

Three Converging Bets on 6 Years

Taken together, Import AI #456 is less a newsletter and more a coherent argument: economic modeling, computing architecture research, and regulatory frameworks are all converging on roughly the same time horizon. The singularity scenario plays out over about 6 years. Neural computers are at the "Wright brothers" stage today. Regulatory institutions need to be built before the capability arrives, not after.

The practical takeaway for anyone not working in AI research: watch hardware. The economic leverage in chip design automation is 5x that of software automation, and the companies and governments investing in chip R&D tooling now may be setting the trajectory long before software-focused AI tools become the story. If any of these threads interest you, Import AI is free and delivered by email, one of the few newsletters that synthesizes research across economics, engineering, and governance in a single weekly read.

Subscribe:  https://jack-clark.net/
RSS feed:   https://jack-clark.net/feed/
Archive:    https://importai.substack.com/
