Browser AI lost 75% of its users — coding AI just won
ChatGPT Agent dropped from 4M to under 1M weekly users. Google is winding down Project Mariner. The AI industry just pivoted hard to coding agents.
The dream of AI that browses the web for you is quietly dying. ChatGPT Agent — OpenAI's browser automation feature — launched to 4 million weekly paying users. Within months, that number dropped below 1 million, a 75% collapse that signals something fundamental about where AI is actually useful.
Now Google is following suit, restructuring the team behind Project Mariner, its Chrome-based AI agent that could navigate websites, fill out forms, and complete tasks autonomously. Employees are being moved to higher-priority projects. The browser AI era, barely a year old, appears to be ending before it truly began.
Why browser AI agents failed
The idea was compelling: tell AI to book a flight, buy groceries, or fill out a government form — and watch it operate your browser like a human would. In practice, it hit walls everywhere.
Websites are messy, unpredictable environments. A random popup, a CAPTCHA, a site redesign, or a slow-loading page could derail an entire task. Browser agents relied on taking screenshots and sending them to AI in the cloud for processing — an approach that was both slow and brittle. Even Google's Project Mariner, which cost users $249.99/month on the AI Ultra plan, couldn't overcome these fundamental challenges.
As one industry analysis noted, there remains "a noticeable gap between early pilot projects and full-scale deployment" — organizations couldn't trust browser agents with real work.
- ChatGPT Agent: 4M → under 1M weekly paying users in months
- Google Project Mariner: team restructured, employees reassigned
- OpenAI pivoted to specialized shopping agents instead of general browser AI
- Google said the expertise will "feed into other products, including the Gemini Agent"
Coding AI filled the vacuum
While browser agents struggled, a different category of AI exploded. Coding agents — AI that writes, tests, and deploys software — became the industry's new obsession. Tools like Claude Code, OpenClaw, and Cursor took off because code is a controlled environment: predictable syntax, testable output, and clear success metrics.
Anthropic's newly published 2026 Agentic Coding Trends Report captures the shift. Key findings:
- Developers use AI in 60% of their work, but can only "fully delegate" 0–20% of tasks — meaning AI is a powerful collaborator, not a replacement
- Tasks that once took weeks of cross-team coordination now become focused working sessions lasting hours
- Engineers are becoming "full-stack" across frontend, backend, databases, and infrastructure because AI fills their knowledge gaps
- Onboarding to a new codebase is collapsing from weeks to hours
Anthropic now openly describes its coding agents as "future all-purpose assistants" — suggesting the path to general AI assistance runs through code, not through clicking buttons on websites.
Why code works where browsers don't
The difference comes down to predictability. A website can change its layout overnight. But Python is still Python. JavaScript is still JavaScript. Code has rules that AI can learn and follow reliably.
Browser agents vs. coding agents — why one works:
| Factor | Browser agents | Coding agents |
| --- | --- | --- |
| Environment | Unpredictable (popups, CAPTCHAs, redesigns) | Structured (syntax rules, test suites) |
| Success measurement | Hard to verify (did it click the right thing?) | Clear (does the code run and pass tests?) |
| Error recovery | Often fails silently | Error messages guide the next attempt |
| Cost efficiency | Screenshot-heavy, token-expensive | Text-based, far cheaper per task |
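The "clear success measurement" and "error recovery" rows are the crux: a coding agent can run its own work, read the failure message, and try again. Here is a minimal, purely illustrative Python sketch of that feedback loop — the `add` function, the test cases, and the two candidate implementations are all invented for this example, not taken from any real agent:

```python
def run_tests(add):
    """Toy 'test suite': returns (passed, error_message) so a caller
    can act on the failure text, the way a coding agent reads test output."""
    try:
        assert add(2, 3) == 5, f"add(2, 3) returned {add(2, 3)}, expected 5"
        assert add(-1, 1) == 0, f"add(-1, 1) returned {add(-1, 1)}, expected 0"
        return True, ""
    except AssertionError as e:
        return False, str(e)

# Two candidate implementations, as an agent might propose across attempts.
buggy = lambda a, b: a - b  # first attempt: wrong operator
fixed = lambda a, b: a + b  # revised attempt after reading the failure

passed, err = run_tests(buggy)
print(passed, "|", err)   # False | add(2, 3) returned -1, expected 5
passed, err = run_tests(fixed)
print(passed, "|", err)   # True |
```

A browser agent has no equivalent signal: after clicking a button, there is no assertion that fails with "you booked the wrong flight, expected JFK". The pass/fail output above is exactly the kind of machine-readable verdict that makes the retry loop work.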
What this means if you're not a developer
If you were waiting for AI to do your web browsing, the technology isn't ready yet. Even at $250/month, Google's most advanced browser agent couldn't perform reliably enough to justify keeping its team intact.
If you're curious about AI productivity, the real gains right now are in specialized tools: AI that writes emails, AI that summarizes documents, AI that generates presentations. These work because they operate on structured data — not the chaos of the open web.
If you're learning to code with AI, you're riding the wave that the entire industry just bet on. Coding agents are the fastest-growing AI category, and companies from Anthropic to OpenAI are pouring resources into making them better.
The bigger picture
This isn't the end of browser automation — it's a course correction. Google said Project Mariner's expertise will flow into the Gemini Agent. OpenAI shifted to targeted shopping agents instead of general-purpose browsing. The industry learned that AI works best in controlled environments with clear rules, not in the messy, ever-changing web.
The lesson for anyone watching AI: don't bet on the flashiest demo. Bet on the tool that works reliably, day after day, in a specific domain. Right now, that's coding agents — and the numbers prove it.