Vibe Coding Works — A Journalist Wishes It Didn't
A veteran tech journalist built a real app with AI automation — then admitted 'I wish it didn't.' The most honest vibe coding verdict in 2026.
The sentence that stopped readers cold: "Vibe coding works. I wish it didn't. But it does, well enough."
Thomas Claburn, a veteran technology journalist at The Register, published those words on April 12, 2026, after building a feed reading web application (a tool that collects and displays content from multiple news websites in one place) entirely using AI assistance. He didn't write a glowing product review. He didn't declare a revolution. He wrote something rarer — an honest admission of ambivalence from someone with no reason to flatter the technology. And that ambivalence matters more than any benchmark score.
The Moment Vibe Coding Stopped Being a Joke
"Vibe coding" — the practice of describing what you want in plain English and letting AI generate the actual code, rather than writing it line by line yourself — was coined by Andrej Karpathy in early 2025. It already has critics and evangelists. What it lacked, until recently, was credibility with the skeptics: people who have written software for decades and are not easily impressed by demos that work under controlled conditions.
Claburn is precisely that kind of observer. As a senior technology writer covering enterprise software, cloud infrastructure, and developer tooling at The Register, he has watched hundreds of "game-changing" tools appear and quietly disappear. His conclusion — "it works" — carries the weight of that skeptical baseline. His immediate follow-up — "I wish it didn't" — signals that the conclusion wasn't comfortable to reach.
What He Built — and Why a Non-Engineer Building It Changes the Calculation
The project was a feed reading web application. It aggregates RSS feeds (a standardized format that websites use to automatically publish new articles) into a browsable interface — fetching data from external sources, parsing structured content, and rendering it dynamically. This is not a toy project. It involves real network requests, error handling, and content parsing. A competent developer typically needs 1 to 3 days to build this from scratch.
Claburn built it. As a journalist, not a software engineer.
Three elements make this account different from the AI coding demos you have seen before:
- Real use case — not a landing page or a counter app, but software with genuine data ingestion and rendering logic
- Non-specialist builder — Claburn is not a professional developer, which makes the result harder to dismiss as "just what engineers now do faster"
- No commercial angle — no affiliate links, no vendor relationship, just an experienced journalist's unfiltered account
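To make the project concrete: the heart of any feed reader is fetching an RSS document and pulling out the items. Here is a minimal sketch of that parsing step, using only the Python standard library. It is an illustration of the kind of logic involved, not Claburn's actual code; a real app would also fetch feeds over HTTP and render the results in a browsable interface.

```python
# Minimal sketch of a feed reader's core: parse an RSS 2.0
# document and extract each item's title and link.
import xml.etree.ElementTree as ET

def parse_rss(xml_text: str) -> list[dict]:
    """Return a list of {'title', 'link'} dicts from an RSS 2.0 string."""
    root = ET.fromstring(xml_text)
    entries = []
    for item in root.iter("item"):  # every <item> in the feed
        entries.append({
            "title": item.findtext("title", default=""),
            "link": item.findtext("link", default=""),
        })
    return entries

# A tiny hard-coded feed stands in for a live HTTP fetch.
sample = """<rss version="2.0"><channel>
  <title>Example Feed</title>
  <item><title>First post</title><link>https://example.com/1</link></item>
  <item><title>Second post</title><link>https://example.com/2</link></item>
</channel></rss>"""

for entry in parse_rss(sample):
    print(entry["title"], "->", entry["link"])
```

Even this toy version hints at the real-world complications Claburn's app had to handle: malformed XML, missing fields, and feeds that do not quite follow the spec.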
Why "I Wish It Didn't Work" Is the Most Honest Sentence in Tech This Month
There is a specific emotional register that emerges when a powerful tool removes a barrier you thought was structural. A professional photographer watching auto-enhancement do in seconds what took hours of editing. A chef reaching for a pre-made sauce that is genuinely better than what they would make from scratch. The discomfort is not about the tool failing. It is about what the tool's success implies.
Claburn's full quote captures this with unusual precision:
"Vibe coding works. I wish it didn't. But it does, well enough. And barring some revolution that overturns the new world disorder, machine learning cannot be undone."
That final clause — "machine learning cannot be undone" — is where the article's real weight lives. He is not expressing temporary hesitation or performative concern. He is stating that a capability threshold has been crossed, that the crossing is permanent, and that the direction of travel is locked in. That is a different kind of statement from "this is impressive." It is an acceptance that the category of work called "writing software" has structurally changed.
The AI Automation Backdrop That Makes April 2026 a Turning Point
Claburn's experience did not arrive in isolation. The Register's coverage from April 10–12, 2026 reveals an AI coding ecosystem that has quietly reached critical mass across multiple fronts:
- Claude Code at enterprise scale — Anthropic's AI coding assistant (a tool that reads your codebase and writes or modifies code through a conversational chat interface) is now embedded in $30B+ worth of active enterprise AI contracts, handling production codebases spanning hundreds of thousands of lines
- GitHub Copilot expanding to CLI — Microsoft's coding AI is moving into command-line interfaces (the text-based terminal environment that professional developers use for running builds, deployments, and system tasks), meeting developers inside their most productive existing workflows
- Google Colab's AI tutor mode — Google added an interactive AI instruction layer to Colab (a free, browser-based coding environment widely used for data science and machine learning experiments), making it possible to learn and build simultaneously in one window
- ChatGPT Pro subscription price cuts — Competition between AI coding platforms has become intense enough that OpenAI is cutting Pro subscription prices in half, a signal that a mainstream pricing war has begun
Simultaneously, $100M+ in AI security investment is flowing into tools designed to protect the code these AI systems generate — an implicit industry acknowledgment that AI-written code is now prevalent enough to require its own dedicated security infrastructure.
What Vibe Coding Actually Can — and Cannot — Do in 2026
Claburn's experience validates AI-assisted development for certain classes of work. It does not validate it for everything. Based on his account and the broader pattern across the industry, here is an honest breakdown:
- Works reliably — standard CRUD operations (Create, Read, Update, Delete — the basic building blocks of most web applications), UI component generation, boilerplate scaffolding, data format conversion, and wiring together common library integrations
- Requires human judgment — system architecture decisions, security-sensitive code paths, performance-critical sections, and integrations between multiple external services with unpredictable real-world behavior
- The precision trap — "vibe coding" has a precision limit. The less specifically you can describe what you want, the more correction iterations it takes. Knowing what you want turns out to be a substantial skill in itself; AI amplifies clarity, but it cannot substitute for it
- The review burden remains — AI-generated code still needs to be read and understood before shipping. Developers who cannot review AI output are exposed to bugs and security vulnerabilities they may never catch
The floor has risen — the minimum viable effort required to produce functional software is dramatically lower than it was 24 months ago. That is the structural shift Claburn is describing when he says he wishes it had not happened. The ramp that used to take years now takes days.
Try It Before Someone Else Explains It to You
You can stay skeptical, or you can try it and arrive at your own version of Claburn's conclusion. The fastest path to an honest answer is a real project — not a tutorial, not a "hello world" — but something you actually want that would normally require hiring someone or learning for months. Here is a practical starting framework:
- Choose an AI coding tool you already have access to — Claude, GitHub Copilot, and Cursor all support conversational code generation without setup expertise
- Describe your project in plain English, then ask the AI to begin. Do not overthink the first prompt — specificity improves through iteration, not upfront perfection
- When something breaks (and it will), paste the exact error message verbatim into the chat and ask the AI to fix it. Let it debug its own output
- Pay attention to how you feel when it works. Claburn noticed his own ambivalence immediately. That reaction — whatever yours turns out to be — is data worth having
Q2 2026 will bring further capability jumps as Anthropic, OpenAI, and Google each push the next generation of coding models toward general release. The window for informed skepticism is narrowing. Thomas Claburn has now built his evidence. The only way to know which side of this moment you are on is to build yours.