AI for Automation
2026-03-17 · Tags: AI coding, Cursor AI, code quality, technical debt, vibe coding, AI developer tools, CMU study, software development

Cursor AI Makes You Slower After 2 Months — CMU Study of 806 Projects

Carnegie Mellon University analyzed 806 projects and found that after adopting the AI coding tool Cursor, code output surged 281% in the first month — only to vanish within two months. A 41% increase in code complexity confirmed a vicious cycle of technical debt.


Does using the AI coding tool Cursor really make you code faster? A Carnegie Mellon University (CMU) research team tracked and analyzed 806 GitHub projects over a year and a half — and the answer was surprising. When you first adopt Cursor, code output surges by 281% in the first month. But that effect disappears within two months, while degraded code quality and technical debt permanently remain, ultimately dragging development speed back down.

First Month After Adopting Cursor: 281% Output Surge, Then a Sharp Decline

The research team analyzed 806 projects that adopted Cursor alongside 1,380 control projects that did not, spanning January 2024 through August 2025 — a total of 14,755 project-months of data.
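Comparing adopting projects against a control group over time resembles a difference-in-differences design. The toy sketch below, with made-up numbers (not the paper's actual data or statistical model), shows the basic idea: the estimated effect is the before/after change for adopters minus the same change for controls.

```python
# Toy difference-in-differences sketch with invented monthly output
# figures (NOT the paper's data or model), just to show the comparison.
adopters = {"before": [100, 120, 110], "after": [400, 390, 410]}
controls = {"before": [100, 105, 95],  "after": [110, 100, 115]}

def mean(xs):
    return sum(xs) / len(xs)

# Effect = (adopters' before->after change) - (controls' before->after change)
effect = (mean(adopters["after"]) - mean(adopters["before"])) \
       - (mean(controls["after"]) - mean(controls["before"]))
print(effect)
```

Subtracting the control group's change strips out trends that would have happened anyway, which is why the study needed the 1,380 non-adopting projects at all.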


▲ Number of projects adopting Cursor over time. Adoption surged sharply from August 2024. (Source: CMU paper)

The results were dramatic.

• Month 1: code output (lines added) +281.3%, commits +55.4%

• Month 2: code output +48.4%, commits +14.5%

• Month 3 and beyond: the speed improvement is no longer statistically significant; the effect has effectively disappeared

In simple terms, when you first start using an AI coding tool, it feels like the world has changed — but after two months, that effect has almost entirely vanished.

The Speed Boost Fades, but Code Quality Degradation Is Permanent

What's more concerning is code quality. While the speed boost was temporary, the quality degradation was permanent.


▲ Changes in each metric before and after adoption. The left two (commits, code output) lose their effect over time, while the right three (warnings, duplication, complexity) remain persistently elevated. (Source: CMU paper)

• Code complexity (how difficult the code is to read): +41.64%

• Static analysis warnings (bugs and security issues caught by automated checks): +30.26%

• Code duplication rate: +7.03%

All three showed no decline even 6 months after Cursor adoption.

Technical Debt Accumulation Actually Slows Development Down

The research team took it a step further, analyzing how accumulated code quality issues affect development speed over time.


▲ The impact structure of AI coding tools. They boost speed while degrading quality, and that degraded quality in turn erodes speed — a vicious cycle. (Source: CMU paper)

The results were clear.

• When code complexity doubles, subsequent development speed drops by 64.5%

• When warnings double, subsequent development speed drops by 50.3%

This is the vicious cycle of 'technical debt'. The code that AI quickly generates ends up holding you back later, completely negating the initial speed gains. The research team calculated that when warnings increase roughly 5x and complexity roughly 3x, the speed benefits are entirely offset.
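One way to read "doubling complexity cuts speed by 64.5%" is as a power law relating speed to complexity. The snippet below makes that assumption explicit for illustration only (the paper's actual regression model may differ):

```python
import math

# Illustrative assumption (NOT the paper's stated model): development
# speed scales as complexity ** beta, with beta calibrated so that
# doubling complexity cuts subsequent speed by 64.5%.
beta = math.log2(1 - 0.645)  # about -1.49

def speed_multiplier(complexity_ratio: float) -> float:
    """Relative development speed after complexity grows by this ratio."""
    return complexity_ratio ** beta

print(round(speed_multiplier(2.0), 3))  # 0.355, i.e. the reported -64.5%
```

Under this assumed curve, tripling complexity would leave roughly 19% of the original speed, which gives a feel for how quality losses on this scale could swallow the initial output gains.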

Why AI Coding Tools Must Be Used with Quality Controls

The research team's conclusion is not 'don't use AI coding tools.' The key message is that 'you must manage quality alongside them.' This applies whether you're doing vibe coding with AI or professional development.

If you're a developer coding with AI:

• Make it a habit to always read and review AI-generated code yourself

• Set up linters (automated code analysis tools) so AI-generated code must pass the same quality standards

• Instructing AI to 'make it clean and readable' rather than 'just make it fast' produces better results in the long run
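The linter bullet above can be sketched in miniature. The snippet below uses Python's standard ast module to flag over-complex functions; it is a stand-in for a real linter such as ruff, pylint, or radon, whose metrics and thresholds differ.

```python
import ast

# Minimal sketch of a complexity gate (a real project would use a linter
# such as ruff, pylint, or radon): count branch points per function and
# report any function that exceeds a threshold.
BRANCHES = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)

def complexity(func: ast.FunctionDef) -> int:
    """Rough cyclomatic complexity: 1 plus the number of branch points."""
    return 1 + sum(isinstance(node, BRANCHES) for node in ast.walk(func))

def gate(source: str, threshold: int = 10) -> list[str]:
    """Return the names of functions whose complexity exceeds the threshold."""
    tree = ast.parse(source)
    return [n.name for n in ast.walk(tree)
            if isinstance(n, ast.FunctionDef) and complexity(n) > threshold]

sample = "def f(x):\n    if x and x > 1:\n        return 1\n    return 0\n"
print(gate(sample, threshold=1))  # ['f']
```

Wiring a check like this into CI means AI-generated code has to clear the same bar as hand-written code before it merges.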

If you're a non-developer doing vibe coding:

• Just because AI-generated results work doesn't mean you're done. Problems can arise later when you need to modify or add features

• As your project grows, it becomes crucial to ask AI to establish good structure from the start. Check the free vibe coding learning guide for how to design project architecture

Limitations of the CMU Study and the Future of AI Coding

This study analyzed only open-source projects, so results may differ for enterprise development. It also examined only Cursor, making it difficult to directly apply the findings to other AI coding tools like GitHub Copilot or Claude Code.

But the core message is clear. AI coding tools are not a 'magic wand'; they are closer to a 'power tool.' A power tool makes you faster, but using it without safety gear gets you hurt. Those who can capture both speed and quality will be the true experts of the AI era.

This paper will be formally presented at the International Conference on Mining Software Repositories (MSR '26) in Rio de Janeiro, Brazil in April 2026. The full text is available for free on arXiv.

If you want to learn AI and vibe coding properly from the ground up, check out the Free Learning Guide.
