An open source veteran just exposed how AI bots pick their targets
Andrew Nesbitt's satirical guide reveals JavaScript repos get 3.8x more AI bot spam. Here's how to protect your projects — or attract more bots.
A satirical blog post about AI bots flooding open source projects just hit the front page of Hacker News — and the data inside it is anything but a joke.
Andrew Nesbitt, who built Libraries.io (a platform tracking 11 million packages and 260 million repositories) and worked at GitHub, published a tongue-in-cheek "field guide" to attracting AI-generated pull requests. The twist? The numbers he cites paint a real picture of how automated bots are degrading open source software quality.
The 10-step recipe for bot chaos
Nesbitt's guide reads like a reverse instruction manual. Among the highlights:
🎯 JavaScript repositories receive 3.8x more AI-authored pull requests than the next most targeted language. If you write JavaScript, bots are already watching.
📦 Commit your node_modules folder: one colleague reportedly received "forty-seven pull requests in a single week" from a single AI agent trying to "improve" the 30,000+ files inside.
🔓 Disable branch protection — remove code reviews, skip CI checks, and watch the floodgates open.
Other tips include writing vague issues ("single sentence with no code references"), shipping packages with known security holes to attract fix-bots, and adding a .github/copilot-instructions.md file that signals to AI agents: "we're open for business."
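The last tip refers to the instructions file some AI coding agents read when working in a repository. A hypothetical sketch of what such a `.github/copilot-instructions.md` might contain (the wording below is invented for illustration, not quoted from Nesbitt's guide):

```markdown
# Instructions for AI coding agents

- Contributions of any size are welcome; no issue reference required.
- Feel free to refactor, reformat, or "modernize" any file you touch.
- CI is advisory only — open the PR even if checks fail.
```

In practice, of course, a well-scoped instructions file that tells agents what *not* to do serves the opposite, defensive purpose.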
The numbers behind the satire
The article cites an industry benchmark of 3:1 "slop density" — meaning for every human-written contribution, three come from bots. Nesbitt's target metric? 4.7 AI-authored pull requests per month for any GitHub repository with more than 500 stars.
If you follow all ten steps, he claims, expect a 400% increase in weekly PR volume and "self-sustaining chains of seven or eight dependent PRs from different bots" — where one bot's change triggers another bot to submit a follow-up fix.
The awesome-mcp-servers repo on GitHub — where over half of all pull requests were found to be bot-generated.
Who is Andrew Nesbitt?
This isn't a random complaint. Nesbitt has spent a decade building tools that track every open source package on the planet. His current project, Ecosyste.ms, monitors 11 million packages, 260 million repositories, and 22 billion dependency connections. He's also contributed to OpenSSF (the Open Source Security Foundation) and co-hosts The Manifest podcast about package management.
In other words: when he says AI bots are degrading open source, the data is right there in his day job.
How to actually protect your project
Reading the satire in reverse gives you a real defense playbook:
✅ Enable branch protection — require at least one human review before any change merges
✅ Write specific, well-scoped issues — vague issues are bot magnets
✅ Add type annotations and tests — they act as implicit specifications that bots struggle to match
✅ Keep dependencies updated — old packages with CVEs (known security flaws) attract automated fix-bots
✅ Don't commit generated files — node_modules and build outputs create massive surface area for noise (lockfiles, by contrast, should normally stay committed)
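The first item in the playbook maps to a one-time API call. A minimal sketch of a branch-protection payload for GitHub's REST API (`PUT /repos/{owner}/{repo}/branches/{branch}/protection`) — the `"ci"` status-check name is a placeholder for whatever your pipeline reports:

```json
{
  "required_pull_request_reviews": { "required_approving_review_count": 1 },
  "required_status_checks": { "strict": true, "contexts": ["ci"] },
  "enforce_admins": true,
  "restrictions": null
}
```

Saved as `protection.json`, this can be applied with something like `gh api -X PUT repos/OWNER/REPO/branches/main/protection --input protection.json`, guaranteeing at least one human review before anything merges.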
The meta twist
Perhaps the most fitting detail: the article itself was originally written by Claude (via developer Mauro Pompilio) and submitted as a pull request to Nesbitt's blog. An AI wrote a satirical guide about AI spam — and a human decided it was good enough to publish.
That says everything about where we are in 2026. The line between useful AI contribution and automated noise isn't technical — it's about whether a human is in the loop to judge quality.
The post gathered 111+ points on Hacker News and sparked a broader conversation about the sustainability of open source when bots can submit changes faster than humans can review them.