AI for Automation
2026-03-24 · AI coding · DSPy · AI tools · Stanford · open source · developer tools

A dev just proved every AI team rebuilds the same thing

A viral post maps the 7 stages every AI team goes through before accidentally reinventing DSPy — a free Stanford tool with 33K GitHub stars that's already used in production by JetBlue and Databricks.


A blog post titled "If DSPy is so great, why isn't anyone using it?" just hit 208 points and 117 comments on Hacker News — and exposed a pattern almost every team building AI apps falls into without realizing it.

The core insight, dubbed "Khattab's Law" by author Skylar Payne: "Any sufficiently complicated AI system contains an ad hoc, informally-specified, bug-ridden implementation of half of DSPy." In plain English: every team building AI apps ends up reinventing the same solutions from scratch — badly.

DSPy logo — the Stanford framework for programming AI systems

The 7 stages every AI team goes through

Payne maps a progression that will feel painfully familiar to anyone who's built an AI-powered feature:

Stage 1 — "Ship it"
You call the OpenAI API with a basic prompt. It works. You celebrate.

Stage 2 — Prompt flexibility
You move prompts into a database so you can tweak them without redeploying code.
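A minimal sketch of what Stage 2 looks like in practice, using stdlib SQLite. The table and column names here are illustrative, not from any particular framework:

```python
import sqlite3

# Prompts live in a database instead of in code, so they can be
# edited without a redeploy. In-memory DB for demonstration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prompts (name TEXT PRIMARY KEY, template TEXT)")
conn.execute(
    "INSERT INTO prompts VALUES (?, ?)",
    ("summarize", "Summarize the following text in one sentence:\n{text}"),
)

def get_prompt(name: str, **kwargs) -> str:
    # Fetch the template by name and fill in its placeholders.
    row = conn.execute(
        "SELECT template FROM prompts WHERE name = ?", (name,)
    ).fetchone()
    return row[0].format(**kwargs)

print(get_prompt("summarize", text="DSPy has 33K stars."))
```

The catch, which Payne's later stages expose: once prompts live outside the code, nothing type-checks them against the code that consumes their output.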

Stage 3 — Format control
AI responses come back messy, so you add structured output rules (typed schemas that force the AI to respond in a specific format).
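In practice Stage 3 usually means pydantic models or a provider's structured-output mode; this stdlib-only sketch (with an illustrative `Ticket` schema) just shows the shape of the idea — parse the model's reply into a typed object and fail loudly if it doesn't conform:

```python
import json
from dataclasses import dataclass

@dataclass
class Ticket:
    category: str
    urgency: int  # 1 (low) to 5 (critical)

def parse_ticket(raw: str) -> Ticket:
    # Raises if the model returned non-JSON or missing keys.
    data = json.loads(raw)
    ticket = Ticket(category=str(data["category"]), urgency=int(data["urgency"]))
    if not 1 <= ticket.urgency <= 5:
        raise ValueError(f"urgency out of range: {ticket.urgency}")
    return ticket

# A well-formed model response parses cleanly; a malformed one fails loudly.
print(parse_ticket('{"category": "billing", "urgency": 2}'))
```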

Stage 4 — Resilience
API calls fail randomly, so you add retry logic with increasing wait times between attempts.
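Stage 4's retry logic is usually exponential backoff with jitter. A minimal sketch, where `call` stands in for any SDK call:

```python
import random
import time

def with_retries(call, max_attempts=5, base_delay=0.5):
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the real error
            # Wait 0.5s, 1s, 2s, ... plus a little random jitter
            # so many clients don't retry in lockstep.
            time.sleep(base_delay * 2**attempt + random.uniform(0, 0.1))

# Demo with a stub that fails twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(with_retries(flaky, base_delay=0.01))  # → ok after two retries
```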

Stage 5 — Context retrieval
The AI doesn't know enough, so you add RAG (a system that fetches relevant documents and feeds them to the AI before it answers).
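To make Stage 5 concrete, here is a toy retriever that scores documents by word overlap with the question and stuffs the best match into the prompt. Real systems use embeddings and a vector store, but the shape is the same; the documents are invented for illustration:

```python
DOCS = [
    "DSPy is a Stanford framework for programming language models.",
    "JetBlue operates flights across the United States.",
    "LiteLLM is a thin wrapper over many model providers.",
]

def retrieve(question: str, docs=DOCS, k=1):
    # Score each document by how many question words it contains.
    q = set(question.lower().replace("?", "").split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(question: str) -> str:
    # Feed the retrieved context to the model ahead of the question.
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("Who built DSPy?"))
```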

Stage 6 — Measurement
You realize you have no idea if the AI is actually getting better, so you build testing infrastructure.

Stage 7 — Model switching
You want to try Claude instead of GPT (or vice versa), so you refactor everything to work with multiple AI providers.
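The Stage 7 refactor typically ends in a thin provider abstraction, so the rest of the codebase never mentions a specific vendor SDK. In this sketch the two backends are stubs standing in for real OpenAI/Anthropic clients (LiteLLM, mentioned below, solves this same problem as a library):

```python
def call_openai(prompt: str) -> str:
    return f"[gpt] {prompt}"      # stub for an openai SDK call

def call_anthropic(prompt: str) -> str:
    return f"[claude] {prompt}"   # stub for an anthropic SDK call

# Adding a provider means adding one entry here, nowhere else.
PROVIDERS = {"openai": call_openai, "anthropic": call_anthropic}

def complete(prompt: str, provider: str = "openai") -> str:
    return PROVIDERS[provider](prompt)

print(complete("Hello", provider="anthropic"))  # → [claude] Hello
```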

By Stage 7, Payne argues, your team has spent months building "a worse version of DSPy" through accumulated complexity.

What DSPy actually does — without the jargon

DSPy is a free, open-source tool from Stanford NLP with 33,100 GitHub stars and 4.7 million monthly downloads. Instead of writing and endlessly tweaking AI prompts by hand, you describe what you want in structured code — and DSPy automatically finds the best way to prompt the AI for you.

It does three things:

Signatures — You define what goes in and what comes out (like a contract between you and the AI).

Modules — You snap together reusable building blocks instead of writing everything from scratch.

Optimizers — Algorithms that automatically improve your prompts and find the best approach — no manual tuning needed.

JetBlue, Databricks, and Sephora already use it

Despite its adoption gap compared to LangChain (222 million monthly downloads), DSPy has a growing list of production users: JetBlue, Databricks, Sephora, Replit, VMware, and Zoro UK. These companies report faster model testing, better maintainability, and less time spent on "plumbing" — the boring infrastructure work that doesn't directly improve the product.

Why the gap between quality and adoption?

The 117-comment Hacker News thread surfaced three real barriers:

1. It requires thinking differently. DSPy asks you to design your AI system upfront instead of hacking prompts together. That's harder when you're under pressure to ship.

2. Python only. Multiple commenters noted their companies had to "stand up an entirely new repo in Python" despite running .NET or TypeScript codebases. That's a dealbreaker for many teams.

3. The best feature is buried. Several developers pointed out that DSPy's killer feature — automatic prompt optimization (it finds better prompts for you through testing) — isn't prominently explained. One commenter: "The biggest differentiator of DSPy is prompt optimization. Yet the article doesn't mention that at all?"

Alternatives the community mentioned

The HN discussion also surfaced lighter alternatives for teams not ready for DSPy's full approach:

  • Pydantic AI — if you already use Python and want structured outputs without a new paradigm
  • LiteLLM — a thin wrapper for switching between AI providers (solves Stage 7 alone)
  • BAML and TensorZero — for TypeScript teams who can't adopt Python-only tools
  • LangGraph — a more established option for building AI agent workflows

Try it yourself

If you're building anything with AI APIs and recognize yourself in those 7 stages, DSPy is free and takes one command to install:

pip install -U dspy

It works with OpenAI, Anthropic (Claude), Google Gemini, and local models via Ollama. The official tutorials walk through building classifiers, RAG pipelines, and multi-tool agents step by step.

The real takeaway from Payne's post isn't "use DSPy" — it's that whether you adopt the framework or not, the patterns it enforces (typed inputs/outputs, composable building blocks, automated testing) are not optional. Every team ends up needing them. The only question is whether you build them deliberately or discover them painfully.
