2026-04-06 · Tags: AI coding agent, Claude Code alternative, AI automation, vibe coding, open source AI, Goose AI, free AI tools, AI code editor

Goose: Free AI Coding Agent vs Claude Code (37,400 Stars)

Block's free Goose AI coder hits 37,400 GitHub stars. One-command install, 25+ models, works offline. Compare free vs Claude Code's $20–$200/month.


Jack Dorsey's company Block quietly shipped a free alternative to one of the priciest AI coding subscriptions on the market, and developers have taken notice. Goose, Block's open-source AI coding agent, is built for AI automation workflows and the new wave of vibe coding, and has crossed 37,400 GitHub stars since launching in January 2025. Claude Code, Anthropic's competing product, starts at $20/month and climbs to $200/month for heavy users. For developers watching their budgets, that gap is hard to ignore.

[Image: Goose open-source AI coding agent by Block — free Claude Code alternative for AI automation]

The Free AI Coding Tool Block Built in Rust — While Everyone Watched Claude

Goose was released on January 28, 2025, by Block's Open Source Program Office — the same company behind Square, Cash App, and Afterpay. The project is written primarily in Rust (58.3%) and TypeScript (34.1%). Rust is a programming language prized for speed and memory safety (it prevents the kind of crashes and security holes that plague older systems code), which makes Goose fast even when managing complex, multi-step coding workflows.

By the numbers, as of April 2026:

  • 37,400+ GitHub stars (a rough measure of developer interest)
  • 3,600+ forks (developers actively building on top of it)
  • 4,081 code commits
  • 126 versioned releases — the latest, v1.29.1, shipped April 3, 2026

Block CTO Dhanji Prasanna framed the open-source launch simply: "Making goose open source creates a framework for new heights of invention and growth."

Why 37,400 Developers Switched: Model Freedom

The biggest reason to pick Goose over Claude Code isn't just price — it's model flexibility. Claude Code is locked exclusively to Anthropic's Claude models. Goose works with 25+ AI providers, including:

  • Cloud models: OpenAI (GPT-5), Anthropic Claude, Google Gemini, Azure OpenAI, Amazon Bedrock, GitHub Copilot, OpenRouter (200+ models)
  • Local models: Ollama, LM Studio, llama.cpp, Docker Model Runner — entirely offline, no API costs, no source code leaving your machine
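
As a concrete sketch of the fully offline path: the commands below assume Ollama is already installed and that Goose reads the `GOOSE_PROVIDER`/`GOOSE_MODEL` environment variables (running `goose configure` interactively sets the same values); the model name is just an example.

```shell
# Pull any local coding model into Ollama (example model name).
ollama pull qwen2.5-coder:7b

# Point Goose at the local provider — env var names are an assumption;
# `goose configure` offers the same choices via interactive prompts.
export GOOSE_PROVIDER=ollama
export GOOSE_MODEL=qwen2.5-coder:7b

# Start a session: no API key, no code leaves the machine.
goose session
```

Performance here depends entirely on local hardware, matching the trade-off described above.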

Goose also connects to over 3,000 MCP servers. MCP (Model Context Protocol) is an open standard — think of it like USB for AI tools — that lets coding agents plug into GitHub, Jira, Slack, Google Drive, Docker, Kubernetes, and thousands of other services with minimal setup. Claude Code supports MCP too, but Goose's ecosystem is considerably broader.
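
To make that concrete, here is one hypothetical way to attach an MCP server to a single session. The `--with-extension` flag is an assumption (check `goose session --help`); `@modelcontextprotocol/server-github` is the reference GitHub MCP server published on npm, and it expects a GitHub token in its environment.

```shell
# Export a GitHub token for the MCP server to use (placeholder value).
export GITHUB_PERSONAL_ACCESS_TOKEN="<your-token>"

# Launch a Goose session with the GitHub MCP server attached over stdio.
# Flag name is an assumption — permanent extensions can instead be added
# via `goose configure`.
goose session --with-extension "npx -y @modelcontextprotocol/server-github"
```

Once connected, the agent can list issues, open pull requests, and read repository contents through the server's tools instead of shelling out to `git` alone.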

Install in One Command — No Subscription Required

On macOS or Linux, paste this into your terminal:

curl -fsSL https://github.com/block/goose/releases/download/stable/download_cli.sh | bash

Then configure your AI provider of choice:

goose configure

Launch an interactive coding session inside any project folder:

cd your-project && goose session

Mac users can also install the full desktop app: brew install --cask block-goose. No credit card. No waitlist. No subscription.

Goose also supports Recipes — reusable YAML files (plain-text configuration) that package multi-step workflows. Define a Recipe for "create a React app with tests and CI pipeline" once, and replay it across any project with a single command. This is the kind of team-scale AI automation workflow that moves Goose beyond a solo developer toy.
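
A minimal sketch of what that looks like in practice — the YAML field names and the `goose run --recipe` invocation below are assumptions based on Goose's documented Recipe concept, so treat them as illustrative rather than exact schema:

```shell
# Write a hypothetical recipe file (field names are illustrative).
cat > react-app.yaml <<'EOF'
version: 1.0.0
title: React app with tests and CI
description: Scaffold a React project with unit tests and a CI pipeline
prompt: |
  Create a new React app, add unit tests for each component,
  and generate a CI pipeline configuration that runs them.
EOF

# Replay the workflow in any project — flag name is an assumption;
# see `goose run --help` for the exact invocation.
goose run --recipe react-app.yaml
```

The point is the reuse model: the multi-step instructions live in version-controlled plain text, so a whole team can share and replay them.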

The Claude Code vs Goose Benchmark Gap Nobody's Talking About

Here's where Goose fans need to face a hard number. On SWE-bench Verified — the industry-standard test that measures how accurately an AI can resolve real, unmodified GitHub issues from popular open-source projects — the gap between Goose and Claude Code is significant:

  • Goose + Claude Sonnet (Anthropic's mid-tier model): ~45%
  • Claude Code + Claude Sonnet: 72.7%
  • Claude Code + Claude Opus 4.5: 80.9% (current peak)

The critical detail: the first two comparisons use the exact same underlying AI model. The 27.7-percentage-point gap exists because Claude Code uses Anthropic's proprietary agent loop — the reasoning-and-planning engine that decides how the AI breaks tasks into sub-steps, calls tools, and verifies results. Anthropic has specifically fine-tuned Claude's behavior to work with Claude Code's orchestration patterns in ways a general-purpose framework like Goose simply cannot replicate.

The gap narrows with more powerful models. Pairing Goose with Claude Opus 4.5 via API brings results to roughly comparable territory — but Opus 4.5 API costs can reach $500+/month for heavy use, eliminating the cost advantage entirely.

[Image: Claude Code AI coding terminal by Anthropic — SWE-bench benchmark comparison against Goose AI]

What Free Goose AI Actually Costs vs Claude Code Per Month

Goose the software costs nothing. Your AI model bills are a different story. Here's the honest breakdown:

  • Local models (Ollama, llama.cpp): $0/month. Fully offline. Performance is limited by your hardware — fine for refactoring, weaker for complex multi-file reasoning.
  • Mid-tier cloud models (Claude Haiku, GPT-4o mini): Roughly $5–$20/month for typical developer use.
  • Frontier models (Claude Sonnet, GPT-5, Gemini Ultra): $30–$100/month for moderate use; $500+ for power users running long agentic sessions.

Claude Code's flat-rate pricing is simpler to budget: $20/month (Pro tier) for standard terminal access, $100/month (Max 5x) or $200/month (Max 20x) for high-volume users. For developers who need frontier model performance every day, Claude Code's all-in subscription can actually be cheaper than separate API bills, while delivering 27.7 percentage points better benchmark accuracy on the same underlying model.
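
To sanity-check those ranges, here is a back-of-envelope estimate. The token volumes and per-million-token rates below are illustrative assumptions for a mid-range frontier model, not any provider's actual prices:

```shell
# Assume ~30M input + 5M output tokens/month of agentic coding, at
# hypothetical rates of $3 per million input and $15 per million output.
input_cost=$(( 30 * 3 ))    # 30M tokens * $3/M  = $90
output_cost=$(( 5 * 15 ))   # 5M tokens  * $15/M = $75
echo "estimated monthly API bill: \$$(( input_cost + output_cost ))"
```

Thirty million input tokens a month is plausible for daily agentic sessions, since each tool call re-sends large context windows; that is how API bills land in the $100+ range and, for heavy Opus-class usage, climb past $500.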

Goose or Claude Code: A Practical Decision

Goose is the better pick if:

  1. You need fully private, offline AI coding (no code sent to any cloud server)
  2. You're already paying for GPT-5 or Gemini API access and want to reuse it for coding
  3. Your security policy prohibits sending source code to Anthropic's servers
  4. You want custom automation Recipes on open-source infrastructure you fully control
  5. Model flexibility and vendor independence matter more than peak benchmark accuracy

Claude Code is the better pick if:

  1. SWE-bench accuracy (80.9%) is non-negotiable — nothing currently matches it
  2. You want native IDE extensions for VS Code, JetBrains, or Cursor with zero configuration
  3. You need scheduled tasks that run on Anthropic's servers when your laptop is off
  4. GitHub Actions and GitLab CI/CD integration out of the box is a requirement
  5. Predictable flat-rate billing ($20–$200/month) beats variable API costs

The honest summary: 37,400 GitHub stars tell you Goose is real competition. The 27-point SWE-bench gap tells you it's not a replacement yet. For privacy-focused teams, local model enthusiasts, and developers who want to escape vendor lock-in, Goose is a serious tool worth installing today. For teams shipping production code who need the highest accuracy, Claude Code still sets the benchmark.
