AI for Automation
2026-05-01 · AI automation · open source AI · Bun JavaScript runtime · Zig programming language · Anthropic · LLM tools · AI policy · JavaScript runtime

Zig AI Ban Leaves Bun's 4x Compile Speed Gain Stranded

Zig bans all AI contributions. Anthropic's Bun fork hit a 4x compile speedup — but can't upstream it. Inside open source's widening AI automation divide.


In December 2025, Anthropic — the company behind Claude — quietly acquired Bun, a high-performance JavaScript runtime built on Zig (a low-level systems programming language similar to C, prized for speed and memory safety). Within months, Bun's engineering team achieved a 4x compile-time speedup through parallel semantic analysis (a technique that checks multiple sections of code simultaneously instead of sequentially). There's one problem: Zig bans all AI-assisted contributions, and Bun relies heavily on AI automation tools. That 4x improvement is now stranded in a private fork.

This isn't a hypothetical conflict. It's the most concrete example yet of what happens when AI adoption collides head-on with open-source craft philosophy — and it has direct consequences for millions of JavaScript developers running Bun today.


The AI Policy That Split the Zig Fork

Zig's Code of Conduct is unusually specific: "No LLMs for issues. No LLMs for pull requests. No LLMs for comments on the bug tracker, including translation." Every contribution pathway is explicitly blocked — including using AI just to translate a comment from another language. No exceptions are listed.

The reasoning comes from Loris Cro, VP of Community at the Zig Software Foundation, who introduced the concept of Contributor Poker: the idea that open-source maintainers should "play the person, not the cards." Under this framework, a contribution's value isn't just the code itself — it's the relationship with the human who created it.

Cro's argument: "The time the Zig team spends reviewing your work does nothing to help them add new, confident, trustworthy contributors to their overall project." When an LLM (large language model — an AI system trained on vast datasets to generate code and writing) submits a pull request, reviewer time gets spent but no human learns, grows, or becomes a trusted maintainer. The pipeline for developing skilled contributors gets bypassed entirely.

The philosophy is coherent and principled. It's also now carrying a measurable cost.

Bun's 4x Compile Speed Win Locked in a Private Fork

Bun's team implemented parallel semantic analysis (a compiler optimization where multiple code modules are type-checked and analyzed simultaneously rather than one after another) into their internal Zig fork. The result was a 4x improvement in compile times — a breakthrough for a runtime already celebrated as the fastest in the JavaScript ecosystem.
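To make the idea concrete, here is a toy sketch of sequential versus parallel analysis in Python. This is not Zig's or Bun's actual implementation; the `analyze` function, the module dictionaries, and their fields are all invented for illustration. The point is only that once modules have no ordering dependency between them, they can be checked concurrently with identical results.

```python
from concurrent.futures import ThreadPoolExecutor

def analyze(module):
    """Toy stand-in for semantic analysis: count declared names."""
    return module["name"], len(module["decls"])

# Hypothetical module list; real compilers operate on parsed ASTs.
modules = [
    {"name": "std.mem", "decls": ["copy", "eql", "indexOf"]},
    {"name": "std.fs",  "decls": ["openFile", "cwd"]},
    {"name": "main",    "decls": ["main"]},
]

# Sequential baseline: one module at a time.
sequential = [analyze(m) for m in modules]

# Parallel: independent modules are analyzed concurrently; the speedup
# comes from there being no required ordering between modules once
# their imports are resolved.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(analyze, modules))

assert sequential == parallel  # same results, different wall-clock profile
```

The correctness condition, same output regardless of schedule, is exactly what makes the optimization safe but also what gives it "language-level implications": the compiler must guarantee that analyses really are independent.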

Normally, a performance gain this significant would be contributed back to the upstream project (Zig itself), benefiting the entire community of Zig developers. But Bun's AI-integrated development process means those contributions can't be submitted under Zig's policy, regardless of technical quality.

Two factors compound the problem:

  • Architectural depth: Parallel semantic analysis has language-level implications that require careful integration — making it nontrivial to upstream even without the policy barrier
  • Ownership structure: Bun is now owned by Anthropic, a company whose core product is AI assistance, making its development process structurally incompatible with an anti-AI contribution policy

The result is a permanent fork in capability: Bun users get 4x faster builds. Zig's ecosystem doesn't. Two communities that once shared a codebase now operate with diverging performance baselines, and the gap will widen over time.

LLM 0.32: An AI Automation Tool Built to Outlast Model Churn

While infrastructure communities debate AI policy, individual developers are solving a different problem: how do you build tooling that doesn't go stale every few months as new model capabilities land?

Simon Willison's answer is the LLM library — a Python library and command-line tool (a program you run directly in your terminal) that launched in April 2023 as a simple text-in, text-out utility. Version 0.32a0 is a major backward-compatible refactor (a rewrite that preserves all existing behavior while rebuilding the internal structure) that brings LLM in line with how modern models actually work.

The core upgrade: LLM now uses a messages-based API (a conversation format that organizes input as a list of role-labeled messages — "user," "assistant," "system") that mirrors OpenAI's chat completions format. This makes it possible to feed existing conversation history into a model call without depending on SQLite (a lightweight local database) as an intermediary — a limitation that made earlier versions cumbersome for stateless applications.
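The shape in question looks roughly like this. The field names below ("role", "content") come from OpenAI's chat completions format, which the article says LLM now mirrors; they are a sketch of the format, not LLM's exact internal types.

```python
# Conversation history as a list of role-labeled messages, in the
# OpenAI chat-completions shape the article describes.
conversation = [
    {"role": "system",    "content": "You are a concise assistant."},
    {"role": "user",      "content": "What is parallel semantic analysis?"},
    {"role": "assistant", "content": "Analyzing independent modules concurrently."},
]

# Because history is plain data, a stateless app can append the next
# turn and resend the whole list, with no SQLite intermediary required.
conversation.append({"role": "user", "content": "Why does it speed up compiles?"})

roles = [m["role"] for m in conversation]
```

That last step is the practical win: earlier LLM versions had to round-trip history through a local database, whereas plain message lists can come from anywhere, a web session, a queue, or an in-memory cache.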

New capabilities in LLM 0.32:

  • Attachments — pass images, audio, or video directly alongside a text prompt
  • Streaming parts — receive mixed content types (text, reasoning tokens, tool call results) in a single streaming response
  • Structured output — use schemas (predefined JSON templates) to guarantee a consistent, parseable output format from any supported model
  • Tool calls — let models invoke external functions mid-conversation and incorporate the results before responding
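Structured output is the easiest of these to picture. Recent releases of the library accept a schema argument on prompts (per the project's documentation), which needs a live API key to exercise; the sketch below instead shows the schema shape itself and the parse step, with a simulated model reply standing in for a real one.

```python
import json

# A JSON Schema constraining the model's output to a fixed shape.
schema = {
    "type": "object",
    "properties": {
        "language": {"type": "string"},
        "speedup":  {"type": "number"},
    },
    "required": ["language", "speedup"],
}

# With structured output, the model's reply is guaranteed to parse
# against the schema; this string simulates such a reply.
raw_reply = '{"language": "Zig", "speedup": 4.0}'
result = json.loads(raw_reply)

assert set(schema["required"]) <= set(result.keys())
```

The guarantee matters because downstream code can consume `result` directly, with no regex scraping of free-form model text.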

Getting started requires one command:

pip install llm
llm "What is parallel semantic analysis?"

Our AI automation setup guide walks through configuring LLM with OpenAI, Anthropic, and local model providers step by step.

Or in Python, to switch between models with minimal code changes:

import llm

# Any installed model can be addressed by name; switching providers
# is a one-line change to the model string.
model = llm.get_model("gpt-4o")
response = model.prompt("Explain Zig's Contributor Poker philosophy")
print(response.text())

AI Automation vs. Open Source Identity: What's Really Driving the Divide

What makes the Bun/Zig split unusual is that it isn't a disagreement about whether AI-generated code is good or bad. It's about institutional identity. Zig's founders believe the process of developing human contributors is the actual product of open-source maintainership — and AI short-circuits that process in ways no code review can fix. Anthropic believes AI assistance is a categorical net positive for software development. Both positions are internally consistent.

When they collide in a shared codebase, technical quality becomes irrelevant. The split is ideological, and it will persist.

Three trends worth watching as this plays out through the rest of 2026:

  1. Fork proliferation. More projects will adopt explicit AI contribution bans as AI-generated PRs increase. Expect permanent capability divergences between AI-assisted forks and upstream projects — the Bun/Zig situation will not be unique.
  2. Anthropic's infrastructure footprint. The Bun acquisition signals Anthropic is building beyond model APIs. Millions of JavaScript developers now run on Anthropic's runtime without necessarily knowing it.
  3. Abstraction tools as the durable layer. Libraries like Willison's LLM succeed precisely because they insulate developers from the instability of individual model APIs. The projects that build stable interfaces over a fast-changing model landscape will prove the most enduring.

If you build with Python and want a unified interface across multiple AI providers, explore AI automation tools and guides — covering the LLM library with OpenAI, Anthropic, and dozens of local models through plugins. If you contribute to open-source projects, Zig's Contributor Poker framework is worth reading before the first AI-generated PR lands in your inbox.

