AI for Automation
2026-04-26 · linux · open-source · ai-automation · kernel · security

AI bots silently killed Linux ISDN — 138,000 lines deleted

AI tools auto-filing fake bug reports forced Linux 7.1 maintainers to delete 138,000 lines of code, wiping the entire ISDN subsystem. GCC is now forming a working group to set policy on AI-generated contributions.


The Linux 7.1 kernel (the core software managing hardware for most of the world’s servers, Android phones, and cloud infrastructure) shipped this month with 138,000 fewer lines of code than its predecessor — not because engineers optimized it, but because AI tools generated so much automated noise that human maintainers had no choice but to delete entire subsystems. This is not routine cleanup: it is a documented case of AI automation creating enough chaos in a foundational open-source project that the humans maintaining it had to perform emergency surgery.

The story comes from Phoronix, the long-running Linux hardware benchmarking and open-source news site that covers kernel development more closely than almost any other publication. While mainstream tech news was focused elsewhere, Phoronix documented a pattern that affects billions of devices worldwide.

The AI Noise Problem That Forced 138,000 Lines Gone

The deleted code belonged to the ISDN (Integrated Services Digital Network — a telephony standard for dial-up data connections popular in the 1990s and early 2000s) subsystem. ISDN hardware has not been relevant in consumer or enterprise computing for over two decades, but the drivers sat in the Linux kernel anyway — maintained just enough to avoid breaking things, mostly ignored.

Then AI and LLM (Large Language Model — automated systems that generate text by predicting patterns in training data) tools began interacting with the Linux bug tracker. These tools scan codebases, identify patterns that look like potential issues, and file reports automatically. They do not understand context. They do not know that ISDN is obsolete. They simply see code that matches their bug-detection patterns and file reports — at scale, without any human judgment applied.
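This failure mode can be illustrated in a few lines of shell: a context-free scanner that emits one "report" per textual pattern match, regardless of whether the code is maintained or even relevant. The pattern, paths, and function name below are purely illustrative, not the behavior of any specific tool.

```shell
# Toy model of context-free automated bug filing: every textual match of a
# "suspicious" pattern becomes a report. Filing cost is zero, so report
# volume scales with codebase size, not with actual defect count.
scan_and_file() {
  src_dir="$1"
  grep -rn "strcpy(" "$src_dir" 2>/dev/null |
    while IFS=: read -r file line _rest; do
      echo "REPORT: possible unsafe copy at $file:$line"
    done
}
```

Point a scanner like this at a large, obsolete driver tree and it will happily file hundreds of reports against code nobody intends to fix, which is exactly the economics the kernel maintainers were responding to.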

The volume of these auto-generated reports against the ISDN subsystem became unmanageable. Rather than spending maintainer time triaging noise indefinitely, the Linux kernel team made a decisive call: remove the subsystem entirely. All 138,000 lines. The same kernel update also removed drivers for obsolete PCMCIA (Personal Computer Memory Card International Association — a card expansion format used in 1990s laptops) hardware, following identical logic.
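For readers who want to verify a removal like this in a kernel git tree, the deleted line count can be tallied from git's machine-readable `--numstat` output. A minimal sketch (the ref names and path in the usage comment are placeholders, not the kernel's actual tags):

```shell
# Sum the lines deleted under a path between two git refs.
# --numstat prints "added<TAB>deleted<TAB>file" for each changed file.
lines_removed() {
  from="$1"; to="$2"; path="$3"
  git diff --numstat "$from..$to" -- "$path" |
    awk '{ del += $2 } END { print del + 0 }'
}

# Hypothetical usage against a kernel checkout:
#   lines_removed v7.0 v7.1 drivers/isdn/
```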

The practical result is that Linux 7.1 is leaner — but the path to that result is a direct warning about how AI tools interact with human-managed infrastructure projects at scale.

Phoronix — premier Linux benchmarking and open-source news site

GCC Responds: A Working Group for the AI Era

Linux is not the only foundational open-source project scrambling to establish rules. GCC (the GNU Compiler Collection — the software that translates human-written source code into the machine instructions your computer actually runs) has established a working group specifically to determine its policy on AI and LLM-generated contributions.

This matters because GCC is infrastructure-level software. A subtle bug introduced by an AI-generated patch does not stay contained — it propagates into every program compiled with that version of GCC, potentially affecting millions of deployments simultaneously. The stakes for getting the policy wrong are asymmetric: the benefit of faster AI contributions is incremental, but the cost of a single corrupted compilation toolchain can be catastrophic.

The working group is expected to address three core tensions:

  • Copyright ambiguity: AI-generated code exists in a legal gray zone. When a model trained on GPL (General Public License — a software license requiring that derivatives remain open-source) code produces new code, ownership and licensing obligations remain genuinely unclear in most jurisdictions.
  • Review burden versus contribution volume: If AI tools submit patches faster than humans can verify them, the bottleneck shifts from writing code to reviewing it — potentially making maintainers’ jobs harder, not easier.
  • Regression risk: Regressions (new bugs introduced while attempting to fix existing ones) in a compiler are especially dangerous because the compiler is used to build every other piece of software on the system.

The fact that GCC needs a formal working group to answer these questions — and that the Linux kernel already took drastic action — suggests the industry's assumption that AI simply accelerates open-source development deserves a much harder look at the actual evidence.

Ubuntu 26.04 LTS and Microsoft’s Quiet Pivot to Fedora

Amid these tensions, the Linux ecosystem shipped two significant milestones this week.

Ubuntu 26.04 LTS was officially released on April 23, 2026. The LTS designation (Long-Term Support — a release backed by five years of security patches and updates) makes this the version enterprise teams, cloud providers, and risk-averse system administrators will deploy in production. It ships on the Linux 7.0 kernel, meaning the 138,000-line AI-noise cleanup arrives in a later Ubuntu release.

Fedora 44 releases next week — and its timing is notable because reports indicate that Microsoft is rebasing Azure Linux (its in-house Linux distribution for running Azure cloud infrastructure) onto the Fedora codebase. If confirmed, this represents Microsoft’s acknowledgment that maintaining an entirely independent Linux fork carries diminishing returns, and that standardizing on a community-driven base like Fedora is more sustainable long-term.

For Linux users, a Microsoft-Fedora alignment has a counterintuitive upside: enterprise investment flowing into Fedora’s upstream quality could accelerate improvements that eventually reach all Fedora-based systems, including Red Hat Enterprise Linux and its derivatives.

Tux — the Linux mascot representing the kernel running billions of devices worldwide

Benchmarks: AMD’s $899 Processor and Two Years of Hardware Catch-Up

Phoronix’s hardware coverage this week includes over 300 benchmarks on the AMD Ryzen 9 9950X3D2 Dual Edition, priced at $899 USD. The “X3D” designation indicates AMD’s 3D V-Cache technology (an additional layer of high-speed cache memory physically stacked directly on top of the processor die, dramatically reducing the time needed to access frequently used data). Early results indicate strong gains in memory-intensive workloads where cache size matters more than raw clock speed.

On the Apple Silicon front, the Asahi Linux project — which reverse-engineers support for Apple chips on Linux — has brought its Apple M3 support to near-alpha quality, approaching the level of stability the M1 had at its initial release. The project also shipped its first updated installer in nearly 2 years. Separately, the legacy NVIDIA xf86-video-nv driver (the older open-source display driver for pre-modern NVIDIA graphics cards) received its first update in over 2 years.

For developers wanting to run standardized Linux performance benchmarks, the Phoronix Test Suite — a free, cross-platform framework for running reproducible, standardized performance tests and comparing results across hardware — is available through most package managers. You can explore how it fits into your workflow via the AI Automation Guides:

# Ubuntu/Debian:
sudo apt install phoronix-test-suite

# Red Hat/Fedora:
sudo dnf install phoronix-test-suite

# Or from source:
git clone https://github.com/phoronix-test-suite/phoronix-test-suite
cd phoronix-test-suite
./install-phoronix-test-suite.sh
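Once installed, a single `benchmark` invocation downloads, installs, and runs a test profile. The wrapper below is a sketch: it prints the command it would run and only executes PTS when explicitly asked and when the tool is present. The `pts/compress-7zip` profile name is one example from the public OpenBenchmarking catalog.

```shell
# Dry-run-by-default wrapper around a Phoronix Test Suite benchmark run.
# Set RUN=1 to actually execute (requires phoronix-test-suite on PATH).
pts_bench() {
  profile="$1"
  echo "phoronix-test-suite benchmark $profile"
  if [ "${RUN:-0}" = "1" ] && command -v phoronix-test-suite >/dev/null 2>&1; then
    phoronix-test-suite benchmark "$profile"
  fi
}

pts_bench pts/compress-7zip
```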

The Real Lesson from the ISDN Deletion

The 138,000-line deletion is a concrete data point in a debate that has mostly been theoretical: does AI tooling add net value to open-source maintenance, or does it shift burden in ways that are not immediately visible to the people promoting these tools?

The ISDN case makes the cost visible. Here is what the pattern actually demonstrates:

  • Zero-cost actions change incentive structures: When filing a bug report costs an AI tool nothing, volume becomes effectively unlimited. Human reviewers still operate under the same fixed time constraints as always.
  • Dormant code becomes a liability: Code that was “harmless but unused” becomes a high-noise target for automated systems that flag it repeatedly, forcing cleanup that might have happened in 5 years to happen now — under crisis pressure.
  • Governance rules are being written mid-crisis: Neither the Linux kernel nor GCC had explicit AI contribution policies before this became an operational problem. Both projects are now writing those rules while actively managing the damage.

For developers and teams using AI tools that interact with external repositories, issue trackers, or open-source projects: verify whether the project has a published AI contribution policy before automating any interactions. The Linux kernel maintainers deleted 138,000 lines of code as their response to the noise. Other projects are still deciding how they will respond — and some will be far less forgiving. The AI Automation Guides cover which external interactions are safe to automate and which carry risks worth understanding first.
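A cheap first safeguard for teams automating against external repositories is to check whether the project documents any AI contribution policy before filing anything. The sketch below is a heuristic only: the filenames and keywords are guesses, since no standard location for such policies exists yet.

```shell
# Heuristic pre-flight check: look for AI/LLM contribution language in the
# usual policy files. Absence of a match means "do not automate", not "safe".
check_ai_policy() {
  repo_dir="$1"
  for f in AI_POLICY.md CONTRIBUTING.md .github/CONTRIBUTING.md; do
    if [ -f "$repo_dir/$f" ] && \
       grep -qiE 'LLM|AI[ -]generated|machine[ -]generated' "$repo_dir/$f"; then
      echo "policy hints found in $f"
      return 0
    fi
  done
  echo "no AI contribution policy found; treat automation as unwelcome"
  return 1
}
```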

