Google, Anthropic & OpenAI Join Forces — $12.5M Investment to Find and Fix Open Source Security Vulnerabilities with AI
Google, Anthropic, OpenAI, Amazon, Microsoft, and two other tech giants are pooling $12.5 million into the Linux Foundation. The companies will also open-source AI tools that automatically discover and patch decades-old security vulnerabilities.
The backbone of the internet, smartphone apps, and cloud services we use every day is open source software (publicly available programs anyone can use for free). But what happens when that backbone has security holes? On March 17, 2026, seven leading AI companies announced a combined $12.5 million investment to tackle exactly this problem.
Why Seven Competing AI Giants Are Teaming Up
The companies involved are Google, Google DeepMind, Anthropic (makers of Claude), OpenAI (makers of ChatGPT), Amazon Web Services (AWS), Microsoft, and GitHub. These fierce competitors in the AI market have set aside their rivalries to tackle a shared challenge: securing the open source software the entire internet depends on.
The funds will be managed by the Alpha-Omega Project and the OpenSSF (Open Source Security Foundation), both under the Linux Foundation. Alpha-Omega has already distributed over $20 million across more than 70 security grants to open source projects worldwide.
• Seven major AI companies invest a combined $12.5 million in open source security
• Google DeepMind's AI tool Big Sleep uncovered a security vulnerability that had been hiding for 20 years
• AI is evolving beyond just finding bugs — it can now automatically generate fixes as well
AI Uncovered a Security Hole That Had Been Hiding for 20 Years
Why is this investment necessary? The track record of Google's existing AI security tools tells the story.
Big Sleep is an AI security tool built by Google DeepMind. Just like a human developer reading through code to find bugs, Big Sleep analyzes source code to automatically detect security vulnerabilities that hackers could exploit. It already found a critical, exploitable bug in SQLite — one of the most widely used database engines in the world.
A companion tool called CodeMender takes it a step further: it automatically generates patches to fix the bugs that Big Sleep finds. Google confirmed both tools are already being used to harden the Chrome browser.
Google's OSS-Fuzz (an automated open source security scanning tool) was enhanced with AI, delivering these results:
• Expanded test coverage by over 370,000 lines of code across 272 C/C++ open source projects
• Discovered 26 new security vulnerabilities
• One of them was a bug in the cryptography library OpenSSL that had gone undetected for 20 years (CVE-2024-9143) — something no human-written test had ever caught
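To make "fuzzing" concrete: a fuzzer hammers a program with random or mutated inputs and watches for crashes that signal a bug. The toy sketch below (illustrative only — OSS-Fuzz itself is a large C/C++ infrastructure, and the `parse_record` parser and its planted bug are invented for this example) shows the core idea that AI now helps automate at scale.

```python
import random

def parse_record(data: bytes) -> bytes:
    """Toy parser with a planted bug: it trusts a length byte in the input."""
    if len(data) < 2:
        raise ValueError("record too short")   # graceful, expected rejection
    declared_len = data[0]
    payload = data[1:]
    # Bug: no bounds check -- an attacker-controlled length can index past the end.
    if payload[declared_len - 1] != 0:         # IndexError when declared_len > len(payload)
        raise ValueError("record not zero-terminated")
    return payload[:declared_len]

def tiny_fuzzer(target, iterations=20_000, seed=1234):
    """Throw random byte strings at `target`; collect inputs that crash it."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
        try:
            target(data)
        except ValueError:
            pass                               # clean rejection, not a bug
        except Exception as exc:               # unexpected crash => potential vulnerability
            crashes.append((data, type(exc).__name__))
    return crashes

crashes = tiny_fuzzer(parse_record)
print(f"found {len(crashes)} crashing inputs")
```

Writing a good fuzz harness for each new library has traditionally required a human expert; the OSS-Fuzz results above came from using AI to generate such harnesses automatically, extending coverage into code no human-written test had reached.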
Source: Google Security Blog
The Core Problem: AI Finds Bugs Faster Than Humans Can Fix Them
Paradoxically, the very effectiveness of AI security tools creates a problem of its own. The sheer volume of security reports AI generates far exceeds the capacity of open source developers — most of whom are unpaid volunteers — to review and act on them.
Solving exactly this bottleneck is a central goal of the investment:
• AI security tooling — going beyond detection to automatically generating fix patches
• Direct support for open source developers — providing tools and training to efficiently handle AI-generated security reports
• Expanding Sec-Gemini — making Google's security-focused AI model available for free to open source projects
Alpha-Omega co-founder Michael Winser described it as "giving every one of the hundreds of thousands of open source projects their own AI security expert." AWS Director Mark Ryland added: "We're not just writing a check — we're committed to delivering the tools and expertise these projects actually need."
What Does This Mean for You?
Even if you never touch open source software directly, everyone who uses the internet benefits from this investment. The Chrome browser, Android phones, and nearly every web service you use are built on open source code.
What It Means for You Specifically
If you're a developer — You can express interest in the Sec-Gemini research program to get free access to AI security tools for your own open source projects.
If you're a general user — No action needed on your end. The open source libraries powering Chrome, Android, and other services will be significantly more secure thanks to AI. Bugs that hid for over 20 years — like the OpenSSL vulnerability found in 2024 — will increasingly be discovered and fixed automatically.
If you're a corporate security professional — The open source components in your stack will receive security patches faster than ever before, at no additional cost to you. Upstream security just got a major upgrade.
The AI Security Race: Defense Is Pulling Ahead
What makes this announcement significant is what it represents: seven companies that build AI are declaring they will use AI to defend against the new threats AI itself has enabled. With hackers already leveraging AI to craft attack code, the defensive side is now matching that pace with AI of its own.
Google noted that "Big Sleep and CodeMender have already delivered remarkable results internally" and that "expanding this technology across the entire open source ecosystem is the natural next step." In an era where AI writes code, a structure is taking shape where AI also takes responsibility for keeping that code safe.