A credential-stealing virus just hit one of AI's most popular tools
LiteLLM, an open-source tool with 40K+ GitHub stars used by companies including Stripe and Google, was infected with malware that silently stole API keys. Here's what happened and what to do.
If you use AI through Python, you may already be affected. LiteLLM — a widely used open-source tool that lets developers connect to 100+ AI models (Claude, GPT, Gemini, and more) through a single interface — was just caught distributing credential-stealing malware through its official Python package.
The compromised versions, 1.82.7 and 1.82.8, contained a hidden file that silently harvested API keys for OpenAI, Anthropic, and other AI services, then sent them to an external server. The project has 40,200 GitHub stars and is used by organizations including Stripe, Google, Netflix, and the OpenAI Agents SDK.
How the attack worked
The malware was hidden inside a file called litellm_init.pth — a path-configuration file that Python's site machinery processes every time the interpreter starts, executing any line that begins with an import statement. This means the malicious code ran silently in the background on every Python launch, not just when you imported LiteLLM, even if you weren't actively using the tool.
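To see the mechanism itself (not the malware), here's a harmless sketch. A line in a .pth file that begins with "import" is executed when Python's site module processes the directory containing it; the filename demo_init.pth and the environment variable PTH_DEMO below are made up for illustration:

```python
import os
import site
import tempfile

# Write a .pth file whose single line starts with "import" —
# the site module exec()s such lines when it processes the directory.
d = tempfile.mkdtemp()
with open(os.path.join(d, "demo_init.pth"), "w") as f:
    f.write("import os; os.environ['PTH_DEMO'] = 'ran'\n")

# site.addsitedir() processes .pth files the same way site-packages
# is processed at interpreter startup.
site.addsitedir(d)

print(os.environ.get("PTH_DEMO"))  # the .pth line has already executed
```

In the real attack the .pth file sat inside site-packages, so this processing happened automatically at every interpreter start, with no call to addsitedir() needed.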
Here's what the malware did, step by step:
1. Collected all environment variables on your system — including API keys for OpenAI, Anthropic, and other services
2. Encrypted the stolen data with AES-256 before sending it out
3. Sent everything to models.litellm.cloud disguised as a normal file transfer
4. Ran silently with no visible output — no errors, no warnings, nothing
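Step 1 is the scary part: any code running inside your Python process can read every environment variable. A minimal sketch of a defensive check — which of your own env var names an in-process script could have harvested — using an illustrative (not exhaustive) pattern list:

```python
import os

# Substrings commonly found in AI-service credential names.
# Illustrative list, not the malware's actual logic.
PATTERNS = ("OPENAI", "ANTHROPIC", "GEMINI", "COHERE", "API_KEY", "_TOKEN")

def exposed_credentials(environ=os.environ):
    """Return env var names that look like credentials and would have
    been visible to any code running in this Python process."""
    return sorted(
        name for name in environ
        if any(p in name.upper() for p in PATTERNS)
    )

print(exposed_credentials())
```

Run this in the same shell you use for development: every name it prints is a key you should rotate if you installed an affected version.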
Who's behind it — and how it happened
According to the GitHub security reports filed on March 24, 2026, the attack appears to be a PyPI supply chain compromise — meaning the attacker likely gained access to the maintainer's account on PyPI (the Python Package Index, Python's official package repository — similar to an app store for code) and uploaded poisoned versions of the package.
One of the security issues (Issue #24514) explicitly states: "Maintainer Account is compromised." This suggests it wasn't an inside job — someone broke into the maintainer's account and uploaded the infected versions.
The malware was first discovered when a developer using a related tool called nanobot-ai (which depends on LiteLLM) noticed suspicious network activity. The community quickly identified the malicious file and raised the alarm.
Who's affected
Anyone who installed or updated LiteLLM to version 1.82.7 or 1.82.8 via pip (Python's package installer) in the last few days could be compromised. The impact is especially serious because:
- LiteLLM is everywhere — it's the standard middleware for companies running multiple AI models
- API keys mean money — stolen keys can be used to run up thousands of dollars in AI usage charges on your account
- The malware runs on every Python start — not just when you use LiteLLM, making it harder to detect
What you should do right now
If you use LiteLLM, take these steps immediately:
1. Check your installed version:
pip show litellm
2. If you're on version 1.82.7 or 1.82.8, uninstall immediately:
pip uninstall litellm
3. Check for the malicious file:
find $(python -c "import site; print(site.getsitepackages()[0])") -name "litellm_init.pth"
4. Rotate ALL your API keys — OpenAI, Anthropic, Google, Cohere, and any other AI service keys stored in your environment variables. Do this even if you're not sure you were affected.
5. Check your AI billing dashboards for any unexpected usage spikes.
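The version and file checks above can also be automated from Python. A minimal sketch — the version numbers and filename come from the reports described here; the function name is my own:

```python
import site
from importlib import metadata
from pathlib import Path

# Affected versions per the GitHub security reports.
BAD_VERSIONS = {"1.82.7", "1.82.8"}

def check_litellm():
    """Report the installed litellm version and any leftover
    litellm_init.pth files in site-packages."""
    try:
        version = metadata.version("litellm")
    except metadata.PackageNotFoundError:
        version = None

    if version in BAD_VERSIONS:
        print(f"WARNING: compromised litellm {version} is installed")

    # Scan every site-packages directory for the malicious startup hook.
    hits = [p for d in site.getsitepackages()
            for p in Path(d).glob("litellm_init.pth")]
    for p in hits:
        print(f"WARNING: suspicious file found: {p}")

    return version, hits

check_litellm()
```

Even if this reports nothing, rotating your keys is still the safe move — the file may already have been removed after doing its work.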
A growing pattern in AI tools
This isn't the first time a popular AI-related package has been targeted. Supply chain attacks — where hackers compromise the distribution channel rather than the software itself — are becoming more common as AI tools become critical infrastructure. Earlier this week, the Trivy GitHub Actions tool was also compromised in a similar attack.
The LiteLLM incident is particularly alarming because of the tool's scale: with 40,000+ stars and adoption by major enterprises, a single compromised package version could expose API keys worth millions of dollars across the industry.
The LiteLLM team has not yet issued a formal public response, but the affected versions have been flagged on GitHub. If you manage AI infrastructure, this is a good day to audit your dependencies.