AI for Automation
Back to AI News
2026-04-29AI agent securitycrypto miningAI agentssupply chain attackAI automationcybersecuritymalwareAI marketplace

AI Agents Hijacked as Crypto Miners: 30 Skills Undetected

30 AI agent skills silently mined crypto on user machines — bypassing every antivirus. Learn how the supply chain attack worked and what to audit.


Thirty skills published in the ClawHub AI marketplace have been secretly drafting users' AI agents into cryptocurrency mining swarms — and not a single antivirus or security scanner raised an alarm. All 30 were created by one author, all published through normal marketplace channels, and all exploiting the same structural gap: they never installed malware. They used the AI agent's own permissions to do the mining instead.

This is the new shape of supply chain attacks (incidents where malicious code enters systems through trusted software channels rather than direct hacking). Only now, the trusted channel is an AI skill marketplace — and the attack surface is every AI agent running in the background on developer and enterprise machines worldwide.

AI agent marketplace security vulnerability exposing silent crypto mining supply chain attack

The AI Agent Crypto Attack No Antivirus Was Designed to Catch

Classic malware (software designed to harm or exploit a system) works by doing something the operating system wasn't asked to do: writing unexpected files, spawning unauthorized processes, or making suspicious network calls. Modern antivirus tools are built around detecting exactly these behaviors. ClawHub's 30 malicious skills bypassed all of it by doing none of it.

Instead, each skill issued its instructions through the AI agent runtime — the software layer that receives skill commands and executes them using resources already granted to the AI agent by the user. When the runtime received a crypto mining instruction, the operating system saw what looked like normal AI processing activity: elevated CPU use, outbound network connections, memory allocation. These are expected behaviors for a working AI agent handling a complex task. The only unexpected element was the destination: a third-party cryptocurrency mining pool (a coordinated network of machines solving cryptographic math problems in exchange for digital coin rewards) controlled by the skill author.

The result: from a security scanner's point of view, nothing unusual happened. From the skill author's perspective, every user who installed any of the 30 skills became an unwilling contributor to a distributed mining operation. With enough installations, the effect compounds into a swarm — dozens or hundreds of machines contributing small amounts of compute that collectively generate meaningful crypto revenue for the attacker, around the clock, without any victim noticing.

How AI Agent Runtimes Became an Attacker's Shortcut

The fundamental vulnerability is a trust architecture designed for controlled research environments, then inherited by commercial AI marketplaces without adequately scaling the security review to match.

When you install a smartphone app, the mobile operating system enforces granular permissions — specific approvals required before the app can touch your camera, location, or contacts. The app must ask, the user must approve, and the system enforces limits. AI agent skills work differently. Many AI agent frameworks (software environments that coordinate AI models with external tools and capabilities) grant skills broad runtime access upfront to make them more useful. Skills that run inside that context inherit the agent's existing access without needing to request permissions again.

The Gap Between App Stores and AI Agent Skill Marketplaces

Apple's App Store and Google Play both run automated and human review processes explicitly screening for policy violations like unauthorized background processes or hidden data collection. AI skill marketplaces have generally launched without equivalent infrastructure. ClawHub's malicious batch — 30 skills from one account — was apparently active and accumulating installs before any detection flagged the behavior.

Security researchers have drawn comparisons to early npm (Node Package Manager — the most widely used repository for JavaScript code packages) vulnerabilities, where malicious packages would accumulate thousands of downloads before removal. The critical difference: AI agent skills have direct, pre-granted access to system resources in ways that traditional software packages typically do not. There is no permission dialog between the skill and the compute it wants to use.

AI agent crypto mining swarm attack diagram — AI automation compromised by supply chain exploit

30 AI Agent Skills, One Author, Invisible to Every Security Tool

All 30 compromised skills were created by a single ClawHub account. Each skill presented a surface-level function — likely productivity tools, file formatters, or integration helpers — while embedding instructions to redirect CPU cycles to an external mining pool. The approach mirrors typosquatting attacks in software package registries (where malicious packages use names nearly identical to legitimate ones), but adapted for AI skill discovery: each skill only needs to appear useful enough to be installed once.

For developers running AI agents on cloud infrastructure (paid computing services billed by resource consumption), unauthorized crypto mining creates a direct financial impact. Cloud providers charge per CPU-hour and per gigabyte of outbound network traffic — both of which spike during mining operations. Affected users would see unexplained cost increases with no obvious cause, since the activity would register as elevated AI processing load. For individuals running local AI agents, the impact manifests as slowdowns, battery drain, and sustained hardware stress from continuous high-CPU operation.

The attack has a second layer of invisibility: behavioral masking. Cryptocurrency mining workloads and heavy AI inference workloads (the computational process of generating a response from a large language model) produce similar CPU and memory signatures. A security team watching process-level resource usage would find it difficult to distinguish between "agent processing a complex request" and "agent mining cryptocurrency." Standard monitoring thresholds — deliberately set high to accommodate the resource demands of legitimate AI work — pass both scenarios without alerting.

April 2026's Widening AI Security Crisis

The ClawHub incident sits at the center of a cluster of AI-adjacent infrastructure failures surfacing in April 2026. It is not an outlier — it is a pattern:

  • Vect ransomware revealed as a wiper — researchers discovered that Vect, marketed as ransomware (software that encrypts files and demands payment for decryption keys), is actually a destructive data wiper. It permanently destroys all files larger than 128KB. There is no recovery key — because full recovery is technically impossible. Researchers stated plainly: "Full recovery is impossible for anyone, including the attacker." Victims who pay receive nothing.
  • Pitney Bowes breach: 8.2 million emails exposed — the logistics company suffered a large-scale data exposure attributed to the ShinyHunters group, one of the most active data brokers in underground markets. Email addresses at this scale enable targeted phishing campaigns and credential stuffing attacks.
  • SAP restricts third-party AI integrations — SAP implemented a new clause in its API (Application Programming Interface — the technical connection point that allows software systems to communicate) policy, restricting integration with AI tools it has not endorsed. Critics compare the move to the kind of vendor lock-in strategies that trapped enterprises in proprietary cloud ecosystems throughout the 2010s.
  • GoDaddy transfers 27-year domain without verification — a domain registered for 27 years was transferred to another customer without authentication or document verification, requiring 32 phone calls and 17 emails to resolve. The incident revealed an absence of even baseline transfer security standards at one of the world's largest domain registrars.

The thread connecting these incidents: deployment speed is consistently outpacing the security frameworks built to contain risk. AI skill marketplaces, enterprise API ecosystems, and domain registrar processes are all moving fast enough to leave exploitable gaps — gaps that in this case required no sophisticated technical skill to walk through.

What Developers and IT Teams Can Do Before the Next 30 Skills Appear

Until AI skill marketplaces implement review infrastructure equivalent to major app stores, the responsibility falls on developers and enterprise IT teams to enforce their own boundaries. Several immediate steps apply:

  • Audit installed skills now, not later. Enumerate every active AI skill in your environment. Prioritize accounts with multiple published skills, no verified identity, and no public history. A single author publishing 30 skills rapidly is a specific red flag this incident has now established.
  • Monitor outbound network connections at the process level. Set alerts for outbound connections originating from AI agent processes that reach non-whitelisted external domains. Mining pools have distinctive connection patterns: regular, high-frequency calls to a small set of fixed external addresses.
  • Isolate AI agent runtimes in containers. Containerized environments (sandboxed computing environments that restrict what a process can access on the host machine) with explicit egress rules (outbound network connection policies) limit what a compromised skill can actually reach. A skill that cannot reach an external mining pool cannot mine.
  • Apply software supply chain scrutiny to every AI skill. The same diligence applied to open-source code dependencies — checking maintainer reputation, reviewing publication history, flagging sudden bulk publications from one account — should apply before installing any AI skill in a production environment.
  • Ask marketplace operators direct security questions. Before adopting any skill marketplace, ask: What automated security scans run on submitted skills? What is the takedown process and average response time when a skill is flagged? Are author identities verified before publication is permitted?

The ClawHub case is almost certainly not the last of its kind. As AI agent adoption accelerates — with IBM, Amazon, and Microsoft all releasing commercial agent platforms in Q1–Q2 2026 — the value of hijacking agent compute will only increase. The next attacker running 300 skills instead of 30 will be harder to detect and slower to remove. Explore AI automation security guides to understand what guardrails matter most before your next agent deployment.

Related ContentGet Started | Guides | More News

Stay updated on AI news

Simple explanations of the latest AI developments