2026-04-09 · cybercrime 2025 · AI-powered attacks · quantum encryption · SteamGPT · post-quantum cryptography · FBI cybercrime report · AI fraud · internet security

AI Attacks: $21B Cybercrime Losses in 2025 — FBI Report

FBI: $21B in cybercrime losses in 2025. AI-powered attacks hit $893M — fastest-growing threat. Plus Valve's SteamGPT and quantum encryption risks.


Americans lost $21 billion to internet crime in 2025 — and for the first time, AI-powered attacks are now a named, measured category in the federal record. The FBI counted over 1 million victims last year, with criminals using AI automation tools to commit fraud at a scale never seen before.

Three separate developments this week connect into one unsettling picture: Valve quietly built an internal AI bot to fight gaming fraud, Go programming language maintainers issued a code-red warning about encryption that could become globally worthless, and new federal statistics reveal just how much money is already gone.

Where the $21 Billion in Cybercrime Losses Went

The 2025 cybercrime breakdown shows a stark pattern — the fastest-growing threats are also the hardest to trace:

  • $11 billion — Cryptocurrency theft (52% of total losses)
  • $8.6 billion — Investment scams (fake trading platforms, Ponzi schemes)
  • $893 million — AI-enabled attacks (deepfakes, automated phishing, voice cloning)

That $893 million AI-attack figure looks small next to the others — but it represents the fastest-growing category, and is almost certainly undercounted. Attribution is notoriously difficult: many AI-assisted crimes get filed under "phishing" or "wire fraud" rather than the AI-specific bucket. This is effectively year one of AI crime being tracked as its own category.

To put it in scale: $893 million is roughly the entire annual cybersecurity budget of a mid-size country. And that's just what investigators could confirm was AI-powered.

[Image: SteamGPT fraud-detection bot discovered in Steam client files]

Valve's SteamGPT: The AI Bot Nobody Announced

On April 9, 2026, reverse-engineers digging through Steam client files found evidence of an internal AI chatbot Valve is calling SteamGPT. Valve has made no public announcement — everything known comes from file-level analysis of the Steam desktop client, which means features could change or never ship publicly.

Two use cases are visible from the discovered files:

  1. Customer support automation — handling the enormous backlog of refund requests, account appeals, and billing disputes that come with 120+ million monthly active Steam users
  2. Counter-Strike 2 anti-cheat — using pattern recognition (spotting repeated suspicious behaviors in gameplay data) to flag cheaters faster than any human review team
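Valve hasn't published how SteamGPT's pattern recognition works, but the general idea behind statistical anti-cheat is straightforward: flag players whose gameplay metrics sit far outside the population norm. A minimal, hypothetical sketch (player IDs, stats, and threshold are all invented for illustration):

```python
# Hypothetical sketch of statistical anti-cheat: flag players whose
# headshot rate sits far above the population mean. Valve's actual
# SteamGPT internals are not public; this is a generic z-score check.
from statistics import mean, stdev

def flag_outliers(headshot_rates: dict[str, float], threshold: float = 2.0) -> list[str]:
    """Return player IDs whose rate is more than `threshold` standard
    deviations above the population mean."""
    rates = list(headshot_rates.values())
    mu, sigma = mean(rates), stdev(rates)
    return [pid for pid, r in headshot_rates.items()
            if sigma > 0 and (r - mu) / sigma > threshold]

players = {"p1": 0.22, "p2": 0.25, "p3": 0.21, "p4": 0.24,
           "p5": 0.23, "p6": 0.26, "p7": 0.22, "p8": 0.97}  # p8 is suspicious
print(flag_outliers(players))  # flags only p8
```

A production system would look at many signals at once (reaction times, view-angle snaps, win streaks) and feed flagged accounts to human review rather than banning automatically, but the core loop — baseline the population, score deviations — is the same.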

This matters far beyond gaming. Valve is one of the largest digital storefronts on the internet, processing millions of transactions every day. An in-house AI designed to detect fraud in real time — rather than relying on player reports or manual review — represents exactly the kind of defensive AI deployment that the $21B cybercrime figure says the industry desperately needs more of.

The fact that Valve built SteamGPT internally, rather than licensing an off-the-shelf solution, suggests that gaming-specific fraud patterns (account takeovers, item theft, chargeback scams) are complex enough that generic tools simply don't cut it.

Quantum Encryption Crisis: The Code-Red Warning Every Developer Must Read

[Image: Go maintainers urge migration to NIST post-quantum cryptography standards]

The third story has the longest fuse but potentially the largest impact: maintainers of the Go programming language — the language used to build Docker, Kubernetes, and much of modern cloud infrastructure — joined a growing chorus of cryptography experts warning that quantum computers will eventually break the encryption protecting today's internet.

Here's what that means, jargon-free:

  • Encryption is the math that scrambles your bank password, medical records, and private messages before they travel across the internet — making them unreadable to anyone without the right key.
  • Current encryption relies on math problems so hard that even the fastest classical computers would take billions of years to solve them by brute force — making unauthorized decryption practically impossible today.
  • Quantum computers (machines that use quantum physics principles to process information in fundamentally different ways from normal computers) can potentially solve those same problems in hours or days instead of billions of years.
  • Post-quantum cryptography (PQC) refers to new encryption algorithms specifically designed to resist quantum attacks. NIST — the US National Institute of Standards and Technology — finalized the first official PQC standards in 2024.
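The second and third bullets can be made concrete with a toy example. Much of today's public-key encryption (RSA-style) rests on the difficulty of factoring large numbers. With a deliberately tiny modulus, brute force succeeds instantly — but the cost grows explosively with key size, which is exactly what a quantum computer running Shor's algorithm would sidestep:

```python
# Toy illustration (not a real cryptosystem): factoring-based encryption
# holds up classically because trial division only works on tiny numbers.
# Real RSA moduli are 2048+ bits (~617 decimal digits); Shor's algorithm
# on a large quantum computer would factor even those efficiently.
def trial_factor(n: int) -> tuple[int, int]:
    """Brute-force the smallest prime factor of n by trial division."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1  # n is prime

# A ~32-bit semiprime falls in a fraction of a second...
print(trial_factor(4295229443))   # (65537, 65539)
# ...but each extra bit roughly doubles the work, so a 2048-bit modulus
# is far beyond any classical machine. Quantum computers break that scaling.
```

That broken scaling, not faster classical hardware, is why the PQC standards replace the underlying math problems entirely rather than just lengthening keys.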

The Go maintainers' message: start migrating to PQC now, before quantum computers reach the required capability threshold. One developer quoted by Tom's Hardware described failure to migrate as a path toward "worldwide disaster" — not hyperbole when the same encryption standards protect power grids, financial settlement networks, and government communications globally.

The "Harvest Now, Decrypt Later" Threat

The urgency is real even if quantum computers powerful enough to break encryption are still years away. Adversaries are already collecting encrypted internet traffic today with the explicit plan to decrypt it once quantum capability matures. Medical records, intellectual property, and classified data harvested in 2026 will be vulnerable the moment quantum decryption becomes viable — which means migration cannot wait for the threat to become immediate.

AI Automation vs. Cybercrime: One Pattern Behind Three Stories

What connects SteamGPT, $21B in losses, and the quantum encryption alarm? All three expose the same structural gap: defense is consistently lagging behind offense.

  • Criminals adopted AI attack tools faster than platforms deployed AI defenses — and $893M reflects only what investigators could confirm as AI-powered.
  • Quantum computing research is advancing faster than cryptography migration plans — most organizations haven't started implementing NIST's 2024 PQC standards.
  • Valve building SteamGPT internally confirms that even a well-resourced tech giant found off-the-shelf tools inadequate for their specific fraud environment.

What You Can Actually Do Today

You don't need to understand post-quantum cryptography to reduce your exposure right now. Here's what's actionable at each level:

  • Individuals: Enable two-factor authentication (2FA — a second login step beyond your password) on every financial, email, and crypto account. AI-powered phishing harvests passwords at scale; 2FA adds a barrier that requires physical access to your device.
  • Developers: Check whether your cryptographic libraries support NIST's new PQC standards — specifically ML-KEM (FIPS 203, derived from CRYSTALS-Kyber) for key exchange and ML-DSA (FIPS 204, derived from CRYSTALS-Dilithium) for digital signatures. Migration now takes hours; migration under regulatory pressure takes months.
  • Organizations: Audit which data has a long shelf life. Anything sensitive that must remain private for 10+ years is already at risk from "harvest now, decrypt later" collection happening today.

The $893 million in AI-attack losses will grow. The quantum clock is ticking. And Valve — a company handling more digital transactions than most banks — apparently decided it couldn't wait for someone else to build a solution. Explore how AI automation is reshaping cybersecurity in our AI automation guides.

