2026-05-01 · Tags: China AI theft, PLA military AI, US-China AI competition, AI cybersecurity, ChatGPT ban China, military AI weapons, AI national security, CSET Georgetown

China Stole U.S. AI Through 14 Secret PLA Military Contests

China publicly banned ChatGPT, then ran 14 secret PLA military contests to steal U.S. AI. CSET testified to Congress: the theft is already happening at scale.


China publicly banned American AI models including ChatGPT — then secretly ran at least 14 military competitions designed to replicate and steal them. That's the central finding Georgetown University's CSET (Center for Security and Emerging Technology, a nonpartisan policy research institute that briefs Congress on AI threats) brought to legislators on April 30, 2026. The AI national security implications are serious: U.S. government subsidies for AI infrastructure may end up accelerating the military capabilities of America's primary strategic competitor.

China AI Theft: Banned Publicly, Stolen Privately — The Evidence

CSET Senior Fellow Andrew Lohn — who previously served as Director for Emerging Technology on the National Security Council under an IPA agreement (an Intergovernmental Personnel Act arrangement where researchers temporarily staff federal agencies) — testified before the U.S.-China Economic and Security Review Commission (USCC, a federal body that monitors national security implications of U.S.-China trade and economic relationships) on April 30, 2026. His subject: "China's Expanding Strategy for Data Dominance."

His core finding was blunt:

"China's incentives to steal American AI technologies are mixed. Despite banning the models and restricting chip purchases, they are certainly taking active steps to acquire the technology. That includes activities to acquire expertise, hardware, and models."
— Andrew Lohn, CSET Senior Fellow

China's domestic AI ban is not a retreat from AI competition. It's a calculated move: restrict U.S. AI influence inside Chinese borders while aggressively acquiring the underlying technology through other means. The Chinese government bans the product. Its military funds the replication.

14 PLA Military AI War Games — Decoded From Public Records

CSET analyzed 14 People's Liberation Army (PLA — China's military) technology challenges published between January 2023 and December 2024. These competitions are China's military equivalent of government-sponsored hackathons, where teams from universities, defense contractors, and civilian companies compete to solve specific warfare problems — with winning solutions feeding directly into PLA development programs.

[Image: CSET analysis of 14 PLA military AI technology challenges used to replicate and steal U.S. AI models, 2023–2024]

What those 14 competitions revealed:

  • 10 of 14 challenges spanned at least two operational domains simultaneously — combining cyberspace, air operations, and information warfare in a single contest
  • 5 of 14 challenges explicitly targeted UAV (unmanned aerial vehicle, i.e., drone) offense-defense applications
  • 100+ teams competed in a single Nanjing/Qingdao challenge targeting electronic and underwater acoustic targeting systems
  • Over 50% of all related nationwide challenges focused specifically on UAVs and counter-UAV systems — not abstract research, but live problem-solving under competitive pressure
  • 18 total nationwide external technology challenges documented between 2023 and 2025

One standout event: "The Game of Huashan" (2023) tasked hundreds of competing teams with developing UAV swarm countermeasures — technology to detect and neutralize coordinated drone attacks involving dozens or hundreds of simultaneous UAVs — in realistic simulated combat conditions. Winning solutions do not stay academic. They enter PLA procurement pipelines.

Military-Civil Fusion: Who Is Actually Building These Weapons

Western coverage often portrays China's military AI development as a closed-door government program. CSET's analysis reveals a more systematic structure: military-civil fusion (MCF — a formal Chinese government policy that deliberately merges civilian research institutions with PLA development priorities, accelerating weapons modernization while distributing development costs across the civilian economy).

Organizations confirmed participating in PLA challenges include:

  • China State Shipbuilding Corporation — the world's largest shipbuilder, now solving naval AI targeting problems for the PLA alongside commercial contracts
  • Academy of Military Sciences — China's primary military research institution
  • Peking University and Northwestern Polytechnical University — top civilian universities directly contributing to weapons development programs
  • National University of Defense Technology — China's equivalent of MIT specifically for military AI and systems research

MCF means a computer science graduate student at Peking University may contribute to PLA drone warfare capabilities through what appears to be an academic competition. The structure is systematic — not incidental. Lohn's analysis of 18 nationwide external technology challenges confirms this pattern runs across defense, civilian technology, and academic sectors simultaneously, not in isolation.

[Image: CSET Senior Fellow Andrew Lohn testifying on China AI theft and PLA military AI contests before the U.S.-China Economic and Security Review Commission, April 30, 2026]

AI Is Reshaping the Cyber Battlefield — and Defenders Are Losing Ground

Beyond the competition data, Lohn's testimony addressed a broader and more alarming question: is AI making cyberattacks or cyberdefenses stronger? The historical track record sets a troubling baseline:

  • Titan Rain (2003–2006): Chinese state-sponsored campaign that compromised U.S. defense contractors and government networks for years before detection — an early proof of concept for sustained, state-directed network intrusion
  • Operation Aurora (2009): Attributed to Chinese actors, it breached Google, Adobe, and 30+ major companies — establishing techniques that today's AI-assisted attack playbooks now refine

"It is not clear yet who benefits between attackers and defenders. The real-world evidence so far shows offense mostly experimenting, while defenders are starting to be overwhelmed from too much help that could potentially be turned against them."
— Andrew Lohn, CSET Senior Fellow

Translation for non-specialists: AI-powered security tools are generating so many automated alerts and responses that security teams are drowning in noise. More AI help is producing more confusion. Meanwhile, attackers are still in the experimenting phase — meaning the asymmetry between offense and defense has not yet locked in. The window to strengthen U.S. defenses is open, but it will not stay open indefinitely.

What Congress Heard About China AI Threats — and What's Next for AI-Deploying Companies

Lohn's policy recommendations are already in motion toward becoming NIST (National Institute of Standards and Technology — the federal agency that sets technical security baselines across U.S. industries) guidelines and federal procurement requirements. For developers, product teams, and enterprise buyers deploying AI tools:

  • Federal cyber standards: Mandatory minimum security requirements for AI systems handling sensitive or regulated data
  • Talent retention policies: Expanded visa pathways and competitive federal salaries to keep top AI researchers in the U.S. — CSET's "Keeping Top AI Talent in the United States" paper earned 125 Hacker News upvotes, among the highest engagement scores for any CSET publication
  • Distillation controls: Export rules specifically targeting model distillation (a technique where a smaller AI learns to mimic a larger one — letting China reproduce U.S. AI systems without needing the original training data or model weights, effectively getting the product without the price)
  • Hardware flow enforcement: Closing loopholes that allow advanced semiconductors to reach China despite existing chip export restrictions
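To make the distillation controls above concrete, here is a minimal sketch of the technique they target. The function names and toy logits are illustrative, not drawn from CSET's testimony: a student model learns to mimic a teacher by minimizing the divergence between their temperature-softened output distributions, which is why access to a model's outputs alone can be enough to reproduce much of its behavior.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; higher temperature softens the
    # distribution, exposing more of the teacher's "dark knowledge".
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) on softened distributions: the core
    # objective a student minimizes to imitate a teacher's outputs.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return float(np.sum(p * np.log(p / q), axis=-1).mean())

# The loss shrinks toward zero as the student's outputs converge
# on the teacher's — no access to training data or weights required.
teacher = np.array([[4.0, 1.0, 0.5]])
student = np.array([[3.5, 1.2, 0.8]])
loss = distillation_loss(teacher, student)
```

In practice a distiller queries the larger model at scale and trains the student against those outputs, which is why export rules aimed at distillation focus on API access and output logging rather than only on weights.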

His closing argument to Congress carried the sharpest edge of the entire testimony:

"If America is to reallocate energy and water infrastructure, to finance or backstop corporations and datacenter buildouts, and to provide regulatory relief… the benefits should not accrue to China. Congress, and the American taxpayer, should demand assurances that AI developers can protect the technology as a precondition to receiving our support."
— Andrew Lohn, CSET

In plain terms: before any U.S. government subsidy reaches an AI company building a new data center, legislators should require verifiable proof that the AI trained there cannot be stolen and handed to the PLA. Per Lohn's testimony, the theft is already happening — through expertise recruitment, hardware procurement, and model distillation at documented scale across 14 confirmed competitions and 18 nationwide challenges.

How to Use This Research — and What to Watch in the Next 18 Months

CSET's PLA competition analysis demonstrates something important for anyone working in tech: China's military AI ambitions are legible from public records if you know where to look. All 14 challenges were publicly announced by Chinese institutions. CSET had the analytical capacity to aggregate, translate, and decode them across 6 research domains: geopolitical competition, workforce, cyber-AI, biotechnology, AI governance, and military applications of AI.

The limitation CSET openly acknowledges: this analysis covers only publicly announced challenges. Classified PLA procurement and undocumented initiatives remain invisible. The 14 documented competitions represent "a small fraction of all Chinese military procurement documents published during that period" — meaning the full program is substantially larger than what public analysis can see.

If you build AI automation tools for enterprise, critical infrastructure, or government clients — or work at a company that does — watch for new AI automation security requirements and procurement standards tied to model security in the next 12–18 months. Verify your vendors can answer basic questions about model weight protection and distillation safeguards. Read CSET's published research at cset.georgetown.edu, or explore our AI automation security guides to understand which tools already align with emerging security standards before they become mandatory.
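One concrete baseline behind those vendor questions is artifact integrity: being able to verify that the model weights actually being served match the checkpoint that was audited. A minimal sketch, assuming a simple checksum-manifest workflow (the function name is hypothetical, not a standard):

```python
import hashlib

def fingerprint_weights(path: str, chunk_size: int = 1 << 20) -> str:
    # SHA-256 of a weight file, streamed in 1 MiB chunks so large
    # checkpoints never need to be loaded into memory at once.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

A team would record the digest of each audited checkpoint in a signed manifest and compare it against the digest of whatever artifact a vendor deploys; a mismatch flags substitution or tampering. This does not stop distillation through an API, but it is the kind of basic model-weight control emerging procurement standards are likely to ask about first.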
