
Distillation Attacks: China Secretly Copied U.S. AI Models

China secretly copied U.S. AI models for 2+ years via distillation attacks. Georgetown's CSET presented evidence to the U.S. Senate on April 22, 2026.


China's AI companies have spent at least two years secretly copying American AI models — and on April 22, 2026, Georgetown University researcher Helen Toner brought the evidence directly to the U.S. Senate Judiciary Committee. The technique is called a "distillation attack" (a method where a rival company trains its own AI on the outputs of another company's AI, without authorization), and it may be the most cost-effective form of technology transfer in modern history.

[Image: Helen Toner, CSET director, testifying on China AI distillation attacks before the U.S. Senate Judiciary Committee, April 22, 2026]

How a Distillation Attack Actually Works

Training a large AI model from scratch takes years and costs hundreds of millions of dollars: researchers curate datasets of hundreds of billions of text samples and run compute-intensive training on thousands of GPUs (graphics processing units repurposed for AI training). A distillation attack sidesteps this entire process.

The method works like this: a Chinese AI company sends millions of queries to an American AI system like ChatGPT or Claude, collects all the responses, and uses that data to train its own competing model. The American model's knowledge, everything it absorbed from years of expensive training, is effectively transferred to the rival system at a tiny fraction of the original cost. No advanced hardware required. No data breach needed. Just the AI's own outputs, used against its creators.
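The loop described above can be sketched in a few lines. This is a toy illustration only: the `teacher` function, the memorizing `StudentModel`, and all names here are hypothetical stand-ins; a real attack would query a commercial API at scale and fine-tune a large neural network on the collected pairs.

```python
# Toy sketch of a distillation loop. All names are hypothetical;
# the "teacher" stands in for a proprietary model's API endpoint.

def teacher(prompt: str) -> str:
    """Stand-in for the target model's API. Real attacks call a live service."""
    return f"answer-to:{prompt}"

def collect_distillation_data(prompts):
    # Step 1: send many queries and record (prompt, response) pairs.
    return [(p, teacher(p)) for p in prompts]

class StudentModel:
    """A toy 'student' that simply memorizes the teacher's outputs.
    A real student would be a neural network fine-tuned on the pairs."""
    def __init__(self):
        self.memory = {}

    def train(self, pairs):
        # Step 2: learn to imitate the teacher from its outputs alone.
        for prompt, response in pairs:
            self.memory[prompt] = response

    def generate(self, prompt):
        return self.memory.get(prompt, "")

prompts = [f"question {i}" for i in range(1000)]
student = StudentModel()
student.train(collect_distillation_data(prompts))

# The student now reproduces the teacher's behavior on seen queries
# without ever touching the teacher's weights or training data.
print(student.generate("question 7"))  # prints "answer-to:question 7"
```

The key point the sketch makes concrete: the attacker never needs the teacher's weights, training data, or hardware. The public outputs alone are enough to transfer behavior.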

In her Senate testimony, Toner stated: "There is strong evidence that some Chinese AI companies are employing distillation techniques in order to use American AI models to advance their own research and development."

CSET (Center for Security and Emerging Technology — Georgetown University's flagship research center for AI policy and national security) recommended a specific counter-strategy: rather than trying to block distillation technically, the U.S. should focus on sharing threat intelligence and helping AI companies monitor for model misuse. The reasoning: distillation is one of many threat vectors, and broad defenses address multiple threats simultaneously.

Two Years of PLA Documents — What Georgetown Uncovered

Distillation attacks are only part of the story. CSET researchers spent two full years analyzing thousands of People's Liberation Army (PLA — China's combined military forces) procurement documents, covering January 2023 through December 2024. These are official Chinese military purchasing records available through open-source intelligence (publicly accessible information, as opposed to classified espionage), giving CSET a rare window into how China is deploying AI at a military scale.

The findings challenge a widely held assumption in Washington: that the U.S. has a decisive and growing technological lead over China in military AI. CSET senior fellow Emelia Probasco, who specializes in military AI and autonomous weapons systems, offered a sharply cautious assessment of the Pentagon's prized Maven Smart System.

The Maven Smart System is the U.S. military's flagship AI platform — a tool that integrates data streams from across combat operations to accelerate targeting decisions and support commanders across all combatant commands. Defense officials frequently cite it as evidence of clear American military AI superiority. Probasco's assessment was more measured: "The claims about Maven's abilities might be overstated and much of the American advantage came from the scale of data flowing in and the skills of the people using it."

She added: "It's not rocket science. I suspect that China already has something like it."

In other words, the U.S. advantage in military AI may rest primarily on data volume and operator skill — not on any inherent technical superiority in the AI itself. That is an advantage China is steadily narrowing through distillation attacks and its own military AI programs.

[Image: CSET Georgetown research on China's PLA military AI procurement and distillation attack strategy, 2023 to 2024]

Congress Blocks, Trump Sells: Washington's Contradictory AI Policy

If China is copying American AI at scale, the natural response would be to restrict what they can access. Congress has proposed dozens of bills to limit China's access to advanced AI chips and semiconductor manufacturing equipment. As of April 2026, none have passed.

Meanwhile, the Trump administration moved in the opposite direction. In December 2025, the White House opened licensing pathways for exports to China of Nvidia's H200 GPUs (among the most powerful chips currently available for AI model training). AI technology negotiations that could further loosen restrictions are also on the agenda for a planned Trump-Xi summit in Beijing.

The enforcement gap is compounding the problem. The Bureau of Industry and Security (BIS — the Commerce Department agency responsible for drafting export control rules and processing chip export licenses) has faced significant staffing cuts. Fewer staff means slower license processing, larger backlogs, and reduced capacity to revise rules as AI technology advances. The result is a growing gap between policy intent and enforcement reality. Here is where things stand today:

  • Dozens of proposed bills in Congress to restrict AI chip access for China — none have been enacted into law
  • Nvidia H200 GPU exports to China now have a licensing pathway, opened by the Trump administration in December 2025
  • BIS staffing cuts are creating backlogs in license processing and rule revision, reducing enforcement capacity across the board
  • Distillation attacks require no chips at all — Chinese companies can copy U.S. AI models using commercially available American AI services, entirely bypassing hardware export controls

This last point is critical. Even a perfect chip export control regime would not stop distillation attacks. Any Chinese user with an account on OpenAI or Anthropic's platforms can collect outputs and feed them into a domestic training pipeline. The hardware controls that Congress is debating address only one dimension of the threat.

The Talent Race That Matters More Than Chips

Toner closed her Senate testimony with a pointed exchange with Senator Josh Hawley about who is actually winning the global AI race: "Right now… the winner of any AI race between the U.S. and China is the AI. We need to be working to make sure that is not the case."

CSET's separate report on retaining top AI talent in the United States drew significant attention when it circulated online, reaching 125 points and 105 reader comments on Hacker News. It argues that keeping skilled researchers in America may matter more in the long run than controlling semiconductor shipments. If the researchers leave, or if their work is systematically copied through distillation, export controls on Nvidia chips offer limited protection.

Two CSET senior fellows, Matthias Oschinski and Mina Narayanan, captured the core policy tension in a Newsweek op-ed: "The real debate is not innovation versus regulation. It is innovation guided by purpose versus innovation left to default incentives."

The implication for AI governance: the U.S. needs a coherent framework that treats talent retention, model security, and chip controls as a single integrated strategy — not three separate debates happening in different congressional committees.

What to Watch — and What You Can Do

For most people using AI tools professionally, the immediate implication is this: when you use ChatGPT, Claude, Gemini, or any U.S.-based AI service, your queries and the system's responses may be part of a data pattern that adversarial actors are collecting at scale. You are not the target — your AI provider is. But the cumulative effect of millions of queries from bad actors contributes to the distillation pipeline that CSET just described to the Senate.

CSET's recommendations focus primarily at the provider level. AI companies like OpenAI, Anthropic, and Google DeepMind need to build detection systems that flag unusual query patterns consistent with large-scale distillation attempts — and share that threat intelligence across the industry rather than treating it as proprietary competitive information. Individual users can support this by using AI services through authenticated accounts (which makes bulk anonymous abuse harder) and reporting unusual access patterns when visible.
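As a rough illustration of what provider-side detection might look like, the sketch below flags accounts whose query volume and prompt diversity together resemble bulk harvesting. The function name, thresholds, and log format are all assumptions for illustration; production systems would use far richer signals than these two.

```python
from collections import defaultdict

def flag_distillation_suspects(query_log, volume_threshold=10_000,
                               diversity_threshold=0.9):
    """Flag accounts whose usage pattern looks like bulk output harvesting.

    query_log: iterable of (account_id, prompt) pairs (hypothetical format).
    Heuristic: distillation requires huge volumes of mostly-unique prompts,
    whereas ordinary users repeat themselves and ask far less.
    """
    by_account = defaultdict(list)
    for account, prompt in query_log:
        by_account[account].append(prompt)

    suspects = []
    for account, prompts in by_account.items():
        diversity = len(set(prompts)) / len(prompts)  # share of unique prompts
        if len(prompts) >= volume_threshold and diversity >= diversity_threshold:
            suspects.append(account)
    return suspects

# Hypothetical traffic: one bulk scraper, one ordinary user.
log = [("bulk-account", f"unique prompt {i}") for i in range(15_000)]
log += [("ordinary-user", "summarize this email")] * 40

print(flag_distillation_suspects(log))  # prints ['bulk-account']
```

The design choice worth noting is that neither signal alone suffices: heavy users of a single workflow have high volume but low diversity, and small-scale probing has high diversity but low volume. Sharing thresholds and flagged patterns across providers, as CSET recommends, would make it harder for an attacker to stay under any one company's radar.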

You can follow CSET's ongoing research directly at cset.georgetown.edu, or track our AI policy and security news as Congress continues to debate chip export legislation and the Senate weighs next steps on distillation attack protections. The April 22 hearing was the opening move — the legislation fight is just beginning.

