2026-04-20 · Tags: ByteDance, AI automation, AI job replacement, China tech layoffs, workforce automation, labor rights, digital labor, AI training data

ByteDance Workers Train AI Replacements — Then Lose the Job

ByteDance and Alibaba workers must document their expertise to train AI replacements — zero extra pay, no data rights, and rising resistance.


The request seemed routine at first: log your decisions, document your reasoning, help the team build better internal tools. But for tens of thousands of employees across China's biggest tech companies, a more unsettling reality has emerged — they are being asked to systematically train the AI automation systems that will replace them.

At ByteDance (the Beijing-based company behind TikTok and Douyin), Alibaba, and Kuaishou, workers report structured programs requiring them to record voice samples, annotate their own decisions, and document workflows step by step. That data feeds directly into large language models (AI systems trained on vast amounts of text and human judgment to replicate expert decision-making) being built to perform those same tasks autonomously.

What 'Training Your AI Replacement' Actually Looks Like

Inside Chinese tech circles, the practice has acquired a name: building a 数字分身 (digital double — a software replica of a worker's skills, patterns, and judgment encoded in an AI model). Unlike earlier automation that displaced factory floor workers, digital doubles target cognitive work: moderating content, writing advertising copy, screening job applicants, advising on financial products.

These programs are rarely announced as replacement efforts. They're framed as AI adoption initiatives, digital transformation projects, or knowledge management exercises. That framing matters because it affects whether employees feel they can refuse — and what actually happens when they do.

[Image: ByteDance employees at computers during an AI knowledge-capture program]

ByteDance and Alibaba AI Automation: The Scale of Tech Workforce Restructuring

ByteDance employs more than 150,000 people globally. Its consumer AI product, Doubao (a conversational AI assistant competing with ChatGPT in the Chinese market), runs on models trained partly on internal company knowledge. Industry analysts tracking ByteDance's operational strategy report that the company has set aggressive targets to reduce human involvement in content moderation — a workforce numbering in the tens of thousands — by deploying AI systems trained on years of moderator decisions.

The financial logic is stark. Training a large language model costs tens of millions of dollars upfront, but a deployed model runs at a fraction of an equivalent human team's ongoing cost. For ByteDance, which generated an estimated $110 billion in revenue in 2024, even a 30% reduction in headcount across specific departments translates to hundreds of millions in annual savings.
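The arithmetic behind that claim can be made concrete. The figures below are hypothetical placeholders chosen to match the orders of magnitude the article cites (tens of millions to train, tens of thousands of workers, a 30% reduction); none are reported ByteDance numbers.

```python
# Back-of-the-envelope sketch of the replacement economics described above.
# ALL figures are illustrative assumptions, not reported company data.

moderators = 30_000            # assumed headcount of one affected department
cost_per_worker = 35_000       # assumed annual fully loaded cost per worker (USD)
reduction = 0.30               # the 30% headcount reduction cited in the article

model_training_cost = 50_000_000  # assumed one-time model training cost (USD)
model_run_cost = 20_000_000       # assumed annual inference/operations cost (USD)

annual_labor_savings = moderators * reduction * cost_per_worker
first_year_net = annual_labor_savings - model_training_cost - model_run_cost
steady_state_net = annual_labor_savings - model_run_cost

print(f"Annual labor savings: ${annual_labor_savings:,.0f}")
print(f"First-year net:       ${first_year_net:,.0f}")
print(f"Steady-state net:     ${steady_state_net:,.0f}")
```

Even with the one-time training cost absorbed in year one, the steady-state savings under these assumptions land in the hundreds of millions annually, which is why the upfront spend clears so easily.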

Alibaba is pursuing a different model. Its Qwen series (open-weight language models — AI systems whose underlying code and parameters are shared publicly, allowing third-party customization) is being deployed across internal departments as a broad-purpose assistant. Alibaba's official framing: AI augments workers rather than replacing them. Workers' counterargument: reduced hiring targets produce the same long-term outcome as layoffs, just more slowly and with less accountability.

Two Competing Approaches

  • ByteDance (direct replacement model): Role-specific AI trained on employee decisions, with explicit headcount reduction targets in content and operations teams
  • Alibaba (augmentation framing): General-purpose AI assistants deployed across all departments, reducing headcount through attrition and frozen hiring rather than direct cuts
  • Kuaishou (partial exception): Retrained 3,000 content moderators as AI oversight reviewers — humans who check AI decisions — rather than cutting them outright

Why Workers Are Resisting — and the Four Things They're Angry About

The resistance isn't organized like a traditional labor action. It's quieter and more distributed — playing out on Maimai (a Chinese professional social network similar to LinkedIn), in encrypted group chats, and in anonymous forums where tech workers share advice on how to navigate "knowledge transfer" requests without accelerating their own displacement.

The grievances workers cite are concrete and consistent:

  • Zero pay for training data: Workers typically receive no additional compensation for the behavioral and decision data they generate during AI training programs. The data is classified as standard work product owned by the employer.
  • No data deletion rights: China's PIPL (Personal Information Protection Law, enacted 2021 — regulations governing how companies collect and use personal data) includes consent requirements, but employment contracts at major tech firms typically contain sweeping IP clauses that courts have interpreted to cover documented workflows and communications patterns.
  • No visibility into model deployment: Employees rarely know when or how their documented decision-making patterns appear inside a commercially deployed AI model.
  • No job security tied to participation: Helping train an AI system does not protect a worker's role from elimination once the model is live.

One anonymous post on Maimai in early 2026 described the sequence plainly: "Six weeks of documenting how I think through problems, step by step. Then half my team was cut. My documentation stayed. I didn't." The post attracted hundreds of replies from workers at different firms describing the same arc.

[Image: Neural network visualization representing workforce expertise encoded into AI systems]

AI Labor Rights: A Legal Framework That Hasn't Caught Up

Chinese labor law offers stronger wrongful-termination protections than many Western jurisdictions. But digital labor rights — the idea that workers should hold some economic claim over professional expertise extracted and monetized through AI training — remain almost entirely unaddressed in current statute.

The PIPL requires informed consent for collecting personal information, but the "work product" exception in standard employment contracts is broad enough to cover most AI training scenarios at large tech employers. Legal scholars at major Chinese universities have noted that China has not yet established a "digital labor" category that would entitle employees to compensation when their behavioral data is used to build and sell commercial AI systems.

The European Union has moved further in this direction through the EU AI Act (adopted in 2024, with obligations phasing in through 2026, including transparency requirements for AI systems making consequential decisions about workers). China's regulatory framework for workplace AI remains less prescriptive — though regulators have shown interest in the question, and enforcement priorities could shift quickly.

What Workers Everywhere Can Do Right Now

The pattern is not exclusive to China. Amazon, Salesforce, and major consulting firms have run structurally similar knowledge-capture programs. The differences in China are scale, speed, and the relative absence of institutional recourse. But the underlying dynamic — employer extracts expertise, encodes it in software, reduces headcount — is global.

For anyone navigating this environment, the practical steps are unglamorous but important:

  • Keep your own records of what you're asked to document, when, and under what stated purpose
  • Ask explicitly: does participating in AI training programs affect your role evaluation or future employment status?
  • Read your employment contract's IP clause — most were written before generative AI existed, and their scope is actively being tested in courts
  • Learn to distinguish AI systems built to assist your work from those built to replicate it — the difference becomes visible when headcount targets are announced
  • If your company has launched an "AI readiness" initiative, take the time to understand how these programs work before you're three weeks into one
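The first step above — keeping your own records — needs nothing more than a running log. A minimal sketch, with the filename, fields, and example entry all illustrative:

```python
# Minimal personal log for knowledge-transfer requests: what was asked,
# by whom, when, and under what stated purpose. Fields are illustrative.
import csv
import datetime
import pathlib

LOG = pathlib.Path("knowledge_transfer_log.csv")
FIELDS = ["date", "requested_by", "what_was_asked", "stated_purpose"]

def log_request(requested_by: str, what_was_asked: str, stated_purpose: str) -> None:
    """Append one dated entry; write a header row if the file is new."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": datetime.date.today().isoformat(),
            "requested_by": requested_by,
            "what_was_asked": what_was_asked,
            "stated_purpose": stated_purpose,
        })

# Example entry (hypothetical):
log_request("team lead", "annotate 50 past moderation decisions",
            "internal tooling / knowledge management")
```

The point is not the tooling — a spreadsheet works just as well — but having a contemporaneous, dated record in your own possession if the stated purpose later diverges from what actually happens.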

The Kuaishou exception — retrain people as AI supervisors rather than simply cutting them — costs more. It also produces workers who understand the systems that manage them, which creates a more durable workforce. It remains, as of 2026, the exception rather than the rule.

If you're in a tech-adjacent role and considering how to respond to these pressures, getting fluent with AI tools yourself changes your position: you understand the system instead of just being subject to it. That's a different kind of leverage than any labor protection currently on the books.

The workers quietly refusing in China are not Luddites. They've watched the sequence play out: six weeks of knowledge transfer, then a restructuring announcement. The AI keeps the expertise. The worker gets a severance package. That is not transformation. That is extraction — and it deserves to be called what it is.
