AI for Automation
2026-04-23 · Meta AI · workplace surveillance · AI training data · Anthropic Mythos · employee monitoring · Google Meet AI · AI automation · workplace privacy

Meta Logs Worker Keystrokes for AI Training — No Opt-Out

Meta secretly logs all US employee keystrokes for AI training with no opt-out. Anthropic's Mythos AI breached. Google Meet now records any room.


Meta has installed a tool on the computers of every US-based employee that records mouse movements, clicks, keystrokes, and occasional screenshots — and workers were not given the option to refuse. At the same time, Anthropic's Mythos (a cybersecurity model powerful enough to find and exploit vulnerabilities in every major operating system and web browser) was accessed by unauthorized users through a contractor's login. And Google's AI began taking notes in physical conference rooms without a calendar invite. Three expansions. Zero public debates. One week.

The Tool Running Silently on Every US Meta Employee's Computer

Meta's "Model Capability Initiative" — known internally as MCI — was quietly deployed across US-based employee devices. It records mouse movements, clicks, keystrokes, and periodic screenshots (frozen images of the screen taken at timed intervals) to generate training data (information used to teach AI systems how humans interact with software).

Meta's official statement: data from MCI "won't be used for performance assessments." But employees were not given an opt-out. The program was installed, not offered. You do not consent to MCI by clicking "agree." You consent by being employed at Meta.

This matters beyond Meta's campus for three reasons:

  • Scale: Meta employs tens of thousands of workers across the US. Every one of them is now a live data source for AI training — recording every workday, continuously.
  • Consent structure: Workplace monitoring framed as AI training blurs the legal and ethical line between voluntary participation and employment pressure. When saying "no" could affect your standing at work, consent is not truly voluntary.
  • Industry precedent: Companies across tech, finance, and retail are actively building AI agents — software designed to perform computer tasks exactly the way a human does. MCI is a blueprint for capturing that training data at scale, and others will follow.
[Image: Meta's Model Capability Initiative (MCI) records employee keystrokes, clicks, and screenshots as AI training data]

The framing of "AI training" versus "performance monitoring" deserves scrutiny. Meta says MCI captures how employees interact with software — not whether they are productive enough. But the distinction is fragile. A model trained on which apps you open, how fast you type, when you stop clicking, and what appears on your screen can infer performance as precisely as any manager's assessment. The data is identical. Only the stated purpose differs.

Understanding why this data is so valuable requires knowing what AI agents actually need. The mechanism is straightforward: models need millions of examples of real humans completing real tasks before they can replicate those tasks autonomously.
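Those examples typically take the form of observation-action pairs: what the human saw on screen, paired with what the human did next. The sketch below shows a hypothetical shape for one such training record; the field names are illustrative, not any vendor's actual schema.

```python
# Hypothetical agent-training example: an observation of the screen
# paired with the human action taken next. Illustrative schema only.

def make_example(screenshot_ref: str, ui_state: dict, action: dict) -> dict:
    """Pair what the human saw with what the human did."""
    return {
        "observation": {"screenshot": screenshot_ref, "ui": ui_state},
        "action": action,
        "label": "human_demonstration",
    }

# A model trained on millions of such pairs learns to predict the action
# from the observation — which is what lets an agent drive software itself.
example = make_example(
    screenshot_ref="frame_000142.png",
    ui_state={"focused_app": "spreadsheet", "cursor": [412, 88]},
    action={"type": "click", "target": "Save button"},
)
print(example["action"]["type"])  # click
```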

Anthropic Built a Dangerous AI. An Outsider Walked Straight In.

Anthropic's Mythos is a cybersecurity model (an AI system trained specifically to identify and exploit security weaknesses in digital infrastructure) capable of targeting every major operating system — Windows, macOS, Linux — and every major web browser. Anthropic's own characterization of Mythos: potentially "dangerous in the wrong hands."

This week, it reached the wrong hands.

A small group of unauthorized users gained access to Mythos Preview through a third-party contractor's credentials, using what investigators described as "commonly used internet sleuthing tools" — not sophisticated hacking, not a zero-day exploit (a previously unknown software vulnerability used before developers can patch it). Just a contractor login and basic online research. The breach required no technical skill beyond knowing where to look.

The access map for Mythos among US government agencies makes the institutional contradiction hard to ignore:

  • NSA (National Security Agency — the agency responsible for foreign signals intelligence and offensive cyber operations): ✅ has Mythos access
  • Commerce Department: ✅ has Mythos access
  • Additional unnamed federal agencies: currently negotiating expanded Mythos access with the Trump administration
  • CISA (Cybersecurity and Infrastructure Security Agency — the specific federal body responsible for defending US government digital systems from attack): ❌ explicitly excluded from Mythos Preview
[Image: Anthropic, creator of Claude and the Mythos cybersecurity model, which was breached by unauthorized users via contractor credentials]

The agency whose entire mandate is defending US digital infrastructure from exactly the kind of attack Mythos can execute does not have access to Mythos. The agency built to run offensive cyber operations does. Whether this asymmetry reflects deliberate political strategy or procurement friction, the result is the same: America's defensive cyber agency is operating blind to a tool its offensive counterpart is actively using.

The breach itself reveals a deeper problem with "restricted access" as a safety mechanism. An attacker used social engineering (a technique where someone manipulates another person into revealing credentials or access, rather than breaking in technically) to reach a model Anthropic described as dangerous. The vulnerability was not in Mythos itself — it was in the trust chain around it. A contractor account, a forum, and patience were sufficient.

Google's AI Moves Into Your Conference Room — No Invite Required

Google's Gemini AI (Google's series of large language models, functionally comparable to OpenAI's ChatGPT) expanded its meeting notes feature this week to cover in-person physical meetings — not just scheduled video calls. Previously limited to a small group of Android alpha testers (early users testing unreleased features), the capability now works across three distinct environments:

  • In-person meetings — physical rooms with no scheduled calendar event or formal IT setup required
  • Zoom calls — the independent video conferencing platform used by hundreds of millions of workers globally
  • Microsoft Teams meetings — Microsoft's enterprise communication and collaboration suite

Google's framing: "If a user who is not in person wants to join the meeting, you can transition the meeting to a normal video call." The practical implication: a single Android user with the Google Meet app can activate AI transcription (automatic written record of a spoken conversation) in any physical room, without a calendar event, without formal notification to other attendees, and without IT involvement.

For workers: the assumption that off-calendar conversations remain unrecorded is no longer reliable. An impromptu hallway debrief, a whiteboard session, a pre-meeting sidebar — any of these can now be captured and transcribed in real time if one person in the room opens Google Meet on their phone. The feature requires no scheduled meeting. No room booking. No warning.

Three Expansions in One Week — All Without a Public Vote

The thread connecting these three stories is not the technology — it is the speed at which AI capabilities are extending into spaces where people reasonably assumed they had control or privacy.

Meta employees did not vote on MCI. Meeting attendees receive no notification when Google Meet's AI activates in a physical room. CISA did not choose to be excluded from Mythos while the NSA was included. In each case, the expansion happened through internal corporate or governmental decisions, not public process, workplace negotiation, or regulatory approval.

Meanwhile, SpaceX is reportedly weighing a $60 billion acquisition of Cursor — an AI coding platform that competes directly with tools from Anthropic and OpenAI — with a $10 billion access-fee alternative also under discussion. When a single coding productivity tool commands a $60 billion price tag, the commercial stakes behind these quiet workplace expansions become concrete fast.

The most useful action available right now: contact your IT or HR department and ask which monitoring tools are installed on your work devices, what data they collect, how long it is retained, and who can access it. Not as a political statement — as basic information you are entitled to have. In 2026, not knowing what is running on your work computer is a risk you are actively choosing to accept.

To stay on top of how AI automation is reshaping the workplace, follow our ongoing coverage as these tools expand further into daily work life.

