Waymo Robotaxi Trapped 3 Passengers During 6-Minute Attack
A Waymo self-driving robotaxi trapped 3 passengers for 6 minutes while being attacked — and no one, not even Waymo, could override the AI safety system.
In January 2026, Doug Fulop and two friends climbed into a Waymo robotaxi (a self-driving car with no steering wheel, no pedals, and no driver) in San Francisco. Within minutes, a stranger began punching the car's windows, screaming death threats, and trying to lift the vehicle off the ground. The attack lasted 6 minutes. The passengers couldn't flee. The car couldn't — or rather, wouldn't — drive away. The incident exposes a fundamental blind spot in autonomous vehicle AI safety: Waymo's self-driving system cannot distinguish between a pedestrian and an attacker, leaving riders completely powerless.
The reason? Waymo's pedestrian detection system (the safety feature that automatically stops the car whenever it senses a human nearby) treated the attacker exactly the same way it treats a child crossing the street. The AI couldn't tell the difference between someone walking past and someone trying to break in.
Six Minutes Trapped Inside a Waymo Robotaxi
Fulop described the attack in terrifying detail. The man alternated between windows, slamming his fists against the glass from different angles. "If he had kept hammering on one window instead of alternating, I'm sure he would have eventually broken through," Fulop told reporters.
He called 911 and Waymo support simultaneously. Neither could help. Waymo explicitly told the passengers it would not remotely command the car to leave while a person was standing nearby. There was no manual override (no way for passengers to take control of the vehicle), no driver's seat, and no physical escape route, because the car's safety protocols kept the doors locked.
The car only moved when random bystanders happened to distract the attacker, causing him to step outside the sensor range (the zone around the car where cameras and radar detect nearby humans). Once he moved away, the Waymo drove off automatically — as if nothing had happened.
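The behavior described above can be reduced to a simple rule: the car may not move while any detected person is inside a hold zone, and nothing in the rule models intent. The following is a hypothetical sketch of that logic, not Waymo's actual code; the function names and the 2-meter threshold are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class DetectedPerson:
    distance_m: float  # distance from the vehicle, in meters

def may_depart(people: list[DetectedPerson], hold_radius_m: float = 2.0) -> bool:
    """The car may move only if no person is within the hold radius.

    Note what is missing: there is no notion of intent. A child crossing
    the street and an attacker pounding on the glass produce identical
    inputs, so both keep the vehicle pinned in place.
    """
    return all(p.distance_m > hold_radius_m for p in people)

# An attacker standing at the window holds the car indefinitely...
print(may_depart([DetectedPerson(distance_m=0.5)]))   # False: car stays put
# ...and the moment he steps away, the car is free to drive off.
print(may_depart([DetectedPerson(distance_m=5.0)]))   # True: car departs
```

Under a rule shaped like this, the only thing that releases the car is exactly what Fulop observed: the person leaving sensor range.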
"As passengers, we deserve more safety than that if someone is trying to attack us," Fulop said afterward. He stopped using Waymo at night.
Waymo's AI Safety Paradox: Statistics vs. Real-World Threats
Here's the devastating irony: the exact same system that makes Waymo statistically safer than any human driver is what makes its passengers uniquely helpless during deliberate attacks.
Waymo's own data across 170.7 million rider-only miles shows impressive safety numbers:
- 92% fewer serious injury crashes compared to human drivers
- 83% fewer airbag deployment crashes (collisions severe enough to trigger airbags)
- 82% fewer injury-causing crashes overall
- A serious injury rate of just 0.02 per million miles, versus the human benchmark of 0.22
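The per-mile rates above can be sanity-checked with back-of-the-envelope arithmetic. This sketch uses only the figures reported here, not Waymo's methodology:

```python
# Reported serious-injury crash rates, per million rider-only miles
waymo_rate = 0.02    # Waymo's reported rate
human_rate = 0.22    # human-driver benchmark

# Relative reduction implied by the two rates
reduction = (human_rate - waymo_rate) / human_rate
print(f"{reduction:.0%}")  # 91%, in line with the ~92% headline figure

# Expected serious-injury crashes over the 170.7M-mile dataset at each rate
miles_millions = 170.7
print(round(waymo_rate * miles_millions, 1))  # 3.4 at Waymo's rate
print(round(human_rate * miles_millions, 1))  # 37.6 at the human benchmark
```

The gap between roughly 3 and roughly 38 expected serious injuries over the same mileage is the statistical case Waymo is making; the trapping incidents fall entirely outside it.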
But those numbers describe average driving conditions. They don't account for what happens when a human deliberately exploits the system's rules. A human Uber driver who sees an attacker approaching can simply step on the gas and leave. A Waymo cannot, and its passengers have no way to force it to.
AV safety expert Phil Koopman warns that relying on remote operators during emergencies creates what he calls "customer service phone waiting queues" during life-threatening situations — an unacceptable delay when seconds matter. But he also raises an uncomfortable question: should an 8-year-old or an intoxicated passenger have the authority to override the car? The very populations autonomous vehicles are designed to serve — elderly riders, disabled passengers, minors — may lack the judgment to safely exercise manual overrides.
Waymo Robotaxi Attacks: A Growing Pattern Across Cities
Fulop's experience isn't isolated. The anti-robotaxi movement has escalated dramatically since 2023, when San Francisco's Safe Street Rebel group pioneered the "coning" tactic (placing traffic cones on Waymo hoods to blind their sensors), which can immobilize a car for hours.
Since then, incidents have grown more violent and more frequent:
- February 2024: A Waymo in San Francisco's Chinatown was surrounded by a crowd, had its windows smashed, and had a lit firework thrown inside
- 2025–26 school year: Austin officials documented 19 cases of Waymos illegally passing school buses
- January 2026: A Waymo struck a child outside a Santa Monica elementary school, reducing speed from 17 mph to under 6 mph before impact — but not stopping entirely. NHTSA, NTSB, California DMV, and CHP are all investigating
- 2026: In Los Angeles, 5 individuals on e-bikes surrounded a Waymo and demanded passengers open the doors
Robert Moreno, a passenger in a separate trapping incident, described the helplessness: "We felt trapped in the sense that we didn't know what to do. If we were outside walking we could've walked away. In this instance, we literally had no control."
Police and fire departments have had to physically relocate stuck Waymo vehicles during active emergencies, draining municipal resources that taxpayers fund.
Waymo's 170 Million Miles of Autonomous Driving Data — With One Critical Blind Spot
Waymo has driven 53.5 million miles in the San Francisco Bay Area, 68.6 million in Phoenix, and 37.9 million in Los Angeles. The company currently operates in six cities — San Francisco, San Jose, Phoenix, Miami, Austin, and Atlanta — with plans to expand to 20 cities by end of 2026.
Waymo spokesperson Katherine Barna characterized attacks on vehicles as "rare." The company's official safety guidance during an attack is simply: "Remain inside the vehicle."
But the OECD AI Incident Database (an international tracker that classifies AI-related harms) categorized the passenger-trapping incidents as "realized harm" — not merely hypothetical hazards. This classification suggests the problem is systemic rather than anecdotal.
Fred Perkins of the Center for Auto Safety raised an even more uncomfortable legal question: when a vehicle physically prevents a passenger from leaving during a dangerous situation, does that constitute false imprisonment? No court has ruled on it yet, but as Waymo scales to 20 cities, the question becomes harder to ignore.
For anyone exploring how AI systems handle real-world edge cases, this is one of the clearest examples of why understanding AI limitations matters — even when the overall statistics look excellent.
No Passenger Override, No Federal Regulation, No Answers for Autonomous Vehicle Safety
The most troubling aspect isn't any single incident — it's the regulatory vacuum. No consistent federal or state framework exists defining what authority passengers should have over autonomous vehicle behavior during emergencies. New York withdrew its robotaxi service plan in February 2026, signaling growing regulatory pushback, but no comprehensive rules have emerged.
The fundamental question isn't whether Waymo's cars are statistically safer than human drivers. By their own data — a 92% reduction in serious injuries across 170.7 million miles — they clearly are. The question is what happens when a system optimized for the average case encounters deliberate human hostility.
UC Berkeley professor Scott Moura compared the coning tactic to "someone putting a blindfold over the eyes of a driver." But the body-blocking vulnerability discovered in 2026 is even simpler: you don't need a traffic cone. You just need to stand near the car. And unlike a blindfolded human driver who might panic and accelerate, Waymo's AI will do exactly what it was programmed to do — nothing.
Mark Gruberg of the San Francisco Taxi Workers Alliance put it bluntly: "Transport workers, cab drivers, Uber and Lyft drivers, truck drivers, bus drivers, shuttle drivers — we're all in the crosshairs of this." He argues that disabled passengers are "far better off with an actual human driver" who can make judgment calls in dangerous situations.
Waymo is betting that its 92% safety improvement across 170 million miles outweighs the edge cases where passengers are trapped. The 3 people who spent 6 minutes wondering if their windows would hold might disagree. And with expansion to 20 cities planned this year, the number of people asking that question is about to grow, along with the number of people who know exactly how to exploit the answer.