World ID Banned in 6 Countries: Altman's Iris Scan Blocked
World ID iris scanning banned in 6 countries — Spain, Germany, Kenya & more reject Altman's biometric ID project over consent failures.
World ID — Sam Altman's iris-scanning identity project — has been banned or formally suspended in at least six countries, and regulators on four continents are demanding the data be deleted. The bans expose a fundamental tension at the heart of AI-era identity: the technology most capable of proving you're human online requires collecting the most sensitive biometric data imaginable, physical characteristics unique to your body, such as your iris pattern.
For everyday people, the stakes are real. Whether you're voting in an online election, unlocking a financial service, or simply proving your age on a website, how your humanity gets verified online is about to become one of the defining policy fights of the decade.
What World ID Actually Does — and Why It Alarmed Regulators Immediately
World ID is the identity layer of Worldcoin, a project co-founded by Altman in 2019. A physical device called the "Orb" — a silver sphere roughly the size of a bowling ball — captures a high-resolution scan of your iris. That scan is converted into an "IrisCode," a mathematical fingerprint unique to your eye's structure: a cryptographic algorithm transforms the iris pattern into a numerical string that, by design, cannot be reverse-engineered back into an image of your eye.
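The one-way property described above can be illustrated with an ordinary cryptographic hash. This is a deliberate simplification: World ID's actual pipeline uses a proprietary iris-encoding algorithm, not plain SHA-256, and the byte string below is a made-up stand-in for a real iris template.

```python
import hashlib

# Hypothetical stand-in for a raw iris template; the real Orb
# produces a proprietary encoding, not a simple byte string.
iris_template = b"example-iris-texture-bytes"

# A cryptographic hash is one-way: the digest cannot be
# reversed to recover the input template or image.
iris_code = hashlib.sha256(iris_template).hexdigest()

# The same input always yields the same code (deterministic)...
assert iris_code == hashlib.sha256(iris_template).hexdigest()

# ...while even a one-byte change produces an unrelated code,
# which is what makes each code unique to one eye.
other_code = hashlib.sha256(b"example-iris-texture-bytez").hexdigest()
assert iris_code != other_code
```

The same determinism is what lets the system detect duplicate enrollments: scanning the same eye twice yields the same code, so a second sign-up can be rejected without storing the original image.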
In exchange for the scan, users receive WLD tokens (the project's own cryptocurrency — a digital currency stored on a blockchain, which is a tamper-proof public ledger maintained by thousands of computers simultaneously). The pitch is direct: in a world flooded with AI-generated bots and synthetic identities, World ID provides a universal "proof of personhood" (a mathematically verifiable certificate confirming you are a real, unique human being). Altman has argued this becomes critical infrastructure as AI systems become indistinguishable from humans online.
Three concerns immediately drew regulatory attention:
- Permanence: Unlike a password or email, your iris pattern cannot be changed. A single database breach permanently compromises the biometric identity of every affected user — for life.
- Informed consent: In multiple countries, investigators found that users — particularly in lower-income regions incentivized by the crypto reward — did not fully understand what they were agreeing to, or how their data would be stored, shared, or eventually deleted.
- Concentration risk: A single private company running a global biometric database (a centralized repository holding iris data from millions of people across 50+ countries) concentrates an extraordinary degree of power with minimal democratic oversight.
The 6 Countries Where World ID Is Banned, Suspended, or Under Investigation
Bans, suspension orders, and formal investigations have accumulated across four continents since Worldcoin's July 2023 global launch:
- Spain: Spain's data protection authority, AEPD (Agencia Española de Protección de Datos — Spain's independent privacy regulator), issued a formal suspension order in 2024 under GDPR (the EU's General Data Protection Regulation, the world's most comprehensive privacy law). Regulators concluded informed consent was not being properly obtained, particularly from minors. The suspension remains in force as of April 2026.
- France: CNIL (France's independent privacy watchdog) opened a formal GDPR investigation requiring World ID to halt new enrollments in France while the inquiry continues.
- Germany: Bavaria's State Office for Data Protection Supervision ordered World ID to delete iris data already collected from German residents, declaring the collection process "legally insufficient" under EU data protection law.
- Kenya: The Kenyan government suspended all Worldcoin operations in 2023 — after roughly 350,000 Kenyans had already been scanned — citing concerns that the country's population was being used as an unregulated "test market" without adequate oversight from national authorities.
- Brazil: Consumer protection agency Procon-SP issued a cease-and-desist order, citing a lack of transparency about how iris data would be retained long-term, whether it would be sold to third parties, and what deletion rights users held.
- India: Following passage of India's Digital Personal Data Protection Act (a national law modeled partly on GDPR), regulators opened formal inquiries into whether World ID's collection and storage practices met the new law's explicit consent requirements.
The pattern is consistent across all six cases: regulators concluded that users were not given adequate, clear information before agreeing to permanent biometric collection. The problem is especially acute where economic incentives — free cryptocurrency — may have created subtle pressure to consent that authorities consider ethically coercive.
The $134 Billion OpenAI Trial Running Simultaneously
The World ID regulatory collapse lands as Altman is simultaneously defending a courtroom battle of historic scale. Elon Musk — one of OpenAI's original co-founders and early donors — filed a lawsuit alleging that Altman and OpenAI's leadership violated foundational obligations established when the organization was created.
OpenAI was founded in 2015 as a nonprofit (a charitable organization with no shareholders, structured specifically to pursue AI safety for public benefit rather than investor returns). It has since undergone a restructuring toward a commercial model — now valued at approximately $134 billion — a transformation that has drawn scrutiny from California's attorney general and, now, from Musk in court. The trial, which opened this week, centers on whether OpenAI's conversion from charitable organization to for-profit entity violated the original mission pledge — and whether Altman and other executives committed fraud by diverting assets that were legally pledged to a public benefit purpose.
Both the World ID bans and the OpenAI trial point to the same underlying question: who controls AI-powered systems that could become foundational infrastructure for billions of people — and under what rules do they operate? Altman is simultaneously building systems to verify human identity online (World ID) and advancing the most powerful AI in history (OpenAI's GPT-4o and successors). The legal exposure on both fronts is now substantial.
Children, Social Media, and the Age Verification Paradox
The BBC Technology feed also highlights a third, converging regulatory wave: governments worldwide are tightening social media access restrictions for users under 16. Australia enacted an outright ban on social media for minors under 16 in early 2026. The UK, France, and several US states are drafting similar legislation, citing evidence linking algorithmic recommendation systems (AI-powered engines that automatically select content to maximize user engagement) to teenage anxiety, disordered eating, and sleep disruption.
Here, the World ID story and the social media regulation story collide in an uncomfortable way: enforcing age limits on social platforms requires reliable, privacy-respecting age verification — and biometric systems like World ID represent one of the few technically credible solutions at scale. If World ID or a similar iris-scanning identity system were to become the standard mechanism for proving age online, Altman's technology could become de facto mandatory infrastructure for any user under 16 who wants to access Instagram, TikTok, or Snapchat.
That prospect alarms privacy advocates even more than the current voluntary iris scans — because it would transform optional participation in a crypto experiment into compulsory biometric enrollment for children who simply want to use the internet. Regulators have not yet resolved this contradiction.
Before You Scan: World ID Biometric Risk Checklist
If you encounter a World ID enrollment point — at a crypto event, a partner app, or a tech hub — the first question to ask is whether the service is legally cleared to operate in your country. In Spain, France, Germany, India, Kenya, and Brazil, it currently is not. In the US, state-level data protection frameworks vary widely; California's CCPA (California Consumer Privacy Act — a state law granting residents rights over how businesses collect and use their personal data) provides some protection, but there is no federal equivalent of GDPR.
More broadly, the World ID story is a useful reminder of a principle that applies to any biometric service: permanence changes the risk calculation entirely. A leaked password costs ten minutes to reset. A leaked iris scan is a problem you carry for the rest of your life. Before opting in to any biometric identity service, ask four questions: Does the operator have a clear deletion policy? What legal jurisdiction governs your data? What happens to your IrisCode if the company is acquired or goes bankrupt? And has the service been cleared by your national data protection authority? Our guides on AI identity tools cover what to look for before you share any biometric data.
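The four questions above amount to a simple screening rule: with unchangeable biometrics, a single unanswered question is reason enough to walk away. A toy sketch of that all-or-nothing rule (every field name here is illustrative, not part of any real API or service):

```python
# Illustrative screening of a biometric service against the four
# questions in the checklist above; all names are hypothetical.
QUESTIONS = [
    "has_deletion_policy",    # clear, user-initiated deletion?
    "jurisdiction_known",     # which legal jurisdiction governs the data?
    "bankruptcy_plan_known",  # fate of data on acquisition or bankruptcy?
    "regulator_cleared",      # cleared by the national data protection authority?
]

def safe_to_enroll(service: dict) -> bool:
    """Permanent biometrics justify an all-or-nothing rule:
    any missing or negative answer means do not enroll."""
    return all(service.get(q, False) for q in QUESTIONS)

# Example: a service with no clear deletion policy fails the check,
# no matter how good its other answers are.
assert not safe_to_enroll({
    "jurisdiction_known": True,
    "bankruptcy_plan_known": True,
    "regulator_cleared": True,
})
```

The design choice worth noting is the default of `False` for missing answers: with a password, ambiguity is recoverable; with an iris scan, it is not.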
Six governments have now asked the same questions — and in every case, they have not liked the answers they received.