Google Gemini Scans Photos by Default — EU Blocks Feature
In the span of 24 hours, three platforms that developers and knowledge workers rely on daily cracked at the same seam: privacy, data control, and trust. Google Gemini was quietly scanning photos inside Google Photos — a feature so invasive that EU regulators moved to block it before most users knew it existed. Within the same window, Vercel confirmed an internal security breach, and Notion was caught silently leaking the email addresses of every editor on any public page. On Hacker News (a popular developer forum where high point scores signal genuine alarm), the three stories together collected more than 591 points in a single afternoon.
The timing makes the pattern impossible to ignore. This isn't a one-off incident — it's a 24-hour snapshot of how much personal data major platforms now process as a default, invisible to most users until something breaks or a regulator steps in.
Google Gemini AI Scan Running Inside Your Google Photos
Google's AI assistant, Gemini, has been integrated into Google Photos — and it's doing more than improving search. Gemini actively scans photos stored in your library, running them through an AI analysis pipeline (a system that processes images using machine learning models to extract meaning, recognize faces, objects, and text). The stated purpose: enable features like natural-language photo search ("find photos of my cat from last summer") and AI-generated memory summaries.
The core problem is consent. This scanning happens by default. Users are not clearly prompted to agree before their personal photo archives — which can include medical documents, financial records photographed for quick reference, private family moments, and sensitive communications — are fed into an AI processing system.
European regulators moved decisively. Under GDPR (Europe's General Data Protection Regulation — the law requiring companies to obtain explicit, informed consent before processing personal data), the Gemini photo-scanning feature was blocked in EU countries before it reached mainstream deployment. EU residents using Google Photos will not see the feature active on their accounts.
US-based users face a different reality. Without a federal equivalent of GDPR, the feature remains available and enabled without explicit opt-in. This regulatory divergence (when different governments apply fundamentally different rules to the same product) is becoming the defining pattern in how AI capabilities roll out globally: US users get access first; EU users get protection first. For the hundreds of millions of Google Photos users outside Europe, that distinction now matters enormously.
Vercel's Breach — And Why the Silence Is the Real Story
Vercel — the deployment platform (a service that hosts and delivers web applications, used by hundreds of thousands of engineering teams) — confirmed that internal systems were accessed without authorization. The announcement collected 212 points and 30 comments on Hacker News, a strong signal that the developer community viewed it as genuinely serious.
What Vercel has not yet disclosed:
- Which internal systems were accessed
- Whether customer data — including project files, environment variables (secret configuration values that apps use to connect to databases and external APIs), or deployment logs — was reached
- The timeline of intrusion and discovery
- What remediation steps have been or are being taken
For teams running production applications (live, customer-facing services) on Vercel, that ambiguity carries real risk. Environment variables stored in Vercel often contain database credentials, API keys (digital passwords that apps use to connect to third-party services like payment processors and authentication providers), and authentication secrets. If any of those were exposed, every downstream system connected to those credentials faces a potential compromise window.
Until Vercel publishes a full post-mortem (a detailed technical report explaining what happened, what was affected, and what has been fixed), treat all credentials stored in Vercel as potentially exposed and rotate (replace with newly generated values) all secrets immediately — do not wait for official confirmation.
Notion's Silent Editor Email Data Leak on Public Pages
Notion's incident was not an external attack — it was a product design flaw with the same practical effect as one. When any Notion page was made public (shared via a URL accessible to anyone, commonly used for product roadmaps, help documentation, and team wikis), the email addresses of every user who had ever been invited as an editor became visible to any anonymous visitor with the link.
The scope is broader than it first appears:
- Former contractors who edited a page once and moved on — their emails were exposed
- Internal team members added to a public product roadmap — emails visible to any visitor worldwide
- All historical editors, not just current active ones, included in the exposure
- The story drew 132 points and 28 comments on Hacker News, indicating community-wide recognition of the severity
If your organization uses Notion for any external or public-facing documentation, the emails of everyone who ever touched those pages may have been accessible. Notion has not confirmed whether affected users were proactively notified or whether a fix automatically protects previously public pages. An urgent audit of all public Notion pages in your workspace is warranted.
Three Failure Modes, One Trust Crisis
Each of the three incidents represents a distinct category of failure — and understanding the differences matters for how you respond:
- Vercel: External breach — unauthorized access by an outside party to internal infrastructure
- Notion: Design flaw — accidental data exposure built into product behavior from the start
- Google Gemini: Policy decision — intentional scanning of user data without clear, prior, explicit consent
The third category is the most significant of the three. A breach is a failure of security engineering — someone's defenses were overcome. A data leak is a failure of product design review — an unintended consequence shipped to production. But scanning private photos without explicit user understanding is a deliberate business decision — it reflects what a company believes is acceptable to do with user data by default, absent regulatory pressure. The EU's swift action and the US's inaction demonstrate exactly how much those policy decisions vary based on where you happen to live.
Against this backdrop, the highest-engagement story on Hacker News that same day told a very different tale. The complete archive of Byte magazine — the foundational computing publication launched in 1975 — became publicly accessible at archive.org. It collected 416 points and 102 comments — more than all three security stories combined. The archive spans computing history from hobbyist circuit-building to the PC era, freely available with no subscription and no AI model scanning your reading patterns. The community's enthusiasm for an artifact of computing's more transparent era is not just nostalgia. It's a pointed contrast to where the industry has arrived.
Immediate Action Steps: Google Gemini, Vercel, and Notion Users
Vercel users: Monitor official security announcements at vercel.com/security. Without waiting for Vercel's post-mortem, rotate all environment variables stored in your projects — especially database connection strings, API keys, and authentication secrets. Assume credentials may have been compromised until you have official, specific confirmation otherwise.
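One way to work through that rotation systematically is to script it. The sketch below generates cryptographically random replacement values and prints the corresponding Vercel CLI commands. The secret names are placeholders, not your project's actual keys, and the `vercel env` invocations should be checked against your installed CLI version before running. Remember that a rotated value must also be updated at the upstream service (database, payment processor, etc.), or the old credential stays live.

```python
import secrets

# Placeholder names for illustration; replace with your project's actual secrets.
SECRETS_TO_ROTATE = ["DATABASE_PASSWORD", "PAYMENTS_API_KEY", "SESSION_SECRET"]

def new_secret(nbytes: int = 32) -> str:
    """Generate a URL-safe, cryptographically random replacement value."""
    return secrets.token_urlsafe(nbytes)

def rotation_plan(names):
    """Map each secret name to a freshly generated replacement value."""
    return {name: new_secret() for name in names}

if __name__ == "__main__":
    plan = rotation_plan(SECRETS_TO_ROTATE)
    for name in plan:
        # Remove the old value, then add the new one interactively.
        # Verify these subcommands and flags against your Vercel CLI version.
        print(f"vercel env rm {name} production -y")
        print(f"vercel env add {name} production")
```

Printing the commands rather than executing them keeps a human in the loop, which matters when each rotation has downstream systems that must be updated in the same maintenance window.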
Notion users: Audit every public page your organization has created. Identify which pages have multiple editors and whether any of those editor identities being exposed creates risk — former employees, contractors, or anyone with sensitive system access. Remove historical editor access from public pages where possible. Do not assume Notion has automatically fixed or notified affected users.
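That audit is easier to reason about as a small triage script. The sketch below works over an illustrative page-record shape (in practice you would populate it from the Notion API or an admin export; the fields shown here are assumptions, not Notion's actual schema). It flags every public page with editors and separately highlights addresses outside your internal domain, since exposed contractor and vendor identities carry the most risk.

```python
from dataclasses import dataclass, field

# Illustrative record shape -- build real records from the Notion API
# or an admin export; these field names are assumptions for the sketch.
@dataclass
class NotionPage:
    title: str
    is_public: bool
    editor_emails: list = field(default_factory=list)

def exposed_editors(pages, internal_domain="example.com"):
    """Return {title: {"all": [...], "external": [...]}} for public pages.

    Every editor on a public page may have been exposed; addresses
    outside the internal domain (former contractors, vendors) are
    singled out because that exposure is hardest to remediate.
    """
    report = {}
    for page in pages:
        if not page.is_public or not page.editor_emails:
            continue
        external = [e for e in page.editor_emails
                    if not e.endswith("@" + internal_domain)]
        report[page.title] = {"all": page.editor_emails, "external": external}
    return report
```

Running this over a workspace inventory gives a prioritized list: pages with external addresses first, then internal-only exposures, which maps directly onto the removal and notification steps above.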
Google Photos users: EU residents are protected by the regulatory block and the Gemini scanning feature will not be active on their accounts. Users outside the EU should open Google Photos settings and look for Gemini or AI-related feature toggles — disable AI processing features wherever an option is available. Complete opt-out may not be straightforward given the depth of platform integration; awareness of what's running on your library is the essential first step.
For anyone evaluating privacy-focused alternatives, local-first photo tools — applications that store and process your photos on your own device rather than a cloud server — are increasingly capable. They offer a model of photo management where AI features, if any, run entirely on hardware you control, with data that never leaves your possession.