AI for Automation
2026-03-21 · AI fraud · compliance · startup · Delve · security

This $32M AI startup just got caught faking every report

Delve raised $32M promising AI-powered compliance — then a leak revealed 99.8% identical reports across hundreds of clients. Here's what went wrong.


A San Francisco startup called Delve raised $32 million promising to automate security compliance with AI. Companies paid Delve to get certified for SOC 2, HIPAA, ISO 27001, and GDPR — the security seals that enterprise customers demand before signing contracts. The pitch: AI does the work in days, not months.

Then a leaked spreadsheet blew the whole thing open. An investigation found that Delve's "AI-powered" reports were 99.8% identical across hundreds of different companies — and the auditors signing off weren't independent at all.

[Image: Delve compliance platform exposed for fake AI reports]

The $32M Promise

Delve was founded by MIT dropouts Karun Kaushik and Selin Kocalar in 2023. Their pitch was irresistible: instead of spending months and tens of thousands of dollars hiring consultants to prove your company handles data safely, Delve's AI would automate the entire process.

The company marketed "agentic AI" (a type of AI that takes action on its own) that would collect evidence, write policies, and generate audit-ready reports automatically. Investors bought in. Clients lined up. Delve secured its $32 million Series A.

What the Leak Revealed

In December, someone discovered a publicly accessible Google spreadsheet containing links to hundreds of Delve's confidential audit reports. Security researchers analyzed what they found — and the results were damning.

• 99.8% textual similarity — reports for completely different companies used nearly identical language

• 259 Type II reports — all contained identical test conclusions, including the same grammatical errors

• Pre-written conclusions — auditor verdicts were embedded in drafts before companies even submitted their information

• Test data in final reports — keyboard-mashed entries like "sdf" and "dlkjf" appeared in delivered documents

• Copy-paste cloud descriptions — whether a client used AWS, Azure, or Google Cloud, the description was identical
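To make the 99.8% figure concrete, here is a minimal sketch of how researchers could flag near-identical documents, using Python's standard-library `difflib`. The report texts, client names, and the 0.99 threshold are illustrative assumptions, not data from the actual leak.

```python
# Hypothetical sketch: flag near-duplicate reports across clients.
# All report texts and names below are invented for illustration.
from difflib import SequenceMatcher
from itertools import combinations

def similarity(a: str, b: str) -> float:
    """Ratio of matching characters between two texts (0.0 to 1.0)."""
    return SequenceMatcher(None, a, b).ratio()

reports = {
    "acme_corp": "The auditor tested access controls. No exceptions noted.",
    "globex":    "The auditor tested access controls. No exceptions noted.",
    "initech":   "Access reviews were performed quarterly by the CISO.",
}

for (name_a, text_a), (name_b, text_b) in combinations(reports.items(), 2):
    score = similarity(text_a, text_b)
    if score > 0.99:  # near-duplicate threshold (assumed)
        print(f"{name_a} vs {name_b}: {score:.3f} -- likely boilerplate")
```

Genuinely independent audits of different companies should never clear a threshold like this; hundreds of reports doing so is the statistical smoking gun.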

[Image: Analysis showing identical boilerplate across Delve compliance reports]

Where the 'AI' Actually Was

Despite marketing "agentic AI automation," the investigation found no meaningful AI in Delve's actual product. The platform consisted primarily of pre-populated templates. Companies would click to accept fabricated policies — fake board meeting minutes, fake incident response records, fake employee background checks — that Delve had pre-written.

The "automation" was closer to a mail merge than artificial intelligence. Same template, different company name in the header.
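For anyone unfamiliar with the term, a "mail merge" is just string substitution into a fixed template. The sketch below shows what that looks like in code; the template text and company names are invented, not Delve's actual output.

```python
# Illustrative only: "same template, different company name in the header."
# The report language here is invented for demonstration purposes.
from string import Template

REPORT_TEMPLATE = Template(
    "In our opinion, $company maintained effective controls over "
    "the $framework trust criteria throughout the audit period."
)

def generate_report(company: str, framework: str = "SOC 2") -> str:
    # A mail merge: swap in the name, ship the same conclusion.
    return REPORT_TEMPLATE.substitute(company=company, framework=framework)

print(generate_report("Acme Corp"))
print(generate_report("Globex"))
```

Nothing here collects evidence, tests controls, or reasons about the client's environment, which is the point: labeling this "agentic AI" is a marketing claim, not a description of the software.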

The Audit Independence Problem

Legitimate compliance audits require independence — the company performing the audit can't also be the one preparing the evidence. It's the same reason you can't grade your own exam.

According to the investigation published on DeepDelver, Delve allegedly violated this rule by writing the test procedures, conclusions, and entire reports — then having firms sign off without independent review. The signing firms were reportedly Indian certification mills operating through U.S. shell companies, not the established U.S.-based CPA firms that clients were led to expect.

Who's Affected

High-profile clients reportedly include Lovable, Bland, Incorta, and Duos Edge AI, among hundreds of others. Many of these companies handle sensitive health data (PHI) for millions of Americans.

If their compliance certifications turn out to be worthless, they face:

HIPAA criminal liability for willful neglect of data protection rules

GDPR fines up to 4% of global revenue

Broken contracts with enterprise customers who required valid certifications

Loss of trust from users whose data was supposed to be protected

[Image: Delve compliance automation platform interface]

The CEO's Response

When confronted, CEO Karun Kaushik called the allegations "falsified claims" in an "AI-generated email" and stated that "no external party gained access to our databases." However, the publicly exposed Google spreadsheet — containing links to hundreds of confidential client files — directly contradicted that claim.

A Warning for Anyone Buying AI Tools

Delve isn't just a compliance scandal. It's a case study in what happens when companies slap an 'AI' label on templates and charge enterprise prices.

If you're evaluating AI tools for your business:

• Ask for a live demo — not a sales deck. Watch the AI work in real time.

• Check who's actually auditing or verifying outputs. Independence matters.

• Compare your outputs to a colleague's. If they look identical, something's wrong.

• A tool that promises to do months of work in days should raise questions, not just excitement.

The story is trending on Hacker News with 137+ points and growing. As one commenter put it: "This is what happens when the compliance buyer doesn't actually care about compliance — they just want the badge."
