1 million people just followed a fake AI soldier
An AI-generated persona named Jessica Foster posed as a pro-Trump military officer on Instagram, fooled nearly 1 million followers, and funneled them to paid adult content.
Nearly one million people followed, praised, and financially supported a woman who didn't exist. Her name was Jessica Foster — an AI-generated persona that posed as a U.S. military officer and Trump supporter on Instagram. For months, nobody noticed. Then a journalist spotted a tiny detail on her uniform that gave it all away.
The Perfect Persona
Jessica Foster appeared on Instagram in December 2025. Her bio read "america first." Her photos showed a young, attractive woman in military combat uniforms, posing alongside Donald Trump and world leaders. She posted patriotic content, engaged with followers, and quickly built a devoted audience — mostly men.
Within four months, she had nearly a million followers. People thanked her for her military service. They praised her loyalty to Trump. They believed every pixel of it.
But none of it was real. Every photo was generated by AI. The person behind the account has never been identified.
The Real Goal: Money
The patriotic imagery was a funnel. Jessica Foster's Instagram and X profiles directed followers to @jessicanextdoor on OnlyFans, where "she" sold foot fetish content and accepted direct tips from subscribers.
What subscribers were paying for:
• Individual posts for $100+ each
• Adult content from someone who doesn't exist
• "Personal" messages that were likely AI-generated
• Reassurance of authenticity: her OnlyFans bio read "I'm not a robot haha." She was.
As journalist Kat Tenbarge put it on the Courier YouTube channel: "She acts as a military advisor to the Trump administration on Instagram, but she operates as a foot model on OnlyFans."
The Mistake That Exposed It
Two details gave the AI away. First, in one combat uniform photo, the nametag on her chest read "Jessica" — her first name. But U.S. military regulations require last names only on uniform nametags. No real soldier would have "Jessica" printed on their uniform.
Second, a placard in a speech photo misspelled "Board of Peace" as "Border of Peace" — a classic AI text-generation error. Background distortions and anatomical inconsistencies in other photos confirmed the deception.
Who Was Behind It — and Why It Matters
Nobody knows who created Jessica Foster. The Washington Post reported that thousands believed the deepfake images were authentic. Speculation ranges from a lone scammer to a foreign propaganda operation. What is clear is that the account violated Instagram's rules requiring disclosure of AI-generated political content, and that nobody could trace the creator.
The account has since been removed from Instagram. But the damage was done: nearly a million people interacted with AI-generated political content without knowing it, and an unknown number paid real money for fake content.
How to Spot AI Fakes Like This
The Jessica Foster case reveals how sophisticated — and how vulnerable — AI deception has become. Here's what to watch for:
• Text in images — AI still struggles with spelling and context (wrong nametags, misspelled signs)
• Background consistency — look for warped edges, impossible architecture, blurry crowds
• Anatomical details — hands, fingers, ears, and teeth are common AI weak spots
• No verifiable history — a person with 1M followers but zero real-world footprint is a red flag
• Monetization pressure — if a patriotic account funnels you to paid content, question everything
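Some of these checks can be partly automated. As one illustrative sketch (my own, not something from the article or a known detection tool): genuine camera photos usually carry an Exif metadata segment recording the camera make and model, while many AI image generators and screenshot pipelines emit JPEGs without one. The stdlib-only function below scans a JPEG's segment markers for an APP1 "Exif" block. Treat its result as one weak clue, never proof — re-encoded or stripped photos also lack Exif, and a forger can inject fake metadata.

```python
import struct

def has_exif_segment(path: str) -> bool:
    """Return True if the JPEG at `path` contains an APP1 Exif segment.

    Weak-signal heuristic only: real photos re-saved by social platforms
    often have metadata stripped, and Exif can be forged.
    """
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":          # missing SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:              # lost sync with segment markers
            break
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):       # EOI or start-of-scan: no more headers
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        # APP1 (0xE1) segments holding Exif start with the b"Exif\0\0" signature
        if marker == 0xE1 and data[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        i += 2 + length                  # skip marker bytes plus segment payload
    return False
```

A real workflow would combine this with the human checks above (uniform details, misspelled signage, anatomical artifacts), since metadata alone settles nothing.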
As AI image generation improves, these clues will get harder to spot. The OECD has cataloged this incident as a case study in AI-powered social deception. Meta's policies require disclosure of AI in political ads — but when the creator is anonymous, enforcement is nearly impossible.