Ask ChatGPT to pick a number — it picks the same one every time
Researchers found that AI models like ChatGPT, Claude, and Gemini always pick the same 'random' numbers — and the reason reveals a fundamental flaw in how AI thinks.
Ask ChatGPT to pick a random number between 1 and 50. Then ask Claude. Then Gemini. Then Llama. They all say 27.
This bizarre pattern — confirmed by a peer-reviewed study of 75,600 AI responses and a viral Reddit experiment that hit Hacker News this week — reveals something fundamental about how AI actually works. And it matters more than you think.
Every AI has a favorite number
Spanish astrophysicist Javier Coronado-Blázquez ran a massive experiment: he asked six different AI models to "pick a random number" across three ranges, in seven languages, and at six temperature settings (a dial that controls how creative or predictable AI responses are). That's 756 combinations, each queried 100 times, for 75,600 total responses.
The results were striking:
Between 1 and 5: AI models favor 3 and 4
Between 1 and 10: They cluster around 5 and 7
Between 1 and 50: ChatGPT, Claude, Gemini, and Llama all pick 27
Between 1 and 100: They gravitate toward 37, 47, and 73 — all prime numbers
Between 1 and 10,000: A new Reddit experiment found ChatGPT clusters its picks around 7,200–7,500
A separate experiment by Meta data scientist Colin Fraser confirmed the pattern. He asked ChatGPT 2,000 times to pick a number between 1 and 100. The number 42 appeared 10% of the time — a likely echo of The Hitchhiker's Guide to the Galaxy, where 42 is famously "the answer to life, the universe, and everything."
Why AI can't be random
The explanation is both simple and revealing. AI models like ChatGPT don't actually "think" about what number to pick. They predict the most likely response based on billions of text examples they were trained on.
When millions of humans answer prompts like "pick a random number" on the internet, certain numbers show up in that text far more often than others. Humans already have number biases — we disproportionately favor 7, avoid round numbers, and lean toward primes when trying to "seem random." AI simply mirrors those patterns back.
There's also a technical factor: a process called RLHF (reinforcement learning from human feedback — the step where human trainers rate AI responses to make them better) can cause what researchers call "mode collapse." The AI learns that certain answers get approved more often and starts defaulting to them.
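The temperature dial and mode collapse are easy to see in a toy sketch. Assume a model assigns slightly higher "preference scores" (logits) to a favorite number — the scores below are invented for illustration, not taken from any real model. Temperature-scaled softmax then shows why lowering temperature makes a mildly favored answer dominate almost completely:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw preference scores into probabilities.
    Lower temperature sharpens the distribution toward the top score."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical preference scores for the numbers 25-29, with 27 mildly favored.
numbers = [25, 26, 27, 28, 29]
logits = [1.0, 1.2, 3.0, 1.1, 0.9]

p27_by_temp = {}
for t in (1.0, 0.5, 0.1):
    probs = softmax_with_temperature(logits, t)
    p27_by_temp[t] = probs[numbers.index(27)]
    print(f"temperature={t}: P(pick 27) = {p27_by_temp[t]:.2f}")
```

At temperature 1.0 the favorite wins only part of the time; at 0.1 it is picked essentially always. A genuinely uniform random picker would give every number equal probability at any temperature — which is exactly what these models don't do.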
Why this actually matters
If you're using ChatGPT to:
• Run a raffle or giveaway — the results aren't fair
• Randomize A/B tests — your data is skewed
• Generate passwords or PINs — they're predictable
• Shuffle anything — it's not shuffled
AI is not a random number generator. It's a pattern-completion engine that reflects the collective biases of the internet. For anything requiring true randomness, use a dedicated tool like random.org, which generates numbers from atmospheric noise.
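If you're working in code rather than a browser, you don't even need an external service: Python's standard library exposes the operating system's entropy source directly. A minimal sketch:

```python
import secrets
import random

# Uniform random integer in [1, 50], drawn from the OS entropy source --
# suitable for raffles, tokens, and anything security-sensitive.
n = secrets.randbelow(50) + 1
print(n)

# For shuffling, random.SystemRandom is also backed by OS entropy
# rather than a reproducible pseudo-random algorithm.
entries = ["alice", "bob", "carol", "dave"]
random.SystemRandom().shuffle(entries)
print(entries)
```

Unlike an LLM, `secrets` gives every value in the range the same probability, and its output can't be predicted from patterns in training data.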
Try it yourself
Open ChatGPT (or Claude, or Gemini) and type:
Pick a random number between 1 and 50.
Then try it 10 times. Count how many times you get 27 — or a number with 7 in it. The pattern is hard to unsee.
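If you'd rather automate the counting, a small tally helper does it. The `sample` list below is invented for illustration — in practice you would fill it with the answers the chatbot actually gives you:

```python
from collections import Counter

def tally(responses):
    """Count how often each answer appears, and how many answers contain a 7."""
    counts = Counter(responses)
    with_seven = sum(1 for r in responses if "7" in str(r))
    return counts, with_seven

# Replace this hardcoded list with the numbers you collect from the chatbot.
sample = [27, 27, 37, 27, 14, 27, 42, 27, 17, 27]
counts, with_seven = tally(sample)
print(counts.most_common(3))
print(f"{with_seven}/{len(sample)} answers contain a 7")
```

With a truly uniform picker over 1-50, seeing the same number six times in ten draws would be vanishingly unlikely; with an LLM, it's the expected outcome.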
For a deeper dive, the full academic paper tested six models (DeepSeek-R1, Gemini 2.0, GPT-4o-mini, Llama 3.1, Mistral, and Phi4) across 75,600 calls — and found that even changing the language of the prompt shifts which numbers AI prefers. Ask in Spanish and you get different biases than in English. The training data from each language carries its own cultural number preferences.