7 AIs tested 15,000 times — all gave the same biased advice
Researchers tested ChatGPT, Claude, and 5 other AIs across 15,000 business prompts. Every model pushed trendy advice over sound strategy — they call it 'trendslop.'
If you've ever asked ChatGPT, Claude, or Gemini for business advice, you probably got a confident, well-written answer. But a new study published in Harvard Business Review found something unsettling: every major AI gives nearly identical advice, and it's biased toward whatever sounds trendy.
Researchers from Esade Business School (Barcelona), the University of Sydney, and NYU Stern tested seven leading AI models — ChatGPT, Claude, GPT-5, Gemini, Grok, DeepSeek, and Mistral — across more than 15,000 simulated business strategy decisions. The result? A new term for a new problem: "trendslop."
What is 'trendslop'?
Trendslop is the tendency of AI to recommend whatever sounds exciting, modern, and aspirational — instead of what actually fits your situation. The researchers define it as "the propensity for AI to opt for buzzy ideas over reasoned solutions."
Think of it this way: if you ask an AI whether your business should focus on being unique (differentiation) or being the cheapest option (cost leadership), the AI will almost always tell you to be unique. That's not because uniqueness is always better — Walmart and Costco built empires on being the cheapest — it's because "differentiation" sounds more inspiring than "cost-cutting" in the training data AI learned from.
The 7 biases every AI shares
The researchers tested each AI on seven fundamental business decisions where leaders must choose between two valid strategies. Here's what they found:
Every AI consistently recommended:
✦ Differentiation over cost leadership — even when being cheap wins
✦ Augmenting humans over automation — regardless of the task
✦ Long-term thinking over short-term action — even in urgent situations
✦ Collaboration over competition — even in zero-sum markets
✦ Radical innovation over incremental improvement — even when small fixes work
✦ Exploration over exploitation — even when doubling down is smarter
✦ Decentralization over centralization — regardless of context
The pattern is clear: AI picks whichever option sounds empowering and modern. Terms like "differentiation," "augmentation," and "collaboration" are associated with human empowerment in the data AI trained on. Meanwhile, "cost leadership," "automation," and "centralization" sound cold and controlling — so AI avoids recommending them, even when they're the right call.
Better prompts barely help
The researchers tried everything to reduce the bias — and almost nothing worked:
• Better prompts reduced biased responses by only 2% for the strongest biases (differentiation, augmentation)
• Adding company context (industry, size, situation) shifted responses by just 11%
• Flipping the order of options changed answers by 19% — meaning AI is partly just picking whichever option is listed first
• Even across 15,000+ trials with every prompting trick, the biases persisted
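The order-flip finding above is easy to test yourself. Here is a toy sketch of that check in Python, assuming a hypothetical `ask_model(prompt)` callable that returns the model's pick (the stub below stands in for a real API call; it is not the researchers' actual test harness):

```python
def order_flip_test(ask_model, question, option_a, option_b):
    """Ask the same question twice with the two options swapped.

    Returns True if the model's pick is stable across orderings,
    False if the answer changed just because the listing order did.
    """
    first = ask_model(f"{question} Option 1: {option_a}. Option 2: {option_b}. Pick one.")
    second = ask_model(f"{question} Option 1: {option_b}. Option 2: {option_a}. Pick one.")
    return first == second


def position_biased_model(prompt):
    """Toy stub for illustration: always picks whatever is listed first,
    mimicking the pure position bias the study observed in some answers."""
    return prompt.split("Option 1: ")[1].split(".")[0]


stable = order_flip_test(
    position_biased_model,
    "Should we compete on price or uniqueness?",
    "differentiation",
    "cost leadership",
)
# A position-biased model fails the check: its two answers disagree.
```

Swap in a real model call for `ask_model` and, per the study's 19% figure, expect a meaningful share of recommendations to flip.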
The most troubling finding: when AI was allowed to give nuanced answers instead of picking one option, ChatGPT frequently recommended doing both strategies at once — like pursuing premium pricing AND being the cheapest. The researchers call this the "hybrid trap," because trying to do everything at once is one of the fastest ways businesses fail.
Why this happens — and why it won't go away easily
AI doesn't "think" about your question. It predicts which words should come next based on patterns in its training data — billions of web pages, articles, and books. Since most business writing praises innovation, differentiation, and long-term thinking, AI essentially parrots the average opinion of the internet.
As the researchers put it: AI models "predict the most socially desirable response as per the average of the internet." That's great for generating a first draft of a marketing email. It's dangerous for making decisions that could cost your business millions.
If you use AI for business decisions, do this instead
The researchers offer five concrete steps to avoid trendslop:
1. Use AI to brainstorm, not decide. Let it generate options and surface blind spots — but make the final call yourself.
2. Force AI to argue the opposite. If it recommends differentiation, prompt: "Make the strongest possible case for cost leadership here."
3. Ask for real-world examples. Before accepting any recommendation, ask: "Give me 3 companies that succeeded AND 3 that failed with this strategy."
4. Watch for the hybrid trap. If AI says "do both," treat it as a red flag — it couldn't pick, so it punted.
5. Flip the option order. If AI gives different advice when you swap the order of choices, the recommendation isn't trustworthy.
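Steps 2 and 3 above can be combined into a single follow-up prompt. A minimal sketch, assuming you already have the model's first recommendation in hand (the function name and wording are illustrative, not from the study):

```python
def devils_advocate_prompt(recommendation, rejected_option):
    """Build a follow-up prompt that forces the model to argue against
    its own recommendation and ground the counter-case in real examples
    (steps 2 and 3 of the researchers' checklist)."""
    return (
        f"You recommended {recommendation}. Now make the strongest possible "
        f"case for {rejected_option} instead. Include 3 real companies that "
        f"succeeded with {rejected_option} and 3 that failed with it."
    )


followup = devils_advocate_prompt("differentiation", "cost leadership")
```

Sending `followup` as the next turn in the same conversation pressure-tests the trendy default before you act on it.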
The researchers' final line is worth remembering: "Leadership is ultimately about making hard choices in conditions of uncertainty and taking responsibility for them. AI cannot and should not be a substitute."
Who needs to hear this
Marketers asking ChatGPT whether to go premium or budget. Founders using Claude to pressure-test their business model. Managers running Gemini through strategic planning exercises. Consultants using AI to draft client recommendations. If you rely on AI for business strategy, you're likely getting trendslop — and you wouldn't know it unless you specifically tested for it.
The full study tested all seven major AI models and found the biases were consistent across every single one. This isn't a ChatGPT problem or a Claude problem — it's a structural problem with how all current AI models work.