Enterprise AI Is Making Wrong Calls — Fix Your Data
50% of companies run enterprise AI in 3+ functions — yet MIT Technology Review finds failures stem from missing data context, not the model. Learn what to fix.
By the end of 2025, half of all companies were running enterprise AI automation across at least three separate business functions — finance, operations, HR, customer service, or supply chain. The uncomfortable finding from MIT Technology Review's new synthesis report: the models are not the problem. The data they're running on is.
The Enterprise AI Deployment Wave That Already Happened
Enterprise AI crossed a major threshold last year. MIT Technology Review's "10 Things That Matter in AI Right Now" — a guide distilling years of industry research — confirmed that 50% of companies had AI active across three or more core business functions by end of 2025. This is no longer the experimental phase. These are production systems making real decisions about customers, budgets, and operations every day.
And yet, despite that scale, a consistent pattern is emerging: AI systems that perform well in controlled testing quietly produce wrong outputs in live environments. Not because the model is broken. Because the data environment it's operating in doesn't carry the business context it needs to make good calls.
"Speed Without Judgment": The SAP Warning Every AI Team Should Read
Irfan Khan, President and Chief Product Officer of SAP Data & Analytics (SAP is one of the world's largest enterprise software companies, serving 440,000+ customers across 180 countries), explained the problem in terms that apply to any team running AI at scale:
"AI is incredibly good at producing results. It moves fast, but without context it can't exercise good judgment, and good judgment is what creates a return on investment for the business. Speed without judgment doesn't help. It can actually hurt us."
— Irfan Khan, SAP Data & Analytics President & Chief Product Officer
The key phrase is "context." In enterprise AI, context means the business logic embedded in data — things like: "this customer account is under a legacy discount policy," "this cost center looks expensive but indirectly funds three revenue streams," or "this supplier rating reflects a dispute that resolved in Q4." AI systems don't know any of this unless it's explicitly preserved in the data layer they query.
When that context is missing, you get what researchers call technically correct but operationally flawed outputs (results the AI is justified in producing given the raw data it sees, but that lead your team to the wrong decision). These are worse than obvious errors — they pass quality checks, get acted on, and surface as business problems weeks later.
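To make the context gap concrete, here is a toy sketch (all names and values invented) of the same raw record read with and without the business logic behind it:

```python
# Hypothetical illustration: one raw record, two readings.
raw_record = {"cost_center": "CC-204", "quarterly_spend": 1_400_000, "direct_revenue": 0}

# Business context that lives outside the raw numbers.
context = {
    "CC-204": {
        "note": "indirectly funds three revenue streams",
        "linked_revenue_streams": ["RS-11", "RS-12", "RS-19"],
    }
}

def naive_recommendation(record):
    # Sees only raw values: high spend, zero direct revenue.
    if record["quarterly_spend"] > 1_000_000 and record["direct_revenue"] == 0:
        return "flag for cuts"
    return "keep"

def context_aware_recommendation(record, ctx):
    # Same data, plus the business logic that governs it.
    meta = ctx.get(record["cost_center"], {})
    if meta.get("linked_revenue_streams"):
        return "keep: " + meta["note"]
    return naive_recommendation(record, )

print(naive_recommendation(raw_record))                   # flag for cuts
print(context_aware_recommendation(raw_record, context))  # keep: indirectly funds three revenue streams
```

The first recommendation is "technically correct" given what the system can see, which is exactly why it passes review and causes trouble later.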
What a "Data Fabric" Actually Means — in Plain Terms
The architectural solution researchers are pointing to is a data fabric (an approach that connects data across different systems, clouds, and departments while preserving the business meaning behind each data point — not just its raw value). Think of it as the difference between handing an AI a spreadsheet of numbers versus handing it the same spreadsheet annotated with why each number looks the way it does.
Traditional enterprise data architecture works like this:
- Collect everything into a central data warehouse (a large, consolidated database optimized for analysis)
- Transform data into a standardized format — stripping away source-system details in the process
- Query it with analytics tools or AI models
The problem is step two. That transformation strips away metadata (data about the data — timestamps, source systems, business rules that produced each record) and business semantics (the organizational meaning behind numbers — which division owns this, what policy governs it, when it was last valid). By the time an AI system queries the warehouse, it sees numbers that have been entirely divorced from their context.
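A toy transform (field names invented) shows what that standardization step does to context:

```python
# Toy ETL transform of the kind described in step two: standardize
# the record for the warehouse, dropping metadata along the way.
source_record = {
    "supplier_id": "SUP-88",
    "rating": 2.1,
    "source_system": "procurement",
    "note": "rating reflects a dispute resolved in Q4",
    "last_updated": "2025-01-15",
}

def warehouse_transform(record: dict) -> dict:
    # Keeps only the standardized analytical fields.
    return {"supplier_id": record["supplier_id"], "rating": record["rating"]}

print(warehouse_transform(source_record))
# {'supplier_id': 'SUP-88', 'rating': 2.1}  -- the dispute context is gone
```

Any AI querying the warehouse sees a 2.1-rated supplier, full stop. The fact that the rating reflects a resolved dispute never reaches it.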
A data fabric approach instead connects source systems through an intelligent integration layer, preserving context at query time. The AI doesn't just see a number — it sees the number plus the logic that governs how to interpret it. In 2025–2026, SAP, Databricks, Microsoft (with Fabric), and Informatica are all shipping commercial tools built around this architecture. The adoption gap, not the technology gap, is what's causing failures in the field.
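Here is a minimal sketch of that idea — not any real data-fabric product's API, just the shape of a query result that carries its own context:

```python
# Minimal sketch: a "fabric-style" query returns the value together
# with the metadata and business semantics governing its interpretation.
from dataclasses import dataclass

@dataclass
class ContextualValue:
    value: float
    source_system: str      # where the record came from
    governing_policy: str   # business rule that applies
    valid_as_of: str        # when the record was last valid

# Simulated source system keeping its own metadata (values invented).
crm = {"ACME": {"discount_pct": 18.0, "policy": "legacy-2019-discount", "as_of": "2025-10-01"}}

def fabric_query(customer_id: str) -> ContextualValue:
    rec = crm[customer_id]
    return ContextualValue(
        value=rec["discount_pct"],
        source_system="crm",
        governing_policy=rec["policy"],
        valid_as_of=rec["as_of"],
    )

result = fabric_query("ACME")
# A warehouse-style query would return just 18.0; this result also
# carries the policy and validity window the AI needs to judge it.
print(result.governing_policy)  # legacy-2019-discount
```

The design point is that context travels with the value, so the consuming system never has to guess which policy or timeframe applies.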
The Silent AI Degradation Problem
There's a specific failure mode worth understanding if you run AI in production: silent degradation (a gradual decline in accuracy as real-world conditions drift away from what the system was calibrated on, with no visible error alert). Your AI agent keeps running. Outputs look reasonable. Then a quarterly review reveals decisions were consistently wrong for three months.
The triggers are ordinary business events — a product category gets renamed, a pricing tier changes, a team reorganizes. None of these automatically update the business context your AI uses. The model doesn't know. You find out from a customer complaint, a forecast miss, or an audit — not from a dashboard alert.
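One low-effort defense is a statistical drift check on the AI's own outputs. The sketch below (illustrative, not a production monitoring tool; all scores invented) compares recent output statistics against a calibration-time baseline and raises a flag long before a quarterly review would:

```python
# Illustrative drift check: flag when recent model outputs drift
# beyond a z-score threshold of the calibration baseline.
from statistics import mean, stdev

def drift_alert(baseline, recent, z_threshold=3.0):
    """True when the recent output mean sits more than
    z_threshold baseline standard deviations from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return False
    z = abs(mean(recent) - mu) / sigma
    return z > z_threshold

baseline_scores = [0.72, 0.70, 0.74, 0.71, 0.73, 0.69, 0.72]  # calibration period
recent_scores = [0.55, 0.58, 0.52, 0.56, 0.54]                # after a pricing-tier change

print(drift_alert(baseline_scores, recent_scores))  # True
```

Real deployments would track many output dimensions and use more robust tests, but even a check this simple converts "we found out from an audit" into "we found out from an alert."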
The Hype-Reality Gap: What Fusion Energy Teaches AI Teams
A new study published in Nature Energy this week — based on expert interviews across public and private sector fusion researchers — offers a framework that maps directly onto AI infrastructure decisions. ETH Zurich researchers analyzed the projected cost reduction rates for fusion power (nuclear energy produced by fusing hydrogen atoms — the same process that powers the sun, and the opposite of the fission reaction used in today's nuclear plants) and compared them to established energy technologies:
- Solar modules: 23% cost reduction for every doubling of deployed capacity
- Lithium-ion batteries: 20% cost reduction per doubling (down 90% overall since 2013)
- Wind power: 12% cost reduction per doubling
- Fission nuclear (existing plants): 2% cost reduction per doubling
- Fusion power (projected): 2–8% cost reduction per doubling
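The arithmetic behind those learning rates is worth seeing directly. Under the standard Wright's-law form, cost after n doublings of deployed capacity is cost_0 × (1 − rate)^n. A quick sketch with the figures above:

```python
# Back-of-the-envelope learning-curve math (Wright's law form):
# cost_after = cost_0 * (1 - rate) ** doublings

def cost_after_doublings(rate: float, doublings: int, cost_0: float = 1.0) -> float:
    return cost_0 * (1 - rate) ** doublings

# Relative cost after 10 doublings of deployed capacity:
# solar ~0.07, batteries ~0.11, wind ~0.28, fission ~0.82, fusion midpoint ~0.60
for name, rate in [("solar", 0.23), ("batteries", 0.20), ("wind", 0.12),
                   ("fission", 0.02), ("fusion (midpoint)", 0.05)]:
    print(f"{name}: {cost_after_doublings(rate, 10):.2f} of starting cost")
```

Ten doublings leave solar at roughly 7% of its starting cost but fusion (at the 5% midpoint rate) still at about 60% — which is why the projected learning rate, not the engineering ambition, drives the cost-competitiveness question.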
Despite these projections, fusion attracted $1 billion in US federal funding in FY2024, plus $2.2 billion in private investment between July 2024 and July 2025 — a total of $3.2 billion in twelve months. Lingxi Tang, the ETH Zurich PhD candidate who led the study, found "almost unanimous agreement that fusion is incredibly complex" among the experts interviewed, with some describing the complexity as "literally off the scale." Professor Egemen Kolemen of Princeton Plasma Physics Laboratory was direct: "We have to be humble about how much we don't know."
The parallel to enterprise AI is direct: companies are funding the exciting capability layer — the agent, the model, the interface — while treating foundational data infrastructure as a later problem. Solar became cost-competitive through decades of compounding improvements in manufacturing, installation, and grid integration — not through a single breakthrough moment. Enterprise AI that skips the context layer is betting on a fusion-style leap rather than a solar-style compounding curve.
Three Questions to Audit Your AI Stack This Week
Whether you're a developer building AI-powered features, a marketer running AI agents for campaigns, or an operations manager evaluating tools for your team — these three questions reveal whether your deployment is at risk of silent failure:
- Does your AI system know why your data looks the way it does? Not just the values — the rules, policies, and organizational history that produced them. If no: you have a context gap that will eventually surface as operationally wrong outputs in production.
- What happens when your business changes? If you update a pricing rule, restructure a team, or reclassify a product — does that change automatically reach your AI's context layer? Or do you need to manually retrain or reconfigure after the fact?
- Can your AI explain its outputs in business terms? "The model predicted X" is not auditable. "Based on data from Y timeframe, under policy Z, flagged because of condition W" gives your team something to check, challenge, and act on confidently.
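Question three can even be enforced mechanically. A minimal sketch (field names invented for illustration) that refuses to treat an output as actionable unless its audit fields are present:

```python
# Sketch: gate AI outputs on the presence of audit fields,
# so "the model predicted X" alone is never actionable.
REQUIRED_AUDIT_FIELDS = {"timeframe", "policy", "condition", "recommendation"}

def is_auditable(output: dict) -> bool:
    """An output is actionable only if every audit field is present."""
    return REQUIRED_AUDIT_FIELDS.issubset(output)

bare = {"recommendation": "cut budget for CC-204"}  # "the model predicted X"
audited = {
    "recommendation": "cut budget for CC-204",
    "timeframe": "2025-Q3",
    "policy": "cost-review-v2",
    "condition": "spend > 1.0M with zero direct revenue",
}

print(is_auditable(bare))     # False
print(is_auditable(audited))  # True
```

A gate like this doesn't make the AI smarter — it just guarantees a human reviewer always has something concrete to check, challenge, and act on.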
If you want to start building more reliable, context-aware AI workflows today, the setup guide at aiforautomation.io covers foundational steps for connecting AI to real business data, and the learning hub has structured walkthroughs on building automations that hold up in production — not just in demos.