Microsoft Copilot 3% Adoption: LinkedIn AI Wins
Microsoft Copilot adoption stalls at 3% while LinkedIn's AI becomes a surprise hit. Plus: xAI rents chips to Cursor, and TSMC raises its 2026 growth outlook above 30%.
Microsoft's Office 365 Copilot — the AI assistant built into Word, Excel, and Teams — has hit just 3% adoption among paying subscribers as of late 2025. That number is reshaping Microsoft's AI strategy, because the product now outperforming it is not another billion-dollar bet: it's LinkedIn.
The 3% Problem: Why Microsoft Copilot Adoption Is Stalling
Office 365 Copilot launched with enormous fanfare in 2023. Enterprises were charged an extra $25–$30 per user per month on top of existing Microsoft 365 fees, with the promise that AI would transform how employees write emails, analyze data, and run meetings. The reality as of end-2025: adoption stalled at roughly 3% of paying subscribers.
That is not 3% of all Office users — it is 3% of the users whose companies already paid for the Copilot add-on. The people whose IT departments wrote the check are the ones not using it.
Several factors explain the stall:
- Change friction — Copilot requires employees to modify how they use every Office app at once, a large behavior change with no immediate obvious payoff
- Scope overload — spanning Word, Excel, Teams, Outlook, and PowerPoint means Copilot is not optimized for any single workflow
- Training gap — most organizations deployed Copilot without onboarding programs, leaving employees unsure what to ask it
- Budget pressure — CIOs are reviewing whether $30/user/month justifies renewals when usage is this weak
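The renewal math behind that last point is easy to sketch. Using only the two figures reported above (a $30/user/month list price and ~3% adoption), the effective price per employee who actually uses Copilot is striking:

```python
# Effective monthly cost per *active* Copilot user,
# using the figures reported in this article.
list_price_per_seat = 30.00  # USD per licensed user per month (upper end of $25-$30)
adoption_rate = 0.03         # share of licensed users actually using Copilot

cost_per_active_user = list_price_per_seat / adoption_rate
print(f"${cost_per_active_user:,.2f} per active user per month")  # $1,000.00
```

At 3% adoption, every active user effectively costs the organization $1,000 a month, which is the number CIOs are staring at during renewal reviews.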
The Surprise Winner: LinkedIn AI Outperforms Microsoft's Own Flagship
While Copilot stalled, LinkedIn — the professional network Microsoft acquired for $26.2 billion in 2016 — quietly shipped an AI product in late 2025 that became an unexpected breakout hit. According to The Information, leaders across Microsoft are now studying what LinkedIn got right.
The contrast reveals a fundamental split in how AI adoption actually works:
- LinkedIn's AI runs inside tasks users already perform daily — writing connection requests, reviewing job listings, drafting outreach messages. Zero workflow change required
- Office Copilot asks users to fundamentally reconsider tools they have operated the same way for decades
LinkedIn's success validates what product researchers call "task-level AI integration" (embedding AI into one specific, repeated task rather than overhauling an entire platform). Users adopt AI fastest when it does one familiar thing significantly better — not when it promises to transform everything at once.
For enterprise IT leaders currently evaluating Copilot renewals, this is the critical question: are we deploying specific AI or aspirational AI? The adoption gap between Copilot and LinkedIn's product suggests the answer determines everything. See our AI adoption guides for practical frameworks your team can apply today.
xAI Rents Chips to Cursor — The New AI Automation Compute Play
The same week Microsoft's adoption data emerged, a quiet deal began reshaping who controls compute (the raw processing capacity needed to train and run AI models). xAI — Elon Musk's AI company and operator of the Grok chatbot — has agreed to rent computing capacity in its data centers to Cursor, the AI coding assistant competing directly with GitHub Copilot.
Cursor plans to use tens of thousands of xAI chips to train its next coding AI model. The arrangement has three major implications:
- Cursor gains model independence — rather than relying entirely on OpenAI or Anthropic's APIs (application programming interfaces — the digital connections that let software access external AI), Cursor can develop a proprietary model trained specifically on code
- xAI monetizes idle capacity — the GPU (Graphics Processing Unit — the specialized chips optimized for AI training) cluster at xAI's Colossus data center now generates revenue even outside xAI's internal research workloads
- A new compute marketplace forms — AI companies with excess hardware are becoming infrastructure competitors to AWS, Azure, and Google Cloud for startup training contracts
For developers currently paying $20/month for Cursor Pro, the implication is real: the AI powering your coding suggestions may soon come from a proprietary model trained specifically on developer workflows — not a licensed version of a general-purpose AI. That could mean sharper completions and faster responses on the tasks developers do most.
Chip Crunch: Why Prices Are Rising Everywhere AI Runs
The xAI-Cursor deal did not happen in a vacuum. AI chip demand has structurally outrun supply, and this week's data makes the scale clear. TSMC (Taiwan Semiconductor Manufacturing Company — the world's dominant chip manufacturer, responsible for producing roughly 90% of the most advanced semiconductors globally) raised its 2026 revenue growth forecast to exceed 30%, citing AI chip orders that are overwhelming its production lines.
The downstream effects are already reaching consumers:
- Meta raised Quest 3S prices — the 128GB model now costs $349.99 (up ~$50) and the 256GB model hits $449.99 (up ~$100), with Meta citing rising global memory chip costs as the direct cause
- AI companies are racing to lock in chip contracts before supply tightens further, driving spot prices higher for smaller players without volume leverage
- Consumer hardware will keep getting more expensive as memory and processing components face sustained demand pressure through at least late 2026
The AI industry's hardware cost problem is spilling directly into products you buy today. Meta's headset price hike is one visible example — the same pressure is hitting laptops, smartphones, smart home devices, and anything running modern AI inference (the process of using a trained AI model to generate real-time predictions or responses).
Apple's Siri Boot Camp — Just 2 Months Before the Big Redesign
Even Apple is visibly scrambling. Fewer than 200 Siri programmers — out of a team that numbers in the hundreds — are enrolled in a multi-week AI coding bootcamp, scheduled just 2 months before Apple's expected major Siri redesign announcement. The tight timeline signals that Apple recognizes its engineers need skills they currently lack — and the clock is already running.
Siri has fallen significantly behind competitors on almost every metric:
- Google's Gemini can reason across documents, images, and calendar data simultaneously in ways Siri cannot match
- ChatGPT on iOS has become the default go-to for millions of iPhone users who stopped asking Siri complex questions years ago
- Siri's underlying architecture predates large language models (AI systems trained on vast text datasets to understand and generate natural conversation) — rebuilding it requires engineers who understand both the decade-old legacy system and the new approach simultaneously
Whether a multi-week bootcamp can meaningfully close that skills gap in 2 months is a genuine open question. But the urgency confirms one thing: even Apple, with its trillion-dollar resources, is treating the AI gap in Siri as an emergency — not a roadmap item.
The Real Lesson: Narrow AI Automation Wins, Broad Platforms Stall
Read across all five stories this week and one signal repeats clearly: narrow AI tools embedded in familiar workflows outperform broad platforms that demand behavior change. Microsoft's own portfolio proves it — Copilot at 3% adoption, LinkedIn's AI now drawing attention across the entire company.
If you are evaluating AI tools for your team right now, the right question is not "which AI is the most powerful?" It is "which AI makes one task we already do significantly better — with no training required?" That is where real adoption happens. Microsoft learned it the hard way from its own LinkedIn subsidiary.