Two Ultra-Fast AI Models Now Available to Free ChatGPT Users — GPT-5.4 mini and nano
OpenAI has simultaneously launched GPT-5.4 mini and GPT-5.4 nano. The mini model is more than twice as fast as its predecessor while approaching the performance of the flagship model, and nano is the smallest and most affordable lightweight model yet. Free ChatGPT users can start using them right away.
On March 17, OpenAI unveiled two new lightweight AI models available directly in ChatGPT. GPT-5.4 mini more than doubles the speed of the previous generation while approaching the performance of the top-tier GPT-5.4, and GPT-5.4 nano is the company's smallest and most affordable model, optimized for high-volume processing tasks. The biggest change: free ChatGPT users can access GPT-5.4 mini right away, with no extra steps required.
GPT-5.4 mini — Twice the Speed, Near Top-Tier Performance
Compared to the previous generation GPT-5 mini, GPT-5.4 mini delivers meaningful improvements across all major capability areas: coding, reasoning (the AI's ability to think through problems step by step), image understanding, and tool use.
What's particularly notable is that it scored near the top-tier GPT-5.4 on two key benchmarks: SWE-Bench Pro (a test that measures how well an AI solves real-world coding problems) and OSWorld-Verified (a test that measures an AI's ability to complete tasks by directly controlling a computer — moving the mouse, clicking buttons, and so on). In plain terms: you now get results nearly as good as the expensive, slower flagship model — at twice the speed.
- Response speed more than 2x faster than the previous GPT-5 mini
- Performance gains across all areas: coding, reasoning, image understanding, and tool use
- Benchmark scores approaching the top-tier GPT-5.4
- Available today in ChatGPT, Codex (OpenAI's AI coding tool), and the API
- Accessible to free and Go plan users via the 'Thinking' feature
GPT-5.4 nano — The Ultra-Lightweight Model That Slashes AI Costs
GPT-5.4 nano is the smallest and most affordable model OpenAI has ever built. Not every task needs the most powerful AI. For things like sorting emails, extracting key information from datasets, ranking search results, or basic coding assistance, a fast and cheap model like nano is far more efficient.
GPT-5.4 nano delivers substantial performance gains over the previous GPT-5 nano generation and is currently available only through the API (application programming interface, the way developers plug AI capabilities into their own software). It isn't yet selectable directly in the ChatGPT app, but for developers building AI-powered services who want to cut costs without sacrificing quality, it is a significant addition.
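To make the API-only point concrete, here is a minimal sketch of a bulk-sorting call of the kind nano is aimed at, written against OpenAI's public REST chat-completions endpoint. The model ID `gpt-5.4-nano`, the label set, and the prompt wording are illustrative assumptions; the real model ID should be taken from OpenAI's official announcement page.

```python
# Hedged sketch: sorting an email with a lightweight model over OpenAI's REST
# chat-completions endpoint. The model ID "gpt-5.4-nano" is assumed for
# illustration; check the official announcement for the real identifier.
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(text: str, labels: list[str]) -> dict:
    """Build a request body asking the model to pick exactly one label."""
    prompt = (
        f"Classify the following email into exactly one of {labels}.\n"
        f"Email: {text}\n"
        "Answer with the label only."
    )
    return {
        "model": "gpt-5.4-nano",  # assumed model ID
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 5,          # a single label needs very few tokens
    }

def classify(text: str, labels: list[str], api_key: str) -> str:
    """Send the request and return the model's chosen label."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(text, labels)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"].strip()
```

The point of a cheap model here is that `max_tokens` stays tiny and thousands of such calls can run per minute, which is exactly the "high-volume processing" profile the article describes.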
Why 'Smaller Models' Matter So Much Right Now
The hottest competition in the AI industry today isn't about building the smartest AI — it's about building AI that is smart enough, fast enough, and cheap enough. Last week, Google released Gemini 3.1 Flash-Lite, and Mistral launched Mistral Small 4. OpenAI's GPT-5.4 mini and nano are a direct answer to that competition.
Why does this race matter? The reason is straightforward. AI needs to get cheaper before more people and businesses can actually use it. Until now, companies building AI-powered services struggled to turn a profit because top-tier models cost too much to run at scale. Lightweight models like mini and nano make a new strategy possible: use affordable models for routine tasks, and reserve the expensive models only for the hard ones.
How to Try It in ChatGPT Right Now
GPT-5.4 mini is available in ChatGPT starting today. No separate installation or configuration needed.
Plus, Pro, Team, and Enterprise users: When demand is high during a GPT-5.4 Thinking session, the system will automatically switch to GPT-5.4 mini — full power when you need it, extra speed when things get busy.
If you're a developer using the API, you can find model IDs and detailed specifications on the OpenAI official announcement page. GPT-5.4 mini is also available immediately in Codex (OpenAI's AI coding tool).
A Particularly Important Shift for Developers Building AI Tools
GPT-5.4 mini and nano are optimized for building AI agents (AI assistants that carry out tasks on behalf of humans). When an agent runs a multi-step workflow, calling the flagship model at every single step makes costs pile up fast. A 'tiered strategy' is now practical: use nano for simple steps, mini for complex judgment calls, and GPT-5.4 only for the truly difficult decisions.
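The tiered strategy can be sketched as a simple router that picks the cheapest adequate model for each workflow step. The model IDs, the `step` dictionary shape, and the difficulty heuristic below are all illustrative assumptions, not OpenAI's actual routing logic.

```python
# Hedged sketch of the tiered strategy described above: the cheapest model
# for simple steps, mini for judgment calls, the flagship only for the
# hardest decisions. Model IDs and the heuristic are assumptions.

MODEL_TIERS = {
    "simple": "gpt-5.4-nano",  # extraction, ranking, classification
    "medium": "gpt-5.4-mini",  # multi-step reasoning, tool use
    "hard":   "gpt-5.4",       # critical or ambiguous decisions
}

def pick_model(step: dict) -> str:
    """Choose the cheapest model tier that fits a workflow step.

    `step` is a dict like {"kind": "extract", "risk": "low"}; in a real
    agent framework this signal would come from the planner, not a dict.
    """
    if step.get("risk") == "high":
        return MODEL_TIERS["hard"]
    if step.get("kind") in ("extract", "rank", "classify"):
        return MODEL_TIERS["simple"]
    return MODEL_TIERS["medium"]
```

In a ten-step workflow where eight steps are routine, a router like this means the flagship price is paid only once or twice per run instead of ten times, which is the cost math the article is pointing at.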
The fact that mini's performance approaches the flagship even on computer-use tasks (where the AI views a screen and operates a mouse to get things done) could be a genuine cost-cutting breakthrough for companies building automation services.
What's Happening in the AI Model Market Right Now
The third week of March 2026 has been a battleground for lightweight AI models: Google's Gemini 3.1 Flash-Lite, Mistral's Small 4, NVIDIA's Nemotron 3 Super, and now OpenAI's GPT-5.4 mini and nano. Four companies have released lightweight models within a single week.
What this trend means for everyday users is clear. AI is getting faster, cheaper, and available in more places. Smartphone apps, web services, internal business tools — the excuse of 'AI costs too much to use' is quickly becoming a thing of the past.