OpenAI Debuts Low-Cost GPT-5.4 Mini and Nano Models
- OpenAI releases GPT-5.4 mini and nano models with significant pricing and speed improvements
- GPT-5.4 nano offers vision-based image description at a fraction of competitors' costs
- Benchmark data shows nano outperforming the previous mini model on high-effort reasoning tasks
OpenAI has expanded its flagship model family with the introduction of GPT-5.4 mini and GPT-5.4 nano, arriving just weeks after the standard GPT-5.4 release. These compact models prioritize efficiency and affordability without sacrificing substantial reasoning capabilities. The nano variant, in particular, signals a new floor for AI pricing, notably undercutting competitors like Google’s Gemini 3.1 Flash-Lite with an input cost of just $0.20 per million tokens.
To demonstrate the practical impact of these lower costs, developer Simon Willison (tech blogger and creator of Datasette) tested the nano model's vision capabilities. He found that generating a detailed description of a single high-resolution photograph cost less than a tenth of a cent. At that rate, an entire personal collection of 76,000 images could be cataloged and made searchable for roughly $52, putting large-scale multimodal processing within reach of individual developers and small businesses.
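The back-of-the-envelope math above can be reproduced with a short sketch. The per-image figure here is an assumption implied by the article's numbers ($52 across 76,000 images), not an official rate card:

```python
# Rough cost estimate for describing a photo library with GPT-5.4 nano.
# Assumption: the implied per-image cost from the article's figures
# (~$52 for 76,000 images), which works out to under a tenth of a cent.
COST_PER_IMAGE_USD = 52 / 76_000  # ~$0.00068 per high-resolution image

def library_cost(num_images: int, per_image: float = COST_PER_IMAGE_USD) -> float:
    """Estimated total cost in USD to describe every image in a library."""
    return num_images * per_image

print(f"per image: ${COST_PER_IMAGE_USD:.5f}")          # well under $0.001
print(f"76,000 images: ~${library_cost(76_000):.2f}")   # ~$52.00
```

Actual costs would depend on image resolution and prompt length, since vision inputs are billed by token, but the sketch shows why whole-library cataloging becomes a double-digit-dollar job at this price point.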
The update also refines the models' reasoning effort levels: at the highest setting, the new nano model reportedly outperforms its predecessor, GPT-5 mini. The release underscores a broader industry shift toward optimized models that deliver high-speed inference (the process by which a model generates a response) while retaining the sophisticated understanding required for complex tasks such as image analysis and structured data generation across various platforms.