OpenAI Releases GPT-5.4 Mini and Nano Models


OpenAI is releasing two new small models today, GPT-5.4 mini and GPT-5.4 nano, which are optimized for speed and efficiency. The company said that GPT-5.4 mini is more than 2x faster than the older GPT-5 mini, and that it will become available to Free and Go users via the “Thinking” feature in the ‘+’ menu of ChatGPT.

“These models are built for the kinds of workloads where latency directly shapes the product experience: coding assistants that need to feel responsive, subagents that quickly complete supporting tasks, computer-using systems that capture and interpret screenshots, and multimodal applications that can reason over images in real-time,” OpenAI explained today.

In addition to ChatGPT’s Thinking menu, GPT‑5.4 mini is also available in Codex, OpenAI’s AI coding assistant, as well as in the company’s developer API. The smaller GPT-5.4 nano model, however, is available only through that same API. OpenAI described it as “the smallest, cheapest version of GPT‑5.4 for tasks where speed and cost matter most,” and “a significant upgrade over GPT‑5 nano.”

ChatGPT paid users still get access to the bigger GPT-5.4 Thinking model, but GPT-5.4 mini now serves as a fallback when they hit their rate limits. For regular queries, ChatGPT currently uses the GPT-5.3 Instant model, which is optimized for low latency and efficiency. It’s not easy to keep track of all of these AI models.

Thurrott