OpenAI Reverts ChatGPT’s Model Router System for the Majority of Users

OpenAI has quietly reverted a significant change to how hundreds of millions of people interact with ChatGPT.

In a low-profile blog documenting product updates, the company announced that it has rolled back ChatGPT’s model router, the automated feature that directed complex user queries to more advanced “reasoning” models for those on its Free and $5-a-month Go tiers. The new default for these users is GPT-5.2 Instant, the fastest and cheapest version in OpenAI’s latest model series. Free and Go users can still access reasoning models, but they will need to select them manually.

The model router was introduced just four months ago as part of OpenAI’s effort to streamline the user experience alongside the launch of GPT-5. The feature evaluates each query and decides whether ChatGPT should respond with a fast, cost-effective AI model or a slower, more resource-intensive reasoning model. Ideally, the router connects users with OpenAI’s most sophisticated models precisely when they are needed. Previously, users accessed advanced systems through a convoluted “model picker” menu, a feature CEO Sam Altman said he hated “as much as you do.”

In practice, the router steered significantly more free users toward OpenAI’s advanced reasoning models, which are more expensive for the company to run. Shortly after its rollout, Altman noted that the router had increased reasoning-model usage among free users from under 1 percent to 7 percent. It was a costly gamble intended to improve ChatGPT’s responses, yet the model router never gained the acceptance OpenAI anticipated.

A source close to the situation told WIRED that the router hurt the company’s daily-active-users metric. While reasoning models are regarded as the pinnacle of AI performance, they can take minutes to process complex queries and incur significantly higher computational costs. Most users would rather not wait, even if it means sacrificing a higher-quality response.

According to Chris Clark, the chief operating officer of AI inference provider OpenRouter, fast-responding AI models remain predominant in general consumer chatbots. He notes that on these platforms, the speed and tone of responses are crucial.

“If someone types something, and you have to display thinking dots for 20 seconds, it’s just not very engaging,” Clark remarks. “In terms of general AI chatbots, you’re going up against Google [Search]. Google has always prioritized making Search as fast as possible; they never thought, ‘Let’s provide a better answer, but do it slowly.’”
