OpenAI rolls back ChatGPT's model router, defaults free users to cheapest AI model

Reviewed by Nidhi Govil


OpenAI has quietly reversed a major change affecting hundreds of millions of ChatGPT users. The company rolled back its model router system for free and Go tier subscribers, now defaulting them to GPT-5.2 Instant instead of automatically routing complex queries to advanced reasoning models. The shift appears to balance cost management with user control, though it raises questions about response quality for sensitive queries.

OpenAI Reverses Model Router System for Free Tier Users

OpenAI has quietly dismantled a key feature that powered ChatGPT for millions of users, marking a significant shift in how the company manages its free and low-cost subscription tiers. In a December 11 update posted to its product changelog, OpenAI announced it was removing the model router system for ChatGPT free users and those on the $5-per-month Go plan [1]. Instead of automatically directing complex queries to more capable reasoning models, these users will now default to GPT-5.2 Instant, described by the company as a "powerful workhorse for everyday work and learning" [2]. The change effectively ends automatic switching between AI model tiers based on query complexity, a feature that launched just four months ago alongside the GPT-5 rollout.

Source: Gizmodo

Cost Management Drives Strategic Retreat

The rollback represents a clear cost-saving measure for OpenAI, which has been grappling with the expensive reality of serving advanced reasoning models to its massive user base.

Source: Digit

When the model router system first launched, CEO Sam Altman revealed that it increased usage of reasoning models among free users from less than 1 percent to 7 percent [1]. While this delivered better answers, it came at a steep computational price. Sources familiar with the matter told WIRED that the router negatively affected the company's daily active users metric, as consumers proved unwilling to wait through the minutes-long processing times that reasoning models require [1]. By defaulting millions of users to the cheapest available option, OpenAI is banking on the likelihood that many won't notice the change or bother selecting a different model manually [2]. The move cuts compute costs while being framed as giving users more control over which model they use.

User Experience Concerns and Manual Access

ChatGPT Go subscribers and free tier users can still access the more sophisticated Thinking model, but they must now select it manually from the tools menu in the message composer [3]. That selection presumably has to be made on every visit, fundamentally altering the user experience OpenAI had worked to streamline. The original model router system was designed to eliminate the confusing "model picker" menu that Sam Altman admitted the company hated "as much as you do" [1]. Yet the automated approach faced backlash from frequent users, who complained of a "lobotomized" version of the chatbot with significantly less personality when GPT-5 first rolled out [2]. Chris Clark, chief operating officer of AI inference provider OpenRouter, explains why speed matters: "If somebody types something, and then you have to show thinking dots for 20 seconds, it's just not very engaging. For general AI chatbots, you're competing with Google [Search]" [1].

Implications for Sensitive Queries and Safety

The change raises important questions about how ChatGPT will handle sensitive queries going forward. Previously, OpenAI automatically routed conversations involving mental health concerns and other delicate topics to reasoning-focused models, which produced better responses when interacting with users showing signs of distress. With automatic switching now removed, OpenAI says GPT-5.2 Instant has been improved to handle such situations adequately [4]. Whether that upgrade truly matches the Thinking model's capabilities on demanding problems that require deeper reasoning remains to be seen in practice. The company has not yet clarified whether additional limits or changes are planned for these user tiers [4]. As OpenAI scales its services and manages the tension between performance and profitability, the decision reflects a broader challenge facing AI companies: balancing response times, computational expense, and answer quality while maintaining user engagement across subscription levels.

Source: Wired
