4 Sources
[1]
OpenAI Rolls Back ChatGPT's Model Router System for Most Users
OpenAI has quietly reversed a major change to how hundreds of millions of people use ChatGPT. On a low-profile blog that tracks product changes, the company said that it rolled back ChatGPT's model router -- an automated system that sends complicated user questions to more advanced "reasoning" models -- for users on its Free and $5-a-month Go tiers. Instead, those users will now default to GPT-5.2 Instant, the fastest and cheapest-to-serve version of OpenAI's new model series. Free and Go users will still be able to access reasoning models, but they will have to select them manually.

The model router launched just four months ago as part of OpenAI's push to unify the user experience with the debut of GPT-5. The feature analyzes user questions before choosing whether ChatGPT answers them with a fast-responding, cheap-to-serve AI model or a slower, more expensive reasoning AI model. Ideally, the router is supposed to direct users to OpenAI's smartest AI models exactly when they need them. Previously, users accessed advanced systems through a confusing "model picker" menu, a feature that CEO Sam Altman said the company hates "as much as you do."

In practice, the router seemed to send many more free users to OpenAI's advanced reasoning models, which are more expensive for OpenAI to serve. Shortly after its launch, Altman said the router increased usage of reasoning models among free users from less than 1 percent to 7 percent. It was a costly bet aimed at improving ChatGPT's answers, but the model router was not as widely embraced as OpenAI expected. One source familiar with the matter tells WIRED that the router negatively affected the company's daily active users metric. While reasoning models are widely seen as the frontier of AI performance, they can spend minutes working through complex questions at significantly higher computational cost. Most consumers don't want to wait, even if it means getting a better answer.

Fast-responding AI models continue to dominate in general consumer chatbots, according to Chris Clark, the chief operating officer of AI inference provider OpenRouter. On these platforms, he says, the speed and tone of responses tend to be paramount. "If somebody types something, and then you have to show thinking dots for 20 seconds, it's just not very engaging," says Clark. "For general AI chatbots, you're competing with Google [Search]. Google has always focused on making Search as fast as possible; they were never like, 'Gosh, we should get a better answer, but do it slower.'"
[2]
OpenAI Defaults Free Users to Cheapest Model to Cut Back on Costs
ChatGPT users who aren't paying for one of the higher-tiered subscriptions are going to have to talk to OpenAI's cheapest AI model. Wired spotted a recent update in the release notes for ChatGPT that indicates free users and ChatGPT Go subscribers will now have their prompts served to the GPT-5.2 Instant model by default and will no longer have more complicated queries routed to more powerful models.

In an update dated December 11, OpenAI indicated that it is "removing automatic model switching for reasoning in ChatGPT" for free and Go users. "Previously, some questions were automatically routed to the Thinking model when ChatGPT determined it might help. To maximize choice, free users will now use GPT-5.2 Instant by default, and can still choose to use reasoning anytime by selecting Thinking from the tools menu in the message composer." Gizmodo reached out to OpenAI for additional details on the change, including whether free and Go users will have any other limitations on their accounts. We will update this post if we hear back.

Free users and Go subscribers (the latter pay $5 per month for a limited subscription plan that is only available in certain regions) will still be able to access ChatGPT's Thinking model, but will have to do so manually -- and presumably will have to make that change on every visit. OpenAI describes its Instant model as a "powerful workhorse for everyday work and learning," whereas Thinking "solves harder work tasks more effectively and with more polish."

The change is something that OpenAI can position as a quality-of-life improvement for users, who were in revolt over the company's decision to automatically route users to a model based on their query. Back when the company rolled out GPT-5, frequent users of ChatGPT were frustrated by the "lobotomized" version of the chatbot that had significantly less "personality." Earlier this year, CEO Sam Altman conceded, "We hate the model picker as much as you do." But it's also noteworthy that the change is a cost-saving measure for OpenAI, which is almost certainly banking on the fact that a significant chunk of free users aren't going to look at the model they are using and will go along with the company filtering them into its cheapest available option.

The cost-saving measure for the company could come at a cost to users. Previously, OpenAI said it would route sensitive queries to its reasoning model as it displayed better responses when interacting with users showing signs of mental distress. It's no longer doing that, in part on the grounds that GPT-5.2 Instant is now better equipped to handle those situations. Hopefully, the company is right about that.
[3]
OpenAI Shifts ChatGPT Free Users to Cheaper Default AI Model
ChatGPT Free and Go Users Receive New Default as OpenAI Tweaks Model Access

OpenAI has quietly rewired how ChatGPT works for users on its free tier and low-cost Go subscription. The tech giant has shifted them to a more affordable AI model by default. The move, as reported by Wired, suggests the company is reworking both user experience and costs as it scales its services. Under this new arrangement, prompts coming from free and Go users are now processed by the GPT-5.2 Instant model. OpenAI has also ceased automatically switching over certain queries to more advanced reasoning-focused models in the background.
[4]
OpenAI quietly pushes free users to lower-cost AI model: Here's why
The change is likely aimed at cutting compute costs, raising questions around performance and safety for complex queries.

OpenAI has reportedly changed how ChatGPT works for users on its free tier and low-cost Go subscription. The AI giant has made its most affordable AI model the default for those tiers, possibly to reduce operational costs. As per recent update notes spotted by Wired, ChatGPT will now route prompts from free and Go users to the GPT-5.2 Instant model by default. The company has also discontinued automatic switching to more advanced reasoning models for these users.

Previously, certain complex or sensitive queries were automatically routed to a more capable Thinking model when the system believed it would produce better results. In a December 11 update, OpenAI stated that the change was intended to give users more control. While the default model is now GPT-5.2 Instant, users can still select the Thinking model from the tools menu while composing a message. However, rather than occurring automatically in the background, this must be done deliberately and, most likely, repeatedly.

The company described GPT-5.2 Instant as a general-purpose model capable of handling everyday tasks and learning, whereas the Thinking model is better suited to demanding problems requiring deeper reasoning and refinement. Users on the free and Go plans can still access the advanced model, but only if they actively choose to.

OpenAI describes the update as a response to long-standing user concerns about automated model selection. When GPT-5 was first released, many regular users criticised the experience, claiming that automatic routing lowered the chatbot's personality and output quality. CEO Sam Altman has previously acknowledged frustrations with the model selection system, stating that the company was dissatisfied with how it operated.

At the same time, the shift also serves as a clear cost-cutting measure: by defaulting millions of free users to cheaper models, OpenAI is likely to reduce compute expenses, particularly since many users may not notice or bother to change which model they are using.

There are also potential implications for user safety. In the past, OpenAI said sensitive conversations, including those involving mental health concerns, were automatically handled by reasoning-focused models that produced more nuanced responses. With automatic switching now removed, OpenAI says GPT-5.2 Instant has been improved to handle such situations, though it remains to be seen how well it performs in practice. It must be noted that OpenAI has not yet clarified whether additional limits or changes are planned for free and Go users.
OpenAI has quietly reversed a major change affecting hundreds of millions of ChatGPT users. The company rolled back its model router system for free and Go tier subscribers, now defaulting them to GPT-5.2 Instant instead of automatically routing complex queries to advanced reasoning models. The shift appears to balance cost management with user control, though it raises questions about response quality for sensitive queries.
OpenAI has quietly dismantled a key feature that powered ChatGPT for millions of users, marking a significant shift in how the company manages its free and low-cost subscription tiers. In a December 11 update posted to its product changelog, OpenAI announced it was removing the model router system for ChatGPT free users and those on the $5-per-month Go plan [1]. Instead of automatically directing complex queries to more capable reasoning models, these users will now default to GPT-5.2 Instant, described by the company as a "powerful workhorse for everyday work and learning" [2]. The change effectively ends automatic switching between AI model tiers based on query complexity, a feature that launched just four months ago alongside the GPT-5 rollout.
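OpenAI has not published the internals of the router, but the behaviour described here can be pictured with a minimal sketch. The Python below is purely illustrative: the model identifiers, the route_model function, and its tier and complexity heuristics are hypothetical stand-ins, not OpenAI's actual system.

```python
# Purely illustrative sketch of complexity-based model routing.
# Model names, heuristics, and thresholds are hypothetical; OpenAI has not
# published how its production router actually worked.

FAST_MODEL = "gpt-5.2-instant"        # cheap, low-latency default (hypothetical id)
REASONING_MODEL = "gpt-5.2-thinking"  # slower, costlier reasoning tier (hypothetical id)

# Crude signals that a prompt might benefit from a reasoning model.
REASONING_HINTS = ("prove", "derive", "step by step", "debug", "compare", "plan")


def route_model(prompt: str, user_tier: str, auto_routing: bool) -> str:
    """Pick a model for a prompt.

    Under the change described above, auto-routing is effectively off for
    Free and Go tiers, so their prompts always land on the fast default
    unless the user manually selects the reasoning model in the UI.
    """
    if not auto_routing or user_tier in {"free", "go"}:
        return FAST_MODEL

    looks_complex = len(prompt.split()) > 150 or any(
        hint in prompt.lower() for hint in REASONING_HINTS
    )
    return REASONING_MODEL if looks_complex else FAST_MODEL


if __name__ == "__main__":
    print(route_model("What's the capital of France?", "free", auto_routing=False))
    # -> gpt-5.2-instant
    print(route_model("Derive the formula and prove it step by step.", "plus", auto_routing=True))
    # -> gpt-5.2-thinking
```

The point of the sketch is the structural change: with automatic routing disabled for Free and Go accounts, the complexity check is never consulted, and every prompt falls through to the cheap default unless the user opts into Thinking by hand.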
The rollback represents a clear cost-saving measure for OpenAI, which has been grappling with the expensive reality of serving advanced reasoning models to its massive user base.

When the model router system first launched, CEO Sam Altman revealed it increased usage of reasoning models among free users from less than 1 percent to 7 percent [1]. While this delivered better answers, it came at a steep computational price. A source familiar with the matter told WIRED that the router negatively affected the company's daily active users metric, as consumers proved unwilling to wait through the minutes-long processing times that reasoning models require [1]. By defaulting millions to the cheapest available option, OpenAI is banking on the likelihood that many users won't notice or bother changing their default AI model manually [2]. The shift addresses compute costs while positioning the change as maximizing user choice.

ChatGPT Go subscribers and free tier users can still access the more sophisticated Thinking model, but they must now select it manually from the tools menu in the message composer [3]. This manual selection must presumably occur on every visit, fundamentally altering the user experience that OpenAI had worked to streamline. The original model router system was designed to eliminate the confusing "model picker" menu that Sam Altman admitted the company hated "as much as you do" [1]. Yet the automated approach faced backlash from frequent users who complained about a "lobotomized" version of the chatbot with significantly less personality when GPT-5 first rolled out [2]. Chris Clark, chief operating officer of AI inference provider OpenRouter, explains why speed matters: "If somebody types something, and then you have to show thinking dots for 20 seconds, it's just not very engaging. For general AI chatbots, you're competing with Google [Search]" [1].
The change raises important questions about how ChatGPT will handle sensitive queries going forward. Previously, OpenAI automatically routed conversations involving mental health concerns and other delicate topics to reasoning-focused models, which demonstrated better responses when interacting with users showing signs of distress. With automatic switching now removed, OpenAI claims GPT-5.2 Instant has been improved to handle such situations adequately [4]. Whether this upgrade truly matches the capabilities of the Thinking model for demanding problems requiring deeper reasoning and refinement remains to be seen in practice. The company has not yet clarified whether additional limits or changes are planned for these user tiers [4]. As OpenAI scales its services and manages the tension between performance and profitability, the decision reflects broader challenges facing AI companies: balancing response times, computational expenses, and answer quality while maintaining user engagement across different subscription levels.
Summarized by Navi