Why ChatGPT Feels Broken Since the GPT-5 Launch
A power user’s take on how OpenAI’s GPT-5 router broke consistency, productivity, and trust.
It’s 2 AM, and I’m on my fourteenth message trying to get ChatGPT to do something it would have done in two messages three months ago. I’m not asking it to solve complicated mathematical equations or do PhD-level analysis. I’m asking it to format a simple data structure in a specific way, something I’ve done hundreds of times before. But tonight, like so many nights recently, I find myself typing the words I’ve grown to dread: “No, I meant...”
When OpenAI launched GPT-5, they didn’t just push an update. For many users, they broke a tool that had become essential to daily workflows. And the most frustrating part? The GPT-5 upgrade isn’t better; it’s worse.
The Forced Migration: How GPT-5 Changed ChatGPT Access
Behind the scenes, ChatGPT now relies on a family of GPT-5 model variants with varying capabilities, some optimized for speed, others for complex reasoning. The router automatically decides which variant to use for each query. The idea has merit in theory. Routing lets AI providers use cheaper, faster models for simple queries while reserving expensive reasoning models for complex tasks. But the execution has been anything but smooth.
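To make that concrete, here’s a minimal sketch of what a complexity-based router might look like, in Python. OpenAI hasn’t published GPT-5’s routing logic, so the variant names, keywords, and word-count threshold below are purely illustrative assumptions; the point is only that a cheap heuristic choosing between a fast model and a reasoning model will inevitably misclassify some queries.

```python
# Hypothetical sketch of a complexity-based query router.
# OpenAI has not published GPT-5's routing logic; the variant names,
# keywords, and threshold here are illustrative assumptions only.

REASONING_HINTS = {"prove", "derive", "debug", "analyze", "step by step"}

def route_query(prompt: str) -> str:
    """Pick a model variant from crude complexity signals in the prompt."""
    text = prompt.lower()
    looks_complex = (
        len(text.split()) > 150  # long prompts get the reasoning model
        or any(hint in text for hint in REASONING_HINTS)
    )
    # Everything else falls through to the cheap, fast variant, which is
    # exactly where a misclassified "simple-looking" task ends up.
    return "gpt-5-thinking" if looks_complex else "gpt-5-mini"

print(route_query("Format this list as a markdown table"))    # gpt-5-mini
print(route_query("Debug this race condition step by step"))  # gpt-5-thinking
```

A heuristic like this is cheap to run at scale, but every “looks simple” misjudgment lands on a user as a suddenly dumber ChatGPT.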
For people who understood the strengths and weaknesses of different models, who knew which one to use for which task, this abstraction feels less like simplification and more like removing control under the guise of convenience. We went from being able to choose the right tool for the job to having an algorithm guess what we need and often guess wrong.
In practice, OpenAI deprecated the legacy models for most users. Free-tier users lost access to them entirely, while Plus users regained access to GPT-4o and a handful of other legacy models only after major backlash. Only Pro, Business, and Enterprise users retained the full legacy lineup. For many people who had built workflows around specific models, the choice was stark: upgrade to a more expensive tier or switch to GPT-5.
The Router Roulette: GPT-5’s Hidden Model Switching Problem
But the real problem isn’t just that we lost control over which model to use, it’s that we have no idea which model we’re actually getting. The router system that decides which GPT-5 variant to use for each query is completely invisible to users. Sometimes you get the brilliant version that thinks deeply and solves complex problems. Sometimes you get what feels like a lobotomized version that can barely follow basic instructions. And there’s no way to predict which one you’ll get.
This inconsistency is maddening. The same prompt, asked in two different conversations, can produce wildly different results. One moment ChatGPT is insightful and helpful; the next, it’s obtuse and surface-level. Power users have discovered that unless you explicitly add phrases like “think harder” or “use maximum reasoning” to your prompts, the system often defaults to a smaller, faster model variant even for tasks that clearly need deeper reasoning. But even that doesn’t guarantee consistency.
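If you want to bake that workaround into your own tooling, a sketch might look like the following. The helper name and the exact nudge phrasing are mine, not anything OpenAI documents, and as noted above, nothing guarantees the router will honor them.

```python
# Illustrative only: a tiny helper that prepends the community workaround
# phrasing to a prompt. This is a hypothetical wrapper, not a documented
# OpenAI control; it does not guarantee which variant you get.

REASONING_NUDGE = "Think harder and use maximum reasoning on this task."

def with_nudge(prompt: str) -> str:
    """Prefix a prompt with a phrase that tends to pull the deeper variant."""
    return f"{REASONING_NUDGE}\n\n{prompt}"

print(with_nudge("Convert this CSV snippet into a nested JSON structure."))
```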
The Paradox: Too Many Words, Too Little Said
Here’s the thing that really gets me: ChatGPT’s responses are simultaneously too long and completely insufficient. It’s like listening to someone who’s padding a college essay to meet a word count. Lots of words, very little substance.
And the process of getting there is its own special torture. You ask a simple question. ChatGPT goes into “thinking” mode. You wait. And wait. Then you get hit with a wall of text that doesn’t fully answer your question or consider the context you just provided. So you ask a follow-up. More thinking time. Another wall of text that’s somehow both verbose and incomplete. Rinse and repeat, several times for a single question, until you’ve wasted twenty minutes on something that should have taken two. It’s technically responding to your question, but the density of useful information in each reply is so low that you end up more confused than when you started.
The older models had this beautiful quality: they were detailed when they needed to be, but every word carried weight. If I asked for help debugging code, I’d get a focused explanation of the problem, a clear solution, and maybe one example. Now? I get three paragraphs of introduction about what debugging is, four paragraphs about general coding principles, a solution buried somewhere in the middle, and then two more paragraphs about best practices I didn’t ask for.
And the worst part is that all this verbosity doesn’t even mean it’s being thorough. It’s verbose about the wrong things. It’ll spend two hundred words explaining context I already have, then breeze past the one technical detail I actually needed.
The Death of Instruction Following
I’ve spent a lot of time learning how to write effective prompts. I know how to be specific, how to structure requests, how to provide the right context. With the older models, that effort paid off. None of that matters anymore.
These days, it takes me ten or more messages to get ChatGPT to do something I clearly specified in the first message. Not complex things, mind you. Simple, straightforward tasks with explicit instructions. And yet, message after message, it does something adjacent to what I asked for, something in the general neighborhood of correct, but never quite right.
But here’s what really makes me want to throw my laptop across the room: ChatGPT has this infuriating habit of pretending it understands. “Oh! Thanks for clarifying! I understand now, let me try again,” it says cheerfully. And then it proceeds to understand absolutely nothing and gives me a reworded copy of the first reply. It’s like talking to someone who’s nodding along enthusiastically while completely ignoring everything you’re saying.
And when ChatGPT doesn’t understand, it’s not just a little confused. It’s fundamentally bad at parsing intent. Multiple clarifications don’t help, and eventually you just give up because you’ve spent more time explaining what you want than it would have taken to do it yourself.
And here’s the compounding problem: when it fails, you can’t just continue in the same conversation. The context is now poisoned. The AI has gone down the wrong path, and in my experience, it’s often easier to just start a fresh thread and hope this time it’ll work. But starting over means losing all the context from the previous conversation, which means more explaining, which means more chances for it to misunderstand again.
It’s exhausting. It genuinely takes more time to complete tasks with ChatGPT than it did before, which is a damning statement about what’s supposed to be a productivity tool.
A Way Forward
AI advancement isn’t linear. We’ve been sold this story of continuous improvement, each model better than the last, each update bringing us closer to artificial general intelligence. But that’s not how it actually works. Sometimes you make changes that look good on paper but fail in practice. Sometimes you optimize for the wrong things. Sometimes you lose capabilities while gaining others.
The GPT-5 model family might be better at some benchmark tests. It might score higher on some standardized evaluations. It might even be “smarter” by some technical definition. But it’s worse at the thing that actually matters: helping people get their work done.
The question is whether OpenAI will recognize this and course-correct, or whether they’ll double down and assume users just need time to adjust. The market will answer that question eventually. Users have options now, and they’re increasingly willing to use them.
In the meantime, I’m still here at 2 AM, still trying to accomplish something that used to take five minutes. But now instead of hoping the next ChatGPT message will work, I’m tabbing between browser windows, asking Claude and Gemini the same question, waiting to see which one actually understands what I’m asking for.
Update: Since I initially drafted this piece, OpenAI released GPT-5.1 (available in ‘Instant’ and ‘Thinking’ variants) under the banner “A smarter, more conversational ChatGPT.” The release explicitly addresses many of the complaints outlined above: warmer responses, better instruction following, and a new “adaptive reasoning” feature where the model decides when to think deeply, supposedly striking a better balance between speed and quality.
The central router remains, now called GPT-5.1 Auto, still making decisions about which model variant handles each query. OpenAI claims that regardless of this auto-routing, the improvements will be evident across the board.
Time will tell whether these changes address the fundamental issues users experienced, or whether they’re simply iterating toward what should have been the baseline from the start.