AI systems such as ChatGPT 5.5, Claude Opus 4.7, Gemini 3.1 and DeepSeek V4 have become so capable that they seem to require little instruction. Even vague requests -- "summarise this" or "analyse that" -- often produce passable results. It is therefore tempting to conclude that prompting is on the verge of obsolescence, a conclusion reinforced by the buzz around "no-prompt AI" and by the "vibe coding" trend popular among non-coders.
The reality, however, is more nuanced. Prompting today is no longer about getting the model to work. Rather, it is about getting it to work on your terms. Advanced AI models infer intent, fill in missing structure and run internal reasoning loops, creating a convincing illusion that prompting is fading away. But this is a feature of user experience, not of system design. Beneath the surface, prompts are generated, refined and chained together by the model itself, even as the human sees less of them.
The distinction becomes clearer when one compares how "casual users" prompt with how so-called "power users" do. A typical casual user might ask a system to "explain AI regulation" and receive a competent, if generic, overview; the system does the work of structuring the answer. A power user, by contrast, frames the same request very differently: "Analyse AI regulation as a coordination problem between the US, the EU and China. Include regulatory arbitrage, enforcement asymmetries and second-order effects on open-source ecosystems."
The second, longer prompt encodes an intellectual lens and signals what counts as insight, eliciting a sharper and more useful response from the model.
A similar divide in software development
Tools such as GitHub Copilot and Cursor have popularised what some developers describe as "vibe coding" (a term coined by AI researcher Andrej Karpathy): describing an application in broad terms and letting the machine fill in the rest.
For simple tasks, the results can be surprisingly effective. But professional developers quickly encounter the limits of this approach, including bloated code. Production-grade systems require explicit architecture, well-defined constraints and careful handling of edge cases -- elements that must be specified, not inferred.
This explains why I'm drawing the parallel with prompting. As with code, the less one specifies, the more one cedes control. What appears to be a reduction in effort is often a redistribution of responsibility -- from human to machine, and from precision to probability.
This divergence widens as models improve. Earlier systems required elaborate prompts simply to produce coherent output. Newer ones can do that unassisted. But precisely because they are more capable, they are also more open-ended. A weak model demands prompting for competence while a strong model demands prompting for precision. In economic terms, the variance of outcomes increases, raising the premium on those who can control it.
Nowhere is this more evident than in context-heavy workflows. Models with large context windows, such as those in the DeepSeek family, or systems built on frameworks like LlamaIndex and LangChain, allow users to supply vast amounts of material -- documents, transcripts, datasets -- alongside instructions.
Here the prompt is not a standalone query but part of a larger construct that includes curated context, explicit constraints and iterative refinement. A casual user might paste a report and ask for a summary. A power user will specify what to ignore, what to prioritise, how to structure the output, and what analytical frame to apply. The difference in results is often disproportionate to the difference in effort.
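The contrast between the two styles can be made concrete. The sketch below is illustrative: the report text, the "ignore/prioritise" rules and the output structure are hypothetical placeholders, but they show how a power user's prompt bundles curated context, explicit constraints and an analytical frame into one construct.

```python
# Illustrative only: the report and the rules are made-up placeholders.
report = "Q3 revenue rose 4% while churn increased..."  # stand-in for a pasted document

# Casual style: the model decides everything about structure and emphasis.
casual_prompt = f"Summarise this report:\n\n{report}"

# Power-user style: context, exclusions, priorities and output shape are explicit.
power_prompt = f"""You are analysing the report below for a strategy team.

Ignore: boilerplate, legal disclaimers, marketing language.
Prioritise: churn drivers, revenue concentration, forward guidance.
Frame: treat the findings as inputs to a risk assessment.

Structure the output as:
1. Three-sentence executive summary
2. Key risks, ranked
3. Open questions the report does not answer

Report:
{report}"""
```

The second prompt is longer, but most of its length is specification rather than phrasing, which is why the difference in results tends to be disproportionate to the difference in effort.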
Agentic systems complicate matters further
Tools such as AutoGPT or experimental coding agents like Devin generate prompts programmatically as part of multi-step workflows: planning, execution and validation. To the end user, prompting appears to recede even further. Yet these systems are, in effect, vast prompt engines -- continuously producing and consuming instructions. The human role shifts from writing prompts directly to designing the conditions under which good prompts are generated. The locus of control moves up a level, but it does not vanish.
The danger, then, lies in mistaking interface simplicity for conceptual simplicity. The rhetoric of "no-prompt AI" is true only for tasks where approximate answers suffice. It breaks down in domains that demand depth, accuracy or originality. Consider a newsroom. A casual user might ask for "a 1,000-word feature on semiconductor geopolitics" and receive a serviceable draft. A journalist working to deadline will instead specify angle, sources, comparative frameworks and narrative structure, often across multiple iterations and with substantial context attached.
A similar pattern appears in software development. A novice might ask for "a Python script to analyse sales data" and accept whatever emerges. An experienced engineer will define data assumptions, edge cases, performance constraints and output formats, and will iteratively refine both prompt and context based on observed failures. The model is the same; the outcomes differ markedly because the prompting does.
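What "defining data assumptions and edge cases" looks like in practice can be sketched briefly. The column names and rules below are illustrative assumptions, not a real dataset's schema; the point is that every decision a vague prompt would leave to the model is made explicit here.

```python
import csv
from io import StringIO

# Illustrative sample with two edge cases: a missing amount and a refund.
SAMPLE = """date,region,amount
2024-01-05,EU,1200.50
2024-01-06,US,
2024-01-07,EU,-300.00
"""

def analyse_sales(csv_text: str) -> dict:
    """Total sales per region under explicitly stated assumptions."""
    totals: dict[str, float] = {}
    skipped = 0
    for row in csv.DictReader(StringIO(csv_text)):
        raw = row["amount"].strip()
        if not raw:
            # Edge case decided up front: missing amounts are skipped and
            # counted, not silently treated as zero.
            skipped += 1
            continue
        # Assumption: amounts are plain decimals with no currency symbols.
        # Refunds (negative amounts) are deliberately kept in the totals.
        amount = float(raw)
        totals[row["region"]] = totals.get(row["region"], 0.0) + amount
    return {"totals_by_region": totals, "skipped_rows": skipped}
```

A novice's one-line prompt leaves each of these choices to the model; the engineer's prompt, like this code, states them, which is why the same model yields markedly different outcomes.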
What follows is a bifurcation of users. For the majority, prompting becomes optional, absorbed into ever more forgiving interfaces. For a smaller but more consequential group -- researchers, developers, analysts -- it becomes more important, not less. Their prompts are fewer in number but richer in content, less about phrasing and more about framing. They resemble briefs, hypotheses or specifications rather than queries.
AI as a thought partner
Andrew Ng, computer scientist and co-founder of Coursera, urges users to treat "AI as a thought partner". Prompting today's models, he explains, is very different from when ChatGPT was first released in November 2022. A short prompt often does not give the model enough information or background context to answer accurately. Tell the AI "please write a good self-review to send to my boss" and it knows nothing about what you have actually done over the past year, because you have not told it; the result is a generic self-review that is not very helpful. "I find that AI power users almost have empathy for the AI," says Ng.
Simply put, the prompt is not disappearing but being redefined. As AI systems grow more powerful, they will require fewer instructions to act, but more guidance to act well. Those who understand this distinction will find that prompting remains indispensable as a means of thinking.