3 Sources
[1]
'I felt a little useless and it was sad': Sam Altman feels obsolete using his own AI tools -- and he's not the only one | Fortune
Sam Altman's admission about feeling sad as he watched the incredible advancements of artificial intelligence (AI) tools after using his own company's AI tools has struck a nerve across the tech world. A new kind of workplace anxiety has crystallized: feeling obsolete not in spite of your skills, but because your tools have become too good. And as stories of panic attacks, disorientation, and quiet grief over disappearing skills pile up, it is increasingly clear Altman is far from alone. In a recent post on X, OpenAI CEO Sam Altman described building an app with Codex, the company's new AI coding agent, as "very fun" at first. The mood shifted when he began asking the system for new feature ideas and realized "at least a couple of them were better than I was thinking of." "I felt a little useless and it was sad," he added, a moment of vulnerability that quickly ricocheted around the developer community. Codex, released as a standalone Mac app aimed at "vibe coding," lets developers offload everything from writing new features to fixing bugs and proposing pull requests to an AI agent tightly integrated with their codebase. For a founder whose identity is intertwined with building software and championing AI progress, the realization his own product could outperform his ideas landed with unusual force. "I am sure we will figure out much better and more interesting ways to spend our time," Altman added in a follow‑up, "but I am feeling nostalgic for the present." If Altman expected empathy, much of X offered something closer to rage. His confession became a lightning rod for frustrations from workers who say AI is already eroding their livelihoods. One user, an anonymous headhunter in the tech sector claiming over a decade of experience, asked him back: "What do you think your average white-collar worker will feel when AI takes their job?" Others accused him of shedding tears "into a giant pile of money" while they adjusted to careers reshaped around talking to chatbots instead of doing the work they trained for. A food writer described watching her career "disappear" as AI systems churn out "hollow copies" of her work, trained on data taken "without anyone's consent." The replies also became a staging ground for broader anger about OpenAI's rapid product shifts, including the planned deprecation of older models like GPT‑4o, with users pleading for more stability and transparency. At the same time, some peers recognized their own discomfort in Altman's post. Aditya Agarwal, former CTO of Dropbox, wrote that a weekend spent coding with Anthropic's Claude left him "filled with wonder and also a profound sadness." He concluded that "we will never ever write code by hand again. It doesn't make any sense to do so." Agarwal described coding as "something I was very good at" but it is now "free and abundant," leaving him "happy, but disoriented ... sad and confused." The emotions Altman and Agarwal describe echo a broader phenomenon of AI anxiety emerging as even Silicon Valley veterans see their hard‑won skills and identity being outpaced by software that arrived faster than anyone was prepared for. The Conversation recounted the tale of Chris Brockett, a veteran Microsoft researcher who talked to Cade Metz for his 2022 book, Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World. Brockett said he was rushed to the hospital after encountering an early AI system that could do much of what he had spent decades mastering. 
Believing he was having a heart attack, he later described it this way: "my 52-year-old body had one of those moments when I saw a future where I wasn't involved." The same piece draws on MIT physicist Max Tegmark's worry that AI might "eclipse those abilities that provide my current sense of self-worth and value on the job market," and on reports from professionals who now see AI completing, "quickly -- and relatively cheaply," the tasks they once relied on for income and status. A Silicon Valley product manager put it bluntly in an interview with Vanity Fair in 2023: "We're seeing more AI-related products and advancements in a single day than we saw in a single year a decade ago."

Despite the mounting unease, some economists argue AI's trajectory is not destiny. Labor economist David Autor has suggested that, if used deliberately, AI could expand "decision-making tasks currently arrogated to elite experts" to a broader swath of workers, improving job quality and moderating inequality. In his view, the future of work with AI is "a design problem," not a prediction exercise: Societies can still choose how tools like Codex and Claude are deployed, and who benefits.

Wharton management professor Peter Cappelli, whom Fortune has interviewed for his somewhat contrarian, evidence-based research on the perils of remote work and the nuts and bolts of AI automation, said in January that a great deal of work is still involved in implementing these tools across the enterprise. He specifically warned against taking statements like Altman's or Agarwal's at face value, since the executives making them are not only expressing sadness at such rapid progress but also hyping their products for the market. "If you're listening to the people who make the technology, they're telling you what's possible," he said. "They're not thinking about what is practical."

Still, regardless of how easy these tools will be to adopt across the enterprise, Altman's tweet captured a paradox now confronting many knowledge workers: The very tools that make them faster, more capable, and sometimes more creative can also puncture the belief that their unique expertise is indispensable. For now, at least, even the people building those tools are grappling with what it means to feel both impressed by their power -- and a little useless in their shadow.
[2]
AI Made Sam Altman Feel 'Useless and Sad' -- X Users Tried to Make Him Feel Worse - Decrypt
The backlash was amplified by frustration over OpenAI's planned retirement of GPT-4o, a user favorite whose deprecation has reignited trust and stability concerns.

In a moment of raw tech-bro vulnerability, OpenAI CEO Sam Altman took to X last night to confess that building an app with Codex left him feeling "a little useless and it was sad." OpenAI's Codex is an AI coding agent designed to help developers with software engineering tasks like writing new features, fixing bugs, answering questions about a codebase, running tests, and proposing pull requests -- all in a sandboxed environment that understands and interacts with real code. But the tool, Altman said, spit out feature ideas better than his own, sparking a nostalgic sigh for human relevance amid his hype for the singularity. "I am sure we will figure out much better and more interesting ways to spend our time," he tweeted, "but I am feeling nostalgic for the present."

Oh, Sam -- welcome to the club of mortals staring down the AI abyss! Or maybe not: While the CEO of one of the most valuable companies in the world pondered his obsolescence, X users treated his post like dry kindling, then supplied the accelerant -- and kept turning up the heat until nothing recognizable was left. "Feel better," sniped one. "You will have a 100 billion-dollar parachute exit. Meanwhile, 50-60% of white collar [jobs] eliminated due to AI will cause workers to feel a tad more useless and sad without a parachute." "I guess you can cry into a giant pile of money meanwhile I'll go talk to a chatbot for the rest of my career," an OpenSea engineer wrote. "[Thank you,] I guess." "Sort of like how I've felt watching my career disappear because a league of no-talent AI bros can now prompt hollow copies of my work that are 'just passable enough' to flood the internet with slop until it chokes, all because you trained your models without anyone's consent," replied food writer Chrisy Toombs. And that was within the first hour of Altman's post. Nearly 3 million views and over 2,100 replies later, people were still venting at Altman.

Many of the posts reflected user backlash over OpenAI's announced deprecation of GPT-4o, which is slated to occur on February 13, with many replies taking Altman to task over model stability. The company is also retiring GPT-4.1, GPT-4.1 mini, and o4-mini, as well as legacy GPT-5 variants, but GPT-4o is a particular favorite among users due to its warm, conversational style and multimodal capabilities. OpenAI even reinstated it after user backlash greeted an initial attempt to deprecate it following the release of GPT-5. The company said the latest deprecation decision reflects usage patterns: most users now prefer newer versions like GPT-5.2, which incorporate customizable personality and creative controls inspired in part by GPT-4o's strengths.

Some people applauded Altman for his honesty and apparent vulnerability. Aditya Agarwal, former CTO at Dropbox and an early Facebook engineer, said he was also "filled with wonder and also a profound sadness." "I spent a lot of time over the weekend writing code with Claude. And it was very clear that we will never ever write code by hand again. It doesn't make any sense to do so," he said. "Something I was very good at is now free and abundant. I am happy... but disoriented... both the form and function of my early career are now produced by AI. I am happy, but also sad and confused."
[3]
Sam Altman feels useless thanks to ChatGPT Codex, and he's sad about it
It's one thing when a developer on Reddit says AI is taking the fun out of coding. It's another thing entirely when the person running the company says it.

Earlier today, Sam Altman shared a candid moment on X. He'd been using the new Codex Mac app, which OpenAI officially launched this morning, to build an application. He said the process was initially fun, but the mood shifted when he started asking the model for feature ideas. The AI didn't just help; it out-thought him. Altman noted that several of the suggestions were better than anything he had in mind. "I felt a little useless," he admitted, "and it was sad."

For a long time, the narrative has been that AI is a "copilot." It handles the boring stuff - the boilerplate, the debugging, the documentation - while we provide the creative spark. But Altman's experience hits a raw nerve for anyone in a creative or technical field: what happens when the copilot is a better navigator than the captain? When you're a builder, your value is often tied to that "Aha!" moment, the specific second you think of a solution or a feature no one else did. If a model serves that to you on a silver platter, the project might be better, but the process can feel hollow. You aren't really building; you're just approving.

Altman described this feeling as "nostalgia for the present." It's a bit of a mind-bender, but it captures the grief for a version of work that still feels "human" before it's fully automated. Even for the person leading the charge toward AGI, there's a clear sense of loss for the days when human intuition was the only game in town. We are in a weird transition where the tech is incredible, but the psychological side effect, that creeping feeling of being secondary to the tool, is starting to set in.

Altman ended his thoughts by saying we'll eventually find "better and more interesting ways to spend our time" and "new ways to be useful to each other." It's a hopeful sentiment, but a tough one to process in real-time. If your current way of being useful (coding, designing, or strategizing) is being outperformed by a prompt, "rethinking your purpose" feels less like an opportunity and more like a necessity. For now, it seems even the architects of the future are looking at their creations and feeling a little smaller.
OpenAI CEO Sam Altman confessed to feeling 'useless and sad' after his company's Codex AI coding agent generated better feature ideas than his own. The admission triggered fierce backlash from workers already grappling with AI-driven job insecurity, while tech leaders like former Dropbox CTO Aditya Agarwal echoed similar feelings of disorientation as AI outperforms decades of human expertise.
In a moment of unexpected vulnerability, Sam Altman took to X to share an unsettling experience with his own company's technology. The OpenAI CEO described building an app with Codex, the company's AI coding agent, as initially "very fun" before the mood shifted dramatically [1]. When he asked the system for new feature ideas, he realized "at least a couple of them were better than I was thinking of." His conclusion was stark: "I felt a little useless and it was sad" [2]. Sam Altman's admission quickly ricocheted across the developer community, crystallizing a new form of workplace anxiety that extends far beyond Silicon Valley's elite circles.
Codex, released as a standalone Mac app designed for what OpenAI calls "vibe coding," handles everything from writing new features to fixing bugs, answering codebase questions, running tests, and proposing pull requests [2]. For a founder whose professional identity is deeply intertwined with building software and championing AI progress, watching his own product outperform his creative thinking landed with unusual force. He added in a follow-up that "I am sure we will figure out much better and more interesting ways to spend our time, but I am feeling nostalgic for the present" [1].

If Altman expected empathy, much of X offered something closer to rage. His post attracted nearly 3 million views and over 2,100 replies, many expressing fury from workers who say AI is already eroding their livelihoods [2]. An anonymous headhunter with over a decade of experience in the tech sector asked him pointedly: "What do you think your average white-collar worker will feel when AI takes their job?" [1]. One OpenSea engineer wrote, "I guess you can cry into a giant pile of money meanwhile I'll go talk to a chatbot for the rest of my career" [2].
Food writer Chrisy Toombs captured the sentiment of many creative professionals, describing how she's watched her career "disappear" as AI systems churn out "hollow copies" of her work, trained on data taken "without anyone's consent" [2]. The user frustration extended beyond job insecurity to product stability concerns, with many replies taking Altman to task over OpenAI's planned deprecation of GPT-4o on February 13 [2]. Though the company is also retiring GPT-4.1, GPT-4.1 mini, and o4-mini, GPT-4o remains a particular favorite among users due to its warm, conversational style and multimodal capabilities [2].

Yet some peers recognized their own discomfort in Altman's confession. Aditya Agarwal, former CTO of Dropbox and early Facebook engineer, wrote that a weekend spent coding with Anthropic's Claude left him "filled with wonder and also a profound sadness." He concluded that "we will never ever write code by hand again. It doesn't make any sense to do so" [2]. Agarwal described coding as "something I was very good at" but noted it is now "free and abundant," leaving him "happy, but disoriented ... sad and confused" [1].
This phenomenon of AI outperforming humans extends beyond current anxieties. The Conversation recounted the experience of Chris Brockett, a veteran Microsoft researcher who was rushed to the hospital after encountering an early AI system that could replicate decades of his expertise. Initially believing he was having a heart attack, he later described it: "my 52-year-old body had one of those moments when I saw a future where I wasn't involved" [1]. MIT physicist Max Tegmark expressed similar concerns that AI might "eclipse those abilities that provide my current sense of self-worth and value on the job market" [1].
For many in creative and technical fields, the narrative has long positioned AI as a "copilot" that handles mundane tasks while humans provide the creative spark. But Altman's experience exposes a raw nerve: what happens when the copilot becomes a better navigator than the captain? [3] When professional identity is tied to those "Aha!" moments of unique insight, having a model serve solutions on demand can make the project better while rendering the process hollow [3].

Altman described this sensation as "nostalgia for the present," a grief for work that still feels human before full automation arrives [3]. A Silicon Valley product manager told Vanity Fair in 2023: "We're seeing more AI-related products and AI advancements in a single day than we saw in a single year a decade ago" [1]. This accelerating pace leaves even architects of the future looking at their creations and feeling smaller [3].

Despite mounting job insecurity, some economists argue AI's trajectory is not predetermined. Labor economist David Autor has suggested that, if deployed deliberately, AI could expand "decision-making tasks currently arrogated to elite experts" to a broader range of workers, improving job quality and moderating inequality. In his view, the future of work with AI is "a design problem," not a prediction exercise: societies can still choose how tools like Codex are deployed and who benefits [1]. Wharton management professor Peter Cappelli noted in January that significant work remains in implementing these tools across enterprises [1].

Altman ended his thoughts by saying we'll eventually find "better and more interesting ways to spend our time" and "new ways to be useful to each other" [3]. It's a hopeful sentiment about human creativity and redefining purpose, but one that feels abstract when your current way of contributing value is being outperformed by a prompt. For tech professionals and workers across industries, the psychological challenge of skills deprecation is no longer theoretical; it's arriving faster than anyone was prepared to process.