2 Sources
[1]
AI's Biggest Moment Since ChatGPT
Over the holidays, Alex Lieberman had an idea: What if he could create Spotify "Wrapped" for his text messages? Without writing a single line of code, Lieberman, a co-founder of the media outlet Morning Brew, created "iMessage Wrapped" -- a web app that analyzed statistical trends across nearly 1 million of his texts. One chart that he showed me compared his use of lol, haha, 😂, and lmao -- he's an lol guy. Another listed people he had ghosted.

Lieberman did all of this using Claude Code, an AI tool made by the start-up Anthropic, he told me. In recent weeks, the tech world has gone wild over the bot. One executive used it to create a custom viewer for his MRI scan, while another had it analyze their DNA. The life optimizers have deployed Claude Code to collate information from disparate sources -- email inboxes, text messages, calendars, to-do lists -- into personalized daily briefs. Though Claude Code is technically an AI coding tool (hence its name), the bot can do all sorts of computer work: book theater tickets, process shopping returns, order DoorDash. People are using it to manage their personal finances, and to grow plants: With the right equipment, the bot can monitor soil moisture, leaf temperature, CO₂, and more.

Some of these use cases likely require some preexisting technical know-how. (You can't just fire up Claude Code and expect it to grow you a tomato plant.) I don't have any professional programming experience myself, but as soon as I installed Claude Code last week, I was obsessed. Within minutes, I had created a new personal website without writing a single line of code. Later, I hooked the bot up to my email, where it summarized my unread emails and sent messages on my behalf.

For years, Silicon Valley has been promising (and critics have been fearing) powerful AI agents capable of automating many aspects of white-collar work. The progress has been underwhelming -- until now.
This is "bigger" than the ChatGPT moment, Lieberman wrote to me. "But Pandora's Box hasn't been opened for the rest of the world yet." Claude Code has seemingly yet to take off outside Silicon Valley: Unlike ChatGPT, it can be somewhat intimidating to set up, and the cheapest version costs $20 a month. When Anthropic first released the bot in early 2025, the company explicitly positioned it as a tool for programmers. Over time, others in Silicon Valley -- product managers, salespeople, designers -- started using Claude Code, too, including for noncoding tasks. "That was hugely surprising," Boris Cherny, the Anthropic employee who created the tool, told me.

The bot's popularity truly exploded late last month. A recent model update improved the tool's capabilities, and with a surplus of free time over winter break, seemingly everyone in tech was using Claude Code. "You spent your holidays with your family?" wrote one tech-policy expert. "That's nice I spent my holidays with Claude Code." (On Monday, Anthropic released a new version of the product called "Cowork" that's designed for people who aren't developers, but for now it's only a research preview and is much more expensive.)

I can see why the tech world is so excited. Over the past few days, I've spun up at least a dozen projects using the bot -- including a custom news feed that serves me articles based on my past reading preferences. The first night I installed it, I stayed up late playing with the tool, sleeping only after maxing out my allowed usage for the second time that evening. (Anthropic limits usage.) The next morning, I maxed it out again.

When I told a friend to try it out, he was skeptical. "It sounds just like ChatGPT," he told me. The next day he texted with a gushing update: "It just DOES stuff," he said. "ChatGPT is like if a mechanic just gave you advice about your car. Claude Code is like if the mechanic actually fixed it."
Part of what works so well about Claude Code is that it makes it easy to connect all sorts of apps. Sara Du, the founder of the AI start-up Ando, told me that she is using it to help with a variety of life tasks, like managing her texts with real-estate agents. Because the bot is hooked up to her iMessages, she can ask it to find all of the Zillow links she's sent over the past month and compile a table of listings. "It gives me a lot of dopamine," Du said.

Andrew Hall, a Stanford political scientist, had Claude Code analyze the raw data of an old paper of his studying mail-in voting. In roughly an hour, the bot replicated his findings and wrote a full research paper complete with charts and a lit review. (After a UCLA Ph.D. student performed an audit of the bot's paper, he and Hall offered a "subjective conclusion": Claude Code made only a few minor errors, the kind that a human might make.) "It certainly was not perfect, but it was very, very good," Hall told me. AI is not yet a substitute for an actual political-science researcher, but he does think the bot's abilities raise major questions for academia. "Claude Code and its ilk are coming for the study of politics like a freight train," he posted on X.

Not everyone is so sanguine. The bot lacks the prowess of an excellent software engineer: It sometimes gets stuck on more complicated programming tasks -- and occasionally trips up on simple ones. As the writer Kelsey Piper has put it, 99 percent of the time, using Claude Code feels like having a tireless magical genius on hand, and 1 percent of the time, it feels like yelling at a puppy for peeing on your couch.

Regardless, Claude Code is a win for the AI world. The luster of ChatGPT has worn off, and Silicon Valley has been pumping out slop: Last fall, OpenAI debuted a social network for AI-generated video, which seems destined to pummel the internet with deepfakes, and Elon Musk's Grok recently flooded X with nonconsensual AI-generated porn.
But Claude Code feels materially different in the obvious, immediate real-world utility it presents -- even if it also has the potential to be used to objectionable ends. (Last fall, Anthropic discovered that Chinese state-sponsored hackers had used Claude Code to conduct a sophisticated cyberespionage scheme.) Whatever your feelings on the technology, the bot is evidence that the AI revolution is real.

In fact, Claude Code could turn out to be an inflection point for AI progress. A crucial step on the path to artificial general intelligence, or AGI, is thought to be "recursive self-improvement": AI models that can keep making themselves better. So far, this has been largely elusive. Cherny, the Claude Code creator, claims that might be changing. In terms of "recursive self-improvement, we're starting to see early signs of this," he said. "Claude is starting to come up with its own ideas and it's proposing what to build." A year ago, Cherny estimates, Claude Code wrote 10 percent of his code. "Nowadays, it writes 100 percent."

If Claude Code ends up being as powerful as its biggest supporters are promising, it will be equally disruptive. So far, AI has yet to lead to widespread job losses. That could soon change. Annika Lewis, the executive director of a crypto foundation who described herself as "fairly nontechnical," recently used the bot to build a custom tool that scans her fridge and suggests recipes in order to minimize grocery-store runs. Next she wants to hook it up to Instacart so it can order her groceries. In fact, Lewis thinks the bot could help with all kinds of work, she told me. She has two young kids, and had been considering hiring someone to help out with household administrative work such as finding birthday-party venues, registering the kids for extracurricular activities, and booking dental appointments. Now that she has Claude Code, she hopes to automate much of that instead.
[2]
How Claude Reset the AI Race
Over the holidays, some strange signals started emanating from the pulsating, energetic blob of X users who set the agenda in AI. OpenAI co-founder Andrej Karpathy, who coined the term "vibe coding" but had recently minimized AI programming as helpful but unremarkable "slop," was suddenly talking about how he'd "never felt this much behind as a programmer," and tweeting in wonder about feeling like he was using a "powerful alien tool." Other users traded it's so overs and we're so backs, wondering aloud if software engineering had just been "solved" or was "done," as recently anticipated by some industry leaders. An engineer at Google wrote of a competitor's tool, "I'm not joking and this isn't funny," describing how it replicated a year of her team's work "in an hour." She was talking about Claude Code. Everyone was.

The broad adoption of AI tools has been strange and unevenly distributed. As general-purpose search, advice, and text-generation tools, they're in wide use. Across many workplaces, managers and employees alike have struggled a bit more to figure out how to deploy them productively, or to align their interests (we can reasonably speculate that in many sectors, employees are getting more productivity out of unsanctioned, gray-area AI use than they are through their workplace's official tools). The clearest exception, however, is programming. In 2023, it was already clear that LLMs had the potential to dramatically change how software gets made, and coding-assistance tools were some of the first tools companies found reason to pay for. In 2026, the AI-assisted future of programming is rapidly coming into view.
The practice of writing code, as Karpathy puts it, has moved up to another "layer of abstraction," where a great deal of old tasks can be managed in plain English, and writing software with the help of AI tools amounts to mastering "agents, subagents, their prompts, contexts, memory, modes, permissions, tools, plugins, skills, hooks, MCP, LSP, slash commands, workflows, [and] IDE integrations" -- which is a long way of saying that, soon, it might not involve actually writing much code at all.

What happened? Some users speculated that the winter break just gave people time to absorb how far things had come. Really, as professor and AI analyst Ethan Mollick puts it, Anthropic, the company behind Claude, had stitched together a "wide variety of tricks" that helped tip the product into a more general sort of usefulness than had been possible before: To deal with the limited "memory" of LLMs, it started generating and working from "compacted" summaries of what it had been doing so far, allowing it to work for longer; it got better at calling on established "skills" and delegating smaller, divvied-up tasks to specialized "subagents"; and it got better at interfacing with other services and tools, in part because the tech industry has started formalizing how such tools talk to one another.

The end result is a product that can, from one prompt or hundreds, generate code -- and complete websites, features, or apps -- to a degree that's taken even those in the AI industry by surprise. (To be clear, this isn't all about Claude, although it's the clear exemplar and favorite among developers: Similar tools from OpenAI and Google also took steps forward at the end of last year, which helped feed AI Twitter's various explosions of mania, doom, and elation.) If you work in software development, the future feels incredibly uncertain.
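The "compaction" trick Mollick describes can be sketched in a few lines. This is a toy illustration, not Anthropic's implementation (which is not public): the `call_llm` function is a hypothetical stand-in for a real model call, and the token counter is a crude character-based proxy. The idea is simply that when a transcript nears its context budget, older turns are replaced by a summary so the agent can keep working.

```python
# Toy sketch of context "compaction," the memory trick described above.
# `call_llm` is a hypothetical stand-in for a model API; a real system
# would ask the model itself to summarize its work so far.

MAX_TOKENS = 8000          # pretend context-window budget
KEEP_RECENT = 4            # always keep the last few turns verbatim

def count_tokens(messages):
    # Crude proxy: roughly 1 token per 4 characters.
    return sum(len(m["content"]) for m in messages) // 4

def call_llm(prompt):
    # Stand-in "summarizer": truncate the prompt to simulate a summary.
    return "SUMMARY: " + prompt[:200]

def compact(messages):
    """Replace older turns with one summary message; keep recent turns."""
    old, recent = messages[:-KEEP_RECENT], messages[-KEEP_RECENT:]
    summary = call_llm("Summarize the work so far: " +
                       " ".join(m["content"] for m in old))
    return [{"role": "system", "content": summary}] + recent

def add_turn(messages, role, content):
    """Append a turn, compacting instead of overflowing the budget."""
    messages = messages + [{"role": role, "content": content}]
    if count_tokens(messages) > MAX_TOKENS:
        messages = compact(messages)
    return messages
```

Each compaction trades detail for room: the summary at position 0 stands in for everything the agent did before, which is why long Claude Code sessions can lose track of early specifics while still finishing the job.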
Tools like Claude Code are plainly automating a lot of tasks that programmers had to do manually until quite recently, allowing non-experts to write software and established programmers to increase their output dramatically. Optimists in the industry argue that the sector is about to experience the Jevons paradox, a phenomenon in which a dramatic reduction in the cost of using a resource (in the classic formulation, coal; this time around, software production) leads to far greater demand for that resource. Against the backdrop of years of tech-industry layoffs, and CEOs signaling to shareholders that they expect AI to provide lots of new efficiencies, plenty of others are understandably slipping into despair.

The consequences of this shift in how code gets written won't be contained to the tech industry, of course -- there aren't many jobs left in the American economy that aren't influenced in some way by software -- and some Claude Code users pointed out that the tool's capabilities, which were designed by and for people who are comfortable coding, might generalize. In a basic sense, what it had gotten better at was working on tasks over a longer period, calling on existing tools, and producing new tools when necessary. As the programmer and AI critic Simon Willison put it, Claude Code at times felt more like a "general agent" than a developer tool, one that could be deployed toward "any computer task that can be achieved by executing code or running terminal commands" -- which covers, well, "almost anything, provided you know what you're doing with it."
Anthropic seems to agree: Within a couple of weeks of Claude Code's breakout, it announced a preview of a tool called Cowork. Willison tested the tool on a few tasks -- check a folder on his computer for unfinished drafts, check his website to make sure he hadn't published them, and recommend which is closest to being done -- and came away impressed with both its output and the way it was able to navigate his computer to figure out what he was talking about. "Security worries aside, Cowork represents something really interesting," he wrote. "This is a general agent that looks well-positioned to bring the wildly powerful capabilities of Claude Code to a wider audience."

These tools represent both a realization of long-promised "agentic" AI and a clear break with how such tools had been developing until recently. Early ads for enterprise AI software from companies like Microsoft and Google suggested, often falsely, that their tools could simply take work off users' plates, dealing with complex commands independently and pulling together all the data and tools necessary to do so. Later, general-purpose tools from companies like OpenAI and Anthropic, now explicitly branded as agents, suggested that they might be able to work on your behalf by taking control of your computer interface, reading your browser, and clicking around for you. In both cases, the tools overpromised and underdelivered, overloading LLMs with too much data to productively parse and deploying them in situations where they were set up to fail.

Cowork charts a different path to a similar goal, one that runs through code. In Willison's example, Cowork's agent didn't just direct its attention to a folder, drift back to the web, and start churning out text. It wrote and executed a command in the Mac terminal, hooked into a web-search tool, and coded a bespoke website with "animated encouragements" for Willison to finish his posts.
In carrying out a task, in other words, it did something that LLM-based tools have been doing much more over the past year: Rather than attempting to carry out the task directly, they first see whether they can write a quick script, or piece of software, that accomplishes the goal instead. The ability to rapidly spit out functional pieces of software has major (if not exactly clear) implications for the people and companies who make software. It also suggests an interesting path for AI adoption in lots of other industries.

AI firms are betting that the next generation of AI tools will try to get work done not just by throwing your problems into their context windows and seeing what comes out the other side, but by architecting and coding more conventional pieces of software, on the fly, that might be able to handle the work better. The question of whether LLMs are well-suited to the vast range of tasks that make up modern knowledge work is still important, in other words, but perhaps not as urgent as the question of what the economy might do with a near-infinite supply of custom software produced by people who don't know how to code.
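The pattern described above -- answer by writing and running a throwaway program rather than reasoning in prose -- has a simple shape. In this sketch, `generate_script` is a hypothetical stand-in for an LLM call (here it just returns canned code for one task); a real agent would send the task to a model and get a program back, then execute it and report the program's output as the answer.

```python
# Sketch of the "write a script instead of answering directly" pattern.
# `generate_script` is a hypothetical stand-in for an LLM call.
import subprocess
import sys

def generate_script(task: str) -> str:
    # Stand-in: pretend the model returned code for this one task.
    # A real system would send `task` to a model and get Python back.
    if "sum of squares" in task:
        return "print(sum(i * i for i in range(1, 101)))"
    raise ValueError(f"no canned script for task: {task!r}")

def run_task(task: str) -> str:
    """Generate a script for the task, execute it, return its stdout."""
    script = generate_script(task)
    result = subprocess.run(
        [sys.executable, "-c", script],       # run in a fresh interpreter
        capture_output=True, text=True, timeout=30, check=True,
    )
    return result.stdout.strip()

# The agent's "answer" is the output of code it wrote, not model prose.
answer = run_task("compute the sum of squares from 1 to 100")
```

The appeal is that arithmetic, file manipulation, and data wrangling are delegated to deterministic code the model can inspect and rerun, rather than being reconstructed token by token; the obvious cost, as Willison's "security worries aside" suggests, is that you are now executing machine-written programs on your machine.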
Anthropic's Claude Code has exploded in popularity, enabling users to automate complex tasks without coding experience. From creating custom web apps to managing finances and analyzing DNA, this AI tool is demonstrating capabilities far beyond traditional programming assistance. The advancement in AI agents is prompting fresh debate about job automation and the future of software development.
Over the winter holidays, Silicon Valley witnessed what some are calling AI's biggest moment since ChatGPT. Claude Code, an AI tool from Anthropic, captured the attention of developers and non-developers alike with its ability to complete complex tasks autonomously [1]. Alex Lieberman, co-founder of Morning Brew, created "iMessage Wrapped" -- a web app analyzing nearly 1 million text messages -- without writing a single line of code. Other users deployed the AI tool to create custom MRI viewers, analyze DNA data, and even monitor plant growth by tracking soil moisture and leaf temperature [1].
Source: The Atlantic
Boris Cherny, the Anthropic employee who created Claude Code, expressed surprise at how the tool expanded beyond its original programmer audience. "That was hugely surprising," he noted, as product managers, salespeople, and designers began using it for noncoding tasks [1]. The bot's popularity exploded late last month following a model update that improved its capabilities, with tech professionals spending their winter break exploring its potential.

What sets Claude Code apart from ChatGPT is its ability to execute tasks rather than simply provide advice. As one user described it: "ChatGPT is like if a mechanic just gave you advice about your car. Claude Code is like if the mechanic actually fixed it" [1]. The tool excels at connecting disparate apps and managing personal and professional tasks -- from booking theater tickets and processing shopping returns to ordering DoorDash and managing finances.

Sara Du, founder of AI start-up Ando, uses it to manage communications with real-estate agents, asking it to compile tables of Zillow listings from her iMessages. Andrew Hall, a Stanford political scientist, had Claude Code analyze raw data from an old paper studying mail-in voting. In roughly an hour, the bot replicated his findings and wrote a complete research paper with charts and literature review [1]. While not perfect, the work demonstrated how AI programming tools are moving beyond simple code generation.

The tech industry is witnessing a fundamental shift in how software gets made. OpenAI co-founder Andrej Karpathy, who coined the term "vibe coding," tweeted about feeling like he was using a "powerful alien tool" and having "never felt this much behind as a programmer" [2]. A Google engineer described how the tool replicated a year of her team's work "in an hour," highlighting the dramatic impact on software-engineering workflows [2].
Source: NYMag
According to professor and AI analyst Ethan Mollick, Anthropic stitched together a "wide variety of tricks" that tipped the product into general usefulness. The system generates compacted summaries to deal with limited LLM memory, allowing it to work for longer periods. It calls on established "skills" and specialized subagents that handle divvied-up tasks, and it interfaces better with other services through formalized tool-interfacing protocols [2]. This technical sophistication enables programmers to work at a higher "layer of abstraction," managing agents, subagents, prompts, contexts, and permissions in plain English rather than writing traditional code.

While Claude Code costs $20 a month for the cheapest version and can be intimidating to set up, it has reset the AI race among major players [1][2]. On Monday, Anthropic released "Cowork," a new version designed for non-developers, though it remains a more expensive research preview [1]. Similar tools from OpenAI and Google also advanced at the end of last year, feeding Silicon Valley's cycles of mania and uncertainty.

The implications for job automation spark debate across the tech industry. Optimists argue the sector will experience the Jevons paradox, where dramatically reduced software-production costs lead to far greater demand, creating new opportunities. Against the backdrop of years of tech-industry layoffs and CEOs signaling expectations for AI-driven efficiencies, others express concern about the future of software-development careers. Programmer and AI critic Simon Willison noted that Claude Code sometimes feels more like a "general agent" capable of "any computer task" than just a developer tool. As user experiences continue to demonstrate productivity gains that extend far beyond coding, the question of how AI agents will reshape white-collar work remains urgent and unresolved.
Summarized by Navi