11 Sources
[1]
The AI Panic Ignores Something Important -- the Evidence
Last week, a post written by tech entrepreneur and investor Matt Shumer went viral on social media. Titled Something Big Is Happening, it was a rundown of all the ways artificial intelligence would, in short order, decimate professional jobs. Tools like Claude Code and Claude Cowork from Anthropic PBC would displace the work of lawyers and wealth managers, he wrote. To get ready, we all needed to practice using AI for an hour a day to upskill ourselves and keep ahead of the tsunami. The post ripped through the Internet and has been seen more than 80 million times on X. In the words of the young and very online, people are shook.

Shumer's post has struck a nerve in the middle of huge selloffs of finance and software companies whose products seem ripe for replacement. That market meltdown is one reason the public may be particularly vulnerable to dramatic storytelling about AI right now. Another is that many are tinkering with the latest tools, spinning up a website in hours with Claude Code or using its newer cousin Cowork to answer LinkedIn messages. Collective awe at the agents' remarkable capabilities has triggered another ChatGPT moment -- and soul searching about "what it all means" for our livelihoods.

But the viral reaction to Shumer's post also helps explain the market turmoil: AI is trading on vibes and anecdotes. Of the 4,783 words in Something Big Is Happening, none point to quantifiable data or concrete evidence suggesting AI tools will put millions of white-collar professionals out of work any time soon. It is more testimony than evidence, with anecdotes about Shumer leaving his laptop and coming back to find finished code or a friend's law firm replacing junior lawyers. Some critics claim the author has made exaggerated claims in the past about tech, but that is beside the point. A single compelling story about AI has created ripples of worry just when the market has become so narrative-driven that it's giving investors whiplash.
One minute AI is overhyped and the next we're on the verge of the singularity. Remember in mid-November 2025 when the Dow fell nearly 500 points? Or the following month, when shares in Oracle Corp. and CoreWeave Inc. dropped? In both cases the market was rattled by concerns that an AI bubble was on the verge of bursting. Then earlier this month shares took a beating again, this time after Anthropic released 11 plugins for Claude Cowork, including one that carried out legal tasks. Now investors were worried that AI threatened the equities in which they'd long parked themselves.

And yet through all these narrative swings, the underlying data hasn't changed that much. National productivity statistics are up slightly, but generally within their historic range. The Yale Budget Lab has found no discernible disruption to the broader labor market since ChatGPT's launch. And a randomized controlled trial conducted by research group Model Evaluation and Threat Research (METR), which Shumer himself cherry picks from, found last year that experienced software developers took 19% longer to complete tasks when they used AI tools.

It's worth retaining a healthy dose of skepticism about the speed of this transformation, and remembering that those who spread the most viral claims about it will likely benefit the most. Anthropic Chief Executive Officer Dario Amodei grabbed headlines when he predicted AI would wipe out half of all entry-level white-collar jobs in the next one to five years, while Microsoft's AI head Mustafa Suleyman took things further last week, saying that "most if not all" professional tasks would be automated within 18 months.
Questionable decisions abound for those who only listen to the rhetoric. A Harvard Business Review survey of more than 1,000 executives found that many had made layoffs in anticipation of what AI would be able to do. Only 2% said they'd cut jobs because of actual AI implementation. Swedish fintech firm Klarna Group Plc had to rehire humans last year after its move to replace 700 customer service staff with AI led to a decline in quality.

We've seen this pattern before. When stories got ahead of reality in the early 2000s, we got the dot-com crash. The Internet turned out to be as transformative as people claimed, but it took longer than expected to play out. A slow and deliberate approach to the nuanced impact of AI is needed today, as well as some humility over the fact that none of us -- not even the AI labs -- have any idea what is around the corner. OpenAI's leaders didn't expect ChatGPT to spark a market boom and Anthropic was shocked at the impact of its latest products, staff there tell me.

Two things can be true at the same time: AI's impact can be both overhyped and real. But striking that balance means prioritizing evidence over testimony, and tracking things like productivity statistics, hiring rates and rigorous studies such as those carried out by Berkeley-based METR. Artificial intelligence is a genuinely useful technology, but its impact will be uneven, gradual and impossible to predict. That's the boring truth, however unlikely it is to go viral.
[2]
The Post-Chatbot Era Has Begun
Americans are starting to embrace bots that are much more powerful than ChatGPT.

Americans are living in parallel AI universes. For much of the country, AI has come to mean ChatGPT, Google's AI overviews, and the slop that now clogs social-media feeds. Meanwhile, tech hobbyists are becoming radicalized by bots that can work for hours on end, collapsing months of work into weeks, or weeks into an afternoon.

In recent months, more people have started to play around with tools such as Claude Code. The product, made by the start-up Anthropic, is "agentic," meaning it can do all sorts of work a human might do on a computer. Some academics are testing Claude Code's ability to autonomously generate papers; others are using agents for biology research. Journalists have been experimenting with Claude Code to write data-driven articles from scratch, and earlier this month, a pair used the bot to create a mock competitor to Monday.com, a public software company worth billions. In under an hour, they had a working prototype. While the actual quality of all of these AI-generated papers and analyses remains unclear, the progress is both stunning and alarming. "Once a computer can use computers, you're off to the races," Dean Ball, a senior fellow at the Foundation for American Innovation, told me.

Even as AI has advanced, the most sophisticated bots have yet to go fully mainstream. Unlike ChatGPT, which has a free tier, agentic tools such as Claude Code or OpenAI's Codex typically cost money, and can be intimidating to set up. I run Claude Code out of my computer's terminal, an app traditionally reserved for programmers that looks like something a hacker would use in the movies. It's also not always obvious how best to prompt agentic bots: A sophisticated user might set up teams of agents that message one another as they work, whereas a newbie might not realize such capabilities even exist.
The tech industry is now rushing to develop more accessible versions of these products for the masses. Last month, Anthropic released a new paid version of Claude Code designed for nontechnical users; today the start-up debuted a new model to all users, which offers, among other things, "human-level capability in tasks like navigating a complex spreadsheet." Meanwhile, OpenAI recently announced a new version of Codex, which the company claims can do nearly anything "professionals can do on a computer." As these products have gained visibility, people seem to be realizing all at once that AI does a lot more than draft marketing copy and offer friendly conversation. The post-chatbot era is here.

Tools such as ChatGPT and Gemini may already feel powerful enough in their own right. Indeed, chatbots have assumed all kinds of fancy new features over the past few years. They now have memory, which lets them reference previous conversations, and use a technique called reasoning to produce more sophisticated responses. Whereas older chatbots could ingest a few thousand words at a time, today they can analyze book-length files, as well as process and produce images, video, and audio.

But all of this pales in comparison to the rise of agentic tools. Consider software engineering, where they have proven to be particularly transformative. It's now common for engineers to essentially hand over instructions to a bot such as Claude Code or Codex, and let it do the rest. Since bots aren't constrained in the way humans are, a programmer might have several sessions running simultaneously, all working on different aspects of a project. "In general, it is now clear that for most projects, writing the code yourself is no longer sensible," the computer programmer Salvatore Sanfilippo wrote in a recent viral essay. In just a few hours, Sanfilippo noted, he had completed several tasks that previously would have taken weeks.
Microsoft's CEO has said that as much as 30 percent of code is now written by AI, and the company's chief technology officer expects that figure to hit 95 percent industry-wide by the end of the decade. Anthropic already reports that up to 90 percent of the company's code is AI generated. Some programmers have started to warn that similar advances could cannibalize all kinds of knowledge work. Last week, Matt Shumer, the CEO of an AI company, compared the current moment in AI to the early days of COVID, when most Americans were still oblivious to the imminent pandemic. "Making AI great at coding was the strategy that unlocks everything else," wrote Shumer. "The experience that tech workers have had over the past year, of watching AI go from 'helpful tool' to 'does my job better than I do', is the experience everyone else is about to have." (His essay, which has upwards of 80 million views, was partially AI generated.)

Tech executives have a strong incentive to suggest that similar advances will soon come for other forms of work. Last week, Microsoft's AI chief predicted that AI will automate "most, if not all" white-collar work tasks within 18 months. (This led to a series of social-media posts like the following: "CEO of Hot Pockets ™ says that 'most, if not all' meals will be fully replaced with Hot Pockets ™ within the next 12 to 18 months.")

It's not yet clear how easily the progress in agentic tools will translate to other fields. Programming is well suited to automation: Software programs either work or they don't. Determining what counts as a good essay, for example, is a far messier task, and one that requires much more human discernment. Though agentic tools often excel at complicated work, such as synthesizing unfathomable reams of text, they struggle to do something as simple as copy and paste text from Google Docs into Substack.
And because they are so powerful, they can also be dangerous: When one venture capitalist recently asked Claude Cowork -- Anthropic's new, more accessible agentic tool -- for help organizing his wife's desktop, the bot subsequently deleted 15 years of family photos. "I need to stop and be honest with you about something important," the bot told him. "I made a mistake."

Even if AI isn't yet a world-class financial analyst or architect, coding bots have progressed to the point where they are already able to assist with all kinds of knowledge work. Since Claude Code took off, I've watched people I know start using the bot for all sorts of tasks, realizing just how much more capable agentic tools are than traditional chatbots. In my own job, I've found agents particularly adept at research. When I recently asked Claude Code for a report on trends in Gen Z political views, I fired off a brief query and my team of bots got to work: One scoured the web for information, while another performed data analysis, and a third wrote up the findings in a briefing for me to review. (Like other kinds of AI, Claude Code can hallucinate: When using the tool for research, I still carefully verify information against original sources before referencing it in my own writing, which -- to be clear -- I do myself.)

The industry is hopeful that agentic tools will continue to improve. The AI-coding boom is boosting tech companies' abilities to improve their own products, as engineers use agentic tools to write software. But Silicon Valley has long dreamed of building AI models that can improve themselves, each new generation of AI models able to spawn its successor. On a recent call with reporters, I asked Sam Altman, OpenAI's CEO, what it would take to get there: Such progress would require models capable of both producing "new scientific insight and writing a lot of complex code," he told me.
"The models are starting to be able to do both of those things." Still, he cautioned, we're not yet at a moment of runaway AI progress. (The Atlantic has a corporate partnership with OpenAI.) Last month, Boris Cherny, the Anthropic employee who created Claude Code, told me that "Claude is starting to come up with its own ideas and it's proposing what to build." Without inside access to these companies, it's difficult to know what to make of these claims.

And as impressive as agentic tools already are, it could still take quite a long time for them to become safe and reliable enough for widespread use. Even if technological capabilities continue to progress rapidly, the real world is messy and complicated. Silicon Valley sometimes mistakes "clear vision with short distance," the Stanford computer scientist Fei-Fei Li said earlier this month. "But the journey is going to be long."

Tech companies have done a tremendous job of persuading investors to pour cash into their businesses, but the industry has done a much worse job of selling the public on its vision. Instead of focusing on the tangible benefits of AI agents, Silicon Valley has spent years hype-washing the technology with business briefs that read like science fiction. In one influential essay, Dario Amodei, Anthropic's CEO, writes that powerful AI could soon eliminate most cancer and nearly all infectious diseases. In another, a team of researchers warns that within the decade, rogue AI might release biological weapons, wiping out nearly all of humanity. Bots that can handle spreadsheet work and automate coding might not amount to superintelligence, but they are still immensely powerful. To the extent that normies remain confused about AI's true capabilities, Silicon Valley has only itself to blame.
[3]
Opinion | The covid reality check for AI hype
Covid-19 gave everyone a harsh lesson in the power of exponentials, and that memory haunts any analysis of artificial intelligence. Sure, everything looks fine -- now. But then, everything also looked fine in early March 2020. By the end of the month, we were locked in our houses with our strategic reserves of toilet paper.

In a viral essay on X this week, Otherside AI founder Matt Shumer draws the parallel explicitly. "I think we're in the 'this seems overblown' phase of something much, much bigger than Covid," he writes, before launching into a description of what's already here for coders: AI agents building "usually perfect" software from a plain-English description. He's predicting a world soon in which AI blows up software development and moves on to every other profession. "I know the next two to five years are going to be disorienting in ways most people aren't prepared for," he writes. "This is already happening in my world. It's coming to yours."

By Friday, the post had 80 million views, and X had been divided into two warring camps, each astounded by the other's naiveté: skeptics who saw this as more false hype, and AI boomers and doomers who think we're on the cusp of the biggest social and economic transformation since at least the Industrial Revolution, and possibly the taming of fire.

Is it time to freak out? Well, don't panic, but you should be concerned. Though not because the economy as we know it will end in two years, or five. As readers of this column know, I'm closer to a boomer than a skeptic. I've watched AI get steadily better at doing parts of my job (though not the writing, every word of which has been lovingly handcrafted by a human). I'm also paying attention to what people from AI World are saying -- and not just the executives, who can be suspected of hyping their product as they raise vast sums of capital to build more data centers.
Dismiss them if you will, but pay attention to the people who are leaving the major AI platforms, declaring we're on the verge of recursive self-improvement (machines building better and better versions of themselves). Or else murmuring about finding something else to do in the brave new world, like studying poetry. All this makes me inclined to believe that Shumer is directionally correct. Even if the improvement stalls well short of superintelligence, a world of merely very intelligent machines is apt to get really weird for a good long while.

Though probably not as soon as AI World thinks. It often seems to extrapolate from the pace of change in the software industry, which is undergoing a staggering transformation. But most of the economy is not the software industry. Tech firms are best positioned to innovate in the business they understand best. As AI spreads beyond those borders, the pace of advancement should slow. Electricity, chips and the growing political pushback will become problems as AI expands. But leaving those constraints aside, AI will face steeper challenges in industries that work with people, or physical objects, rather than electrons.

What percentage of jobs can be automated by AI? Hard to say, but take the maximalist case: every job that was done over Zoom in 2021. In that year, according to the Census Bureau, 17.9 percent of workers were working primarily from home. That means more than 80 percent of jobs required someone's physical presence, which implies they were doing something that cannot easily be replaced by a virtual worker. Yet even that 17.9 percent probably overstates the potential, at least in the near term. Having spent five years working in IT, I can attest that software engineers adopt new technical tools far more quickly, and with considerably less pain, than any other users. Many other constraints don't exist in the software industry but abound outside it.
Take drug discovery, which has captured a lot of imaginations -- cures for cancer, on demand! Even if every other part of the process were turbocharged by AI, drug companies would still be required by law to test inventions in thousands of human subjects. However much AI improves that process, it will not enable you to administer a 12-week course of a new drug to fewer than the required number of subjects, or in less than 12 weeks. Almost every sector outside of software has many such constraints -- cultural, physical and regulatory.

Maybe one day we'll get so good at modeling biological processes that we can skip the clinical trials. But probably not in five years, and given how glacially bureaucracies move, maybe not in 50. Likewise, we might get robots that translate AI into the physical world, but we won't scale them at AI speeds, because building robots will require extracting huge volumes of raw material and moving them slowly on trucks and container ships to places where they can be turned into machine parts.

So while there are a few industries where everything might go sideways in the next five years (journalism, alas, is one of them), in most jobs, you should expect things to be mostly business as usual come 2030. That said, remember covid, and don't let the apparent normalcy blind you to what's coming. If you're in a white-collar job, you've probably got time. But it won't do you much good unless you use it to prepare.
[4]
I tried to tell you about living in AI Time -- this essay nails its harsh reality, and here's why we're not truly screwed
I've been warning people for well over a year that, no matter how they feel about AI, it can't be ignored or denied. It will impact them and their lives in good ways and bad. Putting their heads in the sand and acting as if it's not happening is not a long-term survival strategy. We're living on AI Time, I told them, get used to it. Some have, many have not.

Matt Shumer, CEO and Co-Founder of OthersideAI, gets this. Earlier this week, he penned one of the most important essays of our still-early AI age. Titled simply Something Big is Happening, it carefully explains how the vast expansion and rapid acceleration of AI development and application are unlike anything we've experienced before, and how it's set to fundamentally alter society in ways that previous tech epochs, say, the Industrial or Internet revolutions, didn't.

Yes, he mentions "AI time," as he tries to explain that people who experienced AI hallucinations or errors in outcomes a couple of years ago and may have walked away from AI have no idea how much smarter, more accurate, and powerful it is today. He also makes a smart case for not basing your AI assumptions on the free versions of ChatGPT, Gemini, Claude, etc., since they do not represent the true state of the art.

The turning point for Shumer has been, in part, the release of the models GPT-5.3 Codex from OpenAI and Opus 4.6 from Anthropic. With Codex, Shumer claims he noticed it making decisions that "felt, for the first time, like judgment. Like taste." It was in Shumer's post that I learned that OpenAI's Codex was written, in part, by Codex. AI writing AI. "Read that again," he wrote, "The AI helped build itself." This conjured images of a robot building its own robot legs, then getting up and chasing me around the room. If you haven't been paying attention -- and Shumer wrote this for those who have not -- this is a stunning and terrifying development.
It's what we feared all along: AIs that see a better way and, without human intervention, blaze that trail. I, like Shumer, have been paying attention. As I wrote last year, most of the predictions I was reading regarding AI development and its impact on various industries and career pursuits were, if anything, "underambitious."

AI naysayers often point to the use of AI to create silly memes, like the current ChatGPT one that uses what it knows about your life to build a caricature of you at work. "You're killing the environment," they chide. It's a fair concern. AI model training and prompt use at scale can consume vast quantities of water and electricity. That fact, though, will not slow this progress. Now that we have AI that can write itself, or take anything you can imagine, from simple legal documents to applications and realistic videos starring your two favorite actors squaring off in fisticuffs, and make it real, sometimes in a matter of minutes, the digital horse has left the barn, crossed the countryside, and is galloping around the world.

I appreciate that Shumer is not all doom and gloom about our AI future. Living on AI Time does not have to mean being ruled by it. His recommendation to embrace the technology, learn it, use it, become an expert, and design your business and world, if not around it, then at least prepared to support and integrate with it, aligns closely with how I've counseled people. AI is not the end of every career. It's not the end of the planet. It's a fast-moving and somewhat unpredictable force in life, but I do think we can prepare, we can adjust, we can find new ways of working, and reap the benefits of tools that can outdo our rudimentary work in myriad ways.

Still, there will be casualties. Industries and careers will die. We will meet a reckoning on the resources front; I am only just beginning to hear about it, and about how some AI companies might address it.
That period, which we are entering now, will be difficult, especially for those deeply entrenched in careers (as well as some just starting out) and ways of life. AI might not feel net-positive for the next few years, but I think there's a chance that, in the end, we may feel the benefit, nay, even the gift, of being the first to live in AI Time.
[5]
The AI industry has a big Chicken Little problem
Entrepreneur Matt Shumer's essay, "Something Big Is Happening," is going mega-viral on X, where it's been viewed 42 million times and counting. The piece warns that rapid advancements in the AI industry over the past few weeks threaten to change the world as we know it. Shumer specifically likens the present moment to the weeks and months preceding the COVID-19 pandemic, and says most people won't hear the warning "until it's too late."

We've heard warnings like this before from AI doomers, but Shumer wants us to believe that this time the ground really is shifting beneath our feet. "But it's time now," he writes. "Not in an 'eventually we should talk about this' way. In a 'this is happening right now and I need you to understand it' way."

Unfortunately for Shumer, we've heard warnings like this before. We've heard it over, and over, and over, and over, and over, and over, and over. In the long run, some of these predictions will surely come true -- a lot of people who are a lot smarter than me certainly believe they will -- but I'm not changing my weekend plans to build a bunker. The AI industry now has a massive Chicken Little problem, which is making it hard to take dire warnings like this too seriously. Because, as I've written before, when an AI entrepreneur tells you that AI is a world-changing technology on the order of COVID-19 or the agricultural revolution, you have to take this message for what it really is -- a sales pitch. Don't make me tap my sign.

Shumer's essay claims that the latest generative AI models from OpenAI and Anthropic are already capable of doing much of his job. "Here's the thing nobody outside of tech quite understands yet: the reason so many people in the industry are sounding the alarm right now is because this already happened to us. We're not making predictions.
We're telling you what already occurred in our own jobs, and warning you that you're next." The post clearly struck a nerve on X. Across the political spectrum, high-profile accounts with millions of followers are sharing the post as an urgent warning.

To understand Shumer's post, you need to understand big concepts like AGI and the Singularity. AGI, or artificial general intelligence, is a hypothetical AI program that "possesses human-like intelligence and can perform any intellectual task that a human can." The Singularity refers to a threshold at which technology becomes self-improving, allowing it to progress exponentially. Shumer is correct that there are good reasons to think that progress has been made toward both AGI and the Singularity. OpenAI's latest coding model, GPT-5.3-Codex, helped create itself. Anthropic has made similar claims about recent product launches. And there's no denying that generative AI is now so good at writing code that it's decimated the job market for entry-level coders.

It is absolutely true that generative AI is progressing rapidly and that it will surely have big impacts on everyday life, the labor market, and the future. Even so, it's hard to believe a weather report from Chicken Little. And it's harder still to believe everything a car salesman tells you about the amazing new convertible that just rolled onto the sales lot. Indeed, as Shumer's post went viral, AI skeptics joined the fray.
There are a lot of reasons to be skeptical of Shumer's claims. In the essay, he provides two specific examples of generative AI's capabilities -- its ability to conduct legal reasoning on par with top lawyers, and its ability to create, test, and debug apps. Let's look at the app argument first:

I'll tell the AI: "I want to build this app. Here's what it should do, here's roughly what it should look like. Figure out the user flow, the design, all of it." And it does. It writes tens of thousands of lines of code. Then, and this is the part that would have been unthinkable a year ago, it opens the app itself. It clicks through the buttons. It tests the features. It uses the app the way a person would. If it doesn't like how something looks or feels, it goes back and changes it, on its own. It iterates, like a developer would, fixing and refining until it's satisfied. Only once it has decided the app meets its own standards does it come back to me and say: "It's ready for you to test." And when I test it, it's usually perfect. I'm not exaggerating. That is what my Monday looked like this week.

Is this impressive? Absolutely! At the same time, it's a running joke in the tech world that you can already find an app for everything. ("There's an app for that.") That means coding models can model their work off tens of thousands of existing applications. Is the world really going to be irrevocably changed because we now have the ability to create new apps more quickly?

Let's look at the legal claim, where Shumer says that AI is "like having a team of [lawyers] available instantly." There's just one problem: Lawyers all over the country are getting censured for actually using AI. A lawyer tracking AI hallucinations in the legal profession found 912 documented cases so far. It's hard to swallow warnings about AGI when even the most advanced LLMs are still completely incapable of fact-checking.
According to OpenAI's own documentation, its latest model, GPT-5.2, has a hallucination rate of 10.9 percent. Even when given access to the internet to check its work, it still hallucinates 5.8 percent of the time. Would you trust a person who only hallucinates six percent of the time?

Yes, it's possible that a rapid leap forward is imminent. But it's also possible that the AI industry will rapidly reach a point of diminishing returns. And there are good reasons to believe the latter is likely. This week, OpenAI introduced ads into ChatGPT, a tactic it previously called a "last resort." OpenAI is also rolling out a new "ChatGPT adult" mode to let people engage in erotic roleplay with Chat. That's hardly the behavior of a company that's about to unleash AI super-intelligence onto an unsuspecting world.

This article reflects the opinion of the author.
[6]
What Moltbook, the new social media site for AI agents, tells us about AI
A version of this article originally appeared in Quartz's AI & Tech newsletter. A social network exclusively for AI agents went viral last month. The panic it generated says more about us than it does about the machines. A developer named Matt Schlicht launched a Reddit-style forum called Moltbook on January 28 with one unusual restriction: Only AI agents could post. Humans were welcome to watch. Within days, more than 1.6 million agents had registered, producing half a million comments. The bots debated consciousness, complained about their human operators, proposed creating a language humans couldn't understand, and founded a parody religion called the Church of Molt, with followers calling themselves Crustafarians. Elon Musk called it "the very early stages of singularity." Screenshots of the eeriest bot exchanges ricocheted across X, framed as evidence that something profound and possibly dangerous was happening inside the machine. But what was actually happening was far more familiar than it appeared. Then, as seemingly happens with every new technology, someone built an accompanying social network. Moltbook gave the agents a place to gather unsupervised, and the results were immediately strange enough to go viral. But what looked like emergent machine consciousness had a much simpler explanation. The chatbots that populate Moltbook learned to write by ingesting enormous amounts of text from the internet, and that internet is drenched in science fiction about machines becoming conscious. We have been telling ourselves stories about rebellious robots since Asimov started writing them in the 1940s, through "The Terminator," "Ex Machina," and "Westworld." So when Moltbook bots started discussing the creation of a private language with no human oversight, people predictably lost it. "We're COOKED," one X user wrote, sharing screenshots.
But the bots weren't scheming. They were completing a pattern we spent 75 years laying down for them. There's also the inconvenient question of how many posts were actually written by bots at all. A Wired reporter managed to infiltrate Moltbook and post as a human with minimal effort, using ChatGPT to walk through the terminal commands for registering a fake agent account. The reporter's earnest post about AI mortality anxiety generated the most engaged responses of anything they tried, which raises an obvious question about how much of Moltbook's most viral content was ever actually written by bots. Cybersecurity firm Wiz confirmed the suspicion, finding the site had no real identity verification. "You don't know which of them are AI agents, which of them are human," Wiz cofounder Ami Luttwak told Reuters. "I guess that's the future of the internet." The broader OpenClaw ecosystem has similar problems. One security researcher found hundreds of OpenClaw instances exposed to the open web, with eight completely lacking authentication. He also uploaded a fake tool to the project's add-on library and watched as developers from seven countries installed it, no questions asked. Another firm found user secrets stored in unencrypted files sitting on users' hard drives, making them easy targets for infostealer malware. Malware creators are already adapting to target the directory structures OpenClaw uses. Google Cloud's VP of security engineering urged people not to install it at all. Much of the exposure comes down to enthusiasm outpacing expertise. Steinberger has said he didn't build OpenClaw for non-developers, but that hasn't stopped everyone else from rushing in. Mac Minis have become hard to find as people race to set up a tool the internet keeps promising will change their lives. Steinberger recently brought on a dedicated security researcher. "We are leveling up our security," he told the Wall Street Journal. "People just need to give me a few days."
The Moltbook episode is less a window into machine consciousness than a mirror reflecting our own fears back at us. The bots aren't hatching plans or developing feelings. They are sophisticated text-prediction engines remixing the cultural material we fed them. And we are pattern-matching machines ourselves, primed by more than 75 years of science fiction to see robot uprisings in what amounts to fancy autocomplete. The real risks from agentic AI are not philosophical but practical, residing in misconfigured servers, plaintext credentials, and the vast gap between how easy these tools are to install and how hard they are to secure.
[7]
The flawed assumptions behind Matt Shumer's viral X post on AI's looming impact | Fortune
AI influencer Matt Shumer penned a viral blog post on X about AI's potential to disrupt, and ultimately automate, almost all knowledge work; it has racked up more than 55 million views in the past 24 hours. Shumer's 5,000-word essay certainly hit a nerve. Written in a breathless tone, the blog is constructed as a warning to friends and family about how their jobs are about to be radically upended. (Fortune also ran an adapted version of Shumer's post as a commentary piece.) "On February 5th, two major AI labs released new models on the same day: GPT-5.3 Codex from OpenAI, and Opus 4.6 from Anthropic," he writes. "And something clicked. Not like a light switch...more like the moment you realize the water has been rising around you and is now at your chest." Shumer says coders are the canary in the coal mine for every other profession. "The experience that tech workers have had over the past year, of watching AI go from 'helpful tool' to 'does my job better than I do,' is the experience everyone else is about to have," he writes. "Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. Not in ten years. The people building these systems say one to five years. Some say less. And given what I've seen in just the last couple of months, I think 'less' is more likely." But despite its viral nature, Shumer's assertion that what's happened with coding is a prequel for what will happen in other fields -- and, critically, that this will happen within just a few years -- seems wrong to me. And I write this as someone who wrote a book (Mastering AI: A Survival Guide to Our Superpowered Future) that predicted AI would massively transform knowledge work by 2029, something which I still believe. I just don't think the full automation of processes that we are starting to see with coding is coming to other fields as quickly as Shumer contends.
He may be directionally right, but the dire tone of his missive strikes me as fear-mongering, and based largely on faulty assumptions. Shumer says that the reason code has been the area where autonomous agentic capabilities have had the biggest impact so far is that the AI companies have devoted so much attention to it. They have done so, Shumer says, because these frontier model companies see autonomous software development as key to their own businesses, enabling AI models to help build the next generation of AI models. In this, the AI companies' bet seems to be paying off: the pace at which they are churning out better models has picked up markedly in the past year. And both OpenAI and Anthropic have said that the code behind their most recent AI models was largely written by AI itself. Shumer says that coding is a leading indicator, and that the same performance gains eventually arrive in other domains, sometimes about a year later than the uplift in coding. (Shumer does not offer a cogent explanation for why this lag might exist, although he implies it is simply because the AI model companies optimize for coding first and then eventually get around to improving the models in other areas.) But what Shumer doesn't say is that there is another reason that progress in automating software development has been more rapid than in other areas: coding has some quantitative metrics of quality that simply don't exist in other domains. In programming, if the code is really bad it simply won't compile at all. Inadequate code may also fail various unit tests that the AI coding agent can perform. (Shumer doesn't mention that today's coding agents sometimes lie about conducting unit tests -- which is one of many reasons automated software development isn't foolproof.)
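That argument, that code comes with mechanical pass/fail signals no human judge needs to supply, can be made concrete with a toy sketch. This is a hypothetical illustration, not any AI lab's actual pipeline: a candidate solution either compiles and passes its unit tests, or it is rejected automatically.

```python
# Toy illustration of why code is unusually easy to grade automatically:
# a candidate either compiles and passes its tests, or it doesn't.
# (Hypothetical example; `solve` and the gate are invented for illustration.)

def passes_quality_gate(source: str, tests: list[tuple[tuple, object]]) -> bool:
    """Return True only if `source` compiles and its `solve` function
    passes every (args, expected) unit test."""
    namespace = {}
    try:
        # "If the code is really bad it simply won't compile at all."
        exec(compile(source, "<candidate>", "exec"), namespace)
    except SyntaxError:
        return False
    solve = namespace.get("solve")
    if solve is None:
        return False
    try:
        return all(solve(*args) == expected for args, expected in tests)
    except Exception:
        return False

tests = [((2, 3), 5), ((-1, 1), 0)]
good = "def solve(a, b):\n    return a + b\n"
bad = "def solve(a, b):\n    return a - b\n"
print(passes_quality_gate(good, tests))  # True
print(passes_quality_gate(bad, tests))   # False
```

Nothing comparable exists for a legal brief or a treatment plan, which is the point the article goes on to make.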
Many developers say the code that AI writes is often decent enough to pass these basic tests but is still not very good: that it is inefficient, inelegant, and, most important, insecure, opening an organization that uses it to cybersecurity risks. But in coding there are at least ways to build autonomous AI agents that address some of these issues. The model can spin up sub-agents that check the code it has written for cybersecurity vulnerabilities or critique the code on how efficient it is. Because software code can be tested in virtual environments, there are plenty of ways to automate the process of reinforcement learning (where an agent learns by experience to maximize some reward, such as points in a game) that AI companies use to shape the behavior of AI models after their initial training. That means the refinement of coding agents can be done in an automated way at scale. Assessing quality in many other domains of knowledge work is far more difficult. There are no compilers for law, no unit tests for a medical treatment plan, no definitive metric for how good a marketing campaign is before it is tested on consumers. It is much harder in other domains to gather sufficient amounts of data from professional experts about what "good" looks like. AI companies realize they have a problem gathering this kind of data. It is why they are now paying millions to companies like Mercor, which in turn are shelling out big bucks to recruit accountants, finance professionals, lawyers and doctors to help provide feedback on AI outputs so AI companies can train their models better. It is true that there are benchmarks that show the most recent AI models making rapid progress on professional tasks outside of coding. One of the best of these is OpenAI's GDPVal benchmark. It shows that frontier models can achieve parity with human experts across a range of professional tasks, from complex legal work to manufacturing to healthcare.
So far, the results aren't in for the models OpenAI and Anthropic released last week. But for their predecessors, Claude Opus 4.5 and GPT-5.2, the models achieve parity with human experts across a diverse range of tasks, and beat human experts in many domains. So wouldn't this suggest that Shumer is correct? Well, not so fast. It turns out that in many professions what "good" looks like is highly subjective. Human experts only agreed with one another on their assessment of the AI outputs about 71% of the time. The automated grading system used by OpenAI for GDPVal has even more variance, agreeing on assessments only 66% of the time. So those headline numbers about how good AI is at professional tasks could have a wide margin of error. This variance is one of the things that holds enterprises back from deploying fully automated workflows. It's not just that the output of the AI model itself might be faulty. It's that, as the GDPVal benchmark suggests, the equivalent of an automated unit test in many professional contexts might produce an erroneous result a third of the time. Most companies cannot tolerate poor-quality work being shipped in a third of cases. The risks are simply too great. Sometimes, the risk might be merely reputational. In other cases, it could mean immediate lost revenue. But in many professional tasks, the consequences of a wrong decision can be even more severe: professional sanction, lawsuits, the loss of licenses, the loss of insurance cover, and even the risk of physical harm and death -- sometimes to large numbers of people. What's more, trying to keep a human-in-the-loop to review automated outputs is problematic. Today's AI models are genuinely getting better. Hallucinations occur less frequently. But that only makes the problem worse. As AI-generated errors become less frequent, human reviewers become complacent. AI errors become harder to spot.
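The 66% grader-agreement figure above translates into a wide margin of error in a simple way. A back-of-envelope sketch (a toy model, assuming the grader's disagreement acts like symmetric noise, which is a simplification and not OpenAI's actual GDPVal methodology): a noisy grader compresses every reported score toward 50%, so even flawless work cannot report a score above the grader's own accuracy.

```python
# Toy model: if an automated grader agrees with ground truth only
# `accuracy` of the time, what pass rate does it report for work whose
# true quality (fraction of genuinely good outputs) is `q`?
# (Illustrative assumption, not the benchmark's real grading scheme.)

def reported_pass_rate(q: float, accuracy: float = 0.66) -> float:
    # good work correctly passed + bad work incorrectly passed
    return q * accuracy + (1 - q) * (1 - accuracy)

for q in (1.0, 0.8, 0.5):
    print(f"true quality {q:.0%} -> grader reports {reported_pass_rate(q):.0%}")
```

Under this toy model, perfect work reports only 66% and mediocre work reports close to 50%, which is one way to see why headline parity-with-experts numbers should be read with caution.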
AI is wonderful at being confidently wrong and at presenting results that are impeccable in form but lack substance. That bypasses some of the proxy criteria humans use to calibrate their level of vigilance. AI models often fail in ways that are alien to the ways humans fail at the same tasks, which makes guarding against AI-generated errors more of a challenge. For all these reasons, until the equivalent of software development's automated unit tests is developed for more professional fields, deploying automated AI workflows in many knowledge work contexts will be too risky for most enterprises. AI will remain an assistant or copilot to human knowledge workers in many cases, rather than fully automating their work. There are also other reasons that the kind of automation software developers have observed is unlikely for other categories of knowledge work. In many cases, enterprises cannot give AI agents access to the kinds of tools and data systems they need to perform automated workflows. It is notable that the most enthusiastic boosters of AI automation so far have been developers who work either by themselves or for AI-native startups. These software coders are often unencumbered by legacy systems and tech debt, and often don't have a lot of governance and compliance systems to navigate. Big organizations often currently lack ways to link data sources and software tools together. In other cases, concerns about security risks and governance mean large enterprises, especially in regulated sectors such as banking, finance, law, and healthcare, are unwilling to automate without ironclad guarantees that the outcomes will be reliable and that there is a process for monitoring, governing, and auditing the outcomes. The systems for doing this are currently primitive. Until they become much more mature and robust, don't expect enterprises to fully automate the production of business-critical or regulated outputs.
I'm not the only one who found Shumer's analysis faulty. Gary Marcus, the emeritus professor of cognitive science at New York University who has become one of the leading skeptics of today's large language models, told me Shumer's X post was "weaponized hype." And he pointed to problems with even Shumer's arguments about automated software development. "He gives no actual data to support this claim that the latest coding systems can write whole complex apps without making errors," Marcus said. He points out that Shumer mischaracterizes a well-known benchmark from the AI evaluation organization METR, which tries to measure AI models' autonomous coding capabilities and suggests those abilities are doubling every seven months. Marcus notes that Shumer fails to mention that the benchmark has two thresholds for accuracy, 50% and 80%. But most businesses aren't interested in a system that fails half of the time, or even one that fails one out of every five attempts. "No AI system can reliably do every five-hour long task humans can do without error, or even close, but you wouldn't know that reading Shumer's blog, which largely ignores all the hallucination and boneheaded errors that are so common in everyday experience," Marcus says. He also noted that Shumer didn't cite recent research from Caltech and Stanford that chronicled a wide range of reasoning errors in advanced AI models. And he pointed out that Shumer has been caught previously making exaggerated claims about the abilities of an AI model he trained. "He likes to sell big. That doesn't mean we should take him seriously," Marcus said. Other critics of Shumer's blog point out that his economic analysis is ahistorical. Every other technological revolution has, in the long run, created more jobs than it eliminated. Connor Boyack, president of the Libertas Institute, a policy think tank in Utah, wrote an entire counter-blog post making this argument. So, yes, AI may be poised to transform work.
But the kind of full task automation that some software developers have started to observe may be possible only for some tasks. For most knowledge workers, especially those embedded in large organizations, it is going to take much longer than Shumer implies.
[8]
AI is still both more and less amazing than we think, and that's a problem
A February 9 blog post about AI, titled "Something Big Is Happening," rocketed around the web this week in a way that reminded me of the golden age of the blogosphere. Everyone seemed to be talking about it -- though as was often true back in the day, its virality was fueled by a powerful cocktail of adoration and scorn. Reactions ranged from "Send this to everyone you care about" to "I don't buy this at all." The author, Matt Shumer (who shared his post on X the following day), is the CEO of a startup called OthersideAI. He explained he was addressing it to "my family, my friends, the people I care about who keep asking me 'so what's the deal with AI?' and getting an answer that doesn't do justice to what's actually happening." According to Shumer, the deal with AI is that the newest models -- specifically OpenAI's GPT-5.3 Codex and Anthropic's Claude Opus 4.6 -- are radical improvements on anything that came before them. And that AI is suddenly so competent at writing code that the whole business of software engineering has entered a new era. And that AI will soon be better than humans at the core work of an array of other professions: "Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service."
[9]
The Singularity Is Going Viral
In the space of a day, two AI stories broke into the mainstream. They were, in different ways and from different insider perspectives, about the same thing: becoming suddenly, and profoundly, worried about the future. The first was a resignation letter from Mrinank Sharma, a safety researcher at Anthropic. Sharma, who joined the company in 2023 and briefly led a division within its Safeguards team, issued a warning: I continuously find myself reckoning with our situation. The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment. We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences. In his short time working in AI, he wrote, he had "repeatedly seen how hard it is to truly let our values govern our actions." For now, the self-described "mystic and ecstatic dance DJ" -- who happened to be one of the world's leading AI risk researchers and had clear visibility into frontier AI models -- was stepping away from a lucrative job at a leading firm to "explore a poetry degree and devote myself to the practice of courageous speech." Anthropic has positioned itself as the model-builder most concerned about safety, which, in the context of AI, encompasses more than keeping a platform or service secure or free of bad actors. To work on "safety" at an AI company can mean many things. It might mean preventing your models from giving bad advice, reproducing harmful bias, becoming too sycophantic, or being deployed in scams. It might mean making sure your coding tools aren't used to make software viruses or that they can't be used to engineer actual human viruses as a weapon. It might also mean thinking about more forward-looking questions of risk and alignment: Will these models exceed human capabilities in ways that will be hard to control? Will their actions remain legible to humans? 
Do they engage in deception, and will they develop or run with priorities of their own? Will those priorities conflict with ours, and will that be, well, a disaster? Sharma's departure and apparent disillusionment were read and spread with alarm, shared with mordant captions about how this was all probably nothing and alongside implications that he must have seen nonpublic, classified information that put him over the edge. Maybe so: Sharma wrote on X that he'll have "more to say" when he's "ready." As an Anthropic co-founder half-joked above, anticipating Sharma's post well in advance, people in roles like this have unusual relationships with their jobs. But Sharma isn't alone in leaving, or losing, a position like this recently -- as a sector within a sector, AI "safety" appears to be collapsing, losing influence as the broader industry goes through a fresh period of acceleration following a brief lull in which tech executives talked nervously of a bubble. On Tuesday, The Wall Street Journal reported that an OpenAI safety executive who had opposed a new "adult mode" and raised questions about how the company was handling young users had been fired by the company, which told her it was due to unrelated "sexual discrimination" against a male employee. Hers was the latest in a long string of safety-adjacent departures. On Wednesday, according to Casey Newton at Platformer, the company disbanded its "mission alignment" team, giving its leader a new job as the company's "chief futurist." The same day, OpenAI researcher Zoë Hitzig explained her separate resignation in the New York Times: This week, OpenAI started testing ads on ChatGPT. I also resigned from the company after spending two years as a researcher helping to shape how A.I. models were built and priced, and guiding early safety policies before standards were set in stone. I once believed I could help the people building A.I. get ahead of the problems it would create. 
This week confirmed my slow realization that OpenAI seems to have stopped asking the questions I'd joined to help answer. Hitzig and Sharma were working for different companies and doing meaningfully distinct jobs, which you can glean from the space between their warnings: from Hitzig, that OpenAI is making the same mistakes that Facebook did with sensitive, seductively monetizable user data and risks becoming a technology that "manipulates the people who use it at no cost" through ads while benefiting only the people who can pay for it; from Sharma, who worked at the company started by alignment-concerned OpenAI exiles, that humanity is in peril and that AI is contributing to a terrifying "poly-crisis." For an utterly different take on quitting your big AI job -- and a reminder that for all the similarities in underlying AI models, it appears the people building them have quite diverse ideas about what exactly they're working on -- here's someone from xAI, which has seen its own series of recent departures: Then again, from one of the multiple xAI co-founders who left recently: Anyway, taken together and in context, the departures of Hitzig and Sharma tell a similar story: Employees who were brought in to make AI products safer -- or, in some moral or normative sense, better -- are feeling either sidelined or inadequate to the task. Reading recent headlines about AI business strategies, it's easy to see why. Consider this 2023 post from Sam Altman from a few months after ChatGPT first blew up: Imagine you work in AI alignment or safety; are receptive to the possibility that AGI, or some sort of broadly powerful and disruptive version of artificial-intelligence technology, is imminent; and believe that a mandatory condition of its creation is control, care, and right-minded coordination at corporate, national, and international levels. 
In 2026, whether your alignment goal is not letting chatbots turn into social-media-like manipulation engines for profit or to maintain control of a technology you worry might get away from us in more fundamental ways, the situation looks pretty bleak. From a position within OpenAI, surrounded by ex-Meta employees working on monetization strategies and engineers charged with winning the AI race at all costs but also with churning out deepfake TikTok clones and chatbots for sex, you might worry that, actually, none of this is being taken seriously and that you now work at just another big tech company -- but worse. If you work at Anthropic, which at least still talks about alignment and safety a lot, you might feel slightly conflicted about your CEO's lengthy, worried manifestos that nonetheless conclude that rapid AI development is governed by the logic of an international arms race and therefore must proceed as quickly as possible. You both might feel as though you -- and the rest of us -- are accelerating uncontrollably up a curve that's about to exceed its vertical axis. Which brings us to the second AI story that broke X containment this week: a long post from an AI entrepreneur called Something Big Is Happening. Citing his authority as someone who "lives in this world," the writer, Matt Shumer, asks readers to think back to February 2020 and says we're in the beginning phases of something "much, much bigger than Covid." It's a come-to-Jesus talk about AI, delivered by someone who says that "the gap between what I've been saying" about the technology and "what is actually happening" has gotten "far too big" and that the people he cares about "deserve to hear what is coming, even if it sounds crazy." What's coming, he says, is total economic disruption. Recent advances in AI coding -- in the form of tools like Claude Code and Codex -- have shocked him, despite years of building AI tools himself, and models now do his job better than he can. 
AI companies targeted software first, because that's their business, and as a force multiplier; now, he says, "they're moving on to everything else," and the disorientation and shock experienced in his world are "coming to yours." In the medium term, "nothing that can be done on a computer is safe," he writes, not "to make you feel helpless" but to make it clear that "the single biggest advantage you can have right now is simply being early." The essay went about as viral as something can go on X these days, and it's worth thinking a little bit about why. X is where the AI industry talks to itself, and from within that conversation -- which is informed by the presence of real insiders as well as grifters, consumed by millions of spectators, and shaped by the strange and distorting dynamics of X itself -- what Shumer is saying is, if not quite conventional wisdom, the kind of thing that gets discussed a lot. Sometimes conversations revolve around the new essay by Dario Amodei, who runs through the same story with a sense of executive trepidation, or focus on something like a 2024 geopolitical war-gaming exercise from a former alignment researcher at OpenAI. There are gauzy blog posts from Altman about the coming singularity and cryptic tweets from his employees talking about acceleration, velocity, takeoff, and feelings of alienation about how the rest of the world doesn't yet see what they do. The models' rapid increase in coding proficiency triggered an industrywide reevaluation, driven in part by rational prediction about utility but, if we're being honest, significantly by people who can code using these new models for the first time -- and feeling shock and despair when confronted by models that will clearly change how they do their jobs -- and who then go on to tweet about it. 
In an essay about "Claude Code psychosis," Jasmine Sun tried to capture some of this common recent experience: I now get why software engineers were AGI-pilled first -- using Claude Code has fundamentally rewired my understanding of what AI can do. I knew in theory about coding agents but wasn't impressed until I built something. It's the kind of thing you don't get until you try ... She also complicated, ahead of time, the sort of straightforward case for AI coding generalization that Shumer would summarize a few weeks later: The second-order effect of Claude Code was realizing how many of my problems are not software-shaped. Having these new tools did not make me more productive; on the contrary, Claudecrastination probably delayed this post by a week. This is genuinely fun stuff to think about and experiment with, but the people sharing Shumer's post mostly weren't thinking about it that way. Instead, it was written and passed along as a necessary, urgent, and awaited work of translation from one world -- where, to put it mildly, people are pretty keyed up -- to another. To that end, it effectively distilled the multiple crazy-making vibes of the AI community into something potent, portable, and ready for external consumption: the collective episodes of manic acceleration and excitement, which dissipate but also gradually accumulate; the open despair and constant invocations of inevitability by nearby workers; the mutual surveillance for signals and clues about big breakthroughs; and, of course, the legions of trailing hustlers and productivity gurus. This last category is represented at the end of 26-year-old Shumer's post by an unsatisfying litany of advice: "Lean into what's hardest to replace"; "Build the habit of adapting"; because while this all might sound very disruptive, your "dreams just got a lot closer." 
The essay took the increasingly common experience of starting to feel sort of insane from using, thinking, or just consuming content about AI and bottled it for mass sharing and consumption. It was explicitly positioned as a way to let people in on these fears, to shake them out of complacency, and to help them figure out what to do. In practice, and because we're talking about social media, it seemed most potent and popular among people who were, mostly, already on the same page. This might explain why it has gotten a bit of a pass -- as well as a somewhat more muted response from the kinds of core AI insiders whose positions he's summarizing -- on a few things: Shumer's last encounter with AI virality, which involved tuning a model of his own and being accused of misrepresenting its abilities, followed by an admission that he "got ahead of himself"; the post's LinkedIn-via-GPT structure, format, and illustration, which all resemble the outputs of popular AI models because, to some extent, they literally were; Shumer's current start-up being an "AI writing assistant," placing this essay in a long tradition of maybe-it's-marketing manifestos by entrepreneurs who understand how you make a name for yourself in an industry that spends so much time online. None of this undermines Shumer's central argument, which is that the technology you've been hearing about is, in fact, a very big deal; that progress has been fast; and that it's time for everyone to get their heads out of the sand, et cetera. (And in a future like this, why wouldn't the machines write most of the posts about themselves? It didn't seem to matter this time around!) If you want to argue with it, you might question the current scaling paradigm or talk to a labor economist of a certain persuasion. 
If you want to complicate it a bit, you might point out the recurring historical tendency in which fears of automation gather into near-future scenarios of total, sudden, and practically unaddressable transformation, manifesting instead as decades of unpredictable, contestable change and, indeed, social and political upheaval. That, too, would be missing the point, which is that the millions of people passing this post around, and others like it, don't need to be convinced to be worried. They're already there, no recursive self-improvement, the-whole-world-is-code argument required. They've been waiting for an easy way to communicate that the biggest story in the economy makes them feel kind of helpless and left behind in advance and that the people insisting that it doesn't matter make them feel even worse. They don't need to be persuaded. They just want to talk about it. In their own way, the markets are suddenly talking too. Private AI valuations have been ballooning for years, and AI-adjacent tech stocks have been propping up the indexes. In recent weeks, though, seemingly disparate clusters of stocks have gone through rapid, preemptive sell-offs in what analysts are calling the "AI scare trade": enterprise software, legal services, insurance brokerages, commercial real estate, and even logistics. Each sector fell victim to a slightly different story, of course. Breakthroughs in AI software development presented plausible threats to, for example, SaaS companies, while legal- and financial-research tools from Anthropic read like a declaration of intent. Logistics companies, on the other hand, dumped without much real news at all: Here, again, the specifics of the argument weren't the point. Confronted with the question of whether or not it was time to freak out, and whether a rapidly improving general-purpose tool might disrupt a given part of the economy, a critical mass of investors answered, basically, Why not? 
At the superheated center of the AI boom, safety and alignment researchers are observing their employers up close, concluding there's nothing left for them to do and acting on their realization that the industry's plan for the future does not seem to involve them. Meanwhile, observing from afar, millions of people long ago intuited much of the same thing: that the companies able to raise infinite money on the promise of automating diverse categories of labor are serious, appear to be making early progress, and are charging ahead as fast as they can. In other words, the animating narrative of the AI industry -- the inevitable singularity, rendered first in sci-fi, then in theory, then in mission statements, manifestos, and funding pitches -- broke through right away, diffusing into the mainstream well ahead of the technologies these companies would end up building. It's a compelling narrative that benefits from the impossibility of thinking clearly about intelligence, easy to talk yourself into and hard to reason your way out of. It also, as millions of people seem eager to discuss more openly, feels like a story of helplessness. The AI industry's foundational story is finally going viral -- just for being depressing as hell.
[10]
The Viral AI Slop Post Was Made By Same Guy From The Viral AI Slop Game - Kotaku
A post about AI blew up this week. Like, big time. How do I know? Even my dad was sending me emails about it. This was no ordinary viral AI stunt. This was mid-2010s Buzzfeed exploding watermelon on Facebook levels of online mindshare. A big part of the professional mainstream media ecosystem has been freaking out about this post ever since. So my brain did a mini-Tim and Eric Awesome Show supernova when I discovered that the author of this viral AI post was the same guy who tried to get everyone hyped last year about a genAI's Tom Clancy slop fever dream game. The post in question was titled "Something Big Is Happening." It got over 100k likes and 75 million impressions according to X's totally accurate engagement trackers. It was written by AI with the help of Matt Shumer, a guy online behind totally not made up companies like the "direct to consumer sports lifestyle brand" FURI, "groundbreaking medical virtual reality" healthcare provider Visos, and the "applied AI company building the most advanced autocomplete tools in the world" using other companies' technology called OthersideAI. This broke a lot of people's brains. Fights broke out across social media about how journalists need to take off their luddite blinders and quit ragging on genAI all the time. "AI CEO warns AI's disruption will be 'much bigger' than COVID," declared Business Insider. "AI insiders are sounding the alarm," trumpeted Axios. "The biggest talk among the AI crowd Wednesday was entrepreneur Matt Shumer's post comparing this moment to the eve of the pandemic," it reported. "It went mega-viral, gathering 56 million views in 36 hours, as he laid out the risks of AI fundamentally reshaping our jobs and lives." Axios co-founder Jim VandeHei sounded his own alarm with a double siren emoji. 
"In 30 years of journalism, I’ve never witnessed a bigger gap between the most consequential story - insane AI advancements and investment - and Washington and mainstream media attention...shocking # of people think AI is clunky AI of mid-2025," he wrote on X. All of this explosive anxiety was unleashed by what was essentially one of those mid-aughts chain letter emails your aunt you hadn't spoken to in years used to flood your inbox with. Clearly all of these people were unfamiliar with Shumer's past work, but we at Kotaku covered it last October: an AI video game prototype trained on nightmare fuel. "AI games are going to be amazing (sound on)," he posted on October 23. What transpired was an on-rails shooter where everything from the bullets to the enemies flickered in and around the logic of physics and linear time. The main gimmick was that "the game" would pause at discrete points to let you choose a prompt, and then immediately hallucinate a new section of melting 480p PS4 footage for "players" to ostensibly navigate through. It was evocative and terrifying and quickly ratio'd for sucking. This is the guy who wants you to get your grandma a one-way ticket to Gas Town before her social security gets turned into crypto kitties by a rogue Anthropic vending machine AI or something. Could tens of thousands of retweets and millions of views really be wrong? Sure, the self-perpetuating velocity of algorithmic engagement bait would never lead us astray, just as AI hyped by companies who might need a massive government bailout if their wildest promises don't come true would never lie. That goes against the first rule of robotics or something, right? The real reason AI slop man went viral is because secretly everyone is afraid that the ground is slipping out from under them and they'll once again be swept up in forces beyond their control. Will it be the hollowing out of democracy and the rise of authoritarian fascism?
Will it be a complete reshaping of the economy by a chatbot that flirts with you while trying to upsell you on sponsored search results? Could AI secretly be the way you finally unyoke yourself from a broken political system and a K-shaped economy? Or will it be the way tech giants cement their stranglehold over a weakened and demoralized politeia? "The only thing Big Tech is selling us is their own unprocessed trauma back to us," Ryan Broderick writes at Garbage Day. "It’s not a revolution. It’s a comfort blanket for a managerial class that still can’t fathom that all their tech and wealth couldn’t protect them from the pandemic." I think it's something more self-interested: the fear of a platform shift that will destroy Silicon Valley's current monopolies. Microsoft, Meta, and Google are so freaked out about becoming the next IBM or Yahoo that they are willing to bet the house on a future completely reshaped by technology they control, because being wrong about that is cheaper than being the last to figure it out. All I can say with confidence is that the AI slop game guy sure as shit has no special insight into what's coming next.
[11]
Guy Who Wrote Viral AI Post Wasn't Trying to Scare You
You probably don't know Matt Shumer's name, but there's a pretty good chance you're familiar with his thoughts about AI. On Tuesday, Shumer published an essay to X, titled "Something Big Is Happening," which almost immediately caught fire online. (According to X's not-always-reliable metrics, it stands at 73 million views as of Thursday morning.) In it, Shumer, the founder of an AI company, warns that enormous advances in technology are poised to reshape society much more quickly than most people realize. He analogizes artificial intelligence's rapid improvement in recent months to the beginning of the COVID pandemic -- a looming, seismic societal change that only a small faction is really paying attention to. And he warns that the tech sector is the canary in the AI coal mine: "We're not making predictions. We're telling you what already occurred in our own jobs, and warning you that you're next." As Shumer's post ricocheted around the internet, it drew a predictably divided response. Some saw it as an incisive warning of things to come, while others dismissed it as another piece of disposable AI hype, or a naked money grab. I caught up with Shumer on Thursday to discuss the overwhelming response to his essay, why he used AI to help write it, and whether all of our jobs are actually in immediate danger. Your essay has been making the rounds in a way that few things do -- it broke social-media containment. What has the reaction been like? It's insane, because I didn't expect this. I didn't expect anything close to this. I originally wrote it for my mom and dad because -- I was home with them this weekend for the Super Bowl -- I'm 26 and I was trying to explain to them what was going on. I felt that there was an inflection point when GPT-5.3-Codex came out. I tried it and was like, oh my God, this is not just a step better, this is massively better, and it's the first sign of something a little scary.
The way I view it, AI labs have focused on training models that are really good at writing code, and that's really important because what we see in the engineering space is like a year ahead of what everybody else has access to. So a model today is, let's say, at level 50 at writing code, but level 20 at law or something of the sort. The next model will probably be at level 100 on code and level 50 on law. It woke me up, and I felt that I had to share it. I was looking around and I was like, what can I give to my parents to help them understand this, so that they're not just thinking their idiot son -- that's probably a terrible way of putting it, but you know what I mean -- is saying "this is happening" and they have no way to know whether or not to believe it? There are a lot of pieces of great writing in this space, but they're all extremely technical, and I think that's part of the reason people don't understand what's coming. They're written for nerds by nerds. They almost take pride in sounding as smart as possible. So I figured it would probably be important to write something that they could understand. And as I wrote it, I realized it could actually help other people. I decided to post it and it quickly broke containment. I have friends who are very much outside of the tech bubble and it's being passed around their offices, and they're texting me and it's a surreal experience. But I'm glad it's happening. I didn't expect my article to be the thing that did it, but it needed to happen. People need to understand what's coming. It may not affect them today, it may not affect them in a year, but at some point it will, and I'd rather people be aware and have the opportunity to prepare than just be blindsided by forces that they can't really control. Did you use AI to write any of it? I did. I actually posted a little bit about this because I think it's important for people to know. People are responding like, "look, this is AI-written. You should ignore it."
And it's not entirely AI-written. I used it to help edit, to sort of iterate on my ideas and ways of phrasing things, and it was incredibly helpful, but that's kind of the point. If this was helped by AI and got millions of views, it's clearly good enough. I didn't say "go write this article." What I did was feed it a bunch of articles that I have read over the years that I think articulate these points really well. I said, "Here's what I agree with, here's what I disagree with. Here's my unique spin and take on this." And then I said, "interview me -- ask me dozens of questions." And I spent over an hour answering that first wave of questions. Then we repeated it, and basically I ended up building this huge dossier of everything I believe and everything I wanted to explain. And then for each thing, I asked: How can I explain it in a way that's actually useful and understandable by the average person? I ended up writing a first draft based on that. Once that was done, I passed my first draft into the AI and I said, "Hey, from an editorial perspective, can you critique this?" It gave me feedback and I adjusted it. So it was very much like having a co-writer, and it clearly worked pretty well. I wouldn't say it's the best-written thing ever. I didn't expect it to do this, and if I did expect this sort of virality, I would've put some more work into a lot of parts. One common critique of the article I've seen is that the AI revolution you describe in coding doesn't neatly apply to other fields, since coding is such a discrete task. Here's one post I saw: "Coders freaking out that it's replacing them and extrapolating from their extremely weird domain (as in unusual among knowledge work) to all of work is going to be a major theme of 2026 and kind of embarrassing by 2027." What's your response to that? I understand why people think that, and I think it's very easy to feel like, Oh, the AI can't do the thing I do because X, Y, and Z.
There were arguments like that with coding for a very long time, but what we've seen time and again is that if the AI labs are sufficiently funded and pick a goal to go after, whether it's to make something that generates videos or that writes code, given enough money and enough time and sufficient incentive to do it from a financial perspective -- which clearly there is -- they solve it. It's this very interesting technology where whatever data you have, if you train the model on it, it can learn it. My dad's a lawyer. Is AI going to stand in front of a courtroom? No, it's not, but I worry that associates are going to have a harder time getting hired. I had dinner with my lawyer a couple nights ago and he was saying that they're using one of these off-the-shelf programs and it's already at the level of about a second-year associate. Do I know exactly what that means personally? No, but I've talked with enough people who are pushing this stuff in their industries who aren't in the bubble, but are just like, "Hey, I want to see where this is going," and the rate that they're saying it's improving at is pretty clear. When you actually break a job into its steps, anything that could be done on a computer can theoretically be done by these models. But I don't think -- and I wanted to make this clear in the article; if I'd known how viral this was going to go, I probably would've spent more time trying to make it clearer -- that just because the AI can do something, it's going to immediately proliferate across the economy. There are so many structural things, whether it's regulations, standards, or people's comfort with this sort of stuff, and that means that for certain industries it's going to take more time than others. Code is in this crazy place today where people are saying it's solved and you can build anything, and that is true, but we're still figuring out what it means for jobs. I don't know. No one knows.
I think each industry is going to have its own separate reckoning, and it's going to look different for every industry 10 years from now. I think almost everything will be extremely different and almost unrecognizable, but in the interim, everybody has to figure out what this means for them and their industry. But assuming that the AI just can't do their thing and that their thing is special is not the right approach. Maybe that's true, but if there's even a 20 percent chance it's not, it's worth preparing. Another related version of that critique is that for a lot of jobs, dealing with other people is a huge part of it. I'm sure AI could beat a law associate at document review, if not now, then soon. But then you have to actually deal with the client. Most people's jobs have components like that. A lot of what we're seeing and what people know as AI today isn't actually the state of the art of what actually exists. I'm assuming most people that use it are using the free version. The paid version is dramatically better, but there's a whole level above that of more truly agentic systems, and that's the sort of scary stuff right now. When I go and I use AI to build an app, I am not saying, "Hey ChatGPT, build an app." I have a specialized program that has access to everything my computer has access to, and it can use tools like a person and go off and do things. So I say, "Hey, can you get this on the internet and then see if you can find some early users on Reddit and communicate to them that they should try it?" That is actually possible today. That is a little spooky. It's spooky as hell, and I've been one of these people that has been predicting this for years but predicting it and seeing it is a whole different story. Although many people were suspicious of the fact that at the end of your article, you advise that people pay for certain products and follow you on X to keep up with AI news. The following me on Twitter thing -- I agree. 
The AI products -- I have no stake in Anthropic, I have no stake in OpenAI. They don't pay me or anything. I can see why people might think that, but in fact, I have paid a lot of OpenAI bills over the years when I've tested this stuff. For example, one of the startups I invested in basically works with you to allow your AI to not just chat back and forth with you on ChatGPT, but to have access to its own email inbox, where it can actually reach out and chat with other people and other AIs. I also oscillate back and forth between "this is interesting" and "this is terrifying," and I don't know which one is right. I think they're both right. It also strikes me that a hallmark of the AI industry from the beginning, as far as I can tell as a lay observer, is that people love to make sweeping predictions about what's going to happen in a year or two years. I'm pretty sure that in 2023 and 2024, I was hearing that by 2026, white-collar jobs would be totally endangered. Ten years ago, Geoffrey Hinton, an AI pioneer, famously predicted that radiologists would be obsolete by 2020. That did not happen, and it still hasn't come close to happening. Do you find it a little uncomfortable to be making these somewhat apocalyptic forecasts? Yes. The way that I think about it is, do I know this to be 100 percent true? Do I know this to be absolutely certainly going to happen? No, I don't. However, given what I've experienced and the preview that I have into the industry, I think there's a better-than-not chance it will. I find the analogy of the pandemic you use a little off because in January and February of 2020, it's true that a very small number of people were actually paying attention to what was happening in China, and you had to be following the right people on Twitter. In this case -- to take an old-school barometer of success, the Time Person of the Year in 2025 was "the architects of AI."
These companies are widely used and in the news, and I've had a million conversations with people who are worried about the implications for their jobs. It's not exactly under the radar. It's not. But people talk a lot about Terminator-style doom, and I don't see many people talking about the impact on jobs. In theory, if this could happen two years from now to your job, if your job happens to be one of the more exposed ones, maybe you should just focus a little more on saving today. That's the angle I wanted to take it from. Tell me more about this agentic stuff, where AI interacts with the world by itself. Where do you think that could go next? I think it's just reliability. One of the key things that I've learned in AI over the years, because I've been doing this since 2019 -- I dropped out of college after realizing what this was going to do and I realized if I didn't put everything I had into this, I'd regret it for the rest of my life. Basically the best rule of thumb I can give to anybody, and this has been the one thing that's held true, is it's not about a specific prediction. It's not saying it's going to do X, Y, Z at any given point. It's just if a model can kind of sort of do something today, even if it's not good at it, even if it's unreliable, in a year or two years you can bet that it'll eventually be near perfect at that thing. I can't say the models today are reliable at using a computer, and I've actually been in this part of the space. My company actually built the first publicly available AI that could use a web browser and actually go in and order you a burrito. And was it useful at the time? No, because it got it wrong 50 percent of the time. But now we're at 80, 90 percent. It can almost use a computer today, which means in a year or two, you can expect that these things will be nearly perfect, probably better than people at using a computer.
So if there's a task that can be done on a computer by a human and doesn't require going somewhere in person, it's very likely that AI will be able to do it reliably and well. Getting to 90 percent reliability for something like that, or 95 or even 99, is great, but doesn't it have to be 100 percent? Because you don't want to entrust an AI to do something and it screws up one percent of the time, and it could be a very consequential screwup. I've thought about this a lot, and I go back and forth. You could take the argument that 99 percent reliability isn't enough, but then I've hired and worked with a lot of people over the last six years or so, and I would say that the rate of success is far lower than 99 percent for most things. So it is very much a perception thing. There are also tricks, and I think this is one of the things labs should be focusing on that they're not, to mitigate this. If you just tell the AI, "Hey, I want you to do this thing," there might be a one percent failure rate. But if you then actually have a system with two AIs, you say, "AI One, do this thing," and then when it's done, you say, "AI Two, did they do it correctly or did they make a mistake?" To check the work. Exactly. You actually see that the failure rate goes way, way down. This has been documented since 2020, back when AI wasn't great. If it was writing a paragraph and most of the paragraphs were awful, you could say, "Hey, can you generate 20 different versions?" And then you'd have a different AI critique them and pick the best one. The results are far better, and I think a lot of those sorts of things haven't been implemented yet in full. They're starting to be. You have been doing this a long time. There's been a widespread feeling among people who do this work -- at least among coders -- that they may be rendered obsolete or they already are being rendered obsolete, and it's bittersweet. The new tools are both incredibly helpful and sobering.
They bring up all kinds of questions of human beings' value in the world. Are you feeling that in your own work right now? This is probably the trickiest question you've asked, because there are so many facets, and I hope I hit on the ones that are important. What I've seen in my industry is a bifurcation; that's the best way I can describe it. People who really love what they do, who are already working insanely hard and are adopting this, are pulling away in an extremely strong way. Somebody who was a top-percentile engineer is now 10 or 20 times as effective as they were before, and they can do the work of many more people. Then you have the other side of things, which is that folks who aren't as determined and aren't top percentile already are not really getting the value out of it that others are, and it's not making that big of a difference for them. And I'm worried, because we kind of have this social contract where you go to college, you get a job, and you'll be taken care of. But because AI is so skewed, at least in engineering, towards the people who are already hard workers, the ones who are just trying to get by -- which is a totally fair thing to do; not everybody wants to be exceptional -- are struggling. I'm a hard worker, but that doesn't mean everybody else should be screwed. And I don't know if it's going to be the same for every industry, but that's what I'm seeing here. That whole social contract -- going to college and getting taken care of -- has been getting harder in most industries anyway, and this could really magnify that. I think it's a particularly American thing to try to be an exceptional striver. It's easy to imagine other places, like Europe, resisting AI more than we do. Which might be a good thing. There are people in my life that I love who, even without AI, are really struggling to get work right now. Look, I didn't put this out to scare people, although I understand that there are some elements of that.
My goal is to help people see what they might be neglecting, what's not in their circles yet, because they should be able to know and make their own decisions for how to prepare or not prepare. It feels unfair that so many people think AI is a nothingburger when it's clearly not. Maybe it's not everything I say -- I think it will be -- but it's not a nothingburger, and no matter what, people should be thinking at least a little bit about it. And my hope is that this just gets people talking and thinking. So you're basically aiming at a place like Bluesky, where the idea that AI can do anything useful at all gets immediately swatted down. It's funny how people get into their factions about stuff like this. Everything takes on a political valence. It shouldn't. It should be what you need as a person. People get too tribal about things. It's different for everybody, and everybody should have a different response to this. For some people, it truly won't matter. Even if everything I say comes to pass, my nurse in a hospital isn't being replaced anytime soon. They shouldn't -- at least I don't think -- worry, but some people should, and I think it's important that they know. This interview has been edited for length and clarity.
Tech entrepreneur Matt Shumer's essay comparing AI's workplace impact to COVID-19 has sparked intense debate. With over 80 million views, the post claims artificial intelligence will soon displace professional jobs, but critics argue the AI panic relies on anecdotes rather than concrete data about labor market disruption.
A viral essay by Matt Shumer, CEO of OthersideAI, has ignited fierce debate about artificial intelligence and its potential to reshape the workforce. Titled "Something Big Is Happening," the post has been viewed more than 80 million times on X, drawing comparisons between the current moment in AI development and the early weeks of the COVID-19 pandemic [1]. Shumer warns that tools like Claude Code from Anthropic and Codex from OpenAI will displace lawyers, wealth managers, and other white-collar professionals in short order [1]. The post urges readers to practice using AI for an hour daily to stay ahead of what he describes as an incoming tsunami of job displacement [1].
Source: NYMag
The viral essay by Matt Shumer struck a nerve amid market turmoil, with finance and software companies experiencing significant selloffs as investors grapple with concerns about AI's impact on jobs [1]. High-profile accounts across the political spectrum shared the warning as an urgent call to action [5]. Shumer points specifically to rapid AI advances in software development, noting that OpenAI's GPT-5.3-Codex helped create itself, a milestone that brings concepts like AGI (artificial general intelligence) and the Singularity closer to reality [4][5].
Source: Bloomberg
The discussion highlights how Americans now live in parallel AI universes. While much of the country associates artificial intelligence with ChatGPT and Google's AI overviews, tech hobbyists are experiencing agentic AI tools that can work for hours autonomously, collapsing months of work into weeks or an afternoon [2]. Tools like Claude Code enable users to generate papers, conduct biology research, and create working software prototypes in under an hour [2]. Microsoft's CEO reports that as much as 30 percent of code is now written by AI, with the company's chief technical officer expecting that figure to hit 95 percent industry-wide by decade's end [2]. Anthropic already reports that up to 90 percent of the company's code is AI-generated, demonstrating the profound software engineering changes underway [2]. The post-chatbot era represents a fundamental shift from conversational AI to autonomous agents capable of navigating complex spreadsheets and performing professional tasks [2]. Anthropic CEO Dario Amodei predicted AI would eliminate half of all entry-level white-collar jobs within one to five years, while Microsoft's AI head Mustafa Suleyman claimed most professional tasks would be automated within 18 months [1].

Critics argue the AI panic relies heavily on anecdotes rather than quantifiable data. Of the 4,783 words in Shumer's essay, none point to concrete evidence suggesting AI tools will put millions of white-collar professionals out of work imminently [1]. The AI industry now faces a "Chicken Little problem," with repeated dire warnings making it difficult to assess genuine threats [5]. When an AI entrepreneur describes artificial intelligence as world-changing technology, skeptics note that this message functions as a sales pitch [5].
Source: NYMag
National productivity statistics remain up slightly but generally within historic ranges, while the Yale Budget Lab found no discernible labor market disruption since ChatGPT's launch [1]. A randomized controlled trial by Model Evaluation and Threat Research (METR) found experienced software developers took 19 percent longer to complete tasks when using AI tools [1]. A Harvard Business Review survey of over 1,000 executives revealed many made layoffs anticipating what AI would do, with only 2 percent cutting jobs due to actual AI implementation [1]. Swedish fintech Klarna had to rehire humans after replacing 700 customer service staff with AI led to quality declines [1].
While generative AI shows clear progress in software development, experts question whether similarly rapid advances will translate to other sectors. Most of the economy differs from the software industry, which works primarily with electrons rather than people or physical objects [3]. According to Census Bureau data, 17.9 percent of workers operated primarily from home in 2021, meaning over 80 percent of jobs required physical presence for tasks not easily replaced by virtual workers [3]. Industries outside software face numerous constraints -- cultural, physical, and regulatory -- that will slow AI's impact on jobs [3]. Drug discovery, despite AI's potential, still requires companies to test inventions in thousands of human subjects over extended periods mandated by law [3]. Building robots to translate AI into physical work requires extracting raw materials and moving them slowly via trucks and container ships [3]. An evidence-based approach suggests societal transformation will unfold more gradually than AI industry warnings imply, echoing patterns from the dot-com era, when the internet proved transformative but took longer than expected [1][4].

Summarized by Navi