2 Sources
[1]
Sam Altman Says He's Suddenly Worried Dead Internet Theory Is Coming True
OpenAI CEO Sam Altman, creator of the most popular AI chatbot on Earth, says he's starting to worry that "dead internet theory" is coming true. "I never took the dead internet theory that seriously," Altman tweeted in his typical all-lowercase style, "but it seems like there are really a lot of LLM-run twitter accounts now." (LLM meaning large language model, the tech which powers AI chatbots.)
He was resoundingly mocked. "You're absolutely right! This observation isn't just smart -- it shows you're operating on a higher level," responded one user, imitating ChatGPT's em-dash laden prose. But the most common rejoinder was a photograph of the comedian Tim Robinson in a hot dog suit, referencing a skit in which a character who obviously crashed a wiener-adorned car desperately tries to deflect blame, exclaiming at one point that "we're all trying to find the guy who did this!"
The "dead internet theory" is a half-prophetic conspiracy that suggests that effectively the entire internet has been taken over by AI models and other autonomous machines. The vast majority of the posts and profiles you see, the theory holds, are just bots. In fact, you're barely interacting with humans at all -- everything you access online is just a machine-maintained illusion, almost like "The Matrix." It's an incredibly solipsistic conceit that at its most extreme is dumb creepypasta fodder, and has become a bit of an ironic joke. But it contains a kernel of truth that gets at a mounting anxiety about how fake and corporate the world wide web has become. And it's undeniable that the deluge of AI models, bots, and the slop they generate are a large part of that.
Re: Altman -- well, you see where this is going. He helms a company valued at nearly half a trillion dollars for unleashing ChatGPT onto the world, a chatbot whose entire purpose is to emptily imitate human writing and personality, capable of churning out entire novels' worth of text with a smash of the enter key. It effortlessly fakes facts as much as it does a human soul. And so it's a spammer's dream. Even in cases where ChatGPT isn't directly responsible for the slop being pumped out there, it elevated the entire industry whose products are now all joining in on treating the internet as their dumping ground.
The ethos of these companies is largely that much of the human experience is something that can and should be automated to ensure as frictionless an existence as possible. Your emails, DMs, and texts could all be more easily written with an AI. An AI-generated image is a more convenient way of capturing your increasingly LLM-mediated imagination than a drawing or photograph.
The spirit of the theory has been further vindicated by (failed, for the time being) experiments by Meta to deploy AI-powered profiles on Facebook and Instagram that masquerade as real people, including one that described itself as a "proud Black queer momma." And on X-formerly-Twitter -- long a bot-infested hellhole that's turned into the social media equivalent of those flashback-to-the-future war scenes in the original "Terminator" movies -- Elon Musk's AI chatbot, Grok, is allowed to run rampant, replying and interacting with posts in the same way a human user would. Since being let off the leash, it's produced such moments of human folly as going on racist rants, sympathizing with Nazis, and calling itself "MechaHitler."
All this is to say that it evinces a staggering lack of self-awareness for Altman to complain about a technology that, if you had to pin the blame for unleashing it on the world on any single person, would be pinned on him.
[2]
OpenAI boss Sam Altman dons metaphorical hot dog suit as he realises, huh, there sure are a lot of annoying AI-powered bots online these days
For as long as I've been alive, there have been bots of one kind or another on the internet. Whether it's WoW gold bots, email spammers, SmarterChild (remember SmarterChild?), or something else, this glorious world wide web has been home to rickety, virtual facsimiles of human beings trying to wheedle money out of you for decades.
But now it's even worse. With the power of AI™ (not actually ™) we've successfully made the internet much worse for everyone, with social media, website comments sections, your email, various of your news outlets, even your YouTube videos now potentially being produced by a gaggle of hallucinating graphics cards. There's even the dead internet theory, the suggestion that -- at this point -- the internet is for the most part just a load of bots regurgitating content at each other. It's probably not true, to be clear. At least not yet.
But you'll never guess who's started taking the idea a little more seriously: none other than OpenAI CEO Sam Altman, who took to X this week to announce that he "never took the dead internet theory that seriously," but that these days "it seems like there are really a lot of LLM-run twitter accounts." PC Gamer's sources were unable to confirm if he was wearing a giant hot dog suit at the time.
Of course, Altman might not be quite as oblivious as he makes out. He might just be trolling all of us for kicks or, even more likely, continuing a campaign of trolling Elon Musk on his own social media network. Musk's xAI recently sued Apple and OpenAI over ChatGPT exclusivity on iOS devices.
As the replies to Altman's tweet were quick to note, the OpenAI boss musing on the degradation of internet communications by LLMs was more than a little ironic. "Yeah dummy it's your fault," admonished one replier. "'I never took the dead internet theory seriously until I made it 150 times worse'," wrote another.
As CEO and co-founder of OpenAI, Altman is one of the leading figures of the so-called AI revolution and one of the single people most responsible for the proliferation of LLMs online in the last several years; all manner of AI agents rely on the corporation's GPT models to work. Indeed, cramming LLMs into every corner of our lives is the company's raison d'être, and the foundation on which its billions of dollars in revenue is built. For Altman to idly note that, gee, sure seems like the internet is more and more infested with LLMs is like an arsonist remarking that it sure is hot in here.
Sam Altman, CEO of OpenAI, sparks debate by acknowledging the increasing presence of AI-powered accounts on social media, drawing criticism and ironic comparisons to his role in AI proliferation.
OpenAI CEO Sam Altman has sparked a heated debate in the tech world with his recent tweet expressing concern about the increasing presence of AI-powered accounts on social media platforms. Altman, whose company is behind the popular AI chatbot ChatGPT, stated, "I never took the dead internet theory that seriously, but it seems like there are really a lot of LLM-run twitter accounts now" [1].
The 'Dead Internet Theory' is a controversial concept suggesting that a significant portion of online content and interactions are generated by AI models and autonomous machines rather than humans. While often dismissed as a conspiracy theory, Altman's comments have reignited discussions about the authenticity of online interactions in the age of advanced AI [1].
Altman's observation has been met with a mix of mockery and criticism, with many pointing out the irony of his statement. As the CEO of OpenAI, Altman is considered one of the leading figures in the AI revolution and a key player in the proliferation of large language models (LLMs) online [2].
Critics have likened Altman's comments to "an arsonist remarking that it sure is hot in here," highlighting the role OpenAI has played in developing and promoting AI technologies that are now being used to create the very accounts he expresses concern about [2].
The discussion surrounding Altman's tweet underscores broader concerns about the impact of AI on online communication and content creation. With the rise of sophisticated AI models like ChatGPT, there are growing worries about the potential for these technologies to flood the internet with AI-generated content, making it increasingly difficult to distinguish between human and machine-generated interactions [1].
Altman's comments come at a time when the AI industry is facing increased scrutiny over the ethical implications of its technologies. The incident has reignited debates about the responsibility of AI companies in managing the societal impact of their creations. It also raises questions about the future of online communication and the potential need for new strategies to maintain the authenticity of human interactions on the internet [2].
Summarized by Navi