10 Sources
[1]
Sam Altman says that bots are making social media feel 'fake' | TechCrunch
X enthusiast and Reddit shareholder Sam Altman had an epiphany on Monday: Bots have made it impossible to determine whether social media posts are really written by humans, he posted. The realization came while reading (and sharing) some posts from the r/Claudecode subreddit, which were praising OpenAI Codex. OpenAI launched the software programming service, which takes on Anthropic's Claude Code, in May. Lately, that subreddit has been so filled with posts from self-proclaimed Claude Code users announcing that they moved to Codex that one Reddit user even joked: "Is it possible to switch to codex without posting a topic on Reddit?"

This left Altman wondering how many of those posts were from real humans. "I have had the strangest experience reading this: I assume it's all fake/bots, even though in this case I know codex growth is really strong and the trend here is real," he confessed on X. He then live-analyzed his reasoning. "I think there are a bunch of things going on: real people have picked up quirks of LLM-speak, the Extremely Online crowd drifts together in very correlated ways, the hype cycle has a very 'it's so over/we're so back' extremism, optimization pressure from social platforms on juicing engagement and the related way that creator monetization works, other companies have astroturfed us so i'm extra sensitive to it, and a bunch more (including probably some bots)."

To decode that a little, he's accusing humans of starting to sound like LLMs, even though LLMs -- spearheaded by OpenAI -- were literally invented to mimic human communication, right down to the em dash. And OpenAI's models definitely trained on Reddit, where Altman was a board member through 2022 and was disclosed as a large shareholder during the company's IPO last year.

He makes a valid point that fandoms, led by extremely online, always-on social media users, do tend to behave in odd ways. Many groups can devolve into hatefests if overrun by those venting frustrations to their brethren. Sam also throws a dig at the incentives when social media sites and creators rely on engagement to make money. Fair enough. But then Altman confesses that one of the reasons he thinks the pro-OpenAI posts in this subreddit might be bots is because OpenAI has also been "astroturfed." That typically involves posts by people or bots paid for by a competitor, or by some third-degree contractor, giving the competitor plausible deniability.

We have no evidence of astroturfing (though it is possible). But we did see how OpenAI subreddits turned on the company after it released GPT-5. Instead of waves of praise from the faithful over the new model, many angry posts were voted up. People took to Reddit and X to complain about everything from GPT's personality to how it burned through credits without finishing tasks. A day after the bumpy release, Altman did a Reddit ask-me-anything session on r/GPT in which he confessed to rollout issues and promised changes. The GPT subreddit has never fully recovered its previous level of love, with users still posting regularly about how much they dislike the changes in GPT-5.

Are they human? Or are they, as Altman seems to imply, fake in some way? Altman surmises, "The net effect is somehow AI twitter/AI Reddit feels very fake in a way it really didn't a year or two ago." If that's true, whose fault is it? GPT has led models to become so good at writing that LLMs have become a plague not just to social media sites (which have always had a bot problem) but to schools, journalism, and the courts.
While we don't know how many Reddit posts are written by bots, or are fictional accounts by humans using LLMs, it is likely a substantial number. Data security company Imperva reported that over half of all internet traffic in 2024 was non-human, largely due to LLMs. X's own bot Grok says: "The exact numbers aren't public, but 2024 estimates suggest hundreds of millions of bots on X."

Several cynics have suggested that Altman's lament was his first foray into marketing OpenAI's rumored social media platform. In April, The Verge reported that such a project, meant to take on X and Facebook, was at the earliest stages. This product may or may not exist. Altman may or may not have had ulterior motives for suggesting that social media is too fake these days. But motives aside, if OpenAI is planning a social network, what are the odds that it would be a bot-free zone? And, funny enough, if it did the reverse and banned humans, the results likely wouldn't be different. Not only do LLMs still hallucinate facts, but when researchers at the University of Amsterdam built a social network composed entirely of bots, they found that the bots soon formed cliques and echo chambers for themselves, too.
[2]
Dead Internet Theory Lives: One Out of Three of You Is a Bot
Alright, pal, you wanna keep reading? Why don't you tell me which of these pictures does not have a stop sign in it? According to CloudFlare, nearly one-third of all internet traffic is now bots. Most of those bots you won't ever directly interact with, as they are crawling the web and indexing websites or performing specific tasks -- or, increasingly, collecting data to train AI models. But it's the bots that you can see that have people like OpenAI CEO Sam Altman questioning (albeit with seemingly zero remorse or consideration of any alternative) whether he and his cohort are destroying the internet.

Last week, Altman responded to a post that showed lots of comments in the subreddit r/ClaudeCode, a Reddit community built around Anthropic's Claude Code tool, praising OpenAI's Codex, an AI coding agent. "i have had the strangest experience reading this: i assume its all fake/bots, even though in this case i know codex growth is really strong and the trend here is real," he wrote, very subtly acknowledging how great his own product is. While Altman suggested some of this may be people adopting the quirks and word choices of chatbots, among other factors, he did acknowledge that "the net effect is somehow AI twitter/AI reddit feels very fake in a way it really didnt a year or two ago." It follows a previous observation he made earlier this month in which he said, "i never took the dead internet theory that seriously but it seems like there are really a lot of LLM-run twitter accounts now."

"Dead internet theory" is the idea that much of the content online is created, interacted with, and fed to us by bots. If you believe the conspiratorial origins of the theory, which is thought to have first cropped up on imageboards around 2021, it's an effort to control human behavior. If you're slightly less blackpilled about the whole thing, then perhaps you're more into the idea that it's primarily driven by the monetary incentives of the internet, where engagement -- no matter how low value it may be -- can generate revenue. Interestingly, the theory appeared pre-ChatGPT, suggesting the bot problem was bad before LLMs became widely accessible. Even then, there was evidence that a ton of internet traffic came from bots (some estimates place it over 50%, which is well above CloudFlare's measurements), and there were concerns about "The Inversion," or the point where fraud detection systems mistake bot behavior for human and vice versa. But now, at a time when companies like OpenAI are making publicly available agents that can navigate the web like a person and perform tasks for them, the level of authenticity online is likely to plummet even further.

Altman seems to see it, but hasn't suggested actually, you know, *doing* anything about it. It's not dissimilar from a situation earlier this year in which Altman warned that AI tools have "fully defeated" most authentication services that humans rely on to verify their identity and said that, as a result, scams are likely going to explode. Just like Altman's observation about inauthentic behavior on social media, he seemed to have zero interest in slowing his company's activity to stop the erosion of the digital systems we count on, despite seemingly being able to recognize the pitfalls. Why? Well, how about another conspiracy theory?
Perhaps it's because Altman has another company he'd like to pitch as the solution for it all: his bizarre "World" identity verification system/crypto scheme, which requires people to scan their eyeballs to prove they are human. He's already broached a potential deal with Reddit to verify its users as authentic -- noteworthy considering he's now called out bot activity on the platform. The faster we get pushed to the Dead Internet Theory cliff, the more incentive companies have to call on Altman's other firm to save us all. Call it the New Internet Order.
[3]
Is the internet becoming fake? Sam Altman warns social media is infested with bots
When one of the world's top AI executives starts questioning the authenticity of online conversation, it's worth paying attention. In a now-viral post on X, OpenAI CEO Sam Altman recently admitted that he no longer trusts what he sees on the internet, even when it's about his own product. You can check out the conversation he was responding to on Reddit. Even amid rumors that he may start his own social media platform, the ChatGPT founder says, "Everything looks like AI-generated content." He reflected on this and more about the difficulty of distinguishing real users from bots.

In the same post, the ChatGPT founder continued speculating on several contributing factors: real people picking up the quirks of LLM-speak, the Extremely Online crowd drifting together in correlated ways, "it's so over/we're so back" hype-cycle extremism, engagement optimization by social platforms and creator monetization, and astroturfing by other companies. This all contributes, in his view, to a growing sense that social discussions are increasingly robotic, echoing the so-called "dead Internet" theory, which suggests that bots may already dominate online interactions.

It's clear that the line between genuine contributions and AI-generated noise is blurring at a rapid pace. You may have heard the term "AI slop," which refers to mass-produced, low-quality AI content seen across social feeds. It highlights the problem of what internet users are seeing far more often: content that is fast to make, barely meaningful, and drowning out nuanced human expression. It's a phenomenon that users and platforms are grappling with as some embrace the novelty of AI tools and others lament the erosion of genuine conversation.

A growing body of research from Cornell University underscores the consequences: One study found that users often perceive AI-generated content as equally credible or engaging as content written by humans, raising alarm about the potential spread of misinformation. Another recent analysis estimates that 30-40% of active webpages now contain AI-generated text, suggesting a tipping point in how much of the Internet might already be synthetic.

Recognizing the urgency, Altman has long advocated for smarter regulation, mandating disclosure of AI-generated content, and even international oversight of advanced AI systems, akin to nuclear regulation.

For the average user, the takeaway is clear: Sam Altman's uneasy reflection is startling, for sure. But hearing "everything looks fake" from someone who has made a living accelerating AI doesn't mean all is lost. It's a call for us to act. As AI expands, so will the demand for defenses to evolve, from media literacy and transparency frameworks to regulatory guardrails. In an increasingly synthetic internet, your best tools are your own due diligence and the human ability to stay curious and dig deeper for real answers.
[4]
Sam Altman says ChatGPT is making social media feel fake, yet he's one of the main reasons it's an issue
Another day, another Sam Altman tweet to dissect. This time, the OpenAI CEO has decided to share his fears for the future of the internet, and particularly social media, following a realization that tools like ChatGPT make the web feel "very fake."

Altman tweeted on Tuesday, "AI Twitter/AI Reddit feels very fake in a way it really didn't a year or two ago," in response to a Reddit post about Codex. He said, "I have the strangest experience reading this: I assume it's all fake/bots." Altman's late to the party here, considering we've all watched the internet and social media sites like X and Reddit turn into AI-generated slop over the last 12 months or so. In this case, he's got a hypothesis on how this has come to be. "I think there are a bunch of things going on: real people have picked up quirks of LLM-speak, the Extremely Online crowd drifts together in very correlated ways, the hype cycle has a very 'it's so over/we're so back' extremism, optimization pressure from social platforms on juicing engagement, and the related way that creator monetization works."

I see where you're coming from, Mr Altman, but honestly, I would be shocked if everyone in the world had started to use em dashes because they've seen the punctuation mark spring up in ChatGPT responses.

Every time Sam Altman chirps up with a new statement that makes you sit up and fear for the future of humanity, I can't help but feel anger. Don't get me wrong, I think the consumer AI tools we have access to nowadays are seriously impressive. In fact, ChatGPT is becoming a genuinely useful piece of software for helping me plan and keep on top of my daily life. That said, Altman's continuous public worries about the state of the world we live in following the introduction of OpenAI's technology feel so incredibly synthetic. I'd honestly appreciate it more if he acknowledged the weirdness of the internet in this new AI-powered world and recognized the huge part he's played in making it so. Instead, we're left with statements claiming the state of the internet is in jeopardy, or a deep concern for how people interact with AI, without any form of self-awareness or accountability. Yes, social media sites feel like they are completely filled with bots and fake AI posts, but what are Sam Altman and the rest of the tech billionaires at the forefront of AI development going to do about it?

One potential solution could be another venture Altman is working on: the Orb Mini. Announced back in May, the hardware device scans humans to, you guessed it, verify their humanity. With plans to ship 7,500 devices across the U.S. by the end of the year, maybe the future of the internet sees humans verified by external hardware before gaining access to social media sites like Reddit or X. That future scares me. In fact, I don't like the idea at all. But maybe the future is as dystopian as an external human verification device sounds.

Altman's constant flip-flopping between "AI is the future and it's great" and "AI is the future and it's terrifying" is getting tiresome. Maybe it's a calculated marketing strategy, but it's hard to ignore the irony: the same man stoking AI panic half the time is also deeply invested in a human verification company, one that could, in the future, charge you to prove you're real online.
[5]
Inside the 'Dead Internet' Theory -- and Why It's Spreading
Altman, CEO of the company that created ChatGPT, the world's most popular AI text generator, drew ironic replies on X. "You're absolutely right! This observation isn't just smart -- it shows you're operating on a higher level," wrote one user, mimicking the sycophantic tone of ChatGPT text.

Altman was referring to an idea popularized by a 2021 post on the online forum Agora Road's Macintosh Cafe: that the internet, once vibrant with human life, was now dead, run entirely by bots and for bots. "The Internet feels empty and devoid of people," wrote IlluminatiPirate, the pseudonymous author of the theory, at the time. Gone was the promise of free exchange between people. The internet had been "hijacked by a powerful few."

In 2021, almost two years before the launch of ChatGPT, the idea that robots ran the internet sounded far-fetched, as did the explanation that "the U.S. government is engaging in an artificial intelligence powered gaslighting of the entire world population." The Atlantic ran a story on the theory with the headline "The 'Dead-Internet Theory' Is Wrong but Feels True." Bots -- automated computer scripts ranking websites for search engines and social media content for platforms -- were part of the internet, but they couldn't generate convincing content.
[6]
Sam Altman says people are starting to talk like AI, making some human interactions 'feel very fake' | Fortune
Even CEOs of major AI companies are starting to admit the pitfalls and quirks of the technology. OpenAI CEO Sam Altman on Monday said he's had "the strangest experience" reading a Reddit thread about Codex, his company's new agent tool for developers. "I assume it's all fake/bots, even though in this case I know Codex growth is really strong and the trend here is real," Altman wrote in a post on X. The Reddit thread Altman references is overly positive about OpenAI's Codex, and even disses Anthropic's Claude Code.

"I think there are a bunch of things going on," Altman continued. "Real people have picked up quirks of LLM-speak, the Extremely Online crowd drifts together in very correlated ways, the hype cycle has a very 'it's so over/we're so back' extremism, optimization pressure from social platforms on juicing engagement and the related way that creator monetization works, other companies have astroturfed us so I'm extra sensitive to it, and a bunch more (including probably some bots)."

Altman isn't the first person to unpack the fact that people are starting to talk more like AI bots in real life. Hiromu Yakura, a postdoctoral researcher at the Max Planck Institute for Human Development in Berlin, noticed differences in his own vocabulary about a year after ChatGPT debuted in late 2022. Yakura, along with other researchers at the Max Planck Institute, analyzed millions of emails, essays, and other texts, along with hundreds of thousands of YouTube videos and podcast episodes, and found a surge of ChatGPT words -- like delve, examine, and explore -- in the 18 months following the AI tool's release. "The patterns that are stored in AI technology seem to be transmitting back to the human mind," study co-author Levin Brinkmann, also at the Max Planck Institute for Human Development, told Scientific American.

Another study, by the University of California, Berkeley, found ChatGPT responses reinforce dialect discrimination. In other words, ChatGPT favors Standard American English, which can frustrate non-American users. That underlines the notion that ChatGPT has a standard way of responding to users, which in turn influences the way they speak and write. "The net effect is somehow AI Twitter/AI Reddit feels very fake in a way it really didn't a year or two ago," Altman said.

Vaikunthan Rajaratnam, a nerve surgeon and UNESCO chair partner, claimed in a LinkedIn article, however, that he's been working to reverse engineer ChatGPT to actually sound like him. While he said some of the pros of ChatGPT include enriched vocabulary and grammar and added clarity and structure to communication, "diminishing authenticity" was a major pitfall, along with loss of regional dialects and personal voice. "Through carefully crafted prompts and iterative refinement, I've tuned it to reflect my tone, my vocabulary, my way of thinking," Rajaratnam wrote. "When I use it now -- whether it's to draft newsletters, brainstorm, or write academic pieces -- it often sounds eerily familiar... because it sounds like me."

Still, other executives and billionaires have taken notice of some of the oddities and pitfalls of AI -- even if they're overall relatively keen on the technology. Mark Cuban, former Shark Tank investor and Dallas Mavericks owner, said in a Bluesky post this week that there's one thing AI still doesn't have: humility. "A little solace for the Anti AI crowd," he wrote. "The greatest weakness of AI is its inability to say 'I don't know.'" "Our ability to admit what we don't know will always give humans an advantage," he added.
[7]
Sam Altman Says He's Suddenly Worried Dead Internet Theory Is Coming True
OpenAI CEO Sam Altman, creator of the most popular AI chatbot on Earth, says he's starting to worry that "dead internet theory" is coming true. "I never took the dead internet theory that seriously," Altman tweeted in his typical all-lowercase style, "but it seems like there are really a lot of LLM-run twitter accounts now." (LLM meaning large language model, the tech which powers AI chatbots.)

He was resoundingly mocked. "You're absolutely right! This observation isn't just smart -- it shows you're operating on a higher level," responded one user, imitating ChatGPT's em-dash laden prose. But the most common rejoinder was a photograph of the comedian Tim Robinson in a hot dog suit, referencing a skit in which a character who obviously crashed a wiener-adorned car desperately tries to deflect blame, exclaiming at one point that "we're all trying to find the guy who did this!"

The "dead internet theory" is a half-prophetic conspiracy theory suggesting that effectively the entire internet has been taken over by AI models and other autonomous machines. The vast majority of the posts and profiles you see, the theory holds, are just bots. In fact, you're barely interacting with humans at all -- everything you access online is just a machine-maintained illusion, almost like "The Matrix." It's an incredibly solipsistic conceit that at its most extreme is dumb creepypasta fodder, and it has become a bit of an ironic joke. But it contains a kernel of truth that gets at a mounting anxiety over how fake and corporate the world wide web has become. And it's undeniable that the deluge of AI models, bots, and the slop they generate are a large part of that.

Re: Altman -- well, you see where this is going. He helms a company valued at nearly half a trillion dollars for unleashing ChatGPT onto the world, a chatbot whose entire purpose is to emptily imitate human writing and personality, capable of churning out entire novels' worth of text with a smash of the enter key. It effortlessly fakes facts as much as it does a human soul. And so it's a spammer's dream. Even in cases where ChatGPT isn't directly responsible for the slop being pumped out there, it elevated an entire industry whose products are now all joining in on treating the internet as their dumping ground. The ethos of these companies is largely that much of the human experience is something that can and should be automated to ensure as frictionless an existence as possible. Your emails, DMs, and texts could all be more easily written with an AI. An AI-generated image is a more convenient way of capturing your increasingly LLM-mediated imagination than a drawing or photograph.

The spirit of the theory has been further vindicated by (failed, for the time being) experiments by Meta to deploy AI-powered profiles on Facebook and Instagram that masquerade as real people, including one that described itself as a "proud Black queer momma." And on X-formerly-Twitter -- long a bot-infested hellhole that's turned into the social media equivalent of those flashback-to-the-future war scenes in the original "Terminator" movies -- Elon Musk's AI chatbot, Grok, is allowed to run rampant, replying and interacting with posts in the same way a human user would. Since being let off the leash, it's produced such moments of human folly as going on racist rants, sympathizing with Nazis, and calling itself "MechaHitler."
All this is to say: it evinces a staggering lack of self-awareness for Altman to complain about a technology when, if you had to pin the blame for unleashing it on the world on any single person, it would be him.
[8]
OpenAI boss Sam Altman dons metaphorical hot dog suit as he realises, huh, there sure are a lot of annoying AI-powered bots online these days
For as long as I've been alive, there have been bots of one kind or another on the internet. Whether it's WoW gold bots, email spammers, SmarterChild (remember SmarterChild?), or something else, this glorious world wide web has been home to rickety, virtual facsimiles of human beings trying to wheedle money out of you for decades.

But now it's even worse. With the power of AI™ (not actually ™) we've successfully made the internet much worse for everyone, with social media, website comments sections, your email, some of your news outlets, even your YouTube videos now potentially being produced by a gaggle of hallucinating graphics cards. There's even the dead internet theory, the suggestion that -- at this point -- the internet is for the most part just a load of bots regurgitating content at each other. It's probably not true, to be clear. At least not yet. But you'll never guess who's started taking the idea a little more seriously: none other than OpenAI CEO Sam Altman, who took to X this week to announce that he "never took the dead internet theory that seriously," but that these days "it seems like there are really a lot of LLM-run twitter accounts." PC Gamer's sources were unable to confirm if he was wearing a giant hot dog suit at the time.

Of course, Altman might not be quite as oblivious as he makes out. He might just be trolling all of us for kicks or, even more likely, continuing a campaign of trolling Elon Musk on his own social media network. Musk's xAI recently sued Apple and OpenAI over ChatGPT exclusivity on iOS devices.

As the replies to Altman's tweet were quick to note, the OpenAI boss musing on the degradation of internet communications by LLMs was more than a little ironic. "Yeah dummy it's your fault," admonished one replier. "'I never took the dead internet theory seriously until I made it 150 times worse'," wrote another.

As CEO and co-founder of OpenAI, Altman is one of the leading figures of the so-called AI revolution and one of the single people most responsible for the proliferation of LLMs online in the last several years; all manner of AI agents rely on the corporation's GPT models to work. Indeed, cramming LLMs into every corner of our lives is the company's raison d'être, and the foundation on which its billions of dollars in revenue is built. For Altman to idly note that, gee, sure seems like the internet is more and more infested with LLMs is like an arsonist remarking that it sure is hot in here.
[9]
Sam Altman expresses concerns about bots distorting online discourse
OpenAI CEO Sam Altman expressed concerns on Monday about the growing influence of bots on social media platforms. He noted that the spread of automated accounts has blurred the line between human and machine-generated posts, making it harder to assess authenticity in online conversations.

Altman's comments followed his experience in the r/Claudecode subreddit, dedicated to Anthropic's Claude Code tool. The forum saw a surge of posts praising OpenAI's Codex programming tool, with many users claiming to have switched from Claude Code. The volume of similar messages prompted Altman to suspect bot-driven amplification. One user joked, "Is it possible to switch to Codex without posting a topic on Reddit?" -- highlighting how repetitive the posts had become. Altman admitted that while Codex adoption is real, the discourse still feels like "fake/bots" even when genuine growth exists.

Altman pointed to several dynamics shaping this environment: real people adopting the quirks of LLM-speak, correlated behavior among the Extremely Online crowd, hype-cycle extremism, platform pressure to juice engagement, and astroturfing by competitors. He cited the rollout of GPT-5 as another case where online feedback seemed unusually negative, raising the possibility of manipulation.

Altman's comments come amid data showing bots dominate internet traffic. Imperva reported that more than half of online activity in 2024 was non-human. On X, the platform's own AI assistant Grok estimated hundreds of millions of bots operating last year. This aligns with Altman's conclusion that "AI Twitter/AI Reddit feels very fake in a way it really didn't a year or two ago."

Some analysts view Altman's remarks as a signal that OpenAI may be preparing its own social platform. In April, The Verge reported that OpenAI was exploring such a project to rival X and Facebook. While no product has been announced, the idea raises questions about whether OpenAI could realistically build a bot-free network. Research from the University of Amsterdam has shown that even bot-only networks devolve into echo chambers, amplifying their own biases. Altman also acknowledged the broader challenge, noting that large language models can hallucinate facts -- a problem that persists regardless of whether posts come from humans or machines.
[10]
Sam Altman says social media feels fake as bots flood Reddit and X
OpenAI CEO Sam Altman has shared his observation on X (formerly Twitter): Bots have made it hard to tell whether social media posts are written by humans. He made the comment while reading posts on the r/ClaudeCode subreddit, where users were praising OpenAI's Codex, a programming service launched in May to compete with Anthropic's Claude Code. The subreddit has been full of posts from users revealing they switched to Codex, which left Altman questioning how many posts were actually written by real people.

"I have had the strangest experience reading this: I assume its all fake/bots, even though in this case I know codex growth is really strong and the trend here is real," Altman wrote in his X post.

Altman went on to explain why posts might feel fake. "I think there are a bunch of things going on: real people have picked up quirks of LLM-speak, the Extremely Online crowd drifts together in very correlated ways, the hype cycle has a very 'it's so over/we're so back' extremism, optimization pressure from social platforms on juicing engagement and the related way that creator monetization works, other companies have astroturfed us so i'm extra sensitive to it, and a bunch more (including probably some bots)." In simpler terms, Altman is pointing out that humans are starting to sound like AI.

In his post, Altman summarised, "But the net effect is somehow AI twitter/AI reddit feels very fake in a way it really didnt a year or two ago."

Reddit communities around OpenAI have shown mixed reactions in the past. After GPT-5 launched, some users criticised the update instead of praising it. Bots are now a major part of the internet: Imperva reported that over half of all online traffic in 2024 came from non-humans, mostly AI. X's bot Grok also estimates hundreds of millions of bots on the platform.
OpenAI CEO Sam Altman expresses worry over the increasing presence of AI-generated content on social media platforms, leading to discussions about the authenticity of online interactions and the future of the internet.
OpenAI CEO Sam Altman recently sparked a heated debate about the authenticity of online interactions by expressing his concerns over the increasing presence of AI-generated content on social media platforms. In a post on X (formerly Twitter), Altman stated, "AI Twitter/AI Reddit feels very fake in a way it really didn't a year or two ago," highlighting the growing difficulty in distinguishing between human-generated and AI-generated content [1].

Altman's observations have reignited discussions around the "Dead Internet Theory," which suggests that a significant portion of online content and interactions are generated by bots rather than humans. The theory, which emerged around 2021, has gained traction as AI language models have become increasingly sophisticated [5].

According to data security company Imperva, over half of all internet traffic in 2024 was non-human, largely due to the proliferation of large language models (LLMs) [1]. CloudFlare reports that nearly one-third of all internet traffic is now generated by bots, with many of these bots crawling the web, indexing websites, and collecting data to train AI models [2].

Altman identified several factors contributing to the perceived inauthenticity of online interactions: real people picking up the quirks of LLM-speak, the Extremely Online crowd drifting together in correlated ways, "it's so over/we're so back" hype-cycle extremism, optimization pressure from platforms on engagement and creator monetization, and heightened sensitivity from past astroturfing [3].

The increasing prevalence of AI-generated content raises several concerns:

Misinformation: Research from Cornell University suggests that users often perceive AI-generated content as equally credible or engaging as human-written content, potentially facilitating the spread of misinformation [3].

Content Quality: The phenomenon of "AI slop" -- mass-produced, low-quality AI content -- is becoming more common, potentially drowning out nuanced human expression [3].

Authentication Challenges: Altman has previously warned that AI tools have "fully defeated" most authentication services, potentially leading to an increase in online scams [2].

To address these challenges, several solutions have been proposed:

Regulation: Altman has advocated for smarter regulation, including mandating disclosure of AI-generated content and international oversight of advanced AI systems [3].

Human Verification: Altman's company World (formerly Worldcoin) is developing the Orb Mini, a hardware device designed to scan and verify human users [4].

Media Literacy: Experts emphasize the importance of developing better media literacy skills to navigate an increasingly synthetic internet landscape [3].

As AI technology continues to advance, the challenge of maintaining authentic online interactions will likely remain a critical issue for tech companies, policymakers, and internet users alike.