4 Sources
[1]
Reddit will require "fishy" accounts to verify they are run by a human
Reddit will require accounts that exhibit "automated or otherwise fishy behavior" to verify that a human runs them, Reddit CEO Steve Huffman said in a Reddit post today. The verification process aims to keep unwanted bots from flooding Reddit at a time when AI bots are poised to take over the Internet. "As AI becomes a bigger part of the Internet, we want to make sure that when you're on Reddit, you know when you're talking to a person and when you're not," Huffman said.

Human verification will only occur if Reddit suspects that an account is a bot. This is "rare" and won't apply to "most users," Huffman emphasized. If the account cannot prove that it's human, it "may be restricted," he said.

Reddit will check if an account is run by a human by using third-party tools that Huffman said won't expose users' true identity, Reddit username, or Reddit activity. Current methods that Reddit is exploring include passkeys, which Huffman said are a great starting point but don't provide any "proof of individuality or anything other than 'a human probably did something.'" Reddit is also looking into third-party biometric services, like World ID, which uses iris-scanning tech. "I think the Internet needs verification solutions like this, where your account information, usage data, and identity never mix," Huffman said.

A last resort may be third-party government ID services, which Reddit is already required to use in some geographies, like the UK. Huffman said this is "the least secure, least private, and least preferred" method for human verification on Reddit. "When we are forced to do this, we design the integrations so that we never actually see your ID information, so your Reddit data cannot be tied to you," he added.

Additionally, Huffman announced that accounts that use bots in permitted ways will get an App label. Reddit has posted information about how developers can get their apps labeled.

The announcement comes amid concern from some industry commentators that AI bot traffic online could soon surpass human traffic. Web agents are becoming more prevalent and flocking to social media sites. A relaunched Digg, for example, shut down its open beta after three months due to an "unprecedented bot problem" led by "sophisticated AI agents and automated accounts," CEO Justin Mezzell said in March.

Ensuring that Reddit isn't overtaken by bots is in Reddit's best interest financially. The platform positions itself to users as a place to have conversations with real people about human topics and points of interest. It has also been increasingly selling itself to advertisers as a way to push products to real people. And Reddit has made millions by allowing AI companies to train large language models on its years' worth of human-generated content; it has sued and blocked companies that it believes have wrongfully scraped that content without paying.

Reddit already removes an average of 100,000 accounts per day that use nefarious bots and post spam, per Huffman, who said that the removals often happen before users see the accounts. Reddit also plans to make it easier for users to report accounts that they think are bots.

AI-generated content still allowed

Reddit is exploring ways to limit bots on the platform but refraining from going after humans who employ chatbots to create posts and comments. Reddit hasn't confirmed how much content on the site is AI-generated, but battling AI slop on Reddit has proven challenging for moderators, even when subreddits ban the use of generative AI.
"We'll monitor its usage and see what happens as we crack down even more on automated accounts. As always, communities can set their own standards if they want," Huffman said of AI-generated content. Disclosure: Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder in Reddit.
[2]
Reddit takes on the bots with new 'human verification' requirements for fishy behavior | TechCrunch
Would-be Reddit competitor Digg just shut down because it couldn't get a handle on the bots overrunning its site. On Wednesday, Reddit said it's taking on the challenge itself. The company will begin labeling automated accounts that are providing a service to users, similar to how the "good bots" are labeled on X, and it will now require accounts that are suspected of being bots to verify that they're human.

Reddit stresses this is not going to be a sitewide verification requirement; it will only occur if something suggests that the account isn't human, including its activity on the site or other technical markers. If the account can't pass the test, it may be restricted, Reddit said.

To identify potential bots, Reddit is using specialized tooling that looks at account-level signals and other factors, like how quickly the account is attempting to write or post content. Using AI to write posts or comments, however, is not against its policies (though community moderators may set their own rules).

To verify an account is human, Reddit will leverage third-party tools such as passkeys from Apple, Google, and YubiKey; third-party biometric services like Face ID or even Sam Altman's World ID; or, in some countries, government IDs. Reddit notes this last category may be required in some countries, like the U.K. and Australia, and in some U.S. states because of local age verification regulations, but it's not the company's preferred method.

"If we need to verify an account is human, we'll do it in a privacy-first way," Reddit co-founder and CEO Steve Huffman wrote in the announcement Wednesday. "Our aim is to confirm there is a person behind the account, not who that person is. The goal is to increase transparency of what is what on Reddit while preserving the anonymity that makes Reddit unique. You shouldn't have to sacrifice one for the other."

The changes are meant to address the growing problem of bots engaging on social platforms and the web more broadly, where they're often used to influence politics, spread misinformation, inflate popularity, secretly market products, generate fake ad clicks, and more. According to Cloudflare, traffic from bots will exceed human traffic by 2027, when you include bots like web crawlers and AI agents in the mix.

Reddit, in particular, has become a popular destination for bots that attempt to manipulate narratives, astroturf to shill for companies or their products, repost links, post spam, drive traffic, conduct research, and more. Plus, because Reddit's content is used for AI training thanks to lucrative deals with AI model providers, there's suspicion that bots are even posting questions on the site to generate more training data, particularly in areas where AI is lacking information.

Reddit's other co-founder, Alexis Ohanian, has also addressed a related problem known as the "dead internet theory," a conjecture that bots outnumber humans online and that the vast majority of content, interactions, and web activity on the internet is automated or AI-generated rather than from people. In the age of AI agents, the theory is becoming a reality.

The company announced last year that it would begin to require human verification in response to the growing number of bots and the need to meet "evolving regulatory requirements." But the company today notes that the current solutions, which Huffman recently discussed on the TBPN podcast, aren't the best.
"The best long-term solutions will be decentralized, individualized, private, and ideally not require an ID at all," Huffman wrote in today's announcement. Alongside these changes, Reddit said it would continue to remove bots and spam, where it averages 100,000 account removals per day, and rely on reports of suspected bots, with improved tooling still to come. Developers running so-called good bots can learn more about labeling them with the new "APP" label in the r/redditdev community.
[3]
Reddit accounts with 'fishy' bot-like behavior will soon need to prove they're human
Reddit is taking new steps to identify bots on the platform -- a process that may require some users to confirm that they're human. In a post on Wednesday, Reddit CEO Steve Huffman writes that the company will introduce a labeling system for accounts registered as bots and ask users with "automated" or "fishy behavior" to verify that they're human using methods like fingerprint scanning or submitting their ID.

With this update, developers can register automated accounts with Reddit, which will then receive an "[APP]" label. However, Reddit also notes that it will be on the lookout for unlabeled accounts with suspicious behavior. "If something suggests an account isn't human, including automation (hi, web agents), we may ask it to confirm there's a person behind it," Huffman writes, adding that these cases "will be rare and will not apply to most users."

Reddit will ask users behind suspected bot accounts to verify that they're human, and is exploring several verification methods to do so without actually identifying who the person is. That includes asking users to complete a passkey check, such as scanning their fingerprint on a smartphone, or entering a PIN. It's also looking into using third-party biometric services, like the Sam Altman-backed World ID, which uses an eyeball-scanning orb to verify humanness. Huffman brings up third-party ID verification services as well, which he says are "the least secure, least private, and least preferred" verification method. He adds that the UK, Australia, and some US states already require it to support this type of verification. Suspected bot accounts that are unable to verify their humanness "may be restricted," according to Huffman.

Last year, Reddit began testing account verification for brands and individual users. Huffman hinted at launching a bot verification system in a letter to shareholders in February, and floated the idea of using Face ID to verify a user's humanness during an interview on TBPN this week.

Along with this update, Huffman says Reddit is going to make reporting suspected bots "easier and more flexible" -- though the platform isn't going to come down too hard on all accounts using AI to write. "We'll monitor its usage and see what happens as we crack down even more on automated accounts," Huffman says. "Our current focus is to ensure there is a real, live human behind the accounts you're seeing."
[4]
Reddit will prompt some accounts to 'verify humanness' in latest bot crackdown
Reddit CEO Steve Huffman has detailed the company's latest plan to fight bots, and it means that some accounts will need to "verify humanness," though the company is stopping short of widespread identity verification. In an update, Huffman said that in "rare" cases accounts that seem "fishy" will be prompted for additional verification. Such prompts "will not apply to most users," according to Huffman, but will apply to accounts where Reddit detects signs of automated posting or bot-like behavior. If the account doesn't pass the verification test, it may be "restricted" from the platform.

For now, verification will take the form of on-device methods, including Face ID and passkeys. But the company is considering alternative methods, including World ID, the face-scanning orb company run by Sam Altman. "I think the internet needs verification solutions like this, where your account information, usage data, and identity never mix," Huffman writes.

As part of the new policy, Reddit is also adding an "[APP]" label to existing "good" bots on the platform and making it easier for users to report suspected "bad" bots. The company is also grappling with a growing number of age verification laws. Reddit is "exploring" ways to "comply with these regulations without compromising user privacy," Huffman said.

The company is clearly trying to walk a careful line in how it approaches verification. Huffman notes that Reddit intends to "confirm humanness" rather than verify users' actual identities, which would erode the anonymity that Reddit is known for. But the rise of agentic AI has meant that Reddit is contending with the same sorts of bot-driven spam that took down the short-lived reboot of Digg.

Of course, Reddit is also filled with AI-generated material that's shared by actual humans but may be considered spammy by other users. The company has no plans to crack down on such content, at least for now, according to Huffman. "For better or worse, using AI to write is part of how people will communicate in the future (albeit annoying), so our current focus is to ensure there is a real, live human behind the accounts you're seeing."
Reddit CEO Steve Huffman announced that accounts exhibiting automated or fishy behavior must verify humanness using methods like passkeys, Face ID, or World ID. The move aims to combat bots flooding the platform as AI bot traffic threatens to overtake human activity online. Reddit already removes 100,000 bot accounts daily.
Reddit is taking decisive action against the growing threat of bots by requiring accounts that exhibit automated behavior or suspicious activity to verify humanness, CEO Steve Huffman announced in a post on Wednesday [1][2]. The new policy marks a significant escalation in the social media platform's ongoing bot crackdown as AI bot traffic threatens to overwhelm human interactions across the internet. Huffman emphasized that human verification will only occur in "rare" cases and won't apply to "most users," targeting instead those accounts where something suggests automated or fishy behavior [3]. If an account cannot prove it's human, it "may be restricted" from the platform [4].
Source: Engadget
Reddit will leverage third-party tools to verify humanness without exposing users' true identity, Reddit username, or activity data. Current methods being explored include passkeys from Apple, Google, and YubiKey, which Huffman described as "a great starting point" though they only provide proof that "a human probably did something" [1]. The platform is also evaluating biometric services like Face ID and World ID, the iris-scanning technology backed by Sam Altman. "I think the Internet needs verification solutions like this, where your account information, usage data, and identity never mix," Huffman stated [1]. As a last resort, Reddit may use third-party government IDs, which Huffman called "the least secure, least private, and least preferred" method, though some geographies like the UK and Australia already require it for age verification [2].
Source: TechCrunch
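Huffman's insistence that "account information, usage data, and identity never mix" implies the verifier returns only a yes/no attestation that is stored apart from any identity data. Below is a minimal sketch of that separation; the `HumanVerifier` interface, the `verify_humanness` function, and the challenge handling are assumptions made for illustration, not Reddit's or World ID's actual API.

```python
import secrets
from typing import Protocol


class HumanVerifier(Protocol):
    """Hypothetical interface for a passkey, biometric, or ID provider."""
    def confirm_human(self, challenge: bytes) -> bool: ...


def verify_humanness(account_id: str,
                     verifier: HumanVerifier,
                     attestations: dict[str, bool]) -> bool:
    """Record only the yes/no outcome, never the verifier's identity data.

    The provider receives a random, single-use challenge but no Reddit
    account data; the platform stores only whether the check passed, not
    a name, ID document, or biometric template.
    """
    challenge = secrets.token_bytes(32)          # single-use nonce
    passed = verifier.confirm_human(challenge)   # provider never sees Reddit data
    attestations[account_id] = passed            # platform never sees identity data
    return passed
```

The design choice this sketch illustrates is the one Huffman describes: the humanness attestation and the account's Reddit activity live in separate systems, so failing or passing the check reveals nothing about who the user is.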
The timing of this announcement reflects growing industry concern that bots could soon dominate online spaces. According to Cloudflare, bot traffic will exceed human traffic by 2027 when including web crawlers and AI agents [2]. The recently relaunched Digg shut down its open beta after just three months due to an "unprecedented bot problem" driven by "sophisticated AI agents and automated accounts," CEO Justin Mezzell said in March [1]. This reality has given credence to the dead internet theory, a conjecture that bots outnumber humans online and that most content is automated rather than human-generated [2]. For Reddit, ensuring user authenticity is critical both for maintaining its value proposition to advertisers and protecting its lucrative AI training data deals, which have generated millions in revenue [1].
Source: Ars Technica
Alongside the verification requirements, Reddit announced that accounts using bots in permitted ways will receive an App label, similar to how "good bots" are labeled on X [2][3]. Developers can register their automated accounts with Reddit to receive this designation. The platform already removes an average of 100,000 accounts per day that use nefarious bots and post spam, often before users even see them, according to Huffman [1]. Reddit also plans to make reporting suspected bot accounts "easier and more flexible" for users [3].

While Reddit is aggressively targeting automated accounts, the platform is taking a more lenient approach to AI-generated content created by verified humans. "For better or worse, using AI to write is part of how people will communicate in the future (albeit annoying), so our current focus is to ensure there is a real, live human behind the accounts you're seeing," Huffman explained [4]. The company will monitor usage patterns as it intensifies efforts against automated accounts, though individual communities can set their own standards regarding AI-generated posts [1]. This distinction matters as Reddit grapples with concerns that bots may be posting questions to generate more AI training data, particularly in areas where information is lacking [2]. The challenge ahead involves balancing user anonymity with verification needs, as agentic AI continues to blur the lines between human and machine-generated activity across the internet [4].

Summarized by Navi
Topics: Technology, Entertainment and Society, Policy and Regulation