7 Sources
[1]
Reddit will require "fishy" accounts to verify they are run by a human
Reddit will require accounts that exhibit "automated or otherwise fishy behavior" to verify that a human runs them, Reddit CEO Steve Huffman said in a Reddit post today. The verification process aims to prevent unwanted bots from flooding Reddit at a time when AI bots are poised to take over the Internet. "As AI becomes a bigger part of the Internet, we want to make sure that when you're on Reddit, you know when you're talking to a person and when you're not," Huffman said. Human verification will only occur if Reddit suspects that an account is a bot. This is "rare" and won't apply to "most users," Huffman emphasized. If the account cannot prove that it's human, it "may be restricted," he said. Reddit will check if an account is run by a human by using third-party tools that Huffman said won't expose users' true identity, Reddit username, or Reddit activity. Current methods that Reddit is exploring include passkeys, which Huffman said are a great starting point but don't provide any "proof of individuality or anything other than 'a human probably did something.'" Reddit is also looking into third-party biometric services, like World ID, which uses iris-scanning tech. "I think the Internet needs verification solutions like this, where your account information, usage data, and identity never mix," Huffman said. A last resort may be third-party government ID services, which Reddit is already required to use in some geographies, like the UK. Huffman said this is "the least secure, least private, and least preferred" method for human verification on Reddit. "When we are forced to do this, we design the integrations so that we never actually see your ID information, so your Reddit data cannot be tied to you," he added. Additionally, Huffman announced that accounts that use bots in permitted ways will get an App label. Reddit has posted information about how developers can get their apps labeled.
The announcement comes amid concern from some industry commentators that AI bot traffic online could soon surpass human traffic. Web agents are becoming more prevalent and flocking to social media sites. A relaunched Digg, for example, shut down its open beta after three months due to an "unprecedented bot problem" led by "sophisticated AI agents and automated accounts," CEO Justin Mezzell said in March. Ensuring that Reddit isn't overtaken by bots is in Reddit's best interest financially. It positions itself to users as a place to have conversations with real people about human topics and points of interest. The social media platform has also been increasingly selling itself to advertisers as a way to push products to real people. And Reddit has made millions by allowing AI companies to train large language models on its years' worth of human-generated content. Reddit has sued and blocked companies that it believes have wrongfully scraped content without paying. Reddit already removes an average of 100,000 accounts per day that use nefarious bots and post spam, per Huffman, who said that the removals often happen before users see the accounts. Reddit also plans to make it easier for Reddit users to report accounts that they think are bots.

AI-generated content still allowed

Reddit is exploring ways to limit bots on the platform but refraining from going after humans who employ chatbots to create posts and comments. Reddit hasn't confirmed how much content on the site is AI-generated, but battling AI slop on Reddit has proven challenging for moderators, even when subreddits ban the use of generative AI. "We'll monitor its usage and see what happens as we crack down even more on automated accounts. As always, communities can set their own standards if they want," Huffman said of AI-generated content. Disclosure: Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder in Reddit.
[2]
Reddit takes on the bots with new 'human verification' requirements for fishy behavior | TechCrunch
Would-be Reddit competitor Digg just shut down because it couldn't get a handle on the bots overrunning its site. On Wednesday, Reddit said it's taking on the challenge itself. The company will begin labeling automated accounts that are providing a service to users, similar to how the "good bots" are labeled on X, and it will now require accounts that are suspected of being bots to verify if they're human. Reddit stresses this is not going to be a sitewide verification requirement, and will only occur if something suggests that the account isn't human, including its activity on the site or other technical markers. If the account can't pass the test, it may be restricted, Reddit said. To identify potential bots, Reddit is using specialized tooling that looks at account-level signals and other factors -- like how quickly the account is attempting to write or post content. Using AI to write posts or comments, however, is not against its policies (though community moderators may set their own rules). To verify an account is human, Reddit will leverage third-party tools like passkeys from Apple, Google, YubiKey, and other third-party biometric services, like Face ID or even Sam Altman's World ID -- or, in some countries, the use of government IDs. Reddit notes this last category may be required in some countries like the U.K. and Australia and some U.S. states, because of local regulations on age verification, but it's not the company's preferred method. "If we need to verify an account is human, we'll do it in a privacy-first way," Reddit co-founder and CEO Steve Huffman wrote in the announcement Wednesday. "Our aim is to confirm there is a person behind the account, not who that person is. The goal is to increase transparency of what is what on Reddit while preserving the anonymity that makes Reddit unique. You shouldn't have to sacrifice one for the other." 
The changes are meant to address the growing problem of bots engaging on social platforms and the web more broadly, where they're often used to influence politics, spread misinformation, inflate popularity, secretly market products, generate fake ad clicks, and more. According to Cloudflare, the traffic from bots will exceed human traffic by 2027, when you include bots like web crawlers and AI agents in the mix. Reddit, in particular, has become a popular destination for bots that attempt to manipulate narratives, astroturf to shill for companies or their products, repost links, post spam, drive traffic, conduct research, and more. Plus, because Reddit's content is used for AI training thanks to lucrative deals with AI model providers, there's suspicion that bots are even posting questions on the site to generate more training data, particularly in areas where AI is lacking information. Reddit's other co-founder, Alexis Ohanian, has also addressed a related problem known as the "dead internet theory," a conjecture that bots outnumber humans online and that the vast majority of content, interactions, and web activity on the internet is automated or AI-generated, rather than from people. In the age of AI agents, the theory is becoming a reality. The company announced last year that it would begin to require human verification in response to the growing number of bots and the need to meet "evolving regulatory requirements." But the company today notes that the current solutions, which Huffman recently discussed on the TBPN podcast, aren't the best. "The best long-term solutions will be decentralized, individualized, private, and ideally not require an ID at all," Huffman wrote in today's announcement. Alongside these changes, Reddit said it would continue to remove bots and spam, where it averages 100,000 account removals per day, and rely on reports of suspected bots, with improved tooling still to come. 
Developers running so-called good bots can learn more about labeling them with the new "APP" label in the r/redditdev community.
[3]
Reddit accounts with 'fishy' bot-like behavior will soon need to prove they're human
Reddit is taking new steps to identify bots on the platform -- a process that may require some users to confirm that they're human. In a post on Wednesday, Reddit CEO Steve Huffman writes that the company will introduce a labeling system for accounts registered as bots, and ask users with "automated" or "fishy behavior" to verify that they're human using methods like fingerprint scanning or submitting their ID. With this update, developers can register automated accounts with Reddit, which will then receive an "[APP]" label. However, Reddit also notes that it will be on the lookout for unlabeled accounts with suspicious behavior. "If something suggests an account isn't human, including automation (hi, web agents), we may ask it to confirm there's a person behind it," Huffman writes, adding that these cases "will be rare and will not apply to most users." Reddit will ask users behind suspected bot accounts to verify that they're human, and is exploring several verification methods to do so without actually identifying who the person is. That includes asking users to complete a passkey check, such as scanning their fingerprint on a smartphone, or entering a PIN. It's also looking into using third-party biometric services, like the Sam Altman-backed World ID, which uses an eyeball-scanning orb to verify humanness. Huffman brings up third-party ID verification services as well, which he says are "the least secure, least private, and least preferred" verification method. He adds that the UK, Australia, and some US states already require it to support this type of verification. Suspected bot accounts that are unable to verify their humanness "may be restricted," according to Huffman. Last year, Reddit began testing account verification for brands and individual users. Huffman hinted at launching a bot verification system in a letter to shareholders in February, and floated the idea of using Face ID to verify a user's humanness during an interview on TBPN this week. 
Along with this update, Huffman says Reddit is going to make reporting suspected bots "easier and more flexible" -- though the platform isn't going to come down too hard on all accounts using AI to write. "We'll monitor its usage and see what happens as we crack down even more on automated accounts," Huffman says. "Our current focus is to ensure there is a real, live human behind the accounts you're seeing."
[4]
Reddit will prompt some accounts to 'verify humanness' in latest bot crackdown
Reddit CEO Steve Huffman has detailed the company's latest plan to fight bots and it means that some accounts will need to "verify humanness," though the company is stopping short of widespread identity verification. In an update, Huffman said that in "rare" cases accounts that seem "fishy" will be prompted for additional verification. Such prompts "will not apply to most users," according to Huffman, but will apply to accounts where Reddit detects signs of automated posting or bot-like behavior. If the account doesn't pass the verification test, it may be "restricted" from the platform. For now, verification will take the form of on-device methods, including Face ID and passkeys. But the company is considering alternative methods, including World ID, the face-scanning orb company run by Sam Altman. "I think the internet needs verification solutions like this, where your account information, usage data, and identity never mix," Huffman writes. As part of the new policy, Reddit is also adding an "[APP]" label to existing "good" bots on the platform and making it easier for users to report suspected "bad" bots. The company is also grappling with a growing number of age verification laws. Reddit is "exploring" ways to "comply with these regulations without compromising user privacy," Huffman said. The company is clearly trying to walk a careful line in how it approaches verification. Huffman notes that Reddit intends to "confirm humanness" rather than verify users' actual identities, which would erode the anonymity that Reddit is known for. But the rise of agentic AI has meant that Reddit is contending with the same sorts of bot-driven spam that took down the short-lived reboot of Digg. Of course, Reddit is also filled with AI-generated material that's shared by actual humans but may be considered spammy by other users. The company has no plans to crack down on such content, at least for now, according to Huffman.
"For better or worse, using AI to write is part of how people will communicate in the future (albeit annoying), so our current focus is to ensure there is a real, live human behind the accounts you're seeing."
[5]
Reddit cracks down on bots with new labels and human verification
Serving tech enthusiasts for over 25 years. TechSpot means tech analysis and advice you can trust. TL;DR: Reddit is introducing new measures to label bots and verify human users, aiming to curb automation on a platform that has become a popular hub for automated accounts. The company announced a system to identify and label bots that provide services to users, while requiring some suspected bot accounts to confirm they are human. The move comes amid growing concern about the role bots are playing in reshaping online activity across the web. The announcement arrives just weeks after social aggregator Digg, which once aimed to rival Reddit, shut down its app, citing an inability to control a surge of bots. Reddit, by contrast, appears determined to tackle the problem head-on. Starting this year, Reddit will introduce new labels for automated accounts that provide legitimate services, echoing the "good bot" tags used on X. More importantly, the platform will begin requiring certain accounts that appear suspicious to verify that they are human. Reddit stresses that this verification will not be sitewide. Checks will be triggered only when its systems detect signs of automation such as unusually rapid posting or technically anomalous activity. Accounts that fail the verification process could face restrictions. The initiative focuses on verifying humanness without compromising anonymity. "If we need to verify an account is human, we'll do it in a privacy-first way," Reddit co-founder and CEO Steve Huffman said in the announcement. "Our aim is to confirm there is a person behind the account, not who that person is. The goal is to increase transparency of what is what on Reddit while preserving the anonymity that makes Reddit unique. You shouldn't have to sacrifice one for the other." Reddit intends to rely on external credentialing rather than building its own ID system.
Accepted tools will include passkeys from Apple, Google, and YubiKey; biometric options such as Face ID; and identity verifiers like Sam Altman's World ID. In countries that mandate age or identity verification - including the UK, Australia, and some US states - government-issued IDs may also be required, though Reddit emphasized that this is "not the company's preferred method." "The best long-term solutions will be decentralized, individualized, private, and ideally not require an ID at all," Huffman said. The announcement highlights how automation is reshaping online interaction at scale. Cloudflare recently projected that by 2027, overall bot traffic - including legitimate web crawlers and AI agents - will surpass human internet traffic. That trend has already altered the information landscape, enabling everything from spam campaigns to synthetic political content. On Reddit, automated accounts have multiplied for years, driving misinformation, reposting links, marketing products, and even conducting research experiments without users' knowledge. The company's lucrative deals allowing AI firms to train models on Reddit data have added new layers of complexity - particularly amid speculation that bots now post content to enrich those same datasets. The issue has sparked broader debate about what some technologists, including Reddit co-founder Alexis Ohanian, have labeled the "dead internet theory." This theory suggests that bots and AI-generated text increasingly constitute a majority of online activity - a conjecture now edging toward reality as agent-based automation becomes mainstream. Reddit has already begun implementing tighter controls. The company said it currently removes around 100,000 bot or spam accounts per day and plans to continue investing in improved detection tools.
[6]
Reddit will require some accounts to verify they are run by a human
AI is here, and the internet is already getting flooded with all kinds of AI-generated content. Reddit will require accounts that exhibit "automated or otherwise fishy behavior" to verify that a human runs them. This new policy was stated by Reddit CEO Steve Huffman in a Reddit post, and reported by Ars Technica. The idea is that the people managing Reddit "want to make sure that when you're on Reddit, you know when you're talking to a person and when you're not". It must be said that human verification will only happen if Reddit suspects that an account is a bot. This is "rare" and won't apply to "most users". If the account cannot prove that it's human, it "may be restricted". Reddit will check if an account is run by a human by using third-party tools. It is made clear that these measures will not expose users' true identity, Reddit username, or Reddit activity. The methods currently being explored include passkeys, which can help determine that "a human probably did something". Reddit is also looking into third-party biometric services, like World ID, which uses iris-scanning tech. And if nothing else, the last resort might be third-party government ID services, which are already in use in some geographic areas, like the UK. But there's more to this. Accounts that use bots in permitted ways will get an App label. Reddit has posted information about how developers can get their apps labelled. A verification process is clearly needed, since Reddit already removes an average of 100,000 accounts per day that "use nefarious bots and post spam". Reddit also plans to make it easier for users to report accounts that they think are bots. Reddit hasn't confirmed how much of the content on its site is AI-generated.
[7]
Reddit Wants You to Prove You Are Human | PYMNTS.com
By completing this form, you agree to receive marketing communications from PYMNTS and to the sharing of your information with our sponsor, if applicable, in accordance with our Privacy Policy and Terms and Conditions. According to a Thursday (March 25) report by TechCrunch, accounts flagged by automated signals, including posting speed and activity patterns, must now confirm they are operated by a real person. The announcement followed the collapse of Digg, a Reddit competitor that shut down earlier this month after failing to contain a bot infestation. Reddit's rollout is not a blanket verification requirement. As reported by TechCrunch, said verification triggers only when account-level signals suggest automated behavior, and the company commits to a privacy-first approach that confirms a person exists behind the account without revealing who that person is. To verify, Reddit said it will support authentication methods such as passkeys and other verification tools designed to confirm human users while protecting privacy. In some regions, verification may intersect with local age and identity requirements. Platforms have long prized anonymity as a design principle, and as reported by Mashable, mandatory ID verification has historically drawn resistance from communities that depend on pseudonymous participation. Reddit CEO Steve Huffman addressed the trade-off directly in his announcement post, stating that the goal is to confirm a person exists, not to unmask them. Verification friction, however, cuts both ways: it deters bots and bad actors, but it also deters the real users platforms need for growth. Bot activity on Reddit has intensified beyond simple spam. As Wired reported in February, AI bots now drive significant web traffic, functioning not as a nuisance but as a structural feature of the post-ChatGPT web. 
On Reddit specifically, bots are deployed to manipulate narratives, astroturf on behalf of companies, repost links at scale and, in documented cases, generate questions designed to extract AI training data from the platform's organic user responses. The operational and financial costs of weak identity systems extend well beyond social media. According to PYMNTS Intelligence data, in collaboration with Trulioo, companies lose approximately 3.1% of annual revenue to identity gaps, totaling roughly $95 billion in losses. The report found that 59% of firms face bot-driven fraud as an active threat, while 90% contend with harmful bot traffic in some form. For enterprises, the downstream damage compounds quickly: inflated engagement metrics mislead product and marketing decisions, fraudulent accounts corrupt behavioral data and synthetic sentiment distorts the signals executives and investors use to gauge platform health. The pressure Reddit is responding to shows a structural shift reordering how the internet functions. According to CNBC, the line between bot behavior and human behavior online is blurring, with AI agents now mimicking conversational patterns, posting cadences and engagement habits that previously served as exclusive markers of human activity. Engagement metrics, long the currency of platform value, are losing reliability as a measure of real human interest. The timeline for majority-bot internet traffic is tighter than most anticipated. As reported by TechCrunch, Cloudflare CEO Matthew Prince told SXSW this month that bot traffic will exceed human traffic online by 2027, driven by AI agents that visit thousands of websites to fulfill a single user query. Before the generative AI era, bots accounted for roughly 20% of internet traffic, Prince said. That share now accelerates at a rate with no precedent in prior platform cycles. For platforms built on anonymity, like Reddit, this creates a fine balance. 
The internet has long operated on the assumption that users can participate without revealing their real-world identities, but as AI agents scale, that model is beginning to fracture. The tension is giving rise to growing demand for proof of personhood, a shift that comes as platforms also tighten control over how external AI agents access their systems.
Reddit CEO Steve Huffman announced new measures requiring accounts with automated or fishy behavior to verify humanness. The platform will use passkeys, biometrics, and third-party tools like World ID to combat bots without compromising user anonymity. Reddit already removes 100,000 bot accounts daily as AI-driven automation threatens to overtake human activity online.
Reddit will require accounts exhibiting "automated or otherwise fishy behavior" to verify they are run by humans, CEO Steve Huffman announced in a post on the platform [1][2]. The move comes as AI bots threaten to flood the internet and social media platforms struggle with automated content. "As AI becomes a bigger part of the Internet, we want to make sure that when you're on Reddit, you know when you're talking to a person and when you're not," Huffman said [1].
The verification process will only trigger when Reddit detects suspicious accounts through specialized tooling that analyzes account-level signals and technical markers, including how quickly an account attempts to write or post content [2]. Huffman emphasized that these cases "will be rare and will not apply to most users" [3]. Accounts that fail to verify humanness may face restrictions on the platform [4].
Reddit plans to verify humanness using third-party tools that won't expose users' true identity, Reddit username, or activity data. Current methods under exploration include passkeys from Apple, Google, and YubiKey, though Huffman noted these don't provide "proof of individuality or anything other than 'a human probably did something'" [1]. The platform is also considering third-party biometric services like Face ID and Sam Altman's World ID, which uses iris-scanning technology [3]. "I think the Internet needs verification solutions like this, where your account information, usage data, and identity never mix," Huffman stated [1]. As a last resort, Reddit may use third-party government ID services in geographies like the UK and Australia where regulations require age verification, though Huffman called this "the least secure, least private, and least preferred" method [1][5].

Alongside the bot crackdown, Reddit introduced an APP label for accounts that use bots in permitted ways, similar to the "good bots" labeled on X [2][5]. Developers can register their automated accounts with Reddit to receive this label, helping users distinguish between legitimate service bots and suspicious accounts [3]. The platform also plans to make reporting suspected bots "easier and more flexible" with improved tooling [2].
Reddit already removes an average of 100,000 accounts per day that use nefarious bots and post spam, often before users even see them [1][5]. The urgency behind these measures became clear when would-be Reddit competitor Digg shut down its open beta after just three months due to an "unprecedented bot problem" driven by "sophisticated AI agents and automated accounts" [1].
While Reddit is cracking down on automated accounts, the platform isn't restricting humans who use AI to write posts and comments. "For better or worse, using AI to write is part of how people will communicate in the future (albeit annoying), so our current focus is to ensure there is a real, live human behind the accounts you're seeing," Huffman explained [4]. Individual communities can still set their own standards regarding AI-generated content [1].

The initiative addresses growing concerns about the dead internet theory, a conjecture that bots outnumber humans online and that most content and interactions are automated rather than human-generated [2]. Reddit co-founder Alexis Ohanian has previously addressed this theory, which is becoming reality in the age of AI agents [2]. According to Cloudflare projections, bot traffic will exceed human traffic by 2027 when including web crawlers and AI agents [2][5].

Combating bots serves Reddit's financial interests as it positions itself as a place for conversations with real people and sells access to advertisers seeking authentic engagement [1]. The platform has also profited from lucrative deals allowing AI companies to train large language models on its human-generated content, creating suspicion that bots may be posting questions to generate more training data in areas where AI lacks information [2].
Summarized by Navi