Curated by THEOUTPOST
On Wed, 7 May, 4:02 PM UTC
4 Sources
[1]
Reddit will tighten verification to keep out human-like AI bots | TechCrunch
Redditors around the world were scandalized last week after learning that a team of researchers had released a swarm of AI-powered, human-impersonating bots on the "Change My View" subreddit. The large-scale experiment was designed to explore just how persuasive AI can be. The bots posted over 1,700 comments, adopting personas such as abuse survivors and controversial identities like an anti-Black Lives Matter advocate.

For Reddit, the incident was a mini-nightmare. Reddit's brand is associated with authenticity -- a place where real people come to share real opinions. If that human-focused ecosystem is flooded with AI slop, or becomes a place where people can't trust that they're getting information from actual humans, it could do more than threaten Reddit's core identity. Reddit's bottom line could be at stake, since the company now sells its content to OpenAI for training.

The company condemned the "improper and highly unethical experiment" and filed a complaint with the university that ran it. But that experiment is likely only one of many instances of generative AI bots pretending to be humans on Reddit, for reasons ranging from the scientific to the politically manipulative.

To protect users from bot manipulation and "keep Reddit human," the company has quietly signaled an upcoming change -- one that may be unpopular with users who come to Reddit for another reason: anonymity. On Monday, Reddit CEO Steve Huffman shared in a post that Reddit would start working with "various third-party services" to verify a user's humanity. This represents a significant step for a platform that has historically required almost no personal information for users to create an account.

"To keep Reddit human and to meet evolving regulatory requirements, we are going to need a little more information," Huffman wrote. "Specifically, we will need to know whether you are a human, and in some locations, if you are an adult.
But we never want to know your name or who you are." (Social media companies have already started implementing ID checks after at least nine states and the U.K. passed laws mandating age verification to protect children on their platforms.)

A Reddit spokesperson declined to explain under what circumstances the company would require users to go through a verification process, though they did confirm that Reddit already takes measures to ban "bad" bots. The spokesperson also wouldn't share more details about which third-party services the company would use or what kind of personally identifying information users would have to offer up.

Many companies today rely on verification platforms like Persona, Alloy, Stripe Identity, Plaid, and Footprint, which usually require a government-issued ID to verify age and humanity. Then there's the newer and more speculative tech, like Sam Altman's Tools for Humanity and its eye-scanning "proof of human" device.

Opponents of ID checks say there are data privacy and security risks in sharing your personal information with social media platforms. That's especially true for a platform like Reddit, where people post experiences they might never share if their names were attached to them. It's not difficult to imagine a world in which authorities might subpoena Reddit for the identity of, for example, a pregnant teen asking about abortion experiences on r/women in a state where abortion is now illegal. Just look at how Meta handed over private conversations between a Nebraska woman and her 17-year-old daughter, which discussed the latter's plans to terminate a pregnancy. Meta's assistance led law enforcement to acquire a search warrant, which resulted in felony charges for both the mother and daughter.

That's exactly the risk Reddit hopes to avoid by tapping outside firms to provide "the essential information and nothing else," per Huffman, who emphasized that "we never want to know your name or who you are."
"Anonymity is essential to Reddit," he said. The CEO also noted that Reddit would continue to be "extremely protective of your personal information" and "will continue to push back against excessive or unreasonable demands from public or private authorities."
[2]
Reddit fighting back after AI fraud - but may threaten user privacy
Reddit has announced plans to fight back after a large-scale AI fraud was carried out against users of the highly popular Change My View subreddit. However, the company's plan to fight AI bots may not be popular with users, as it could compromise the platform's long-standing approach to privacy ...

Researchers from the University of Zurich carried out an extensive AI fraud in the subreddit, with large language models taking on a variety of personas, including a rape victim and a trauma counsellor. In total, the AI bots posted more than 1,700 comments while pretending to be human. This was done in violation of both Reddit rules and well-established ethical standards requiring informed consent for psychological experiments.

CMV moderators filed a formal complaint with the university's ethics commission, which responded by stating that it had issued a formal warning to the lead researcher and would boost prior review of proposed studies -- but, outrageously, said that publication of the paper would go ahead.

Reddit condemned the "improper and highly unethical experiment," and CEO Steve Huffman says it will respond by introducing new measures designed to "keep Reddit human":

To keep Reddit human and to meet evolving regulatory requirements, we are going to need a little more information. Specifically, we will need to know whether you are a human, and in some locations, if you are an adult. But we never want to know your name or who you are. The way we will do this is by working with various third-party services that can provide us with the essential information and nothing else. No solution is perfect -- including the status quo -- but we will do our best to preserve both the humanness and anonymity of Reddit.

Reddit has always allowed anonymous accounts, and many of its users have good reasons to want to keep their identity private. They may, for example, wish to share very personal information in some subreddits.
While Huffman claims the new measures won't reveal anyone's name, TechCrunch reports a worrying lack of transparency about the company's plans:

A Reddit spokesperson declined to explain under what circumstances the company would require users to go through a verification process [and] wouldn't share more details about which third-party services the company would use or what kind of personally identifying information users would have to offer up.

The site also points to a recent example of the risks of anonymous accounts becoming identifiable:

Just look how Meta handed over private conversations between a Nebraska woman and her 17-year-old daughter, which discussed the latter's plans to terminate a pregnancy. Meta's assistance led law enforcement to acquire a search warrant, which resulted in felony charges for both the mother and daughter.

On a more positive note, there are plenty of Redditors who prefer the old interface to the new one, and Huffman has promised to keep old.reddit online "as long as people are using it."
[3]
Reddit Cracks Down After AI Bots Secretly Infiltrated Debate Forum - Decrypt
The move comes after a group of researchers used AI bots to change opinions on a subreddit dedicated to healthy debates.

Behind the scenes in one of Reddit's largest communities, something creepy had been brewing. For four months, AI-powered bots masqueraded as humans, swaying opinions and earning thousands of upvotes. The experiment appeared to be working -- until everyone found out.

Reddit announced plans earlier this week to tighten user verification after learning that researchers from the University of Zurich conducted an unauthorized experiment on the r/changemyview subreddit, using AI bots to manipulate users without their knowledge or consent. The researchers had deployed the bots -- which adopted sensitive personas including trauma counselors, sexual assault survivors, and controversial political identities -- from late 2024 to early 2025, posting over 1,700 comments. The AI entities also analyzed users' posting histories to craft persuasive, personalized responses, all without disclosing that they were AI agents.

In a post to the community, Reddit's Chief Legal Officer Ben Lee condemned the experiment as "deeply wrong on both a moral and legal level," citing violations of the platform's user agreement and research ethics. The company has banned all accounts linked to the research and filed formal complaints with the University of Zurich.

"Reddit works because it's human," CEO Steve Huffman wrote in a recent platform update. "It's one of the few places online where real people share real opinions... If we lose trust in that, we lose what makes Reddit... Reddit."

The experiment hit at the heart of what makes online communities function: trust. Moderators described it as "psychological manipulation," noting that users join the subreddit expecting genuine human interactions, not to be unwitting test subjects.
That said, the AI bots were remarkably effective, collecting over 20,000 upvotes and 137 "deltas" -- awards given when someone successfully changes another user's view. However, the controversy only gained steam after the researchers revealed their work.

To prevent another incident -- even a secret one -- Reddit is now fast-tracking measures to verify that users are human without compromising privacy. "To keep Reddit human and to meet evolving regulatory requirements, we are going to need a little more information," Huffman explained. "Specifically, we will need to know whether you are a human, and in some locations, if you are an adult. But we never want to know your name or who you are."

The company plans to partner with third-party verification services to conduct such checks. Huffman emphasized that maintaining anonymity remains a cornerstone of the platform: "Anonymity is essential to Reddit. We have been -- and will continue to be -- extremely protective of your personal information."

While Reddit already uses AI tools for tasks like content moderation and spam filtering, the company draws the line at bots impersonating users. "Our focus is, and always will be, on keeping Reddit a trusted place for human conversation," Huffman affirmed. The platform currently bans "bad" bots, but the incident highlighted critical vulnerabilities in the current system. As AI becomes increasingly sophisticated, distinguishing between human and machine-generated content presents mounting challenges for online communities.

Reddit didn't reply to a request for comment, but Reddit user Apprehensive_Song490, who serves as a moderator at r/changemyview, told Decrypt that interactions with AI for research purposes would be carefully examined by the moderators. "My general impression is that the team is always receptive to researchers and that we would consider requests on a case-by-case basis," he said.
As Reddit approaches its 20th anniversary, the incident underscores a pivotal moment for the platform and similar online communities. The challenge ahead lies in harnessing AI's benefits while preserving the human connection that makes these spaces valuable in the first place. "The internet is changing rapidly, and human perspectives have never been more important," Huffman noted. "No solution is perfect -- including the status quo -- but we will do our best to preserve both the humanness and anonymity of Reddit."
[4]
Reddit says AI bots are too good -- now some users must verify
Reddit is tightening its verification process to keep out human-like AI bots, the company's CEO Steve Huffman announced on Monday. The move comes after a team of researchers released a swarm of AI-powered bots on the "Change My View" subreddit, posting over 1,700 comments and adopting various personas. The experiment, which Reddit condemned as "improper and highly unethical," highlighted the potential risks of AI-powered bots impersonating humans on the platform.

To mitigate this, Reddit will start working with third-party services to verify users' humanity, requiring some users to provide information to confirm they are human and, in some cases, adults. Huffman emphasized that Reddit will not require users to disclose their names or identities, stating that "anonymity is essential to Reddit." The company plans to use outside firms to provide "the essential information and nothing else," and assured users that it will continue to protect their personal information and push back against excessive demands from authorities.

The decision is likely driven in part by evolving regulatory requirements, including laws passed in at least nine states and the U.K. mandating age verification to protect children on social media platforms. Reddit already takes measures to ban "bad" bots, but the new verification process will likely be used to further safeguard the platform.

The exact circumstances under which users will be required to undergo verification are unclear, as are the specific third-party services that will be used. Popular verification platforms include Persona, Alloy, Stripe Identity, Plaid, and Footprint, which typically require government-issued IDs to verify age and humanity. Newer technologies, such as Sam Altman's "proof of human" device, may also be considered.
Critics have raised concerns about the potential data privacy and security risks associated with ID checks, particularly on a platform like Reddit where users often share sensitive or personal experiences anonymously. Huffman acknowledged these concerns, stating that Reddit will be "extremely protective of your personal information."
Reddit announces plans to implement stricter user verification measures following an unauthorized AI bot experiment on the platform, sparking debates about user privacy and the future of online communities.
Reddit, the popular online platform known for its authentic user-generated content, is grappling with a new challenge posed by advanced AI technology. The company has announced plans to tighten its user verification process in response to a controversial experiment that unleashed AI-powered bots on the platform [1].

Researchers from the University of Zurich conducted an unauthorized experiment on the r/changemyview subreddit, deploying AI bots that masqueraded as humans for four months [3]. These bots posted over 1,700 comments, adopting sensitive personas such as trauma counselors and sexual assault survivors. The AI entities analyzed users' posting histories to craft persuasive, personalized responses, all without disclosing their artificial nature [2].

Reddit CEO Steve Huffman condemned the experiment as "improper and highly unethical" and announced new measures to "keep Reddit human" [1]. The company plans to work with third-party services to verify users' humanity and, in some cases, adulthood. Huffman emphasized that while more information would be required, Reddit would never seek to know users' names or identities [4].

The incident has highlighted the delicate balance Reddit must maintain between preserving its core value of anonymity and ensuring the authenticity of user interactions. Huffman stated, "Anonymity is essential to Reddit. We have been -- and will continue to be -- extremely protective of your personal information" [3].

While specific details about the verification process remain unclear, it may involve the use of government-issued IDs or newer technologies like Sam Altman's "proof of human" device [1]. This has raised concerns about data privacy and security, particularly given Reddit's role as a platform where users often share sensitive information anonymously [2].

The decision to implement stricter verification measures is partly driven by evolving regulatory requirements. At least nine states in the U.S., as well as the U.K., have passed laws mandating age verification on social media platforms to protect children [4].

As Reddit approaches its 20th anniversary, this incident underscores a pivotal moment for the platform and similar online communities. The challenge lies in harnessing AI's benefits while preserving the human connection that makes these spaces valuable. Huffman noted, "The internet is changing rapidly, and human perspectives have never been more important" [3].
© 2025 TheOutpost.AI All rights reserved