Curated by THEOUTPOST
On Tue, 15 Oct, 4:05 PM UTC
2 Sources
[1]
Millions of People Are Using Abusive AI 'Nudify' Bots on Telegram
Bots that "remove clothes" from images have run rampant on the messaging app, allowing people to create nonconsensual deepfake images even as lawmakers and tech companies try to crack down. In early 2020, deepfake expert Henry Ajder uncovered one of the first Telegram bots built to "undress" photos of women using artificial intelligence. At the time, Ajder recalls, the bot had been used to generate more than 100,000 explicit photos -- including those of children -- and its development marked a "watershed" moment for the horrors deepfakes could create. Since then, deepfakes have become more prevalent, more damaging, and easier to produce. Now, a WIRED review of Telegram communities involved with the explicit nonconsensual content has identified at least 50 bots that claim to create explicit photos or videos of people with only a couple of clicks. The bots vary in capabilities, with many suggesting they can "remove clothes" from photos while others claim to create images depicting people in various sexual acts. The 50 bots list more than 4 million "monthly users" combined, according to WIRED's review of the statistics presented by each bot. Two bots listed more than 400,000 monthly users each, while another 14 listed more than 100,000 members each. The findings illustrate how widespread explicit deepfake creation tools have become and reinforce Telegram's place as one of the most prominent locations where they can be found. However, the snapshot, which largely encompasses English-language bots, is likely a small portion of the overall deepfake bots on Telegram. "We're talking about a significant, orders-of-magnitude increase in the number of people who are clearly actively using and creating this kind of content," Ajder says of the Telegram bots. "It is really concerning that these tools -- which are really ruining lives and creating a very nightmarish scenario primarily for young girls and for women -- are still so easy to access and to find on the surface web, on one of the biggest apps in the world." Explicit nonconsensual deepfake content, which is often referred to as nonconsensual intimate image abuse (NCII), has exploded since it first emerged at the end of 2017, with generative AI advancements helping fuel recent growth. Across the internet, a slurry of "nudify" and "undress" websites sit alongside more sophisticated tools and Telegram bots, and are being used to target thousands of women and girls around the world -- from Italy's prime minister to school girls in South Korea. In one recent survey, a reported 40 percent of US students were aware of deepfakes linked to their K-12 schools in the last year. The Telegram bots identified by WIRED are supported by at least 25 associated Telegram channels -- where people can subscribe to newsfeed-style updates -- that have more than 3 million combined members. The Telegram channels alert people about new features provided by the bots and special offers on "tokens" that can be purchased to operate them, and often act as places where people using the bots can find links to new ones if they are removed by Telegram.
[2]
'Nudify' bots to create naked AI images in seconds rampant on...
Online chatbots are generating nude images of real people at users' requests, prompting concern from experts who worry the explicit deepfakes will create "a very nightmarish scenario."

A Wired investigation on the messaging app Telegram unearthed dozens of AI-powered chatbots that allegedly "create explicit photos or videos of people with only a couple of clicks," the outlet reported. Some "remove clothes" from images provided by users, according to Wired, while others say they can manufacture X-rated photos of people engaging in sexual activity.

The outlet estimated that approximately 4 million users per month take advantage of the deepfake capabilities of the roughly 50 chatbots it identified. Such generative AI bots promised to deliver "anything you want about the face or clothes of the photo you give me," Wired reported.

"We're talking about a significant, orders-of-magnitude increase in the number of people who are clearly actively using and creating this kind of content," deepfake expert Henry Ajder, who was one of the first to discover the underground world of explicit Telegram chatbots four years ago, told Wired. "It is really concerning that these tools -- which are really ruining lives and creating a very nightmarish scenario primarily for young girls and for women -- are still so easy to access and to find on the surface web, on one of the biggest apps in the world."

While celebrities have fallen victim to the rise of pornographic deepfakes -- from Taylor Swift to Jenna Ortega -- there have also been recent reports of teen girls' images being used to create deepfake nude photos, some of which have been used in cases of "sextortion." A recent survey even revealed that 40% of US students reported the circulation of deepfakes in their schools.

Deepfake sites have flourished amid advancements in AI technology, according to Wired, but have been met with intense scrutiny from lawmakers. In August, the San Francisco city attorney's office sued more than a dozen "undressing" websites.

On Telegram, bots can be used for translations, games, and alerts -- or, in this case, creating dangerous deepfakes. When contacted by Wired about the explicit chatbot content, the company did not respond with comment, but the bots and associated channels suddenly disappeared, although creators vowed to "make another bot" the next day.

"These types of fake images can harm a person's health and well-being by causing psychological trauma and feelings of humiliation, fear, embarrassment, and shame," Emma Pickering, the head of technology-facilitated abuse and economic empowerment at the UK-based domestic abuse organization Refuge, told Wired. "While this form of abuse is common, perpetrators are rarely held to account, and we know this type of abuse is becoming increasingly common in intimate partner relationships."

Elena Michael, the director and co-founder of the advocacy group #NotYourPorn, told Wired that it's "concerning" how challenging it is "to track and monitor" applications on Telegram that could be promoting this type of explicit imagery.

"Imagine if you were a survivor who's having to do that themselves; surely the burden shouldn't be on an individual," she said. "Surely the burden should be on the company to put something in place that's proactive rather than reactive."

Nonconsensual deepfake pornography has been banned in multiple states, but experts say Telegram's terms of service are vague on X-rated content.
"I would say that it's actually not clear whether nonconsensual intimate image creation or distribution is prohibited on the platform," Kate Ruane, the director of the Center for Democracy and Technology's free expression project, told Wired. Earlier this year, Telegram CEO Pavel Durov was arrested and charged with facilitating child pornography, although he vowed "little has changed" in how his app operates and its privacy policy since his arrest. In a recent statement, he claimed the platform routinely cooperated with law enforcement when requested to do so, vowing that the company does "not allow criminals to abuse our platform or evade justice." "Using laws from the pre-smartphone era to charge a CEO with crimes committed by third parties on the platform he manages is a misguided approach," Durov wrote in a Telegram post. "Building technology is hard enough as it is. No innovator will ever build new tools if they know they can be personally held responsible for potential abuse of those tools." Experts, however, say Telegram should be held responsible. "Telegram provides you with the search functionality, so it allows you to identify communities, chats, and bots," Ajder said. "It provides the bot-hosting functionality, so it's somewhere that provides the tooling in effect. Then it's also the place where you can share it and actually execute the harm in terms of the end result."
Millions of users are accessing AI-powered bots on Telegram to create nonconsensual deepfake nude images, sparking concerns about privacy, consent, and the potential for widespread abuse.
In a disturbing trend, AI-powered 'nudify' bots have proliferated on the messaging app Telegram, allowing users to generate nonconsensual deepfake nude images with alarming ease. A recent investigation by WIRED has uncovered at least 50 such bots, collectively boasting over 4 million monthly users [1].
These bots vary in capabilities, with many claiming to "remove clothes" from photos, while others purport to create explicit images depicting individuals in various sexual acts. Two of the most popular bots reported over 400,000 monthly users each, highlighting the widespread nature of this issue [1].
Deepfake expert Henry Ajder, who first discovered such a bot in early 2020, notes a significant increase in both the number of users and the sophistication of these tools. The initial bot he uncovered had been used to generate over 100,000 explicit photos, including those of minors [1].
The proliferation of these bots has raised serious concerns about privacy, consent, and the potential for abuse. Emma Pickering from Refuge, a UK-based domestic abuse organization, warns that such fake images can cause psychological trauma, humiliation, and shame [2].
The issue extends beyond individual targets, with a reported 40% of US students aware of deepfakes linked to their K-12 schools in the past year. Victims range from high-profile figures like Italy's prime minister to schoolgirls in South Korea [1].
While some states have banned nonconsensual deepfake pornography, experts argue that Telegram's terms of service remain vague on explicit content. Kate Ruane from the Center for Democracy and Technology points out the lack of clear prohibition on nonconsensual intimate image creation or distribution on the platform [2].
When contacted about the explicit chatbot content, Telegram did not comment, but the bots and associated channels suddenly disappeared. However, creators vowed to "make another bot" the next day, highlighting the ongoing challenge of regulation [2].
Experts like Henry Ajder argue that Telegram should be held responsible for providing the infrastructure that enables these bots to operate and spread. The platform's search functionality, bot-hosting capabilities, and sharing features all contribute to the problem [2].
A South Korean AI company's unsecured database exposed tens of thousands of AI-generated explicit images, including child sexual abuse material, highlighting the urgent need for regulation in the AI industry.
3 Sources
A new report reveals thousands of AI chatbots being used for child exploitation and other harmful activities, raising serious concerns about online safety and the need for stronger AI regulations.
3 Sources
The rapid proliferation of AI-generated child sexual abuse material (CSAM) is overwhelming tech companies and law enforcement. This emerging crisis highlights the urgent need for improved regulation and detection methods in the digital age.
9 Sources
The rise of AI-generated fake nude images is becoming a significant issue, affecting women and teenagers. Victims are calling for stronger laws and better enforcement to combat this form of online abuse.
2 Sources
South Korean schools are grappling with a surge in deepfake pornography, causing distress among students and challenging educators. The crisis highlights the urgent need for digital literacy and stricter regulations.
3 Sources
© 2025 TheOutpost.AI All rights reserved