AI-Powered 'Nudify' Bots on Telegram Raise Alarm Over Deepfake Abuse

Curated by THEOUTPOST

On Tue, 15 Oct, 4:05 PM UTC

2 Sources


Millions of users are accessing AI-powered bots on Telegram to create nonconsensual deepfake nude images, sparking concerns about privacy, consent, and the potential for widespread abuse.

The Rise of AI 'Nudify' Bots on Telegram

In a disturbing trend, AI-powered 'nudify' bots have proliferated on the messaging app Telegram, allowing users to generate nonconsensual deepfake nude images with alarming ease. A recent investigation by WIRED has uncovered at least 50 such bots, collectively boasting over 4 million monthly users 1.

Scope and Functionality

These bots vary in capabilities, with many claiming to "remove clothes" from photos, while others purport to create explicit images depicting individuals in various sexual acts. Two of the most popular bots reported over 400,000 monthly users each, highlighting the widespread nature of this issue 1.

Historical Context and Growth

Deepfake expert Henry Ajder, who first discovered such a bot in early 2020, notes a significant increase in both the number of users and the sophistication of these tools. The initial bot he uncovered had been used to generate over 100,000 explicit photos, including those of minors 1.

Impact and Concerns

The proliferation of these bots has raised serious concerns about privacy, consent, and the potential for abuse. Emma Pickering from Refuge, a UK-based domestic abuse organization, warns that such fake images can cause psychological trauma, humiliation, and shame 2.

Widespread Effect

The issue extends beyond individual targets: a reported 40% of US students are aware of deepfakes linked to their K-12 schools in the past year. Victims range from high-profile figures, such as Italy's prime minister, to schoolgirls in South Korea 1.

Legal and Regulatory Challenges

While some states have banned nonconsensual deepfake pornography, experts argue that Telegram's terms of service remain vague on explicit content. Kate Ruane from the Center for Democracy and Technology points out the lack of clear prohibition on nonconsensual intimate image creation or distribution on the platform 2.

Telegram's Response and Responsibility

When contacted about the explicit chatbot content, Telegram did not comment, but the bots and associated channels subsequently disappeared. Their creators, however, vowed to "make another bot" the next day, highlighting the ongoing challenge of regulation 2.

Expert Opinions

Experts such as Henry Ajder argue that Telegram should be held responsible for providing the infrastructure that enables these bots to operate and spread: the platform's search functionality, bot-hosting capabilities, and sharing features all contribute to the problem 2.

