2 Sources
[1]
AI 'Nudify' Websites Are Raking in Millions of Dollars
For years, so-called "nudify" apps and websites have mushroomed online, allowing people to create nonconsensual and abusive images of women and girls, including child sexual abuse material. Despite some lawmakers and tech companies taking steps to limit the harmful services, every month, millions of people are still accessing the websites, and the sites' creators may be making millions of dollars each year, new research suggests.

An analysis of 85 nudify and "undress" websites -- which allow people to upload photos and use AI to generate "nude" pictures of the subjects with just a few clicks -- has found that most of the sites rely on tech services from Google, Amazon, and Cloudflare to operate and stay online. The findings, revealed by Indicator, a publication investigating digital deception, say that the websites had a combined average of 18.5 million visitors for each of the past six months and collectively may be making up to $36 million per year.

Alexios Mantzarlis, a cofounder of Indicator and an online safety researcher, says the murky nudifier ecosystem has become a "lucrative business" that "Silicon Valley's laissez-faire approach to generative AI" has allowed to persist. "They should have ceased providing any and all services to AI nudifiers when it was clear that their only use case was sexual harassment," Mantzarlis says of tech companies.

It is increasingly becoming illegal to create or share explicit deepfakes. According to the research, Amazon and Cloudflare provide hosting or content delivery services for 62 of the 85 websites, while Google's sign-on system has been used on 54 of the websites. The nudify websites also use a host of other services, such as payment systems, provided by mainstream companies.

Amazon Web Services spokesperson Ryan Walsh says AWS has clear terms of service that require customers to follow "applicable" laws. "When we receive reports of potential violations of our terms, we act quickly to review and take steps to disable prohibited content," Walsh says, adding that people can report issues to its safety teams.

"Some of these sites violate our terms, and our teams are taking action to address these violations, as well as working on longer-term solutions," Google spokesperson Karl Ryan says, pointing out that Google's sign-in system requires developers to agree to its policies that prohibit illegal content and content that harasses others. Cloudflare had not responded to WIRED's request for comment at the time of writing.

WIRED is not naming the nudifier websites in this story, so as not to provide them with further exposure.

Nudify and undress websites and bots have flourished since 2019, after originally spawning from the tools and processes used to create the first explicit "deepfakes." Networks of interconnected companies, as Bellingcat has reported, have appeared online offering the technology and making money from the systems. Broadly, the services use AI to transform photos into nonconsensual explicit imagery; they often make money by selling "credits" or subscriptions that can be used to generate photos. They have been supercharged by the wave of generative AI image generators that have appeared in the past few years.

Their output is hugely damaging. Social media photos have been stolen and used to create abusive images; meanwhile, in a new form of cyberbullying and abuse, teenage boys around the world have created images of their classmates. Such intimate image abuse is harrowing for victims, and images can be difficult to scrub from the web.
[2]
Tech services are failing to take nudify AI tools offline.
A report on how much money these websites are making found that 62 of the 85 sites it examined had hosting or content delivery services provided by Amazon and Cloudflare. Google's sign-on system was also used on 54 of the websites, alongside other services and payment systems provided by mainstream tech companies. "They should have ceased providing any and all services to AI nudifiers when it was clear that their only use case was sexual harassment," said Indicator cofounder Alexios Mantzarlis.
AI-powered 'nudify' websites, which generate non-consensual explicit images, are thriving despite legal and ethical concerns. Major tech companies are inadvertently supporting these services, raising questions about responsibility and regulation in the AI era.
In recent years, the internet has witnessed a disturbing trend: the proliferation of so-called "nudify" websites and apps. These platforms use artificial intelligence to generate non-consensual explicit images of women and girls, including potential child sexual abuse material. Despite efforts by some lawmakers and tech companies to curb these harmful services, new research suggests that these websites continue to thrive, attracting millions of users and potentially generating substantial revenue [1].
An analysis of 85 nudify and "undress" websites revealed shocking statistics. These platforms collectively averaged 18.5 million visitors per month over the past six months and may be generating up to $36 million annually. More alarmingly, the research found that a majority of these websites rely on services provided by major tech companies to operate [1].
Source: Wired
These nudify services employ AI algorithms to transform regular photos into explicit imagery. They often monetize their operations by selling "credits" or subscriptions for generating these non-consensual images. The recent advancements in generative AI have further fueled the capabilities of these harmful tools [1].
The consequences of these AI-powered nudify tools are far-reaching and deeply troubling. Victims have reported stolen social media photos being used to create abusive images. In a disturbing trend, teenage boys worldwide have been creating explicit images of their classmates, leading to a new form of cyberbullying and abuse [1].
It's worth noting that the creation and sharing of explicit deepfakes are increasingly becoming illegal in many jurisdictions. However, the global nature of the internet makes enforcement challenging.
Alexios Mantzarlis, co-founder of Indicator and an online safety researcher, criticizes the tech industry's approach to this issue. He argues that Silicon Valley's laissez-faire attitude towards generative AI has allowed this "lucrative business" to persist. Mantzarlis strongly believes that tech companies should have ceased providing any services to AI nudifiers once their harmful nature became apparent [2].
When confronted with these findings, the tech giants offered varying responses. Amazon Web Services spokesperson Ryan Walsh said AWS has clear terms of service requiring customers to follow applicable laws, and that the company acts quickly to review reports and disable prohibited content. Google spokesperson Karl Ryan acknowledged that some of the sites violate Google's terms and said its teams are taking action, noting that Google's sign-in system requires developers to agree to policies prohibiting illegal and harassing content. Cloudflare had not responded to a request for comment at the time of writing [1].
As the AI landscape continues to evolve rapidly, the case of nudify websites serves as a stark reminder of the potential for misuse and the urgent need for robust regulation and responsible development in the field of artificial intelligence.