Russian Hacking Group FIN7 Exploits AI Nude Generator Trend to Spread Malware

The notorious Russian hacking group FIN7 has launched a network of fake AI-powered deepnude generator sites to infect visitors with information-stealing malware, exploiting the growing interest in AI-generated content.

FIN7's Resurgence and New Tactics

The notorious Russian hacking group FIN7, previously believed to be defunct, has resurfaced with a new malware campaign exploiting the growing interest in AI-generated content. Cybersecurity firm Silent Push has uncovered a network of fake "AI Deepnude" generator websites linked to FIN7, designed to spread malware to unsuspecting users [1].

The Malware Campaign

FIN7 has set up approximately 4,000 fake domains and subdomains, including at least seven "deepnude generator" websites described as "honeypots of malware" [1]. These sites, with names like easynude(.)website and ai-nude(.)cloud, promise to create nude images from user-uploaded pictures using AI technology [2].

How the Scam Works

Users are lured to these sites through search engine queries and advertisements. When visitors attempt to download the "free" AI nude generator software, they are redirected to a new domain featuring a Dropbox link or another source hosting malicious payloads [1][4]. The downloaded files contain information-stealing malware such as Lumma Stealer, Redline Stealer, and D3F@ck Loader [4].

Impact and Scope

The malware is designed to steal passwords, internet cookies, cryptocurrency wallets, and other sensitive data from infected PCs [2]. While the seven identified sites have been taken down, cybersecurity experts warn that new sites following similar patterns are likely to emerge [2][4].

FIN7's History and Evolution

FIN7, also known as Carbanak, has been active since 2012 and is believed to have caused $3 billion in damage worldwide [1]. The group has previously targeted various industries, particularly the hospitality and food sectors, to steal customer data and make fraudulent bank transfers [1]. They have even set up fake security companies to recruit unwitting cybersecurity professionals [1].

Legal and Ethical Implications

The use of deepfake technology for creating nonconsensual explicit images has raised significant legal and ethical concerns. In response to the growing problem, the city of San Francisco recently filed a lawsuit against 18 illegal deepfake websites and apps offering to "nudify" women and girls [1]. These sites collectively received over 200 million visits in the first six months of 2024 [1].

Cybersecurity Experts' Perspectives

Ahmed Banafa, a professor at San Jose State University College of Engineering, emphasizes the challenge of detecting and preventing such malware attacks. He notes that even if server farms are confiscated, it's relatively easy for hackers to set up new operations [1]. Cybersecurity experts stress that human behavior remains the weakest point in network security [1][3].

Broader Implications for AI and Cybersecurity

This incident highlights the ongoing challenges at the intersection of AI technology and cybersecurity. As AI tools become more sophisticated and widely available, they are increasingly being exploited by cybercriminals for malicious purposes. FIN7's AI nude generator scam serves as a stark reminder of the need for improved cybersecurity measures and greater public awareness of the risks associated with emerging technologies [5].
