Apple and Google actively promote nudify apps despite policies banning them, report reveals

Reviewed by Nidhi Govil


A new Tech Transparency Project report reveals Apple and Google are not just hosting AI-powered nudify apps—they're actively directing users to them through search suggestions and sponsored ads. The apps have generated $122 million in revenue and been downloaded 483 million times, with many rated suitable for children despite creating nonconsensual sexualized images.

Apple and Google Face Scrutiny for Promoting Harmful AI Tools

Apple and Google are actively steering users toward nudify apps that create deepfake nude images, despite having explicit app store policies banning such content, according to a new report published Wednesday by the Tech Transparency Project.[1] The investigation reveals a troubling pattern: both tech giants are not merely failing to block these AI-powered nudify apps but are actively promoting them through autocomplete features and sponsored search results.[2]

Searching for terms like "nudify," "undress," and "deepnude" in the Apple and Google app stores gives users access to software designed to alter images of people, often without their consent, to make them appear nude or partially undressed.[1] The Tech Transparency Project, a research arm of the nonprofit Campaign for Accountability, identified 18 apps with nudifying capabilities in the Apple App Store and 20 in the Google Play Store.[3]

Source: Digit


Platform Responsibility and the Scale of the Problem

The financial scope of this issue is staggering. Apps identified by the Tech Transparency Project have collectively been downloaded 483 million times and generated $122 million in revenue, according to estimates from market researcher AppMagic.[1] "It's not just that the companies are failing to actually appropriately review these apps and continue to approve them and profit from them," Katie Paul, director of the Tech Transparency Project, explained in an interview. "They are actually directing users to the apps themselves."[1]

What makes this situation particularly alarming is that many of these apps were rated "E" for Everyone, meaning children can legally download and use them.[2] This rating classification suggests a fundamental breakdown in content moderation and app rating processes at both companies.

How Search Systems Amplify Harmful Content

Apple's and Google's app stores don't just host these applications; their systems actively help users discover them. When users typed "AI NS" as part of a search, the App Store suggested "image to video ai nsfw," which then returned several nudify apps in the top ten results.[5]

Source: Digit


Additionally, both platforms ran sponsored ads for nudifying apps in search results, with some appearing as the first result for relevant searches.[5]

One app identified in the Google Play Store, Video Face Swap AI: DeepFace, advertised face-swapping capabilities but contained a category called "Girls" where users could paste faces onto video templates of women in sexualized poses.[1] The app, rated "E" for Everyone, had been downloaded over 1 million times.[1]

Enforcement Gaps and Policy Violations

Apple's App Store guidelines explicitly ban "overtly sexual or pornographic material," while the Google Play Store specifically prohibits "apps that degrade or objectify people, such as apps that claim to undress people or see through clothing, even if labeled as prank or entertainment apps."[1] Yet enforcement remains inconsistent.

After Bloomberg reached out about the Tech Transparency Project report, Apple removed 15 apps and contacted the developers of six others to alert them to policy violations.[1] Google stated that many apps referenced in the report had been suspended and that an investigation was ongoing.[1] This pattern has repeated itself, however: earlier this year, both companies removed apps flagged by the Tech Transparency Project, only for dozens of similar ones to appear months later.[1]

Source: Analytics Insight


Deepfake Technology and Nonconsensual Imagery Concerns

These apps leverage generative AI models similar to those powering popular image generators. Users upload a photo, and the AI predicts what a nude version might look like, often producing disturbingly realistic results.[3] The potential for creating nonconsensual sexualized images raises serious ethical and legal concerns, particularly as deepfake technology becomes more sophisticated.

Anne Helmond, a professor at Utrecht University in the Netherlands and director of the App Studies Initiative, noted that enforcement efforts are "uneven and largely opaque." She explained that "if an app presents itself as a generic image generator, it may pass review, even if it can be misused in practice."[1] This highlights a critical gap in content safety measures: apps marketed as general photo editors can easily bypass review processes while offering harmful capabilities.

Regulatory Response and Future Implications

The proliferation of these harmful AI tools has prompted governments worldwide to consider stricter regulations. The UK's Children's Commissioner recently called for a ban on AI deepfake apps that create nude or sexual images of children.[2] The US and other countries have proposed or enacted laws banning explicit deepfakes, and California's Attorney General recently sent Elon Musk's X a cease-and-desist order over Grok's explicit deepfakes.[2]

For users and parents, the immediate concern centers on protecting vulnerable individuals from nonconsensual imagery. The fact that face-swapping apps rated for everyone can generate pornographic content with minimal restrictions suggests that current app store policies need fundamental restructuring. As generative AI continues to advance, the challenge of distinguishing legitimate photo-editing tools from those designed for harm will only intensify, demanding more proactive and transparent content moderation from platform holders.
