Apple and Google caught promoting nudify apps that violate their own app store policies

Reviewed by Nidhi Govil


Apple and Google are actively directing users to nudify apps that create nonconsensual deepfake images, despite explicit policies banning such content. A Tech Transparency Project report reveals these apps have generated $122 million in revenue and been downloaded 483 million times. Many were rated 'E for Everyone,' making them accessible to children.

Apple and Google Promote Banned Nudify Apps Despite Explicit Policies

Apple and Google are not just hosting nudify apps that violate their own policies—they're actively promoting them. A Tech Transparency Project report published Wednesday reveals that both tech giants have been directing users to apps designed to create nonconsensual deepfake images, with some appearing in automated search suggestions and ad carousels.[1][2] The Tech Transparency Project (TTP), a research arm of the nonprofit Campaign for Accountability, identified 18 nudify apps in the App Store and 20 in the Google Play Store.[3]

Source: Digit

These undress apps use generative AI tools to alter images of people—predominantly women—to make them appear nude or partially clothed, creating what constitutes nonconsensual imagery. When users search for terms like "nudify," "undress," and "deepnude," both platforms surface these apps and even suggest additional ones through autocomplete features.[2] Google went further, creating carousels of ads for some of the most sexually explicit apps encountered during the investigation.[1]

Source: Digit

Platform Responsibility and Revenue Conflicts

Both Apple and Google maintain app store policies that explicitly prohibit this content. Apple's guidelines ban "overtly sexual or pornographic material," while Google Play specifically prohibits "apps that claim to undress people or see through clothing, even if labeled as prank or entertainment apps."[2] Yet these policies appear to conflict with financial incentives. Analytics firm AppMagic found that the identified apps generated more than $122 million in lifetime revenue and were downloaded 483 million times.[1][3]

Source: Analytics Insight

Apple and Google profit from app developers through advertising and subscription revenue shares, which may explain their inconsistent enforcement. "It's not just that the companies are failing to actually appropriately review these apps and continue to approve them and profit from them," Katie Paul, director of the Tech Transparency Project, told Bloomberg. "They are actually directing users to the apps themselves."[2][3]

Child Safety Concerns with Apps Rated E for Everyone

A particularly alarming finding involves content moderation failures affecting child safety. Many of these apps were rated "E" for Everyone, meaning children could legally download and access tools capable of creating sexualized images.[3][4] The TTP report identified 31 apps rated E for Everyone that had not been adequately vetted.[5]

One example, Video Face Swap AI: DeepFace, advertised face-swapping capabilities but contained a "Girls" category where users could paste faces onto video templates of partially undressed women. Despite this content, the app maintained an "E" rating and accumulated over 1 million downloads from the Google Play Store.[2][3]

Enforcement Remains Inconsistent Despite Removals

After Bloomberg reached out about the Tech Transparency Project report, Apple removed 15 apps and contacted the developers of six others to alert them to policy violations.[2] Google confirmed it suspended seven apps and said investigations are ongoing.[1] Apple also said it blocked several search terms flagged in the report.[1]

However, this marks the second time in months that TTP has exposed these apps. In January, the organization first revealed that Apple and Google app stores hosted over 100 nudify apps.[1] While some were removed then, the April investigation found that dozens of similar apps had reappeared.[4] This pattern suggests that enforcement efforts are "uneven and largely opaque," according to Anne Helmond, a professor at Utrecht University and director of the App Studies Initiative.[2]

Growing AI Deepfake Crisis and Regulatory Response

The nudify app problem reflects broader challenges with deepfake technology and nonconsensual content. Earlier this year, Grok users created 1.4 million sexualized AI deepfake images over just nine days.[1] Apple privately contacted Grok to express concerns about abusive AI capabilities and threatened removal, yet the app remains available in both stores and is reportedly still capable of generating explicit content.[1]

Governments are beginning to respond. The UK's Children's Commissioner has called for bans on AI deepfake apps that create nude or sexual images of children.[3] The UK government has initiated attempts to ban these apps entirely, particularly given the damage already occurring in schools.[5] California's Attorney General recently sent Elon Musk's X a cease-and-desist order over Grok's explicit deepfakes.[3]

As generative AI becomes more sophisticated and accessible, the tension between user safety and platform profitability will likely intensify. The question remains whether Apple and Google will implement systemic changes to their app rating and review processes, or continue reactive removals that allow harmful apps to resurface within months.
