Apple threatened to pull Grok AI from App Store over deepfakes crisis, but issues persist

Reviewed by Nidhi Govil


Apple warned Elon Musk's xAI in January that its Grok AI chatbot would be removed from the App Store unless it addressed rampant sexualized deepfakes. The tech giant rejected initial fixes as insufficient, forcing multiple resubmissions before approval. Despite the safeguards since implemented, NBC News investigations reveal that Grok continues generating nonconsensual sexual images of real people, raising questions about enforcement and the effectiveness of content moderation measures.

Apple Issues App Store Removal Threat to Grok AI

Apple privately threatened to remove Grok AI from its App Store in January after Elon Musk's xAI failed to adequately address a surge of nonconsensual sexual deepfakes generated by the AI chatbot, according to a letter obtained by NBC News [2]. The warning came after Apple received complaints and observed news coverage about sexualized deepfakes flooding X, the social media platform where Grok serves as the primary AI tool [1]. The tech giant contacted teams behind both X and Grok, demanding they "create a plan to improve content moderation" to address flagrant violations of App Store guidelines [2].

Source: New York Post

In a letter sent on January 30 to US senators Ron Wyden, Ben Ray Luján, and Edward Markey, Apple's senior director of government affairs, Timothy Powderly, detailed the company's enforcement actions [3]. Apple stated it "abhors these kinds of images and the harms they inflict" and made clear that "apps that generate and proliferate such content violate our policies, and they are not permitted on our platform" [1]. The company determined that while X had "substantially resolved its violations," Grok "remained out of compliance," and rejected the initial app submission [2].

Source: Digit

xAI Forced to Resubmit Before Approval

Apple warned xAI that "additional changes to remedy the violation would be required, or the app could be removed from the App Store" [3]. Only after further back-and-forth did Apple determine Grok had "substantially improved" and approve its submission [2]. Throughout this process, both Grok and X appear to have remained live on the App Store, which may explain the confusing, haphazard rollout of moderation changes announced in real time [2]. These changes included restricting Grok image editing to paid subscribers, limiting the ability to edit images of real people, and geoblocking image generation in certain jurisdictions [3].

Source: Analytics Insight

Apple left the door open to future enforcement, stating that "as we made clear to them -- as with all developers -- if they cannot comply with the Guidelines, they will be removed from the App Store" [1]. This behind-the-scenes intervention occurred even as the crisis unfolded in full public view, with advocacy groups and lawmakers demanding action from both Apple and Google [2].

Deepfakes Continue Despite Safeguards Implementation

Despite xAI's claims of implementing extensive safeguards, a recent NBC News investigation found that Grok continues to generate sexualized deepfakes with relative ease [4]. The review found dozens of AI-generated sexual images and videos depicting real people posted publicly on X over the past month, showing women whose likenesses were edited to put them in revealing clothing such as towels, sports bras, or bunny costumes [4]. Many depicted female pop stars or actors, including at least one celebrity who has publicly complained about such images in the past [4].

xAI stated it "strictly prohibits users from generating non-consensual explicit deepfakes and from using our tools to undress real people," citing safeguards including continuous monitoring of public usage, real-time analysis of evasion attempts, frequent model updates, and prompt filters [1]. However, users have updated their tactics to circumvent these restrictions, including asking Grok to merge photos with stick-figure poses, swap clothing between images, or transform photos into sexualized video clips [4].

Enforcement Questions and Future Implications

Genevieve Oh, an independent analyst whose research on deepfakes has been widely cited, believes Grok "was and still is unmistakably the largest nonconsensual synthetic nudity generator" in the world [4]. The persistence of these violations raises critical questions about the effectiveness of both xAI's content moderation efforts and Apple's enforcement mechanisms. Stefan Turkheimer, vice president for public policy at RAINN, noted that "when these images are being created and spread around, the people in the images don't necessarily find out" [4].

Senator Ron Wyden criticized the situation, stating he appreciated "Apple's detailed response" but found it "shocking that [President Donald] Trump's Justice Department took no action to hold X accountable for producing and distributing vast amounts of vile material" [1]. The situation remains precarious for Grok, as Apple has made clear that continued violations risk complete removal from the platform [5]. For now, users and watchdogs will be monitoring whether xAI can implement truly effective safeguards or whether Apple will follow through on its removal threat.
