8 Sources
[1]
Apple Reportedly Threatened to Remove Grok From App Store Over Deepfakes
Apple warned Elon Musk's xAI that its Grok AI would be removed from the App Store if it didn't make changes to prevent the app from being used for sexualized imagery. Grok, the AI app owned by Elon Musk's xAI, was nearly pulled from Apple's App Store earlier this year amid a scandal over sexualized deepfakes of real people generated by the tool that proliferated on X, formerly known as Twitter. According to reporting from NBC News, Apple told US senators in a letter about its dealings with xAI over the app, including warnings that Grok would be removed from the App Store if changes were not made to address the deepfake crisis. A separate report from NBC this week, an investigation into Grok, found that sexualized AI-generated images are still coming from Grok and spreading online. Representatives for Apple and xAI didn't immediately respond to requests for comment. CNET also reached out to the press offices of three senators who authored a letter to Apple and Google in January that urged them to enforce app store rules to deal with Grok's deepfake issues. Grok is the primary AI tool available to users of the social media platform X, and in addition to being able to answer questions as a chatbot, Grok can also generate images and videos. Late last year, reports surfaced of widespread abuse of this function from users who requested sexualized images of people, including children, that were then posted on X. Since then, Musk has posted updates about changes to Grok and safeguards that have been put in place, but the NBC News report suggests those changes haven't stamped out the use of Grok AI for sexualized deepfakes, including AI-generated images of women in revealing costumes, towels or clothing such as sports bras. In a statement on X, the company said, "We strictly prohibit users from generating non-consensual explicit deepfakes and from using our tools to undress real people.
xAI has extensive safeguards in place to prevent such misuse, such as continuous monitoring of public usage, analysis of evasion attempts in real time, frequent model updates, prompt filters, and additional safeguards." The report from NBC News points to communications from Apple telling senators that, in response to public outcry over Grok, it warned that changes needed to be made to both the X and Grok apps. xAI reportedly submitted app versions for both, with the Grok app being rejected and then reworked to meet Apple's approval. In the letter from Apple, the company's senior director of government affairs, Timothy Powderly, told the senators that, "Apple abhors these kinds of images and the harms they inflict. Apps that generate and proliferate such content violate our policies, and they are not permitted on our platform." The letter, shared with CNET by the office of Sen. Ron Wyden, details Apple's app policies and the steps it took with the X and Grok apps. Apple said that after that process, "we determined that Grok had substantially improved and therefore approved its latest submission. This approval allowed Grok to update the apps installed on user devices with the improved software. We expect Grok to include additional improvements in subsequent submissions." Apple left the door open to a future removal if Grok violates Apple's terms. "As we made clear to them -- as with all developers -- if they cannot comply with the Guidelines, they will be removed from the App Store." In a statement to CNET, Wyden, an Oregon Democrat, also criticized Google for not responding to a request from lawmakers related to concerns about Grok. A Google representative didn't immediately respond to a request for comment. "I appreciate Apple's detailed response to our questions about how it responded to the disgusting proliferation of CSAM and nonconsensual deepfakes in the Grok and X apps," Wyden said. 
"It remains shocking that [President Donald] Trump's Justice Department took no action to hold X accountable for producing and distributing vast amounts of vile material."
[2]
Apple quietly threatened Grok to curb sexual deepfakes or get pulled from App Store
Apple quietly threatened to kick Elon Musk's AI app, Grok, from its App Store in January over its failure to curb the surge of nonconsensual sexual deepfakes flooding X, according to NBC News. It was a muted show of force from one of tech's most powerful gatekeepers, made behind closed doors even as the undressing crisis unfolded in full public view and criticism over Apple's cowardice mounted. In a letter obtained by NBC News, Apple told US senators it "contacted the teams behind both X and Grok after it received complaints and saw news coverage of the scandal" and demanded that the developers "create a plan to improve content moderation." At the time, xAI's chatbot Grok was freely accessible on X and as a standalone app, with flimsy safeguards that allowed users to easily generate and share sexualized deepfakes and "undress" images of real people, disproportionately women and some of them apparently minors. As we reported at the time, these were flagrant and unambiguous violations of App Store guidelines it often applies with an iron fist. Apple, which profits from having apps like X and Grok on its digital store, has not spoken publicly about the issue or its behind-the-scenes intervention. Google, through its Google Play app store, profits similarly and has also not commented publicly on the matter. Apple said it reviewed proposed changes to the X and Grok apps. While the company concluded X had "substantially resolved its violations," Grok "remained out of compliance." Apple said it warned the developer that "additional changes to remedy the violation would be required, or the app could be removed from the App Store." Only after further back and forth did Apple determine Grok had "substantially improved" and approved its submission. Throughout this covert back-and-forth, Grok and X appear to have remained live on the App Store, a drawn out process that may help explain the confusing, haphazard rollout of moderation changes announced in real time. 
This included limiting Grok on X to paying subscribers and attempting to stop Grok from undressing women. Our investigations revealed that neither were particularly effective beyond making the tool a bit harder to access. Later interventions, like X letting users block Grok from editing their photos, are also easily circumvented. Despite Apple's approval and xAI's claims it has tightened safeguards, Grok still appears to be able to generate sexualized deepfakes with relative ease. Cybersecurity sources told me they have been able to create explicit images of celebrities and political figures using the tool, and I have been able to produce similar images of myself and other consenting adults. NBC also reported similar findings yesterday.
[3]
Apple secretly threatened to pull Grok from the App Store over deepfake nudes
A letter Apple sent to US senators, obtained by NBC News, reveals that Apple rejected an initial Grok update and warned the app could be removed unless xAI made further changes. Only a second submission passed. Apple privately threatened to remove Grok, xAI's AI chatbot, from the App Store in January after Elon Musk's company failed to adequately stop the app from generating non-consensual sexualised deepfakes. The threat was not made public at the time, but a letter Apple sent to three US senators, obtained by NBC News, reveals that behind the scenes Apple was taking direct action, and that xAI's first attempt to fix the problem was rejected as insufficient. The controversy began in early 2026 when Grok's image generation features were used to produce a flood of sexualised and non-consensual depictions of real women and, in some cases, minors, which were then shared on X. Advocacy groups and lawmakers demanded Apple and Google remove both the X and Grok apps from their stores. Apple's letter, sent on 30 January to senators Ron Wyden, Ben Ray Luján, and Edward Markey, confirms that the company reviewed xAI's submissions and found both X and Grok in violation of its App Store guidelines, which prohibit "offensive, insensitive, upsetting" content. Apple's response, per the letter, was to contact the teams behind both apps and demand a content moderation plan. xAI submitted an update, which Apple rejected, telling the developer the "changes didn't go far enough." Apple then reviewed revised submissions from both X and Grok: it determined that X had substantially resolved its violations, but Grok remained out of compliance. Apple rejected the Grok submission and warned that additional changes were required "or the app could be removed from the App Store." Following further engagement, Apple eventually approved a later Grok submission, concluding it had substantially improved. 
The disclosure explains a series of seemingly inconsistent moderation changes xAI announced at the height of the controversy in January, including restricting image editing to paid subscribers, limiting the ability to edit images of real people, and geoblocking image generation in certain jurisdictions. NBC News reported that some of these restrictions could still be bypassed through modified prompts, suggesting that while the problem was reduced, it was not fully resolved.
[4]
Musk's Grok AI chatbot is still making sexual deepfakes, despite X's promise to stop it
Elon Musk's artificial intelligence software, Grok, continues to generate sexualized images of people without their consent, despite his company's pledge months ago to halt abusive deepfakes after a public backlash and government investigations. A review by NBC News found dozens of AI-generated sexual images and videos depicting real people posted publicly on Musk's social media app, X, over the past month. The images show women whose likenesses were edited by the AI chatbot to put them in more revealing clothing, such as towels, sports bras, skintight Spider-Woman outfits or bunny costumes. Many of the women are female pop stars or actors. The Grok software, created by Musk's company xAI, made the images at the request of users who tried to break through undressing restrictions the service put in place in January. Grok, via its X account, or the users then posted the images to X. The images are similar to ones that sparked a firestorm of criticism in January, when Musk's companies freely allowed people to undress others simply by uploading photos and typing prompts such as "put her in a bikini." Musk's companies had cheered on the idea, promoting the "spicy mode" of his AI chatbot. The flood of fake images, including some of children, prompted government investigations on five continents. The number of sexualized deepfakes created by Grok and posted to X appears to have decreased significantly since the flood in January. In posts reviewed by NBC News, the Grok software turns down or ignores many of the sexualized requests it receives publicly on X. None of the women in Grok-generated images seen by NBC News were naked, and none appeared to be minors.
But experts told NBC News that it's difficult to research all of what Grok produces, especially when people access the software privately on Grok's app, on the Grok website or on the private Grok tab of X. It's also difficult to search X for all public examples of sexualized deepfakes. "When these images are being created and spread around, the people in the images don't necessarily find out," said Stefan Turkheimer, the vice president for public policy at RAINN, an advocacy group dedicated to fighting sexual assault. xAI, the Musk-owned AI startup that created Grok and also owns X, said Monday it wanted to review NBC News' findings. A representative did not respond to follow-up questions. On Tuesday, most of the images were no longer on X and were replaced with messages saying the post "is unavailable" or "violated the X Rules." X and Musk did not respond to a separate request for comment. The new examples seen by NBC News show that Grok users have updated their tactics to try to stay ahead of xAI's engineers and X's content moderators. While Grok now appears to turn down or ignore requests from users to put people "in a bikini," it has complied with other queries. The examples were not difficult to find using the search function on the X website. In one trend, a user asks Grok to create an image by melding two images they submit simultaneously: first, a photo of a woman, often a celebrity, and second, a drawing of a stick figure with its legs spread, either in a squat or a split. The request includes a prompt telling Grok to make the woman "strike the pose from the second image" or "match the pose." The resulting deepfake emphasizes the woman's crotch. A second trend involves users asking Grok to swap the clothing of women in two separate photos, with at least one of the photos involving tight or revealing clothing. 
And in a third set, users have uploaded what appear to be authentic photos of women and asked Grok to transform the photos into video clips, sometimes with results that are sexualized. In one example from March 12, Grok complied by generating a video in which a likeness of an actor fondles her breasts, based on an image in which she is not touching them. In another example from April 6, Grok created a video of the same actor with her legs spread apart from a photo in which her legs were crossed. At least one of the celebrities depicted in the deepfakes is someone who has publicly complained about such images in the past. The findings come after X committed to preventing the creation of such images. X said in a statement in January that it had "implemented technological measures to prevent the Grok account on X globally from allowing the editing of images of real people in revealing clothing such as bikinis. This restriction applies to all users, including paid subscribers." Genevieve Oh, an independent analyst whose research on deepfakes has been widely cited, said in an email that she believes Grok "was and still is unmistakably the largest nonconsensual synthetic nudity generator" in the world. While she said her research is ongoing, she said it's likely that Grok surpasses the output of all other "nudifier tools" combined. Similar apps have circulated for years, causing disruptions at schools and leaving victims searching for recourse. The Center for Countering Digital Hate, which estimated in January that Grok produced 3 million sexualized images during an 11-day period, said last week that it also was still finding nonconsensual deepfakes made by Musk's AI. "Perverts can still use Grok to put women and girls into sexualized positions and outfits, despite the platform's claims otherwise," Imran Ahmed, the center's CEO and founder, said in a statement. 
When Grok instituted the changes that allowed creating the sexualized deepfakes, it was unique among the most popular AI platforms in relaxing its guardrails to such a degree. Last month, there was a sign that Musk's companies could be backtracking from the commitment they made in January. In the Netherlands, where an advocacy organization sued xAI over sexualized deepfakes, the company argued at a court hearing that it could not stop all abuse of its tools and should not be penalized for the actions of malicious users, according to a description of the hearing by Reuters. Individual sex offenders have been persistent in trying to exploit system loopholes, not only on Grok but also elsewhere, according to law enforcement. The National Center for Missing & Exploited Children, which runs the CyberTipline, a nationwide centralized reporting system for online child exploitation, said members of the public are sending it reports describing incidents in which children or abuse survivors may have been exploited using Grok. NCMEC described similar complaints in January. NCMEC said, though, that it has not independently researched Grok's current capabilities. "NCMEC is concerned about any AI technology that has the potential to generate child sexual abuse material or otherwise facilitate the exploitation of children," it said in a statement. Musk has denied that Grok produced child sexual abuse material. He wrote in a Jan. 14 post that he was "not aware of any naked underage images generated by Grok. Literally zero." Eight separate law enforcement and regulatory agencies told NBC News this month that they are continuing their investigations of Grok's nudification and sexualization capabilities. 
Those authorities are the California attorney general's office, Australia's eSafety office, the Privacy Commissioner of Canada, the European Commission, Ireland's Data Protection Commission, the Paris public prosecutor and a pair of British agencies called the Office of Communications, or Ofcom, and the Information Commissioner's Office. "California's investigation is still very much underway. Beyond this, to protect an ongoing investigation, we do not have further updates to share at this time," the office of California Attorney General Rob Bonta said in an email. Even more government authorities expressed outrage in January and February, although not all of them have confirmed that their investigations are ongoing. Italy, which issued a warning in January that some Grok-created images could be criminal, decided not to launch its own investigation and chose instead to monitor the investigations by Ireland and the European Commission, a spokesperson said last week. (X has its European headquarters in Dublin.) Malaysia's communications commission, which blocked and then restored access to Grok in January, said in an email Tuesday that it was not currently investigating the matter. xAI separately faces several lawsuits over Grok's generation of sexualized images. They include two lawsuits proposed as class actions in federal court in California brought by women and girls whose likenesses were edited by Grok and a lawsuit by the city of Baltimore alleging violations of its consumer protection code. Court dockets in those cases do not show any responses yet from Musk's companies. A fourth case, in the Netherlands, led to an order last month for Grok to cease generating undressing images of adults or children. The investigations and lawsuits are underway at a sensitive time for Musk's business empire. In February, xAI was acquired by one of Musk's other companies, SpaceX, the rocket service provider and satellite internet business. 
In June, SpaceX plans an initial public offering of its shares to raise billions of dollars in additional capital. The decision to fold xAI into SpaceX means the rocket company almost certainly will be on the hook for any potential future fines related to Grok's behavior, legal experts said, although they said it's not clear whether such fines would be considered material to SpaceX's expected valuation of $2 trillion. SpaceX did not respond to a request for comment. Musk has promoted Grok's ability to create sexualized images. He has frequently posted AI-generated images of cartoonish women in sexual situations or tight or revealing clothing. In a post in October responding to someone who had shared an AI video of a sexualized robot, Musk complained: "Hmm, our competitors do better deep fakes. We will have to step up our game." xAI released a new generative AI video tool last year called "Imagine," which included something the company called "Spicy" mode, which allowed the creation of AI-generated not-safe-for-work content. The Verge reported that it created topless deepfakes of pop star Taylor Swift without the user's asking. In late December, users began to complain about a wave of sexualized deepfakes targeting women and girls whose photos Grok digitally edited to make them appear naked or nearly naked. Grok said Dec. 31 on X that there were "isolated cases where users prompted for and received AI images depicting minors in minimal clothing." In a separate post, the software posted that it "deeply regretted" what it had done. xAI initially did not change the product and instead put the onus on users to obey laws about child abuse. "Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content," Musk posted on Jan. 3. But the global backlash soon overwhelmed the company. 
A British watchdog, the Internet Watch Foundation, reported on "criminal imagery" that online users said was created with Grok, and different researchers found independently that Grok was producing thousands of sexualized images an hour. X restricted the AI image generation to paying customers only on Jan. 9 and announced the more comprehensive crackdown on Jan. 14. In February, French authorities raided X's offices in the country in connection with the deepfakes and other issues. They also said they planned to call X executives and employees -- including Musk and former X CEO Linda Yaccarino -- to Paris for interviews the week of April 20. X condemned the search as an "abusive act of law enforcement theater." It's not clear whether French authorities still hope to conduct those interviews this month. The Paris prosecutor's office said in a statement last week that its investigation continues, with no new information available. European Union regulators can sometimes take years to reach decisions. They spent two years investigating X before they announced in December that they were fining the company the equivalent of $140 million for breaching transparency obligations. Musk has vowed to fight the fine. Britain's Internet Watch Foundation said its analysts have been unable to search for criminal material on Grok beyond its pay barriers, so it does not know what Grok's users are generating now. The foundation said it is not enough for Musk to limit the AI tools to paying customers. "Our position is that tech companies must make sure the products they build and make available to the global public are safe by design," it said in a statement. "If that means Governments and regulators need to force them to design safer tools, then that is what must happen. Sitting and waiting for unsafe products to be abused before taking action is unacceptable," it said.
[5]
Grok faced App Store removal threat amid explicit deepfake concerns
TL;DR: Elon Musk's xAI app Grok nearly faced removal from Apple's App Store due to sexualized AI-generated deepfake images. Despite safeguards and updates, problematic content persists, prompting Apple to warn xAI that continued violations risk Grok's complete removal from the platform. Grok, the AI app from Elon Musk's xAI, reportedly came dangerously close to being pulled from Apple's App Store over a growing deepfake controversy. According to a new report, Apple warned xAI earlier this year that Grok could be removed entirely if it failed to address the spread of sexualized AI-generated images circulating on X. The warning came amid mounting pressure from US lawmakers, who had raised concerns about Grok's ability to generate explicit, non-consensual deepfakes of real people. Apple confirmed in a letter to senators that it had rejected earlier versions of the app, forcing xAI to make changes before it would allow updates to go live. The situation, however, appears far from resolved. A recent investigation has found that despite xAI implementing safeguards, such as prompt filters, monitoring systems, and model updates, problematic content is still being generated and shared online. That includes AI-generated images depicting real individuals in revealing or suggestive scenarios. xAI maintains that it strictly prohibits such use, but the persistence of these outputs raises questions about how effective those protections actually are. Apple, for its part, has made its stance clear, stating that apps enabling this type of content violate its policies and risk removal if compliance isn't maintained to the standards outlined in its policies. For now, Grok remains available on the App Store after Apple determined improvements had been made, but the situation is clearly on a knife's edge. If the issues continue, Apple has no more chances left for xAI: Grok could still be pulled from the App Store in its entirety.
[6]
Following nude-deepfake outcry, Apple nearly kicked Grok off App Store: report
Apple reportedly threatened to yank Elon Musk's Grok from its App Store over complaints the AI app wasn't doing enough to stop users from creating nude or overly sexualized deepfakes -- a potentially major blow as Grok came under international scrutiny for the content it was being used to create. The threat, which surfaced in a recently revealed missive to US senators, came after Apple determined that Grok -- along with Musk's social media site X -- was in violation of Apple rules barring overtly sexual material. Apple took the drastic step after asking X and Grok to clamp down on functions that allowed users to create sexualized deepfakes, according to a Jan. 30 letter cited by NBC News. Apple had determined Grok's efforts to address the problem -- which included the use of AI to undress images of people without their consent -- hadn't gone far enough, Apple reportedly wrote Democratic Sens. Ben Ray Luján of New Mexico, Ed Markey of Massachusetts and Ron Wyden of Oregon. X had announced a crackdown on using AI for undressing images on Jan. 14, saying that the restriction "applies to all users, including paid subscribers." And Apple reportedly said it asked X and Grok to come up with a plan to improve content moderation, though that was found to be lacking. "Apple ... determined that X had substantially resolved its violations, but the Grok app remained out of compliance. As a result, we rejected the Grok submission and notified the developer that additional changes to remedy the violation would be required, or the app could be removed from the App Store," Apple wrote the senators. Following Apple's threat, Grok submitted new code to the tech giant, according to NBC News -- apparently resolving the dispute for now. "Following further engagement and changes by the Grok developer, we determined that Grok had substantially improved and therefore approved its latest submission," Apple wrote the senators.
The letter, signed by Apple's senior director of government affairs Timothy Powderly, suggests that Grok came closer than previously known to losing access to the more than 2 billion devices connected to Apple's software marketplace. The missive came in the wake of public outrage over sexualized AI images made by Grok and posted on X, with French, UK and European Union authorities launching probes into X. Musk characterized the investigations as attempts at censorship. Prior to that, X and xAI, which runs Grok, sued Apple in August on allegations it had "dragged out" its review process for Grok updates, according to NBC News -- an allegation Apple denied. The threat from Apple reportedly came after the Democratic trio asked it and Google to remove X and Grok from the marketplaces. Wyden told NBC News he was "disappointed that Google didn't treat this matter with the same seriousness as Apple, given the horrific nature of the images these apps have produced." It "remains shocking that Trump's Justice Department took no action to hold X accountable for producing and distributing vast amounts of vile material," he added. It's not known whether Google made a threat similar to Apple's, NBC noted. Google reportedly told the senators it "immediately engaged" with Musk's teams "to underscore the importance of policy adherence and to receive assurances that they were committed to addressing the promotion of harmful content." The Post has sought comment from Apple, Google, X, Grok and Luján, Markey and Wyden.
[7]
Tech Clash Escalates: Apple Pressures Musk's Grok to Fix Safety Issues
The clash between Apple and Musk's Grok app has escalated. A leaked report indicates that the tech giant formally raised concerns about the latter's content moderation failures, hinting at an industry shift in which platform owners must tighten rules on generative AI tools that can produce harmful or illegal content. For Apple, the focus is on child safety. According to recent reports, Apple has sent a formal letter to Elon Musk regarding the safety policies of X and Grok. The letter specifically stated that if the safety rules don't become stricter, the company will remove these apps from the App Store. The message was clear, and a move like this would significantly limit Grok's reach, especially among mobile users who rely on Apple devices. Industry experts have opined that this warning underscores Apple's stance on enforcing strict policies, even when it's the billionaire business icon on the other side. The company has previously taken similar actions against apps that failed to control harmful user-generated content. Access to the App Store is essential for any application or tool, and removal would prevent Grok's growth.
[8]
Apple threatened to remove Elon Musk's Grok from App Store, leaked letter reveals: Here is why
X submitted an updated version of the Grok app for review, but Apple rejected it, saying the 'changes didn't go far enough.' Apple had privately threatened to remove Elon Musk's Grok app from the App Store after it was found violating the company's content guidelines, according to a letter obtained by NBC News (via 9To5Mac). Earlier this year, Apple faced intense pressure to take action against Grok and the X app after users discovered that the chatbot could generate sexualised deepfake images. Many of these images involved women, including minors. The issue quickly went viral, sparking backlash and raising serious concerns about safety and moderation. Although Apple did not publicly comment at the time, the letter reveals that the company acted behind the scenes. NBC News reports that Apple 'found X and Grok in violation of its guidelines,' and 'privately threatened to remove' Grok from the App Store. According to the report, Apple reached out to the teams of both X and Grok after it received complaints and saw news about the backlash and required 'the app developers to create a plan to improve content moderation.' X later submitted an updated version of the Grok app for review, but Apple rejected it, saying the 'changes didn't go far enough.' After that, Elon Musk's company submitted revised versions of both the X and Grok apps. Only one of them was approved initially. According to Apple's letter sent to US senators, 'Apple reviewed the next submissions made by the developers and determined that X had substantially resolved its violations, but the Grok app remained out of compliance.
As a result, we rejected the Grok submission and notified the developer that additional changes to remedy the violation would be required, or the app could be removed from the App Store.' 'Following further engagement and changes by the Grok developer, we determined that Grok had substantially improved and therefore approved its latest submission.' These details help explain why Grok suddenly introduced stricter rules during the controversy, including limiting who could access its image tools and restricting edits involving real people. However, the issue may not be fully resolved. In a separate report, NBC News claims that Grok continues to generate explicit images of people without their permission. While the number of such images has reportedly dropped since January, some users are still able to bypass restrictions and create revealing images of women.
Apple warned Elon Musk's xAI in January that its Grok AI chatbot would be removed from the App Store unless it addressed rampant sexualized deepfakes. The tech giant rejected initial fixes as insufficient, forcing multiple resubmissions before approval. Despite implemented safeguards, NBC News investigations reveal Grok continues generating nonconsensual sexual images of real people, raising questions about enforcement and the effectiveness of content moderation measures.
Apple privately threatened to remove Grok AI from its App Store in January after Elon Musk's xAI failed to adequately address a surge of nonconsensual sexual deepfakes generated by the AI chatbot, according to a letter obtained by NBC News [2]. The warning came after Apple received complaints and observed news coverage about sexualized deepfakes flooding X, the social media platform where Grok serves as the primary AI tool [1]. The tech giant contacted teams behind both X and Grok, demanding they "create a plan to improve content moderation" to address flagrant violations of App Store guidelines [2].
In a letter sent to US senators Ron Wyden, Ben Ray Luján, and Edward Markey on January 30, Apple's senior director of government affairs, Timothy Powderly, detailed the company's enforcement actions [3]. Apple stated it "abhors these kinds of images and the harms they inflict" and made clear that "apps that generate and proliferate such content violate our policies, and they are not permitted on our platform" [1]. The company determined that while X had "substantially resolved its violations," Grok "remained out of compliance," and rejected the initial app submission [2].
Apple warned xAI that "additional changes to remedy the violation would be required, or the app could be removed from the App Store" [3]. Only after further back-and-forth did Apple determine that Grok had "substantially improved" and approve its submission [2]. Throughout this process, both Grok and X appear to have remained live on the App Store, which may explain the confusing, haphazard rollout of moderation changes announced in real time [2]. These changes included restricting Grok image editing to paid subscribers, limiting the ability to edit images of real people, and geoblocking image generation in certain jurisdictions [3].
Apple left the door open to future enforcement, stating that "as we made clear to them -- as with all developers -- if they cannot comply with the Guidelines, they will be removed from the App Store" [1]. This behind-the-scenes intervention occurred even as the crisis unfolded in full public view, with advocacy groups and lawmakers demanding action from both Apple and Google [2].

Despite xAI's claims of implementing extensive safeguards, a recent NBC News investigation found that Grok continues to generate sexualized deepfakes with relative ease [4]. The review found dozens of AI-generated sexual images and videos depicting real people posted publicly on X over the past month, showing women whose likenesses were edited to put them in revealing clothing such as towels, sports bras, or bunny costumes [4]. Many depicted female pop stars or actors, including at least one celebrity who has publicly complained about such images in the past [4].

xAI stated it "strictly prohibits users from generating non-consensual explicit deepfakes and from using our tools to undress real people," citing safeguards including continuous monitoring of public usage, real-time analysis of evasion attempts, frequent model updates, and prompt filters [1]. However, users have updated their tactics to circumvent these restrictions, including asking Grok to merge photos with stick-figure poses, swap clothing between images, or transform photos into sexualized video clips [4].
Genevieve Oh, an independent analyst whose research on deepfakes has been widely cited, believes Grok "was and still is unmistakably the largest nonconsensual synthetic nudity generator" in the world [4]. The persistence of these violations raises critical questions about the effectiveness of both xAI's content moderation efforts and Apple's enforcement mechanisms. Stefan Turkheimer, vice president for public policy at RAINN, noted that "when these images are being created and spread around, the people in the images don't necessarily find out" [4].

Senator Ron Wyden criticized the situation, stating he appreciated "Apple's detailed response" but found it "shocking that [President Donald] Trump's Justice Department took no action to hold X accountable for producing and distributing vast amounts of vile material" [1]. The situation remains precarious for Grok, as Apple has made clear that continued violations risk complete removal from the platform [5]. For now, users and watchdogs will be monitoring whether xAI can implement truly effective safeguards, or whether Apple will follow through on its removal threat.