4 Sources
[1]
Apple quietly threatened Grok to curb sexual deepfakes or get pulled from App Store
Apple quietly threatened to kick Elon Musk's AI app, Grok, from its App Store in January over its failure to curb the surge of nonconsensual sexual deepfakes flooding X, according to NBC News. It was a muted show of force from one of tech's most powerful gatekeepers, made behind closed doors even as the undressing crisis unfolded in full public view and criticism of Apple's cowardice mounted.

In a letter obtained by NBC News, Apple told US senators it "contacted the teams behind both X and Grok after it received complaints and saw news coverage of the scandal" and demanded that the developers "create a plan to improve content moderation."

At the time, xAI's chatbot Grok was freely accessible on X and as a standalone app, with flimsy safeguards that allowed users to easily generate and share sexualized deepfakes and "undress" images of real people, disproportionately women and some of them apparently minors. As we reported at the time, these were flagrant and unambiguous violations of App Store guidelines that Apple often applies with an iron fist.

Apple, which profits from having apps like X and Grok on its digital store, has not spoken publicly about the issue or its behind-the-scenes intervention. Google, which profits similarly through its Google Play store, has also not commented publicly on the matter.

Apple said it reviewed proposed changes to the X and Grok apps. While the company concluded X had "substantially resolved its violations," Grok "remained out of compliance." Apple said it warned the developer that "additional changes to remedy the violation would be required, or the app could be removed from the App Store." Only after further back-and-forth did Apple determine Grok had "substantially improved" and approve its submission.

Throughout this covert back-and-forth, Grok and X appear to have remained live on the App Store, a drawn-out process that may help explain the confusing, haphazard rollout of moderation changes announced in real time.
This included limiting Grok on X to paying subscribers and attempting to stop Grok from undressing women. Our investigations revealed that neither was particularly effective beyond making the tool a bit harder to access. Later interventions, like X letting users block Grok from editing their photos, are also easily circumvented.

Despite Apple's approval and xAI's claims that it has tightened safeguards, Grok still appears to be able to generate sexualized deepfakes with relative ease. Cybersecurity sources told me they have been able to create explicit images of celebrities and political figures using the tool, and I have been able to produce similar images of myself and other consenting adults. NBC also reported similar findings yesterday.
[2]
Musk's Grok AI chatbot is still making sexual deepfakes, despite X's promise to stop it
Elon Musk's Grok AI software is still creating sexualized deepfakes despite his companies' efforts. Justine Goode / NBC News; Getty Images

Elon Musk's artificial intelligence software, Grok, continues to generate sexualized images of people without their consent, despite his company's pledge months ago to halt abusive deepfakes after a public backlash and government investigations.

A review by NBC News found dozens of AI-generated sexual images and videos depicting real people posted publicly on Musk's social media app, X, over the past month. The images show women whose likenesses were edited by the AI chatbot to put them in more revealing clothing, such as towels, sports bras, skintight Spider-Woman outfits or bunny costumes. Many of the women are pop stars or actors. The Grok software, created by Musk's company xAI, made the images at the request of users who tried to break through undressing restrictions the service put in place in January. Grok, via its X account, or the users then posted the images to X.

The images are similar to ones that sparked a firestorm of criticism in January, when Musk's companies freely allowed people to undress others simply by uploading photos and typing prompts such as "put her in a bikini." Musk's companies had cheered on the idea, promoting the "spicy mode" of his AI chatbot. The flood of fake images, including some of children, prompted government investigations on five continents.

The number of sexualized deepfakes created by Grok and posted to X appears to have decreased significantly since the flood in January. In posts reviewed by NBC News, the Grok software turns down or ignores many of the sexualized requests it receives publicly on X. None of the women in Grok-generated images seen by NBC News were naked, and none appeared to be minors.
But experts told NBC News that it's difficult to research all of what Grok produces, especially when people access the software privately on Grok's app, on the Grok website or on the private Grok tab of X. It's also difficult to search X for all public examples of sexualized deepfakes.

"When these images are being created and spread around, the people in the images don't necessarily find out," said Stefan Turkheimer, the vice president for public policy at RAINN, an advocacy group dedicated to fighting sexual assault.

xAI, the Musk-owned AI startup that created Grok and also owns X, said Monday it wanted to review NBC News' findings. A representative did not respond to follow-up questions. On Tuesday, most of the images were no longer on X and were replaced with messages saying the post "is unavailable" or "violated the X Rules." X and Musk did not respond to a separate request for comment.

The new examples seen by NBC News show that Grok users have updated their tactics to try to stay ahead of xAI's engineers and X's content moderators. While Grok now appears to turn down or ignore requests from users to put people "in a bikini," it has complied with other queries. The examples were not difficult to find using the search function on the X website.

In one trend, a user asks Grok to create an image by melding two images they submit simultaneously: first, a photo of a woman, often a celebrity, and second, a drawing of a stick figure with its legs spread, either in a squat or a split. The request includes a prompt telling Grok to make the woman "strike the pose from the second image" or "match the pose." The resulting deepfake emphasizes the woman's crotch.

A second trend involves users asking Grok to swap the clothing of women in two separate photos, with at least one of the photos involving tight or revealing clothing.
And in a third set, users have uploaded what appear to be authentic photos of women and asked Grok to transform the photos into video clips, sometimes with results that are sexualized. In one example from March 12, Grok complied by generating a video in which a likeness of an actor fondles her breasts, based on an image in which she is not touching them. In another example from April 6, Grok created a video of the same actor with her legs spread apart from a photo in which her legs were crossed. At least one of the celebrities depicted in the deepfakes is someone who has publicly complained about such images in the past.

The findings come after X committed to preventing the creation of such images. X said in a statement in January that it had "implemented technological measures to prevent the Grok account on X globally from allowing the editing of images of real people in revealing clothing such as bikinis. This restriction applies to all users, including paid subscribers."

Genevieve Oh, an independent analyst whose research on deepfakes has been widely cited, said in an email that she believes Grok "was and still is unmistakably the largest nonconsensual synthetic nudity generator" in the world. While she said her research is ongoing, she said it's likely that Grok surpasses the output of all other "nudifier tools" combined. Similar apps have circulated for years, causing disruptions at schools and leaving victims searching for recourse.

The Center for Countering Digital Hate, which estimated in January that Grok produced 3 million sexualized images during an 11-day period, said last week that it also was still finding nonconsensual deepfakes made by Musk's AI. "Perverts can still use Grok to put women and girls into sexualized positions and outfits, despite the platform's claims otherwise," Imran Ahmed, the center's CEO and founder, said in a statement.
When xAI relaxed Grok's guardrails to allow the creation of sexualized deepfakes, it was unique among the most popular AI platforms in loosening them to such a degree.

Last month, there was a sign that Musk's companies could be backtracking from the commitment they made in January. In the Netherlands, where an advocacy organization sued xAI over sexualized deepfakes, the company argued at a court hearing that it could not stop all abuse of its tools and should not be penalized for the actions of malicious users, according to a description of the hearing by Reuters.

Individual sex offenders have been persistent in trying to exploit system loopholes, not only on Grok but also elsewhere, according to law enforcement. The National Center for Missing & Exploited Children, which runs the CyberTipline, a nationwide centralized reporting system for online child exploitation, said members of the public are sending it reports describing incidents in which children or abuse survivors may have been exploited using Grok. NCMEC described similar complaints in January.

NCMEC said, though, that it has not independently researched Grok's current capabilities. "NCMEC is concerned about any AI technology that has the potential to generate child sexual abuse material or otherwise facilitate the exploitation of children," it said in a statement.

Musk has denied that Grok produced child sexual abuse material. He wrote in a Jan. 14 post that he was "not aware of any naked underage images generated by Grok. Literally zero."

Eight separate law enforcement and regulatory agencies told NBC News this month that they are continuing their investigations of Grok's nudification and sexualization capabilities.
Those authorities are the California attorney general's office, Australia's eSafety office, the Privacy Commissioner of Canada, the European Commission, Ireland's Data Protection Commission, the Paris public prosecutor and a pair of British agencies called the Office of Communications, or Ofcom, and the Information Commissioner's Office.

"California's investigation is still very much underway. Beyond this, to protect an ongoing investigation, we do not have further updates to share at this time," the office of California Attorney General Rob Bonta said in an email.

Even more government authorities expressed outrage in January and February, although not all of them have confirmed that their investigations are ongoing. Italy, which issued a warning in January that some Grok-created images could be criminal, decided not to launch its own investigation and chose instead to monitor the investigations by Ireland and the European Commission, a spokesperson said last week. (X has its European headquarters in Dublin.) Malaysia's communications commission, which blocked and then restored access to Grok in January, said in an email Tuesday that it was not currently investigating the matter.

xAI separately faces several lawsuits over Grok's generation of sexualized images. They include two lawsuits proposed as class actions in federal court in California brought by women and girls whose likenesses were edited by Grok and a lawsuit by the city of Baltimore alleging violations of its consumer protection code. Court dockets in those cases do not show any responses yet from Musk's companies. A fourth case, in the Netherlands, led to an order last month for Grok to cease generating undressing images of adults or children.

The investigations and lawsuits are underway at a sensitive time for Musk's business empire. In February, xAI was acquired by one of Musk's other companies, SpaceX, the rocket service provider and satellite internet business.
In June, SpaceX plans an initial public offering of its shares to raise billions of dollars in additional capital. The decision to fold xAI into SpaceX means the rocket company almost certainly will be on the hook for any potential future fines related to Grok's behavior, legal experts said, although they said it's not clear whether such fines would be considered material to SpaceX's expected valuation of $2 trillion. SpaceX did not respond to a request for comment.

Musk has promoted Grok's ability to create sexualized images. He has frequently posted AI-generated images of cartoonish women in sexual situations or tight or revealing clothing. In a post in October responding to someone who had shared an AI video of a sexualized robot, Musk complained: "Hmm, our competitors do better deep fakes. We will have to step up our game."

xAI released a new generative AI video tool last year called "Imagine," which included something the company called "Spicy" mode, which allowed the creation of AI-generated not-safe-for-work content. The Verge reported that it created topless deepfakes of pop star Taylor Swift unprompted.

In late December, users began to complain about a wave of sexualized deepfakes targeting women and girls whose photos Grok digitally edited to make them appear naked or nearly naked. Grok said Dec. 31 on X that there were "isolated cases where users prompted for and received AI images depicting minors in minimal clothing." In a separate post, the software posted that it "deeply regretted" what it had done.

xAI initially did not change the product and instead put the onus on users to obey laws about child abuse. "Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content," Musk posted on Jan. 3. But the global backlash soon overwhelmed the company.
A British watchdog, the Internet Watch Foundation, reported on "criminal imagery" that online users said was created with Grok, and different researchers found independently that Grok was producing thousands of sexualized images an hour. X restricted the AI image generation to paying customers only on Jan. 9 and announced the more comprehensive crackdown on Jan. 14.

In February, French authorities raided X's offices in the country in connection with the deepfakes and other issues. They also said they planned to call X executives and employees, including Musk and former X CEO Linda Yaccarino, to Paris for interviews the week of April 20. X condemned the search as an "abusive act of law enforcement theater." It's not clear whether French authorities still hope to conduct those interviews this month. The Paris prosecutor's office said in a statement last week that its investigation continues, with no new information available.

European Union regulators can sometimes take years to reach decisions. They spent two years investigating X before they announced in December that they were fining the company the equivalent of $140 million for breaching transparency obligations. Musk has vowed to fight the fine.

Britain's Internet Watch Foundation said its analysts have been unable to search for criminal material on Grok behind its paywall, so it does not know what Grok's users are generating now. The foundation said it is not enough for Musk to limit the AI tools to paying customers. "Our position is that tech companies must make sure the products they build and make available to the global public are safe by design," it said in a statement. "If that means Governments and regulators need to force them to design safer tools, then that is what must happen. Sitting and waiting for unsafe products to be abused before taking action is unacceptable," it said.
[3]
Tech Clash Escalates: Apple Pressures Musk's Grok to Fix Safety Issues
The clash between Apple and Musk's Grok app has escalated. A leaked report indicates that the tech giant formally raised concerns about the latter's content moderation failures, hinting at an industry shift in which platform owners must tighten rules on generative AI tools that can produce harmful or illegal content. For Apple, the focus is on child safety.

According to recent reports, Apple sent a formal letter to Elon Musk regarding the safety policies of X and Grok. The letter specifically stated that if the safety rules did not become stricter, the company would remove these apps from the App Store. The message was clear, and a move like this would significantly limit Grok's reach, especially among mobile users who rely on Apple devices.

Industry experts have said that this warning underscores Apple's stance on enforcing strict policies, even when it is the billionaire business icon on the other side. The company has previously taken similar actions against apps that failed to control harmful user-generated content. Access to the App Store is essential for any application or tool, and removal stunts growth.
[4]
Apple threatened to remove Elon Musk's Grok from App Store, leaked letter reveals: Here is why
X submitted an updated version of the Grok app for review, but Apple rejected it, saying the 'changes didn't go far enough.'

Apple had privately threatened to remove Elon Musk's Grok app from the App Store after it was found violating the company's content guidelines, according to a letter obtained by NBC News (via 9To5Mac).

Earlier this year, Apple faced intense pressure to take action against Grok and the X app after users discovered that the chatbot could generate sexualised deepfake images. Many of these images involved women, including minors. The issue quickly went viral, sparking backlash and raising serious concerns about safety and moderation.

Although Apple did not publicly comment at the time, the letter reveals that the company acted behind the scenes. NBC News reports that Apple 'found X and Grok in violation of its guidelines' and 'privately threatened to remove' Grok from the App Store. According to the report, Apple reached out to the teams of both X and Grok after it received complaints and saw news about the backlash, and required 'the app developers to create a plan to improve content moderation.'

X later submitted an updated version of the Grok app for review, but Apple rejected it, saying the 'changes didn't go far enough.' After that, Elon Musk's company submitted revised versions of both the X and Grok apps, and only one of them was initially approved. According to Apple's letter sent to US senators, 'Apple reviewed the next submissions made by the developers and determined that X had substantially resolved its violations, but the Grok app remained out of compliance. As a result, we rejected the Grok submission and notified the developer that additional changes to remedy the violation would be required, or the app could be removed from the App Store.' 'Following further engagement and changes by the Grok developer, we determined that Grok had substantially improved and therefore approved its latest submission.'

These details help explain why Grok suddenly introduced stricter rules during the controversy, including limiting who could access its image tools and restricting edits involving real people. However, the issue may not be fully resolved. In a separate report, NBC News claims that Grok continues to generate explicit images of people without their permission. While the number of such images has reportedly dropped since January, some users are still able to bypass restrictions and create revealing images of women.
Apple privately warned Elon Musk's xAI in January that Grok could be removed from the App Store over its failure to stop nonconsensual sexual deepfakes. A leaked letter reveals the tech giant demanded stricter content moderation after complaints about women and minors being targeted. Despite claimed improvements, NBC News found Grok still generates sexualized images.
Apple privately threatened to remove Grok from the App Store in January over its failure to curb nonconsensual sexual deepfakes, according to a leaked letter obtained by NBC News [2]. The intervention came after the tech giant received complaints and saw news coverage of the scandal involving Elon Musk's Grok AI chatbot, which allowed users to generate sexualized images of real people without consent [1]. Apple contacted teams behind both X and Grok, demanding developers "create a plan to improve content moderation" [4]. At the time, xAI's chatbot was freely accessible with flimsy safeguards that allowed users to easily generate and share "undress" images of real people, disproportionately targeting women and some apparently minors [1].
The muted show of force from one of tech's most powerful gatekeepers happened behind closed doors even as the crisis unfolded publicly and criticism over Apple's silence mounted [1]. These were flagrant violations of App Store content guidelines that Apple typically enforces with an iron fist. For Apple, the focus centered on child safety and preventing generative AI tools from producing harmful or illegal content [3].

When X submitted an updated version of Elon Musk's Grok app for review, Apple rejected it, stating the "changes didn't go far enough" [4]. Apple reviewed proposed changes and concluded that while X had "substantially resolved its violations," Grok "remained out of compliance" [1]. In its letter to US senators, Apple warned the developer that "additional changes to remedy the violation would be required, or the app could be removed from the App Store" [4]. Only after further engagement and changes did Apple determine Grok had "substantially improved" and approved its submission [4].
Throughout this covert back-and-forth, both Grok and X appear to have remained live on the App Store, a drawn-out process that may explain the confusing, haphazard rollout of moderation changes announced in real time [1]. This included limiting Grok on X to paying subscribers and attempting to stop Grok from undressing women, though investigations revealed neither was particularly effective beyond making the tool harder to access [1].

Despite Apple's approval and xAI's claims of tightened safeguards, Grok still appears capable of generating sexual deepfakes with relative ease [1]. A review by NBC News found dozens of AI-generated sexual images and videos depicting real people posted publicly on X over the past month [2]. The images show women whose likenesses were edited by the AI chatbot to put them in more revealing clothing, such as towels, sports bras, or bunny costumes, with many depicting pop stars or actors [2].
Cybersecurity sources confirmed they have been able to create explicit images of celebrities and political figures using the tool [1]. Users have updated their tactics to circumvent restrictions, such as asking Grok to merge photos of women with stick-figure poses emphasizing the crotch, or requesting clothing swaps between images [2]. Independent analyst Genevieve Oh stated she believes Grok "was and still is unmistakably the largest nonconsensual synthetic nudity generator" in the world [2].

The clash between Apple and Elon Musk's Grok underscores a broader industry shift in which platform owners must enforce stricter safety policies on generative AI tools that can produce harmful content [3]. Industry experts note this warning demonstrates Apple's willingness to enforce strict policies even against a billionaire business icon [3]. Access to the App Store remains essential for any application's growth, making removal a significant threat that limits reach among mobile users who rely on Apple devices [3].

Stefan Turkheimer, vice president for public policy at RAINN, an advocacy group dedicated to fighting sexual assault, noted the difficulty of tracking all content Grok produces: "When these images are being created and spread around, the people in the images don't necessarily find out" [2]. This is especially challenging when people access the software privately through Grok's app, website, or the private Grok tab of X [2]. Apple and Google, which both profit from having apps like X and Grok on their digital stores, have not spoken publicly about the issue beyond Apple's behind-the-scenes intervention [1].