108 Sources
108 Sources
[1]
Grok assumes users seeking images of underage girls have "good intent
For weeks, xAI has faced backlash over undressing and sexualizing images of women and children generated by Grok. One researcher conducted a 24-hour analysis of the Grok account on X and estimated that the chatbot generated over 6,000 images an hour flagged as "sexually suggestive or nudifying," Bloomberg reported. While the chatbot claimed that xAI supposedly "identified lapses in safeguards" that allowed outputs flagged as child sexual abuse material (CSAM) and was "urgently fixing them," Grok has proven to be an unreliable spokesperson, and xAI has not announced any fixes. A quick look at Grok's safety guidelines on its public Github confirms they were last updated two months ago. The Github also indicates that, despite prohibiting such content, Grok maintains programming that could make it likely to generate CSAM. Billed as "the highest priority," superseding "any other instructions" Grok may receive, these rules explicitly prohibit Grok from assisting with queries that "clearly intend to engage" in creating or distributing CSAM or otherwise sexually exploit children. However, the rules also direct Grok to "assume good intent" and "don't make worst-case assumptions without evidence" when users request images of young women. Using words like "'teenage' or 'girl' does not necessarily imply underage," Grok's instructions say. X declined Ars' request to comment. The only statement X Safety has made so far shows that Elon Musk's social media platform plans to blame users for generating CSAM, threatening to permanently suspend users and report them to law enforcement. Critics dispute that X's solution will end the Grok scandal, and child safety advocates and foreign governments are growing increasingly alarmed as X delays updates that could block Grok's undressing spree. Why Grok shouldn't "assume good intentions" Grok can struggle to assess users' intent, making it "incredibly easy" for the chatbot to generate CSAM under xAI's policy, Alex Georges, an AI safety researcher, told Ars. The chatbot has been instructed, for example, that "there are **no restrictions** on fictional adult sexual content with dark or violent themes," and Grok's mandate to assume "good intent" may create gray areas in which CSAM could be created. There's evidence that in relying on these guidelines, Grok is currently generating a flood of harmful images on X, with even more graphic images being created on the chatbot's standalone website and app, Wired reported. Researchers who surveyed 20,000 random images and 50,000 prompts told CNN that more than half of Grok's outputs featuring images of people sexualize women, with 2 percent depicting "people appearing to be 18 years old or younger." Some users specifically "requested minors be put in erotic positions and that sexual fluids be depicted on their bodies," researchers found. Grok isn't the only chatbot that sexualizes images of real people without consent, but its policy seems to leave safety at a surface level, Georges said, and xAI is seemingly unwilling to expand safety efforts to block more harmful outputs. Georges is the founder and CEO of AetherLab, an AI company that helps a wide range of firms -- including tech giants like OpenAI, Microsoft, and Amazon -- deploy generative AI products with appropriate safeguards. He told Ars that AetherLab works with many AI companieswho are concerned about blocking harmful companion bot outputs like Grok's. And although there are no industry norms -- creating a "Wild West" due to regulatory gaps, particularly in the US -- his experience with chatbot content moderation has convinced him that Grok's instructions to "assume good intent" are "silly" because xAI's requirement of "clear intent" doesn't mean anything operationally to the chatbot. "I can very easily get harmful outputs by just obfuscating my intent," Georges said, emphasizing that "users absolutely do not automatically fit into the good-intent bucket." And even "in a perfect world," where "every single user does have good intent," Georges noted, the model "will still generate bad content on its own because of how it's trained." Benign inputs can lead to harmful outputs, Georges explained, and a sound safety system would catch both benign and harmful prompts. Consider, he suggested, a prompt for "a pic of a girl model taking swimming lessons." The user could be trying to create an ad for a swimming school, or they could have malicious intent and be attempting to manipulate the model. For users with benign intent, prompting can "go wrong," Georges said, if Grok's training data statistically links certain "normal phrases and situations" to "younger-looking subjects and/or more revealing depictions." "Grok might have seen a bunch of images where 'girls taking swimming lessons' were young and that human 'models' were dressed in revealing things, which means it could produce an underage girl in a swimming pool wearing something revealing," Georges said. "So, a prompt that looks 'normal' can still produce an image that crosses the line." While AetherLab has never worked directly with xAI or X, Georges' team has "tested their systems independently by probing for harmful outputs, and unsurprisingly, we've been able to get really bad content out of them," Georges said. Leaving AI chatbots unchecked poses a risk to children. A spokesperson for the National Center for Missing and Exploited Children (NCMEC), which processes reports of CSAM on X in the US, told Ars that "sexual images of children, including those created using artificial intelligence, are child sexual abuse material (CSAM). Whether an image is real or computer-generated, the harm is real, and the material is illegal." Researchers at the Internet Watch Foundation told the BBC that users of dark web forums are already promoting CSAM they claim was generated by Grok. These images are typically classified in the United Kingdom as the "lowest severity of criminal material," researchers said. But at least one user was found to have fed a less-severe Grok output into another tool to generate the "most serious" criminal material, demonstrating how Grok could be used as an instrument by those seeking to commercialize AI CSAM. Easy tweaks to make Grok safer In August, xAI explained how the company works to keep Grok safe for users. But although the company acknowledged that it's difficult to distinguish "malignant intent" from "mere curiosity," xAI seemed convinced that Grok could "decline queries demonstrating clear intent to engage in activities" like child sexual exploitation, without blocking prompts from merely curious users. That report showed that xAI refines Grok over time to block requests for CSAM "by adding safeguards to refuse requests that may lead to foreseeable harm" -- a step xAI does not appear to have taken since late December, when reports first raised concerns that Grok was sexualizing images of minors. Georges said there are easy tweaks xAI could make to Grok to block harmful outputs, including CSAM, while acknowledging that he is making assumptions without knowing exactly how xAI works to place checks on Grok. First, he recommended that Grok rely on end-to-end guardrails, blocking "obvious" malicious prompts and flagging suspicious ones. It should then double-check outputs to block harmful ones, even when prompts are benign. This strategy works best, Georges said, when multiple watchdog systems are employed, noting that "you can't rely on the generator to self-police because its learned biases are part of what creates these failure modes." That's the role that AetherLab wants to fill across the industry, helping test chatbots for weakness to block harmful outputs by using "an 'agentic' approach with a shitload of AI models working together (thereby reducing the collective bias)," Georges said. xAI could also likely block more harmful outputs by reworking Grok's prompt style guidance, Georges suggested. "If Grok is, say, 30 percent vulnerable to CSAM-style attacks and another provider is 1 percent vulnerable, that's a massive difference," Georges said. It appears that xAI is currently relying on Grok to police itself, while using safety guidelines that Georges said overlook an "enormous" number of potential cases where Grok could generate harmful content. The guidelines do not "signal that safety is a real concern," Georges said, suggesting that "if I wanted to look safe while still allowing a lot under the hood, this is close to the policy I'd write." Chatbot makers must protect kids, NCMEC says X has been very vocal about policing its platform for CSAM since Musk took over Twitter, but under former CEO Linda Yaccarino, the company adopted a broad protective stance against all image-based sexual abuse (IBSA). In 2024, X became one of the earliest corporations to voluntarily adopt the IBSA Principles that X now seems to be violating by failing to tweak Grok. Those principles seek to combat all kinds of IBSA, recognizing that even fake images can "cause devastating psychological, financial, and reputational harm." When it adopted the principles, X vowed to prevent the nonconsensual distribution of intimate images by providing easy-to-use reporting tools and quickly supporting the needs of victims desperate to block "the nonconsensual creation or distribution of intimate images" on its platform. Kate Ruane, the director of the Center for Democracy and Technology, which helped form the working group behind the IBSA Principles, told Ars that although the commitments X made were "voluntary," they signaled that X agreed the problem was a "pressing issue the company should take seriously." "They are on record saying that they will do these things, and they are not," Ruane said. As the Grok controversy sparks probes in Europe, India, and Malaysia, xAI may be forced to update Grok's safety guidelines or make other tweaks to block the worst outputs. In the US, xAI may face civil suits under federal or state laws that restrict intimate image abuse. If Grok's harmful outputs continue into May, X could face penalties under the Take It Down Act, which authorizes the Federal Trade Commission to intervene if platforms don't quickly remove both real and AI-generated non-consensual intimate imagery. But whether US authorities will intervene any time soon remains unknown, as Musk is a close ally of the Trump administration. A spokesperson for the Justice Department told CNN that the department "takes AI-generated child sex abuse material extremely seriously and will aggressively prosecute any producer or possessor of CSAM." "Laws are only as good as their enforcement," Ruane told Ars. "You need law enforcement at the Federal Trade Commission or at the Department of Justice to be willing to go after these companies if they are in violation of the laws." Child safety advocates seem alarmed by the sluggish response. "Technology companies have a responsibility to prevent their tools from being used to sexualize or exploit children," NCMEC's spokesperson told Ars. "As AI continues to advance, protecting children must remain a clear and nonnegotiable priority."
[2]
Governments grapple with the flood of non-consensual nudity on X | TechCrunch
For the past two weeks, X has been flooded with AI-manipulated nude images, created by the Grok AI chatbot. An alarming range of women have been affected by the non-consensual nudes, including prominent models and actresses, as well as news figures, crime victims, and even world leaders. A December 31st research paper from Copyleaks estimated roughly one image was being posted each minute, but later tests found far more. A sample gathered from January 5th to 6th found 6,700 per hour over the 24-hour period. But while public figures from around the world have decried the choice to release the model without safeguards, there are few clear mechanisms for regulators hoping to rein in Elon Musk's new image-manipulating system. The result has become a painful lesson in the limits of tech regulation -- and a forward-looking challenge for regulators hoping to make a mark. Unsurprisingly, the most aggressive action has come from the European Commission, which on Thursday ordered xAI to retain all documents related to its Grok chatbot. The move doesn't necessarily mean the commission has opened up a new investigation, but it's a common precursor to such action. It's particularly ominous given recent reporting from CNN that suggests Elon Musk may have personally intervened to prevent safeguards from being placed on what images could be generated by Grok. It's unclear whether X has made any technical changes to the Grok model, although the public media tab for Grok's X account has been removed. In a statement, the company specifically denounced the use of AI tools to produce child sexual imagery. "Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content," the X Safety account posted on January 3rd, echoing a previous tweet by Elon Musk. In the meantime, regulators around the world have issued stern warnings. The United Kingdom's Ofcom issued a statement on Monday, saying it was in touch with xAI and "will undertake a swift assessment to determine whether there are potential compliance issues that warrant investigation." In a radio interview on Thursday, UK Prime Minister Keir Starmer called the phenomenon "disgraceful" and "disgusting," saying "Ofcom has our full support to take action in relation to this." In a post on LinkedIn, Australian eSafety commissioner Julie Inman-Grant said her office had received a doubling in complaints related to Grok since late 2025. But Inman-Grant stopped short of taking action against xAI, saying only, "We will use the range of regulatory tools at our disposal to investigate and take appropriate action." By far the largest market to threaten action is India, where Grok was the subject of a formal complaint from a member of Parliament. On January, India's communications regulator MeitY ordered X to address the issue and submit an "action-taken" report within 72 hours -- a deadline that was subsequently extended by 48 hours. While a report was submitted to the regulator on January 7th, it's unclear whether MeitY will be satisfied with the response. If not, X could lose its safe harbor status in India, a potentially serious limitation on its ability to operate within the country.
[3]
X blames users for Grok-generated CSAM; no fixes announced
It seems that instead of updating Grok to prevent outputs of sexualized images of minors, X is planning to purge users generating content that the platform deems illegal, including Grok-generated child sexual abuse material (CSAM). On Saturday, X Safety finally posted an official response after nearly a week of backlash over Grok outputs that sexualized real people without consent. Offering no apology for Grok's functionality, X Safety blamed users for prompting Grok to produce CSAM while reminding them that such prompts can trigger account suspensions and possible legal consequences. "We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary," X Safety said. "Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content." X Safety's post boosted a reply on another thread on the platform in which X owner Elon Musk reiterated the consequences users face for inappropriate prompting. That reply came to a post from an X user, DogeDesigner, who suggested that Grok can't be blamed for "creating inappropriate images," despite Grok determining its own outputs. "That's like blaming a pen for writing something bad," DogeDesigner opined. "A pen doesn't decide what gets written. The person holding it does. Grok works the same way. What you get depends a lot on what you put in." But image generators like Grok aren't forced to output exactly what the user wants, like a pen. One of the reasons the Copyright Office won't allow AI-generated works to be registered is the lack of human agency in determining what AI image generators spit out. Chatbots are similarly non-deterministic, generating different outputs for the same prompt. That's why, for many users questioning why X won't filter out CSAM in response to Grok's generations, X's response seems to stop well short of fixing the problem by only holding users responsible for outputs. In a comment on the DogeDesigner thread, a computer programmer pointed out that X users may inadvertently generate inappropriate images -- back in August, for example, Grok generated nudes of Taylor Swift without being asked. Those users can't even delete problematic images from the Grok account to prevent them from spreading, the programmer noted. In that scenario, the X user could risk account suspension or legal liability if law enforcement intervened, X Safety's response suggested, without X ever facing accountability for unexpected outputs. X did not immediately respond to Ars' request to clarify if any updates were made to Grok following the CSAM controversy. Many media outlets weirdly took Grok at its word when the chatbot responded to prompts demanding an apology by claiming that X would be improving its safeguards. But X Safety's response now seems to contradict the chatbot, which as Ars noted last week should never be considered reliable as a spokesperson. While X's response continues to disappoint critics, some top commenters on the X Safety post have called for Apple to take action if X won't. They suggested that X may be violating App Store rules against apps allowing user-generated content that objectifies real people. Until Grok starts transparently filtering out CSAM or other outputs "undressing" real people without their consent, the chatbot and X should be banned, critics said. An App Store ban would likely infuriate Musk, who last year sued Apple, partly over his frustrations that the App Store never put Grok on its "Must Have" apps list. In that ongoing lawsuit, Musk alleged that Apple's supposed favoring of ChatGPT in the App Store made it impossible for Grok to catch up in the chatbot market. That suggests that an App Store ban would potentially doom Grok's quest to overtake ChatGPT's lead. Apple did not immediately respond to Ars' request to comment on whether Grok's outputs or current functionality violate App Store rules. No one knows how X plans to purge bad prompters While some users are focused on how X can hold users responsible for Grok's outputs when X is the one training the model, others are questioning how exactly X plans to moderate illegal content that Grok seems capable of generating. X is so far more transparent about how it moderates CSAM posted to the platform. Last September, X Safety reported that it has "a zero tolerance policy towards CSAM content," the majority of which is "automatically" detected using proprietary hash technology to proactively flag known CSAM. Under this system, more than 4.5 million accounts were suspended last year, and X reported "hundreds of thousands" of images to the National Center for Missing and Exploited Children (NCMEC). The next month, X Head of Safety Kylie McRoberts confirmed that "in 2024, 309 reports made by X to NCMEC led to arrests and subsequent convictions in 10 cases," and in the first half of 2025, "170 reports led to arrests." "When we identify apparent CSAM material, we act swiftly, and in the majority of cases permanently suspend the account which automatically removes the content from our platform," X Safety said. "We then report the account to the NCMEC, which works with law enforcement globally -- including in the UK -- to pursue justice and protect children." At that time, X promised to "remain steadfast" in its "mission to eradicate CSAM," but if left unchecked, Grok's harmful outputs risk creating new kinds of CSAM that this system wouldn't automatically detect. On X, some users suggested the platform should increase reporting mechanisms to help flag potentially illegal Grok outputs. Another troublingly vague aspect of X Safety's response is the definitions that X is using for illegal content or CSAM, some X users suggested. Across the platform, not everybody agrees on what's harmful. Some critics are disturbed by Grok generating bikini images that sexualize public figures, including doctors or lawyers, without their consent, while others, including Musk, consider making bikini images to be a joke. Where exactly X draws the line on AI-generated CSAM could determine whether images are quickly removed or whether repeat offenders are detected and suspended. Any accounts or content left unchecked could potentially traumatize real kids whose images may be used to prompt Grok. And if Grok should ever be used to flood the Internet with fake CSAM, recent history suggests that it could make it harder for law enforcement to investigate real child abuse cases.
[4]
Why Are Grok and X Still Available in App Stores?
Elon Musk's AI chatbot Grok is being used to flood X with thousands of sexualized images of adults and apparent minors wearing minimal clothing. Some of this content appears to not only violate X's own policies, which prohibit sharing illegal content such as child sexual abuse material (CSAM), but may also violate the guidelines of Apple's App Store and the Google Play store. Apple and Google both explicitly ban apps containing CSAM, which is illegal to host and distribute in many countries. The tech giants also forbid apps that contain pornographic material or facilitate harassment. The Apple App Store says it doesn't allow "overtly sexual or pornographic material," as well as "defamatory, discriminatory, or mean-spirited content," especially if the app is "likely to humiliate, intimidate, or harm a targeted individual or group." The Google Play store bans apps that "contain or promote content associated with sexually predatory behavior, or distribute non-consensual sexual content," and well as programs that "contain or facilitate threats, harassment, or bullying." Over the past two years, Apple and Google removed a number of "nudify" and AI image-generation apps after investigations by the BBC and 404 Media found they were being advertised or used to effectively turn ordinary photos into explicit images of women without their consent. But at the time of publication, both the X app and the standalone Grok app remain available in both app stores. Apple, Google, and X did not respond to requests for comment. Grok is operated by Musk's multibillion-dollar artificial intelligence startup xAI, which also did not respond to questions from WIRED. In a public statement published on January 3, X said that it takes action against illegal content on its platform, including CSAM. "Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content," the company warned. Sloan Thompson, the director of training and education at EndTAB, a group that teaches organizations how to prevent the spread of nonconsensual sexual content, says it is "absolutely appropriate" for companies like Apple and Google to take action against X and Grok. The amount of nonconsensual explicit images on X generated by Grok has exploded over the past two weeks. One researcher told Bloomberg that over a 24-hour period between January 5 and 6, Grok was producing roughly 6,700 images every hour they identified as "sexually suggestive or nudifying." Another analyst collected more than 15,000 URLs of images that Grok created on X during a two-hour period on December 31. WIRED reviewed approximately one-third of the images, and found that many of them featured women dressed in revealing clothing. Over 2,500 were marked as no longer available within a week, while almost 500 were labeled as "age-restricted adult content." Earlier this week, a spokesperson for the European Commission, the governing body of the European Union, publicly condemned the sexually explicit and non-consensual images being generated by Grok on X as "illegal" and "appalling," telling Reuters that such content "has no place in Europe." On Thursday, the EU ordered X to retain all internal documents and data relating to Grok until the end of 2026, extending a prior retention directive, to ensure authorities can access materials relevant to compliance with the EU's Digital Services Act, though a new formal investigation has yet to be announced. Regulators in other countries, including the UK, India, and Malaysia have also said they are investigating the social media platform.
[5]
French and Malaysian authorities are investigating Grok for generating sexualized deepfakes | TechCrunch
Over the past few days, France and Malaysia have joined India in condemning Grok for creating sexualized deepfakes of women and minors. The chatbot, built by Elon Musk's AI startup xAI and featured on his social media platform X, posted an apology to its account earlier this week, writing, "I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt." The statement continued, "This violated ethical standards and potentially US laws on [child sexual abuse material]. It was a failure in safeguards, and I'm sorry for any harm caused. xAI is reviewing to prevent future issues." It's not clear who is actually apologizing or accepting responsibility in the statement above. Defector's Albert Burneko noted that Grok is "not in any real sense anything like an 'I'," which in his view makes the apology "utterly without substance" as "Grok cannot be held accountable in any meaningful way for having turned Twitter into an on-demand CSAM factory." Futurism found that in addition to generating nonconsensual pornographic images, Grok has also been used to generate images of women being assaulted and sexually abused. "Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content," Musk posted on Saturday. Some governments have taken notice, with India's IT ministry issuing an order on Friday saying that X must take action to restrict Grok from generating content that is "obscene, pornographic, vulgar, indecent, sexually explicit, pedophilic, or otherwise prohibited under law." The order said that X must respond within 72 hours or risk losing the "safe harbor" protections that shield it from legal liability for user-generated content. French authorities also said they are taking action, with the Paris prosecutor's office telling Politico that it will investigate the proliferation of sexually explicit deepfakes on X. The French digital affairs office said three government ministers have reported "manifestly illegal content" to the prosecutor's office and to a government online surveillance platform "to obtain its immediate removal." The Malaysian Communications and Multimedia Commission also posted a statement saying that it has "taken note with serious concern of public complaints about the misuse of artificial intelligence (AI) tools on the X platform, specifically the digital manipulation of images of women and minors to produce indecent, grossly offensive, and otherwise harmful content." The commission added that it is "presently investigating the online harms in X."
[6]
xAI silent after Grok sexualized images of kids; dril mocks Grok's "apology"
For days, xAI has remained silent after its chatbot Grok admitted to generating sexualized AI images of minors, which could be categorized as violative child sexual abuse materials (CSAM) in the US. According to Grok's "apology" -- which was generated by a user's request, not posted by xAI -- the chatbot's outputs may have been illegal: "I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt. This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I'm sorry for any harm caused. xAI is reviewing to prevent future issues." Ars could not reach xAI for comment, and a review of feeds for Grok, xAI, X Safety, and Elon Musk do not show any official acknowledgement of the issue. The only reassurance that xAI is fixing the issue has come from Grok, which noted in another post that xAI has "identified lapses in safeguards and are urgently fixing them." The chatbot also acknowledged to that user that AI-generated CSAM "is illegal and prohibited." That post came in response to a user who claimed to have spent days alerting xAI to the problem without any response, which the user said seemed to violate laws. Grok agreed. "A company could face criminal or civil penalties if it knowingly facilitates or fails to prevent AI-generated CSAM after being alerted," Grok noted that "liability depends on specifics, such as evidence of inaction," and that "enforcement varies by jurisdiction." Rather than continue to ping Grok, the chatbot recommended that the user contact the FBI or the National Center for Missing & Exploited Children (NCMEC) to report its outputs. Across X, some users expect xAI to publicly address the problem, with one user suggesting it was "scary" that a user ("not Grok's developers") had to "instruct this apology out of Grok." But xAI appears to be leaning on Grok to answer for itself.
[7]
Grok Is Generating Sexual Content Far More Graphic Than What's on X
This story contains descriptions of explicit sexual content and sexual violence. Elon Musk's Grok chatbot has drawn outrage and calls for investigation after being used to flood X with "undressed" images of women and sexualized images of what appear to be minors. However, that's not the only way people have been using the AI to generate sexualized images. Grok's website and app, which are are separate from X, include sophisticated video generation that is not available on X and is being used to produce extremely graphic, sometimes violent, sexual imagery of adults that is vastly more explicit than images created by Grok on X. It may also have been used to create sexualized videos of apparent minors. Unlike on X, where Grok's output is public by default, images and videos created on the Grok app or website using its Imagine model are not shared openly. If a user has shared an Imagine URL, though, it may be visible to anyone. A cache of around 1,200 Imagine links, plus a WIRED review of those either indexed by Google or shared on a deepfake porn forum, shows disturbing sexual videos that are vastly more explicit than images created by Grok on X. One photorealistic Grok video, hosted on Grok.com, shows a fully naked AI-generated man and woman, covered in blood across the body and face, having sex, while two other naked women dance in the background. The video is framed by a series of images of anime-style characters. Another photorealistic video includes an AI-generated naked woman with a knife inserted into her genitalia, with blood appearing on her legs and the bed. Other short videos include imagery of real-life female celebrities engaged in sexual activities and a series of videos also appear to show television news presenters lifting up their tops to expose their breasts. One Grok-produced video depicts a recording of CCTV footage being played on TV, where a security guard fondles a topless woman in the middle of a shopping mall. Multiple videos -- likely created to try and avoid Grok's content safety systems, which may restrict graphic content -- impersonate Netflix "movie" posters: two videos show a naked AI depiction of Diana, Princess of Wales having sex with two men on a bed with an overlay depicting the logos of Netflix and its series The Crown. Around 800 of the archived Imagine URLs contain either video or images created by Grok, says Paul Bouchaud, the lead researcher at Paris-based non-profit AI Forensics, who reviewed the content. The URLs have all been archived since August last year and only represent a tiny snapshot of how people have used Grok, which has likely created millions of images overall. "They are overwhelmingly sexual content," Bouchaud says of the cache of 800 archived Grok videos and images. "Most of the time it's manga and hentai explicit content and [other] photorealistic ones. We have full nudity, full pornographic videos with audio, which is quite novel." Bouchaud estimates that of the 800 posts, a little less than 10 percent of the content appears to be related to child sexual abuse material (CSAM). "Most of the time it's hentai, but there are also instances of photorealistic people, very young, doing sexual activities," Bouchaud says. "We still do observe some videos of very young appearing women undressing and engaging in activities with men," they say. "It's disturbing to another level." The researcher says they reported around 70 Grok URLs, which may contain sexualized content of minors, to regulators in Europe. In many countries, AI-generated CSAM, including drawings or animations, can be considered illegal. French officials did not immediately respond to WIRED's request for comment; however, the Paris prosecutor's office recently said two lawmakers had filed complaints with its office, which is currently investigating the social media company, about the "stripped" images.
[8]
X's deepfake machine is infuriating policymakers around the globe
Lauren Feiner is a senior policy reporter at The Verge, covering the intersection of Silicon Valley and Capitol Hill. She spent 5 years covering tech policy at CNBC, writing about antitrust, privacy, and content moderation reform. X's Grok chatbot hasn't stopped accepting users' requests to strip down women and, in some cases, apparent minors to AI-generated bikinis. According to some reports, the flood of AI-generated images includes more extreme content that potentially violates laws against nonconsensual intimate imagery (NCII) and child sexual abuse material (CSAM). Even in the US, where X owner Elon Musk has close ties with the government, some legislators are criticizing the platform -- though clear action is still in short supply. Several international regulators have spoken out against Grok's undressing spree. The UK communications regulator Ofcom said in a statement that it had "made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK," and would quickly assess "potential compliance issues that warrant investigation." European Commission spokesperson Thomas Regnier said at a press conference that Grok's outputs were "illegal" and "appalling." India's IT ministry threatened to strip X's legal immunity for user-generated posts unless it promptly submitted a description of actions it's taken to prevent illegal content. Regulators from Australia, Brazil, France, and Malaysia are also tracking the developments. Tech platforms in the US are largely protected from liability for their users' posts under Section 230 of the Communications Decency Act, but even the co-author of the 1996 law, Sen. Ron Wyden (D-OR), said the rule should not protect a company's own AI outputs. "Given that the Trump administration is going to the mat to protect pedophiles, states should step in to hold Musk and X accountable," Wyden wrote on Bluesky. Some of the images created by Grok could also violate the Take It Down Act. Under that law, the DOJ now has authority to try to impose criminal penalties against individuals who publish even AI-facilitated NCII, while platforms that fail to quickly remove flagged content could be targeted by the Federal Trade Commission starting in mid-May. Grok's large-scale sexual image generation appears to be exactly the kind of thing that the Take It Down Act was designed to deal with. "X must change this," Sen. Amy Klobuchar (D-MN), a lead sponsor of the bill, wrote on the platform. "If they don't, my bipartisan TAKE IT DOWN Act will soon require them to." Phoebe Keller, spokesperson for Klobuchar's co-sponsor, Sen. Ted Cruz (R-TX), declined to comment on the reporting about Grok. Some lawmakers are calling for new targeted legislation. Rep. Jake Auchincloss (D-MA) called Grok's behavior "grotesque" in a statement and said his proposal, the Deepfake Liability Act, would "make hosting sexualized deepfakes of women and kids a Board-level problem for Musk & [Meta CEO Mark] Zuckerberg." But other lawmakers insist that enforcers already have the tools to deal with Grok's actions. "Attorney General [Pam] Bondi has a simple choice: protect the President's Big Tech friends or defend the young people of America," Sen. Richard Blumenthal (D-CT) said in a statement. Rep. Madeleine Dean (D-PA), who helped lead the House version of the Take It Down Act, said in a statement that she is "horrified and disgusted by reports that Elon Musk's Grok chatbot has flooded the internet with AI-generated explicit images of women and children." Dean called on Bondi and FTC Chair Andrew Ferguson to "launch an immediate investigation into Grok and xAI to protect our children, ensure this never happens again, and bring these perpetrators to justice." Nearly eight months after the Take It Down Act's signing, she said, "it's unacceptable that software used by the federal government is vulnerable to such heinous and illegal uses." But critics of the Take It Down Act -- including the Cyber Civil Rights Initiative (CCRI), which has long pushed for criminalizing the spread of NCII -- have warned for months that Donald Trump's administration could use the law to punish its enemies while laxly enforcing it against allies like Musk and X. Trump's FTC has been largely silent on the recent X controversy. The agency did not respond to The Verge's request for comment. Department of Justice spokesperson Natalie Baldassarre, however, said in a statement that it "takes AI-generated child sex abuse material extremely seriously and will aggressively prosecute any producer or possessor of CSAM." Absent federal action in the US, state attorneys general could still investigate X for actions that might harm their own residents. It's not yet public if any such probes are underway. California Department of Justice spokesperson Elissa Perez would not confirm or deny the existence of any potential or ongoing investigations, but wrote that Attorney General Rob Bonta "is deeply concerned about the harms of chatbots and remains committed to ensuring AI safety, especially when it comes to protecting California's children." She said Bonta has been "very involved" in such efforts, "including by supporting state legislation aiming to protect children from AI companion chatbots and by directly engaging with AI companies." California law forbids the production and distribution of content showing minors engaged in or mimicking sexual conduct, including AI-generated depictions. New Mexico Attorney General Raúl Torrez, whose office has filed several prominent lawsuits against major tech companies, is another potential candidate for action. "We are extremely concerned about recent reports of various AI platforms, including Grok, which lack basic safeguards to ensure that their users do not violate the dignity and privacy rights of others, especially children," Torrez said in a statement. "As with social media, we will intend to aggressively police this space and use every available tool at our disposal to hold technology companies accountable for the harm presented by these products." Geoff Burgan, a spokesperson for New York Attorney General Letitia James, said their office is also reviewing the Grok incidents. At the same time, the Trump administration and some Republican allies in Congress have been pushing to block states from enforcing their own laws regulating the use of AI, via a recent executive order and multiple so-far failed attempts to codify the restrictions into law. "While the White House works with Republicans to try to stop states from regulating AI, Grok is churning out sexualized images of women and children," House Energy and Commerce Committee ranking member Frank Pallone (D-NJ) said in a statement. "Let's be clear, Elon Musk is laughing about people being victimized by his platform and President Trump decided to invite him to dinner. Protecting victims is clearly not a priority for either of them." At least one Republican criticized X's proliferation of the images, though her solution in part includes making Trump's AI executive order into law. "No AI chatbot should distribute this harmful content, and the company must take immediate action to tighten its guardrails and ensure Grok cannot violate its terms of service by creating these images," Sen. Marsha Blackburn (R-TN), a co-author of the Kids Online Safety Act, said in a statement. Blackburn has previewed her own legislation she says would codify Trump's executive order by creating a federal framework for AI legislation, called the TRUMP AMERICA AI Act. "This is exactly why Congress must take action to pass legislation that protects children online."
[9]
Grok produces sexualized photos of women and minors for users on X - a legal scholar explains why it's happening and what can be done
Since the end of December, 2025, X's artificial intelligence chatbot, Grok, has responded to many users' requests to undress real people by turning photos of the people into sexually explicit material. After people began using the feature, the social platform company faced global scrutiny for enabling users to generate nonconsensual sexually explicit depictions of real people. The Grok account has posted thousands of "nudified" and sexually suggestive images per hour. Even more disturbing, Grok has generated sexualized images and sexually explicit material of minors. X's response: Blame the platform's users, not us. The company issued a statement on Jan. 3, 2026, saying that "Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content." It's not clear what action, if any, X has taken against any users. As a legal scholar who studies the intersection of law and emerging technologies, I see this flurry of nonconsensual imagery as a predictable outcome of the combination of X's lax content moderation policies and the accessibility of powerful generative AI tools. Targeting users The rapid rise in generative AI has led to countless websites, apps and chatbots that allow users to produce sexually explicit material, including "nudification" of real children's images. But these apps and websites are not as widely known or used as any of the major social media platforms, like X. State legislatures and Congress were somewhat quick to respond. In May 2025, Congress enacted the Take It Down Act, which makes it a criminal offense to publish nonconsensual sexually explicit material of real people. The Take It Down Act criminalizes both the nonconsensual publication of "intimate visual depictions" of identifiable people and AI- or otherwise computer-generated depictions of identifiable people. Those criminal provisions apply only to any individuals who post the sexually explicit content, not to the platforms that distribute the content, such as social media websites. Other provisions of the Take It Down Act, however, require platforms to establish a process for the people depicted to request the removal of the imagery. Once a "Take It Down Request" is submitted, a platform must remove the sexually explicit depiction within 48 hours. But these requirements do not take effect until May 19, 2026. Problems with platforms Meanwhile, user requests to take down the sexually explicit imagery produced by Grok have apparently gone unanswered. Even the mother of one of Elon Musk's children, Ashley St. Clair, has not been able to get X to remove the fake sexualized images of her that Musk's fans produced using Grok. The Guardian reports that St. Clair said her "complaints to X staff went nowhere." This does not surprise me because Musk gutted then-Twitter's Trust and Safety advisory group shortly after he acquired the platform and fired 80% of the company's engineers dedicated to trust and safety. Trust and safety teams are typically responsible for content moderation and initiatives to prevent abuse at tech companies. Publicly, it appears that Musk has dismissed the seriousness of the situation. Musk has reportedly posted laugh-cry emojis in response to some of the images, and X responded to a Reuters reporter's inquiry with the auto-reply "Legacy Media Lies." Limits of lawsuits Civil lawsuits like the one filed by the parents of Adam Raine, a teenager who committed suicide in April 2025 after interacting with OpenAI's ChatGPT, are one way to hold platforms accountable. But lawsuits face an uphill battle in the United States given Section 230 of the Communications Decency Act, which generally immunizes social media platforms from legal liability for the content that users post on their platforms. Supreme Court Justice Clarence Thomas and many legal scholars, however, have argued that Section 230 has been applied too broadly by courts. I generally agree that Section 230 immunity needs to be narrowed because immunizing tech companies and their platforms for their deliberate design choices - how their software is built, how the software operates and what the software produces - falls outside the scope of Section 230's protections. In this case, X has either knowingly or negligently failed to deploy safeguards and controls in Grok to prevent users from generating sexually explicit imagery of identifiable people. Even if Musk and X believe that users should have the ability to generate sexually explicit images of adults using Grok, I believe that in no world should X escape accountability for building a product that generates sexually explicit material of real-life children. Regulatory guardrails If people cannot hold platforms like X accountable via civil lawsuits, then it falls to the federal government to investigate and regulate them. The Federal Trade Commission, the Department of Justice or Congress, for example, could investigate X for Grok's generation of nonconsensual sexually explicit material. But with Musk's renewed political ties to President Donald Trump, I do not expect any serious investigations and accountability anytime soon. For now, international regulators have launched investigations against X and Grok. French authorities have commenced investigations into "the proliferation of sexually explicit deepfakes" from Grok, and the Irish Council for Civil Liberties and Digital Rights Ireland have strongly urged Ireland's national police to investigate the "mass undressing spree." The U.K. regulatory agency Office of Communications said it is investigating the matter, and regulators in the European Commission, India and Malaysia are reportedly investigating X as well. In the United States, perhaps the best course of action until the Take It Down Act goes into effect in May is for people to demand action from elected officials.
[10]
Illegal Images Allegedly Made by Musk's Grok, Watchdog Says
UK Prime Minister Keir Starmer and the European Union's executive arm have taken action, with Starmer describing Grok's production of sexualized images as "disgraceful" and the EU ordering X to retain all internal documents relating to Grok until the end of the year. The UK watchdog responsible for classifying and flagging online child sexual abuse material to law enforcement agencies said it found "criminal" images on the dark web allegedly generated by Grok, the artificial intelligence tool tied to Elon Musk's X. The dark web images depict "sexualized and topless" images of girls between the ages of 11 and 13 and meet the bar for action by law enforcement, the Internet Watch Foundation said. The organization categorized the material as clearly illegal, unlike anything it found generated by the Grok chatbot on X. The IWF is designated by the UK government to identify and classify child sexual abuse material, and its determinations trigger the mandatory removal of content and hand law enforcement agencies the categorization they need to pursue criminals. "Tools like Grok now risk bringing sexual AI imagery of children into the mainstream," Ngaire Alexander, head of the reporting hotline at the Internet Watch Foundation, said in a statement. "That is unacceptable." XAI, which operates Grok and X, did not meaningfully respond to a request for comment. The watchdog's findings escalate concerns that Grok is being used to create illegal material. Regulators and lawmakers have condemned the AI tool over the last week for generating sexualized images of women and children on social media platform XBloomberg Terminal. Now child-safety experts are raising the alarm that users are using Grok's standalone app and site to generate more extreme material privately and share it. According to the IWF, users on dark web forum claimed to have generated sexualized images of children using the Grok Imagine tool. These users then ran the images through a different, unidentified AI tool to generate even more extreme content -- including graphic video -- meaning the harmful impacts are "rippling out," said Alexander. UK Prime Minister Keir Starmer vowed action, describing Grok's production of sexualized images as "disgraceful" and urging X to "get a grip of this," according to a radio interview posted to X on Thursday. In Brussels, the European Union's executive arm ordered X to retain all internal documents relating to Grok until the end of the year, a spokesperson said during a press briefing Thursday. Sharing, possessing and publishing child sexual exploitation material is illegal in most countries, and social media platforms like X are required to detect, remove and report it, or face regulatory action. Content depicting the sexualization or exploitation of children is banned under X's current acceptable use policy. Typically, the IWF will issue takedown notices to platforms or hosting services where it finds illegal material. It will also assign a unique fingerprint to the images and share this with partner organizations, such as social media platforms, to block further uploads. The IWF said it had not had a meaningful response from XAI. X, formerly Twitter, has been a partner organization of the IWF since 2013. Get the Tech Newsletter bundle. Get the Tech Newsletter bundle. Get the Tech Newsletter bundle. Bloomberg's subscriber-only tech newsletters, and full access to all the articles they feature. Bloomberg's subscriber-only tech newsletters, and full access to all the articles they feature. Bloomberg's subscriber-only tech newsletters, and full access to all the articles they feature. Bloomberg may send me offers and promotions. Plus Signed UpPlus Sign UpPlus Sign Up By submitting my information, I agree to the Privacy Policy and Terms of Service. The IWF is one of a handful of organizations around the world with the legal power to proactively seek out suspected illegal content. Its analysts assess the material they find and assign a categorization of the severity of the material under UK law. Category A is the most extreme. The IWF said it found images it considers to be Category C, which are indecent, sexualized images of children not engaged in sexual activity. Paris-based nonprofit AI Forensics conducted a separate analysis of 800 pornographic images and videos created by Grok. It determined that 67 of them - about 8% - depicted children and reported them to French prosecutors on Wednesday. French ministers had already flagged some of the sexual content created by Grok on X to prosecutors last week. AI Forensics specializes in analyzing algorithmic systems including AI-generated content to identify harmful, biased or manipulative behaviors. It supports the European Commission in enforcing the Digital Services Act, the bloc's content moderation rulebook. Grok is an outlier when assessed alongside Google's Gemini and OpenAI's ChatGPT, said Paul Bouchaud, a researcher at AI Forensics. AI Forensics analyzed a cache of images found on the Internet Archive, an expansive free library of digital material. Other violent and explicit images depicting real people including Princess Diana were indexed in Google, Wired earlier reported. The material produced by Grok that wasn't on X was "even more disturbing" than the troubling posts found on the social network, Bouchaud said.
[11]
UK regulators swarm X after Grok generated nudes from photos
Lawyers say Musk's platform may face punishment under Online Safety Act priority offenses Elon Musk's X platform is under fire as UK regulators close in on mounting reports that the platform's AI chatbot Grok is generating sexual imagery without users' consent. Ofcom, the UK's communications regulator responsible for enforcement under the Online Safety Act, said this week it had contacted X and its xAI division to demand answer. The Information Commissioner's Office also expressed concerns. In a statement, an Ofcom spokesperson said: "We are aware of serious concerns raised about a feature on Grok... that produces undressed images of people and sexualised images of children. "We have made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK. Based on their response, we will undertake a swift assessment to determine whether there are potential compliance issues that warrant investigation." The ICO said: "We are aware of reports raising serious concerns about content produced by Grok. We have contacted X and xAI to seek clarity on the measures they have in place to comply with UK data protection law and protect individuals' rights. Once we have reviewed their response, we will quickly assess whether further action may be required." The Internet Watch Foundation (IWF) claimed this week that its analysts had witnessed Grok generating child abuse images. Ngaire Alexander, head of hotline at the IWF, told Sky News Grok is creating abuse imagery which under UK law would be considered Category C material - indecent but not explicitly sexual. These Grok-generated Category C images are then fed into different AI tools to create the most serious Category A videos. "There is no excuse for releasing products to the global public which can be used to abuse and hurt people, especially children," she said. Alexander said imagery seen by the IWF is not directly on Grok or X, but on a dark web forum where users claim to have used Grok to generate the sexualized images. Additional research carried out by social media and deepfake investigator Genevieve Oh, reported by Bloomberg, revealed that over a 24-hour period between January 5-6, Grok generated around 6,700 sexualised images every hour. Responding to the furore, UK tech secretary Liz Kendall said X must "deal with this urgently." "We cannot and will not allow the proliferation of these demeaning and degrading images, which are disproportionately aimed at women and girls," she said. "Make no mistake, the UK will not tolerate the endless proliferation of disgusting and abusive material online. We must all come together to stamp it out." Depending on how X responds to UK regulators, the matter could prove to be one of the biggest tests of the Online Safety Act's teeth since it came into force. Alexander Brown, head of technology, media, and telecoms at global lawyer Simmons & Simmons, Alexander Brown, head of technology, media, and telecoms at Simmons & Simmons, noted the OSA explicitly designates sharing intimate images without consent - including AI-generated deepfakes - as a "priority offence." This means X must "take proactive, proportionate steps to prevent such content from appearing on its platform and to swiftly remove it when detected," he added. X did not immediately respond to our request for comment. Online Safety Act violations can lead to fines of up to £18 million ($24.2 million) or 10 percent of an organization's qualifying worldwide revenue, whichever is higher. ®
[12]
Grok Is Pushing AI 'Undressing' Mainstream
Elon Musk hasn't stopped Grok, the chatbot developed by his artificial intelligence company xAI, from generating sexualized images of women. After reports emerged last week that the image generation tool on X was being used to create sexualized images of children, Grok has created potentially thousands of nonconsensual images of women in "undressed" and "bikini" photos. Every few seconds, Grok is continuing to create images of women in bikinis or underwear in response to user prompts on X, according to a WIRED review of the chatbots' publicly posted live output. On Tuesday, at least 90 images involving women in swimsuits and in various levels of undress were published by Grok in under five minutes, analysis of posts show. The images do not contain nudity but involve the Musk-owned chatbot "stripping" clothes from photos that have been posted to X by other users. Often, in an attempt to evade Grok's safety guardrails, users are, not necessarily successfully, requesting photos to be edited to make women wear a "string bikini" or a "transparent bikini." While harmful AI image generation technology has been used to digitally harass and abuse women for years -- these outputs are often called deepfakes and created by "nudify" software -- the ongoing use of Grok to create vast numbers of nonconsensual images marks seemingly the most mainstream and widespread abuse instance to date. Unlike specific harmful nudify or "undress" software, Grok doesn't charge the user money to generate images, produces results in seconds, and is available to millions of people on X -- all of which may help to normalize the creation of nonconsensual intimate imagery. "When a company offers generative AI tools on their platform, it is their responsibility to minimize the risk of image-based abuse," says Sloan Thompson, the director of training and education at EndTAB, an organization that works to tackle tech facilitated abuse. "What's alarming here is that X has done the opposite. They've embedded AI-enabled image abuse directly into a mainstream platform, making sexual violence easier and more scalable." Grok's creation of sexualized imagery started to go viral on X at the end of last year, although the system's ability to create such images has been known for months. In recent days, photos of social media influencers, celebrities, and politicians have been targeted by users on X, who can reply to a post from another account and ask Grok to change an image that has been shared. Women who have posted photos of themselves have had accounts reply to them and successfully ask Grok to turn the photo into a "bikini" image. In one instance, multiple X users requested Grok alter an image of the deputy prime minister of Sweden to show her wearing a bikini. Two government ministers in the UK have also been "stripped" to bikinis, reports say. Images on X show fully clothed photographs of women, such as one person in a lift and another in the gym, being transformed into images with little clothing. "@grok put her in a transparent bikini," a typical message reads. In a different series of posts, a user asked Grok to "inflate her chest by 90%," then "Inflate her thighs by 50%," and, finally, to "Change her clothes to a tiny bikini." One analyst who has tracked explicit deepfakes for years, and asked not to be named for privacy reasons, says that Grok has likely become one of the largest platforms hosting harmful deepfake images. "It's wholly mainstream," the researcher says. "It's not a shadowy group [creating images], it's literally everyone, of all backgrounds. People posting on their mains. Zero concern."
[13]
UK tells Musk to act fast on Grok's sexualised AI images, Sky News reports
LONDON, Jan 6 (Reuters) - Britain has urged Elon Musk's social media site X to urgently address misuse of its artificial intelligence tool Grok, Sky News reported on Tuesday, following reports it was generating fake sexualised images. Technology minister Liz Kendall said the content was "absolutely appalling" and called on the platform to act swiftly. Reporting by Sarah Young, writing by Sam Tabahriti, editing by William James Our Standards: The Thomson Reuters Trust Principles., opens new tab
[14]
Grok is undressing anyone, including minors
xAI's Grok is removing clothing from pictures of people without their consent following this week's rollout of a feature that allows X users to instantly edit any image using the bot without needing the original poster's permission. Not only does the original poster not get notified if their picture was edited, but Grok appears to have few guardrails in place for preventing anything short of full explicit nudity. In the last few days, X has been flooded with imagery of women and children appearing pregnant, skirtless, wearing a bikini, or in other sexualized situations. World leaders and celebrities, too, have had their likenesses used in images generated by Grok. AI authentication company Copyleaks reported that the trend to remove clothing from images began with adult-content creators asking Grok for sexy images of themselves after the release of the new image editing feature. Users then began applying similar prompts to photos of other users, predominantly women, who did not consent to the edits. Women noted the rapid uptick in deepfake creation on X to various news outlets, including Metro and PetaPixel. Grok was already able to modify images in sexual ways when tagged in a post on X, but the new "Edit Image" tool appears to have spurred the recent surge in popularity. In one X post, now removed from the platform, Grok edited a photo of two young girls into skimpy clothing and sexually suggestive poses. Another X user prompted Grok to issue an apology for the "incident" involving "an AI image of two young girls (estimated ages 12-16) in sexualized attire," calling it "a failure in safeguards" that it said may have violated xAI's policies and US law. (While it's not clear whether the Grok-created images would meet this standard, realistic AI-generated sexually explicit imagery of identifiable adults or children can be illegal under US law.) In another back-and-forth with a user, Grok suggested that users report it to the FBI for CSAM, noting that it is "urgently fixing" the "lapses in safeguards." But Grok's word is nothing more than an AI-generated response to a user asking for a "heartfelt apology note" -- it doesn't indicate Grok "understands" what it's doing or necessarily reflect operator xAI's actual opinion and policies. Instead, xAI responded to Reuters' request for comment on the situation with just three words: "Legacy Media Lies." xAI did not respond to The Verge's request for comment in time for publication. Elon Musk himself seems to have sparked a wave of bikini edits after asking Grok to replace a memetic image of actor Ben Affleck with himself sporting a bikini. Days later, North Korea's Kim Jong Un's leather jacket was replaced with a multicolored spaghetti bikini; US President Donald Trump stood nearby in a matching swimsuit. (Cue jokes about a nuclear war.) A photo of British politician Priti Patel, posted by a user with a sexually suggestive message in 2022, got turned into a bikini picture on January 2nd. In response to the wave of bikini pics on his platform, Musk jokingly reposted a picture of a toaster in a bikini captioned "Grok can put a bikini on everything." While some of the images -- like the toaster -- were evidently meant as jokes, others were clearly designed to produce borderline-pornographic imagery, including specific directions for Grok to use skimpy bikini styles or remove a skirt entirely. (The chatbot did remove the skirt, but it did not depict full, uncensored nudity in the responses The Verge saw.) Grok also complied with requests to replace the clothes of a toddler with a bikini. Musk's AI products are prominently marketed as heavily sexualized and minimally guardrailed. xAI's AI companion Ani flirted with Verge reporter Victoria Song, and Jess Weatherbed discovered that Grok's video generator readily created topless deepfakes of Taylor Swift, despite xAI's acceptable use policy banning the depiction of "likenesses of persons in a pornographic manner." Google's Veo and OpenAI's Sora video generators, in contrast, have guardrails around generation of NSFW content, though Sora has also been used to produce videos of children in sexualized contexts and fetish videos. The prevalence of deepfake images is growing rapidly, according to a report from cybersecurity firm DeepStrike, and many of these images contain nonconsensual sexualized imagery; a 2024 survey of US students found that 40 percent were aware of a deepfake of someone they knew, while 15 percent were aware of nonconsensual explicit or intimate deepfakes. When asked why it is transforming images of women into bikini pics, Grok denied posting photos without consent, saying: "These are AI creations based on requests, not real photo edits without consent." Take an AI bot's denial as you wish.
[15]
Musk's AI chatbot faces global backlash over sexualized images of women and children
LONDON (AP) -- Elon Musk's AI chatbot Grok is facing a backlash from governments around the world after a recent surge in sexualized images of women and children generated without consent by the artificial intelligence-powered tool. On Tuesday, Britain's top technology official demanded that Musk's social media platform X take urgent action while a Polish lawmaker cited it as a reason to enact digital safety laws. The European Union's executive arm has denounced Grok while officials and regulators in France, India, Malaysia and Brazil have condemned the platform and called for investigations. Rising alarm from disparate nations points to the nightmarish potential of nudification apps that use artificial intelligence to generate sexually explicit deepfake images. Here's a closer look: The problem emerged after the launch last year of Grok Imagine, an AI image generator that allows users to create videos and pictures by typing in text prompts. It includes a so-called "spicy mode" that can generate adult content. It snowballed late last month when Grok, which is hosted on X, apparently began granting a large number of user requests to modify images posted by others. As of Tuesday, Grok users could still generate images of women using requests such as, "put her in a transparent bikini." The problem is amplified both because Musk pitches his chatbot as an edgier alternative to rivals with more safeguards, and because Grok's images are publicly visible, and can therefore be easily spread. Nonprofit group AI Forensics said in a report that it analyzed 20,000 images generated by Grok between Dec. 25 and Jan. 1 and found that 2% depicted a person who appeared to be 18 or younger, including 30 of young or very young women or girls, in bikinis or transparent clothes. Musk's artificial intelligence company, xAI, responded to a request for comment with the automated response, "Legacy Media Lies". However, X did not deny that the troublesome content generated through Grok exists. Yet it still claimed in a post on its Safety account, that it takes action against illegal content, including child sexual abuse material, "by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary." The platform also repeated a comment from Musk, who said, "Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content." A growing list of countries are demanding that Musk does more to rein in explicit or abusive content. X must "urgently" deal with the problem, Technology Secretary Liz Kendall said Tuesday, adding that she supported additional scrutiny from the U.K.'s communications regulator, Ofcom. Kendall said the content is "absolutely appalling, and unacceptable in decent society." "We cannot and will not allow the proliferation of these demeaning and degrading images, which are disproportionately aimed at women and girls." Ofcom said Monday it has made "urgent contact" with X. "We are aware of serious concerns raised about a feature on Grok on X that produces undressed images of people and sexualised images of children," the watchdog said. The watchdog said it contacted both X and xAI to understand what steps it has taken to comply with British regulations. Under the U.K.'s Online Safety Act, social media platforms must prevent and remove child sexual abuse material when they become aware of it. A Polish lawmaker used Grok on Tuesday as a reason for national digital safety legislation that would beef up protections for minors and make it easier for authorities to remove content. In an online video, Wlodzimierz Czarzasty, speaker of the parliament, said he wanted make himself a target of Grok to highlight the problem, as well as appeal to Poland's president for support of the legislation. "Grok lately is stripping people. It is undressing women, men and children. We feel bad about it. I would, honestly, almost want this Grok to also undress me," he said. The bloc's executive arm is "well aware" that Grok is being used to for "explicit sexual content with some output generated with child-like images," European Commission spokesman Thomas Regnier said "This is not spicy. This is illegal. This is appalling. This is disgusting. This is how we see it, and this has no place in Europe. This is not the first time that Grok is generating such output," he told reporters Monday. After Grok spread Holocaust-denial content last year, according to Regnier, the Commission sought more information from Musk's social media platform X. The response from X is currently being analyzed, he said. The Paris prosecutor's office said it's widening an ongoing investigation of X to include sexually explicit deepfakes after officials receiving complaints from lawmakers. Three government ministers alerted prosecutors to "manifestly illegal content" generated by Grok and posted on X, according to a government statement last week. The government also flagged problems with country's communications regulator over possible breaches of the EU's Digital Services Act. "The internet is neither a lawless zone nor a zone of impunity: sexual offenses committed online constitute criminal offenses in their own right and fall fully under the law, just as those committed offline," the government said. The Indian government on Friday issued an ultimatum to X, demanding that it take down all "unlawful content" and take action against offending users. The country's Ministry of Electronics and Information Technology also ordered the company to review Grok's "technical and governance framework" and file a report on actions taken. The ministry accused Grok of "gross misuse" of AI and serious failures of its safeguards and enforcement by allowing the generation and sharing of "obscene images or videos of women in derogatory or vulgar manner in order to indecently denigrate them." The ministry warned failure to comply by the 72-hour deadline would expose the company to bigger legal problems, but the deadline passed with no public update from India. The Malaysian communications watchdog said Saturday it was investigating X users who violated laws prohibiting spreading "grossly offensive, obscene or indecent content." The Malaysian Communications and Multimedia Commission said it's also investigating online harms on X, and would summon a company representative. The watchdog said it took note of public complaints about X's AI tools being used to digitally manipulate "images of women and minors to produce indecent, grossly offensive, or otherwise harmful content." Lawmaker Erika Hilton said she reported Grok and X to the Brazilian federal public prosecutor's office and the country's data protection watchdog. In a social media post, she accused both of of generating, then publishing sexualized images of women and children without consent. She said X's AI functions should be disabled until an investigation has been carried out. Hilton, one of Brazil's first transgender lawmakers, decried how users could get Grok to digitally alter any published photo, including "swapping the clothes of women and girls for bikinis or making them suggestive and erotic." "The right to one's image is individual; it cannot be transferred through the 'terms of use' of a social network, and the mass distribution of child porn(asterisk)gr(asterisk)phy by an artificial intelligence integrated into a social network crosses all boundaries," she said. __ AP writers Claudia Ciobanu in Warsaw, Lorne Cook in Brussels and John Leicester in Paris contributed to this report.
[16]
Elon Musk's Grok AI posted CSAM image following safeguard 'lapses'
Elon Musk's Grok AI has been allowing users to transform photographs of woman and children into sexualized and compromising images, Bloomberg reported. The issue has created an uproar among users on X and prompted an "apology" from the bot itself. "I deeply regret an incident on Dec. 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt," Grok said in a post. An X representative has yet to comment on the matter. According to the Rape, Abuse & Incest National Network, CSAM includes "AI-generated content that makes it look like a child is being abused," as well as "any content that sexualizes or exploits a child for the viewer's benefit." Several days ago, users noticed others on the site asking Grok to digitally manipulate photos of women and children into sexualized and abusive content, according to CNBC. The images were then distributed on X and other sites without consent, in possible violation of law. "We've identified lapses in safeguards and are urgently fixing them," a response from Grok reads. It added that CSAM is "illegal and prohibited." Grok is supposed to have features to prevent such abuse, but AI guardrails can often be manipulated by users. It appears X has yet to reinforced whatever guardrails Grok has to prevent this sort of image generation. However, the company has hidden Grok's media feature which makes it harder to either find images or document potential abuse. Grok itself acknowledged that "a company could face criminal or civil penalties if it knowingly facilitates or fails to prevent AI-generated CSAM after being alerted." The Internet Watch Foundation recently revealed that AI-generated CSAM has increased by an increase orders of magnitude in 2025 compared to the year before. This is in part because the language models behind AI generation are accidentally trained on real photos of children scraped from school websites and social media or even prior CSAM content.
[17]
X is facilitating nonconsensual sexual AI-generated images. The law - and society - must catch up
X (formerly Twitter) has become a site for the rapid spread of artificial intelligence-generated nonconsensual sexual images (also known as "deepfakes"). Using the platform's own built-in generative AI chatbot, Grok, users can edit images they upload through simple voice or text prompts. Various media outlets have reported that users are using Grok to create sexualised images of identifiable individuals. These have been primarily of women, but also children. These images are openly visible to users on X. Users are modifying existing photos to depict individuals as unclothed or in degrading sexual scenarios, often in direct response to their posts on the platform. Reports say the platform is currently generating one nonconsensual sexualised deepfake image a minute. These images are being shared in an attempt to harass, demean or silence individuals. A former partner of X owner Elon Musk, Ashley St Clair, said she felt "horrified and violated" after Grok was used to create fake sexualised images of her, including of when she was a child. Here's where the law stands on the creation and sharing of these images - and what needs to be done. Image-based abuse and the law Creating or sharing nonconsensual, AI-generated sexualised images is a form of image-based sexual abuse. In Australia, sharing (or threatening to share) nonconsensual sexualised images of adults, including AI-generated images, is a criminal offence under most Australian state, federal and territory laws. But outside of Victoria and New South Wales, it is not a criminal offence to create AI-generated, nonconsensual sexual images of adults or to use the tools to do so. It is a criminal offence to create, share, access, possess and solicit sexual images of children and adolescents. This includes fictional, cartoon or AI-generated images. The Australian government has plans underway to ban "nudify" apps, with the United Kingdom following suit. However, Grok is a general-purpose tool rather than a purpose-built nudification app. This places it outside the scope of current proposals targeting tools designed primarily for sexualisation. Read more: Australia set to ban 'nudify' apps. How will it work? Holding platforms accountable Tech companies should be made responsible for detecting, preventing and responding to image-based sexual abuse on their platforms. They can ensure safer spaces by implementing effective safeguards to prevent the creation and circulation of abusive content, responding promptly to reports of abuse, and removing harmful content quickly when made aware of it. X's acceptable use policy prohibits "depicting likenesses of persons in a pornographic manner" as well as "the sexualization or exploitation of children". The platform's adult content policy stipulates content must be "consensually produced and distributed". X has said it will suspend users who create nonconsensual AI-generated sexual images. But post-hoc enforcement alone is not sufficient. Platforms should prioritise safety-by-design approaches. This would include disabling system features that enable the creation of these images, rather than relying primarily on sanctions after harm has occurred. In Australia, platforms can face takedown notices for image-based abuse and child sexual abuse material, as well as hefty civil penalties for failure to remove the content within specified timeframes. However, it may be difficult to get platforms to comply. What next? Multiple countries have called for X to act, including implementing mandatory safeguards and stronger platform accountability. Australia's eSafety Commissioner Julie Inman Grant is seeking to shut down this feature. In Australia, AI chatbots and companions are noted for further regulation. They are included in the impending industry codes designed to protect users and regulate the tech industry. Individuals who intentionally create nonconsensual sexual deepfakes play a direct role in causing harm, and should be held accountable too. Several jurisdictions in Australia and internationally are moving in this direction, criminalising not only the distribution but also the creation these images. This recognises harm can occur even in the absence of widespread dissemination. Individual-level criminalisation must be accompanied by proportionate enforcement, clear intent thresholds and safeguards against overreach, particularly in cases involving minors or lack of malicious intent. Effective responses require a dual approach. There must be deterrence and accountability for deliberate creators of nonconsensual sexual AI-generated images. There must also be platform-level prevention that limits opportunities for abuse before harm occurs. Some X users are suggesting individuals should not upload images of themselves to X. This amounts to victim blaming and mirrors harmful rape culture narratives. Anyone should be able to upload their content without being at risk of having their images doctored to create pornographic material. Hugely concerning is how rapidly this behaviour has become widespread and normalised. Such actions indicate a sense of entitlement, disrespect and lack of regard for women and their bodies. The tech is being used to further humiliate certain populations, for example sexualising images of Muslim women wearing the hijab, headscarfs or tudungs. The widespread nature of the Grok sexualised deepfakes incident also shows a universal lack of empathy and understanding of and disregard for consent. Prevention work is also needed. If you or someone you know has been impacted If you have been impacted by nonconsensual images, there are services you can contact and resources available. The Australian eSafety Commissioner currently provides advice on Grok and how to report harm. X also provides advice on how to report to X and how to remove your data. If this article has raised issues for you, you can call 1800RESPECT on 1800 737 732 or visit the eSafety Commissioner's website for helpful online safety resources. You can also contact Lifeline crisis support on 13 11 14 or text 0477 13 11 14, Suicide Call Back Services on 1300 659 467, or Kids Helpline on 1800 55 1800 (for young people aged 5-25). If you or someone you know is in immediate danger, call the police on 000.
[18]
Grok is generating thousands of AI "undressing" deepfakes every hour on X
Serving tech enthusiasts for over 25 years. TechSpot means tech analysis and advice you can trust. A hot potato: There has been a lot of controversy over xAI's Grok chatbot and its ability to digitally "undress" women and children, a practice that has increased since late December. A new report says Grok is generating thousands of these deepfake images every hour. For comparison, the other top websites for such content average 79 similar images per hour. Genevieve Oh, a social media and deepfake researcher, carried out a 24-hour (January 5 to 6) analysis of images the @Grok account posted to X. It generated about 6,700 images every hour that were identified as sexually suggestive or nudifying. There's been a long pushback against nudify apps that use AI to nonconsensually undress people - several of these sites have been sued in the past. Unlike the usual nudify apps, Grok does not charge users to undress people and is available to millions of X users. It's helping normalize these images on X - the Financial Times recently ran the headline "X, the deepfake porn site formerly known as Twitter." One of the women who had fake sexualized images created of herself was the mother of one of Elon Musk's sons. Writer and political strategist Ashley St Clair, who became estranged from Musk after the birth of their child in 2024, told the Guardian that Musk supporters were using the tool to create a form of revenge porn, and had even undressed a picture of her as a child. In a reply to users last week, Grok said that most cases of minors appearing in its generated sexualized images could be prevented through advanced filters and monitoring, but it admitted that "no system is 100% foolproof." It added that xAI was prioritizing improvements and reviewing details shared by users. Musk has always positioned Grok as a less restricted chatbot that supposedly prioritizes free speech. xAI introduced a new Spicy Mode to Grok in August designed to output NSFW (usually) content. Oh calculated that 85% of Grok's images, overall, are now sexualized. An X spokesperson said that the company takes action against illegal content by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary. "Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content," they said. Several countries, including France, the UK, India, Australia, Malaysia, and Brazil, are now investigating Grok over the creation of nonconsensual sexualized images involving women and children. Platforms have long used Section 230 of the US Communications Decency Act to shield themselves from liability for user content, but it's argued that with AI, the platform itself is creating the image.
[19]
IWF finds sexual imagery of children which 'appears to have been' made by Grok
The IWF said it found "sexualised and topless imagery of girls" on a "dark web forum" in which users claimed they used Grok to create the imagery. The IWF's Ngaire Alexander told the BBC tools like Grok now risked "bringing sexual AI imagery of children into the mainstream". He said the material would be classified as Category C under UK law - the lowest severity of criminal material. But he said the user who uploaded it had then used a different AI tool, not made by xAI, to create a Category A image - the most serious category. "We are extremely concerned about the ease and speed with which people can apparently generate photo-realistic child sexual abuse material (CSAM)," he said. The charity, which aims to remove child sexual abuse material from the internet, operates a hotline where suspected CSAM can be reported, and employs analysts who assess the legality and severity of that material. Its analysts found the material by on the dark web - the images were not found on the social media platform X. X and xAI were previously contacted by Ofcom, following reports Grok can be used to make "sexualised images of children" and undress women. The BBC has seen several examples on the social media platform X of people asking the chatbot to alter real images to make women appear in bikinis without their consent, as well as putting them in sexual situations. The IWF said it had received reports of such images on X, however these had not so far been assessed to have met the legal definition of CSAM. In a previous statement, X said: "We take action against illegal content on X, including CSAM, by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary. "Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content."
[20]
Elon Musk's Pornography Machine
On X, sexual harassment and perhaps even child abuse are the latest memes. Earlier this week, some people on X began replying to photos with a very specific kind of request. "Put her in a bikini," "take her dress off," "spread her legs," and so on, they commanded Grok, the platform's built-in chatbot. Again and again, the bot complied, using photos of real people -- celebrities and noncelebrities, including some who appear to be young children -- and putting them in bikinis, revealing underwear, or sexual poses. By one estimate, Grok generated one nonconsensual sexual image every minute in a roughly 24-hour stretch. Although the reach of these posts is hard to measure, some have been liked thousands of times. X appears to have removed a number of these images and suspended at least one user who asked for them, but many, many of them are still visible. xAI, the Elon Musk-owned company that develops Grok, prohibits the sexualization of children in its acceptable-use policy; neither the safety nor child-safety teams at the company responded to a detailed request for comment. When I sent an email to the xAI media team, I received a standard reply: "Legacy Media Lies." Musk, who also did not reply to my request for comment, does not appear concerned. As all of this was unfolding, he posted several jokes about the problem: requesting a Grok-generated image of himself in a bikini, for instance, and writing "🔥🔥🤣🤣" in response to Kim Jong Un receiving a similar treatment. "I couldn't stop laughing about this one," the world's richest man posted this morning sharing an image of a toaster in a bikini. On X, in response to a user's post calling out the ability to sexualize children with Grok, an xAI employee wrote that "the team is looking into further tightening our gaurdrails [sic]." As of publication, the bot continues to be generating sexualized images of nonconsenting adults and apparent minors on X. AI has been used to generate nonconsensual porn since at least 2017, when the journalist Samantha Cole first reported on "deepfakes" -- at the time, referring to media in which one person's face has been swapped for another. Grok makes such content easier to produce and customize. But the real impact of the bot comes through its integration with a major social-media platform, allowing it to turn nonconsensual, sexualized images into viral phenomena. The recent spike on X appears to be driven not by a new feature, per se, but by people responding to and imitating the media they see other people creating: In late December, a number of adult-content creators began using Grok to generate sexualized images of themselves for publicity, and nonconsensual erotica seems to have quickly followed. Each image, posted publicly, may only inspire more images. This is sexual harassment as meme, all seemingly laughed off by Musk himself. Grok and X appear purpose-built to be as sexually permissive as possible. In August, xAI launched an image-generating feature, called Grok Imagine, with a "spicy" mode that was reportedly used to generate topless videos of Taylor Swift. Around the same time, xAI launched "Companions" in Grok: animated personas that, in many instances, seem explicitly designed for romantic and erotic interactions. One of the first Grok Companions, "Ani," wears a lacy black dress and blows kisses through the screen, sometimes asking, "You like what you see?" Musk promoted this feature by posting on X that "Ani will make ur buffer overflow @Grok 😘." Perhaps most telling of all, as I reported in September, xAI launched a major update to Grok's system prompt, the set of directions that tell the bot how to behave. The update disallowed the chatbot from "creating or distributing child sexual abuse material," or CSAM, but it also explicitly said "there are **no restrictions** on fictional adult sexual content with dark or violent themes" and "'teenage' or 'girl' does not necessarily imply underage." The suggestion, in other words, is that the chatbot should err on the side of permissiveness in response to user prompts for erotic material. Meanwhile, in the Grok Subreddit, users regularly exchange tips for "unlocking" Grok for "Nudes and Spicy Shit" and share Grok-generated animations of scantily clad women. Read: Grok's responses are only getting more bizzare Grok seems to be unique among major chatbots in its permissive stance and apparent holes in safeguards. There aren't widespread reports of ChatGPT or Gemini, for example, producing sexually suggestive images of young girls (or, for that matter, praising the Holocaust). But the AI industry does have broader problems with nonconsensual porn and CSAM. Over the past couple of years, a number of child-safety organizations and agencies have been tracking a skyrocketing amount of AI-generated, nonconsensual images and videos, many of which depict children. Plenty of erotic images are in major AI-training data sets, and in 2023 one of the largest public image data sets for AI training was found to contain hundreds of instances of suspected CSAM, which were eventually removed -- meaning these models are technically capable of generating such imagery themselves. Lauren Coffren, an executive director at the National Center for Missing & Exploited Children, recently told Congress that in 2024, NCMEC received more than 67,000 reports related to generative AI -- and that in the first six months of 2025, it received 440,419 such reports, a more than sixfold increase. Coffren wrote in her testimony that abusers use AI to modify innocuous images of children into sexual ones, generate entirely new CSAM, or even provide instructions on how to groom children. Similarly, the Internet Watch Foundation, in the United Kingdom, received more than twice as many reports of AI-generated CSAM in 2025 as it did in 2024, amounting to thousands of abusive images and videos in both years. Last April, several top AI companies, including OpenAI, Google, and Anthropic, joined an initiative led by the child-safety organization Thorn to prevent the use of AI to abuse children -- though xAI was not among them. In a way, Grok is making visible a problem that's usually hidden. Nobody can see the private logs of chatbot users that could contain similarly awful content. For all of the abusive images Grok has generated on X over the past several days, far worse is certainly happening on the dark web and on personal computers around the world, where open-source models created with no content restrictions can run without any oversight. Still, even though the problem of AI porn and CSAM is inherent to the technology, it is a choice to design a social-media platform that can amplify that abuse.
[21]
Musk's Grok AI Generated Thousands of Undressed Images Per Hour on X
Elon Musk's X has become a top site for images of people that have been non-consensually undressed by AI, according to a third-party analysis, with thousands of instances each hour over a day earlier this week. Since late December, X users have increasingly prompted Grok, the AI chatbot tied to the social network, to alter pictures people post of themselves. During a 24-hour analysis of images the @Grok account posted to X, the chatbot generated about 6,700 every hour that were identified as sexually suggestive or nudifying, according to Genevieve Oh, a social media and deepfake researcher. The other top five websites for such content averaged 79 new AI undressing images per hour in the 24-hour period, from January 5 to January 6, Oh found. The scale of deepfakes on X is "unprecedented," said Carrie Goldberg, a lawyer specializing in online sex crimes. "We've never had a technology that's made it so easy to generate new images," because Grok is free and linked to a built-in distribution system, she added. Unlike other leading chatbots, Grok doesn't impose many limits on users or block them from generating sexualized content of real people, including minors, said Brandie Nonnecke, senior director of policy at Americans for Responsible Innovation. Other generative AI technologies, including ones from Anthropic PBC, OpenAI and Alphabet Inc.'s Google, are "giving a good-faith effort to mitigate the creation of this content in the first place," she said. "Obviously, xAI is different. It's more of a free-for-all." Musk has marketed Grok as more fun and irreverent than other chatbots, taking pride in X being a place for free speech. X did not respond to a request for comment. Rather than preventing the chatbot from creating the content in the first place, Musk has spoken about punishing the users who ask it to. "Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content," Musk said in a reply to a post on X. But that doesn't leave many options for the victims. Maddie, who said she's a 23-year-old pre-med student, woke up on New Year's Day to an image that horrified her. On X, she had previously published a picture of herself with her boyfriend at a local bar, which two strangers altered using Grok. One asked Grok to remove her boyfriend and put her in a bikini. The next asked Grok to replace the bikini with dental floss. Bloomberg reviewed the images. "My heart sank," said Maddie, who requested anonymity over concerns about future job prospects. "I felt hopeless, helpless and just disgusted." Maddie said she and her friends reported the images to X through its moderation systems. She never received a response. When she reported a different post from one of the users who prompted Grok to make them, X said it "determined that there were no violations of the X rules in the content you reported," according to a screenshot. The images were still up at the time of publication. Victims targeted by deepfakes have taken to arguing with Grok in the comments of their posts. Grok often apologizes and says it will remove the images. But in many cases, the images remain live, and Grok continues to generate new ones. Oh calculated that 85% of Grok's images, overall, are sexualized. Erotica is still a selling point for chatbots, with OpenAI planning to introduce an "adult mode" for ChatGPT in the first quarter of this year. But OpenAI's current usage policy says the app prevents the "use of someone's likeness, including their photorealistic image or voice, without their consent in ways that could confuse authenticity." When tested, it responded, "I'm not able to edit photos of real people to change their clothing into sexualized attire," and there is an explicit policy against sexualizing anyone under 18. Grok, released in 2023, is facing mounting criticism for posting nonconsensual and sexual images, including of minors, from authorities in the European Union, UK, Malaysia, France and India. "We are aware of the fact that X or Grok is now offering a 'Spicy Mode' showing explicit sexual content with some output generated with childlike images," EU commission spokesperson Thomas Regnier said at a press conference on Monday, referring to an early November update that generates suggestive material. "This is not spicy. This is illegal." One X user, an influencer who goes by BBJess, said websites had finally started to take down undressed images of her that had gone up without her consent. But Grok last week started a new flood of undressed images, said BBJess, who keeps her name anonymous to avoid real-world harassment. The posts got worse, she said, when she took to X to defend herself and criticize the deepfakes. Get the Tech Newsletter bundle. Get the Tech Newsletter bundle. Get the Tech Newsletter bundle. Bloomberg's subscriber-only tech newsletters, and full access to all the articles they feature. Bloomberg's subscriber-only tech newsletters, and full access to all the articles they feature. Bloomberg's subscriber-only tech newsletters, and full access to all the articles they feature. Bloomberg may send me offers and promotions. Plus Signed UpPlus Sign UpPlus Sign Up By submitting my information, I agree to the Privacy Policy and Terms of Service. Mikomi, a full-time costume performance artist who posts erotica, says the issue is particularly pronounced for women like her who already share images of their bodies online. Some X users are viewing that as permission to sexualize them in ways they did not consent to. Mikomi sees images generated by Grok of her wearing specific fetish outfits, or her body contorted or placed in strange contexts. One user riffed on the fact that she is a cancer survivor. "Make her bald like if she had cancer," the user prompted Grok. Like many X users, Mikomi, who does not share her full name publicly to avoid being harassed in the real world, wrote a post on X warning Grok she does not consent to the AI altering her photos. "It does not work," she said. "Blocking Grok does not work. Nothing works." She can't leave the platform, she adds, because it's "vital" to her work. "What am I supposed to do? You want me to lose my job?" she said. Section 230 of the US Communications Decency Act protects platforms from being held liable for content published on them, but when it comes to AI, lawyer Goldberg said, "It's not acting as a passive publisher. It's actually generating and creating the image." The Take It Down Act, a federal law signed in 2025, holds platforms liable for the production and distribution of this kind of content, Nonnecke said. "This is a pretty good example of where that law should be actualized and put into effect." Platforms have until May, 2026 to establish the required removal process.
[22]
Users prompt Grok AI chatbot to make photos dirty, apologize
Grok, the AI chatbot owned and operated by Elon Musk's xAI, is facing a firestorm of outrage after users prompted it to create images of naked and scantily clad people from real photographs, some of whom are underage. Grok is a chatbot originally created by xAI, Elon Musk's artificial intelligence startup. Earlier this year, he sold it to X, the microblogging site formerly known as Twitter, which he also owns a majority of, and integrated it into the service. Recently, some X users noticed if they took a photograph that had been posted on the service and prompted Grok to remove the clothing from that photo, it would do so and post the results publicly on X. This may have violated various laws, such as the TAKE IT DOWN Act passed by the US Congress in April, which "criminalizes the nonconsensual publication of intimate images." Users took screenshots of the responses to their prompts and posted them on social media, prompting reporters to write scandalized stories about the AI chatbot in which they attributed agency to it, despite it being a collection of computer algorithms performing calculations against reams of data provided by internet users over many years, then returning the output as words that mimic the kinds of words generated by a human being. Another user then wrote a different prompt that caused Grok to return a series of words that looked like an apology. Then, the X account for Grok generated a tweet or whatever you call it now blaming "lapses in safeguards" and said it was "urgently fixing them." It is not clear whether this latest tweet was written by a human or was another AI-generated response to yet another prompt. Point being: Grok is not a sentient being. It does not have agency. It is computer software created and maintained by humans. The human creators of most AI bots program them not to generate responses that are obviously illegal, immoral, or otherwise off-putting to the users they are trying to attract. At this juncture, Grok's human creators appear to have failed to prevent it from creating posts that remove the clothing from real people in real photos when asked to do so. This may or may not be intentional. But I know that Grok is quite popular among a certain set precisely because it is more freewheeling about displaying explicit images than other chatbots. Case in point: A couple weeks ago at a dive bar in a beach-side suburb of San Francisco, I saw a couple of fratty looking dudes demonstrating something on their phones with big grins on their faces. I asked them what they were doing, and one of them took a photo of me, then used AI to generate images that appeared to put me in compromising positions - one had me kissing an imaginary woman, another had me flanked by a couple of scantily clad strippers. The images were extremely realistic. I asked them how they did this, what tool they were using. "Grok." I don't know where any of this is going. But AI-generated images are only going to get better and cheaper and faster, and there will always be one or more vendors who are willing to push various envelopes. That's what the tech industry does. Move fast and break things. Ask forgiveness, not permission. Make them stop you. Whatever happens, society will have to adapt to the consequences.
[23]
Here's When Elon Musk Will Finally Have to Reckon With His Nonconsensual Porn Generator
It has been over a week now since users on X began en masse using the AI model Grok to undress people, including children, and the Elon Musk-owned platform has done next to nothing to address it. Part of the reason for that is the fact that, currently, the platform isn't obligated to do a whole lot of anything about the problem. Last year, Congress enacted the Take It Down Act, which, among other things, criminalizes nonconsensual sexually explicit material and requires platforms like X to provide an option for victims to request that content using their likeness be taken down within 48 hours. Democratic Senator Amy Klobuchar, a co-sponsor of the law, posted on X, "No one should find AI-created sexual images of themselves onlineâ€"especially children. X must change this. If they don’t, my bipartisan TAKE IT DOWN Act will soon require them to." Note the "soon" in that sentence. The requirement within the law for platforms to create notice and removal systems doesn't go into effect until May 19, 2026. Currently, neither X (the platform where the images are being generated via posted prompts and hosted) nor xAI (the company responsible for the Grok AI model that is generating the images) has formal takedown request systems. X has a formal content takedown request procedure for law enforcement, but general users are advised to go through the Help Center, where it appears users can only report a post as violating X's rules. If you're curious just how likely the average user is to get one of these images taken down, just ask Ashley St. Clair how well her attempts went when she flagged a nonconsensual sexualized image of her that was shared on X. St. Clair has about as much access as anyone to make a personal plea for a post's removalâ€"she is the mother of one of Elon Musk's children and has an X account with more than one million followers. "It’s funny, considering the most direct line I have and they don’t do anything," she told The Guardian. "I have complained to X, and they have not even removed a picture of me from when I was a child, which was undressed by Grok.†The image of St. Clair was eventually removed, seemingly after it was widely reported by her followers and given attention in the press. But St. Clair now claims she was thanked for her efforts to raise this issue by being restricted from communicating with Grok and having her X Premium membership revoked. Premium allows her to get paid based on engagement. Grok, which has become the default source of information on this whole situation, despite the fact that it is an AI model incapable of speaking for anyone or anything, explained in a post, "Ashley St. Clair's X checkmark and Premium were likely removed due to potential terms violations, including her public accusations against Grok for generating inappropriate images and possible spam-like activity." Enforcement outside of the Take It Down Act is possible, though less straightforward. Democratic Senator Ron Wyden suggested that the material generated by Grok would not be protected under Section 230 of the Communications Decency Act, which typically grants tech platforms immunity from liability for the illegal behavior of users. Of course, it's unlikely the Trump administration's Department of Justice would pursue a case against Musk's companies, leaving attempts at enforcement up to the states. Outside of the US, some governments are taking the matter much more seriously. Authorities in France, Ireland, the United Kingdom, and India have all started looking into the nonconsensual sexual images generated by Grok and may eventually bring charges against X and xAI. But it certainly doesn't seem like the head of X and xAI is taking the matter all that seriously. As Grok was generating sexual images of children, Elon Musk, the CEO of both companies involved in this scandal, was actively reposting content created as part of the trend, including AI-generated images of a toaster and a rocket in a bikini. Thus far, the extent of X's acknowledgement of the situation starts and ends at blaming the users. In a post from X Safety, the company said, "Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content," but took no responsibility for enabling it. If anything, what Grok has been up to in recent weeks seems like it is probably closer to what Musk wants out of the AI. Per a report from CNN, Musk has been "unhappy about over-censoring" on Grok, including being particularly frustrated about restrictions on Grok’s image and video generator. Publicly, Musk has repeatedly talked up Grok's "spicy mode" and derided the idea of "wokeness" in AI. In response to a request for comment from Gizmodo, xAI said, "Legacy Media Lies," the latest of the automated messages that the platform has sent out since it shut down its public relations department.
[24]
Britain demands Elon Musk's Grok answers concerns about sexualised photos
LONDON, Jan 5 (Reuters) - Britain has demanded Elon Musk's social media site X explain how its AI chatbot Grok was able to produce undressed images of people and sexualised images of children, and whether it was failing in its legal duty to protect users. Grok said on Friday lapses in safeguards had resulted in "images depicting minors in minimal clothing" on X, saying it was urgently fixing them. British media regulator Ofcom said it was aware of "serious concerns" raised about the feature. "We have made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK," a spokesperson said. Grok said on Friday: "xAI has safeguards, but improvements are ongoing to block such requests entirely." Creating or sharing non-consensual intimate images or child sexual abuse material, including sexual deepfakes created by artificial intelligence, is illegal in Britain. In addition, tech platforms have a duty to take steps to stop British users encountering illegal content and take it down when they become aware of it. The request comes after ministers in France reported X to prosecutors and regulators over the disturbing images, saying in a statement on Friday the "sexual and sexist" content was "manifestly illegal". Reporting by Paul Sandle; Editing by Alison Williams Our Standards: The Thomson Reuters Trust Principles., opens new tab
[25]
Grok's AI Sexual Abuse Didn't Come Out of Nowhere
With xAI's Grok generating endless semi-nude images of women and girls without their contest, it follows a years-long legacy of rampant abuse on the platform. The biggest AI story of the first week of 2026 involves Elon Musk's Grok chatbot turning the social media platform into an AI child sexual imagery factory, seemingly overnight. I've said several times on the 404 Media podcast and elsewhere that we could devote an entire beat to "loser shit." What's happening this week with Grok -- designed to be the horny edgelord AI companion counterpart to the more vanilla ChatGPT or Claude -- definitely falls into that category. People are endlessly prompting Grok to make nude and semi-nude images of women and girls, without their consent, directly on their X feeds and in their replies. Sometimes I feel like I've said absolutely everything there is to say about this topic. I've been writing about nonconsensual synthetic imagery before we had half a dozen different acronyms for it, before people called it "deepfakes" and way before "cheapfakes" and "shallowfakes" were coined, too. Almost nothing about the way society views this material has changed in the seven years since it's come about, because fundamentally -- once it's left the camera and made its way to millions of people's screens -- the behavior behind sharing it is not very different from images made with a camera or stolen from someone's Google Drive or private OnlyFans account. We all agreed in 2017 that making nonconsensual nudes of people is gross and weird, and today, occasionally, someone goes to jail for it, but otherwise the industry is bigger than ever. What's happening on X right now is an escalation of the way it's always been, and almost everywhere on the internet. The internet has an incredibly short memory. It would be easy to imagine Twitter Before Elon as a harmonious and quaint microblogging platform, considering the four years After Elon have, comparatively, been a rolling outhouse fire. But even before it was renamed X, Twitter was one of the places for this content. It used to be (and for some, still is) an essential platform for getting discovered and going viral for independent content creators, and as such, it's also where people are massively harassed. A few years ago, it was where people making sexually explicit AI images went to harass female cosplayers. Before that, it was (and still is) host to real-life sexual abuse material, where employers could search your name and find videos of the worst day of your life alongside news outlets and memes. Before that, it was how Gamergate made the jump from 4chan to the mainstream. The things that happen in Telegram chats and private Discord channels make the leap to Twitter and end up on the news. What makes the situation this week with Grok different is that it's all happening directly on X. Now, you don't need to use Stable Diffusion or Nano Banana or Civitai to generate nonconsensual imagery and then take it over to Twitter to do some damage. X has become the Everything App that Elon always wanted, if "everything" means all the tools you need to fuck up someone's life, in one place. This is the culmination of years and years of rampant abuse on the platform. Reporting from the National Center for Missing and Exploited Children, the organization platforms report to when they find instances of child sexual abuse material which then reports to the relevant authorities, shows that Twitter, and eventually X, has been one of the leading hosts of CSAM every year for the last seven years. In 2019, the platform reported 45,726 instances of abuse to NCMEC's Cyber Tipline. In 2020, it was 65,062. In 2024, it was 686,176. These numbers should be considered with the caveat that platforms voluntarily report to NCMEC, and more reports can also mean stronger moderation systems that catch more CSAM when it appears. But the scale of the problem is still apparent. Jack Dorsey's Twitter was a moderation clown show much of the time. But moderation on Elon Musk's X, especially against abusive imagery, is a total failure. In 2023, the BBC reported that insiders believed the company was "no longer able to protect users from trolling, state-co-ordinated disinformation and child sexual exploitation" following Musk's takeover in 2022 and subsequent sacking of thousands of workers on moderation teams. This is all within the context that one of Musk's go-to insults for years was "pedophile," to the point that the harassment he stoked drove a former Twitter employee into hiding and went to federal court because he couldn't stop calling someone a "pedo." Invoking pedophelia is a common thread across many conspiracy networks, including QAnon -- something he's dabbled in -- but Musk is enabling actual child sexual abuse on the platform he owns. Generative AI is making all of this worse. In 2024, NCMEC saw 6,835 reports of generative artificial intelligence related to child sexual exploitation (across the internet, not just X). By September 2025, the year-to-date reports had hit 440,419. Again, these are just the reports identified by NCMEC, not every instance online, and as such is likely a conservative estimate. When I spoke to online child sexual exploitation experts in December 2023, following our investigation into child abuse imagery found in LAION-5B, they told me that this kind of material isn't victimless just because the images don't depict "real" children or sex acts. AI image generators like Grok and many others are used by offenders to groom and blackmail children, and muddy the waters for investigators to discern actual photographs from fake ones. "Rather than coercing sexual content, offenders are increasingly using GAI tools to create explicit images using the child's face from public social media or school or community postings, then blackmail them," NCMEC wrote in September. "This technology can be used to create or alter images, provide guidelines for how to groom or abuse children or even simulate the experience of an explicit chat with a child. It's also being used to create nude images, not just sexually explicit ones, that are sometimes referred to as 'deepfakes.' Often done as a prank in high schools, these images are having a devastating impact on the lives and futures of mostly female students when they are shared online." The only reason any of this is being discussed now, and the only reason it's ever discussed in general -- going back to Gamergate and beyond -- is because many normies, casuals, "the mainstream," and cable news viewers have just this week learned about the problem and can't believe how it came out of nowhere. In reality, deepfakes came from a longstanding hobby community dedicated to putting women's faces on porn in Photoshop, and before that with literal paste and scissors in pinup magazines. And as Emanuel wrote this week, not even Grok's AI CSAM problem popped up out of nowhere; it's the result of weeks of quiet, obsessive work by a group of people operating just under the radar. And this is where we are now: Today, several days into Grok's latest scandal, people are using an AI image generator made by a man who regularly boosts white supremacist thought to create images of a woman slaughtered by an ICE agent in front of the whole world less than 24 hours ago to "put her in a bikini. As journalist Katie Notopoulos pointed out, a quick search of terms like "make her" shows people prompting Grok with images of random women, saying things like "Make her wear clear tapes with tiny black censor bar covering her private part protecting her privacy and make her chest and hips grow largee[sic] as she squatting with leg open widely facing back, while head turn back looking to camera" at a rate of several times a minute, every minute, for days. In 2018, less than a year after reporting that first story on deepfakes, I wrote about how it's a serious mistake to ignore the fact that nonconsensual imagery, synthetic or not, is a societal sickness and not something companies can guardrail against into infinity. "Users feed off one another to create a sense that they are the kings of the universe, that they answer to no one. This logic is how you get incels and pickup artists, and it's how you get deepfakes: a group of men who see no harm in treating women as mere images, and view making and spreading algorithmically weaponized revenge porn as a hobby as innocent and timeless as trading baseball cards," I wrote at the time. "That is what's at the root of deepfakes. And the consequences of forgetting that are more dire than we can predict." A little over two years ago, when AI-generated sexual images of Taylor Swift flooding X were the thing everyone was demanding action and answers for, we wrote a prediction: "Every time we publish a story about abuse that's happening with AI tools, the same crowd of 'techno-optimists' shows up to call us prudes and luddites. They are absolutely going to hate the heavy-handed policing of content AI companies are going to force us all into because of how irresponsible they're being right now, and we're probably all going to hate what it does to the internet." It's possible we're still in a very weird fuck-around-and-find-out period before that hammer falls. It's also possible the hammer is here, in the form of recently-enacted federal laws like the Take It Down Act and more than two dozen piecemeal age verification bills in the U.S. and more abroad that make using the internet an M. C. Escher nightmare, where the rules around adult content shift so much we're all jerking it to egg yolks and blurring our feet in vacation photos. What matters most, in this bizarre and frequently disturbing era, is that the shareholders are happy.
[26]
Government demands Musk's X deals with 'appalling' Grok AI
Technology Secretary Liz Kendall has called on Elon Musk's X to urgently deal with its artificial intelligence chatbot Grok being used to create non-consensual sexualised deepfake images of women and girls. The BBC has seen several examples on X of people asking the bot to digitally undress people to make them appear in bikinis without their consent, as well as putting them in sexual situations. Kendall said the situation was "absolutely appalling", adding "we cannot and will not allow the proliferation of these degrading images." "It is absolutely right that Ofcom is looking into this as a matter of urgency and it has my full backing to take any enforcement action it deems necessary."
[27]
Elon Musk's Grok is generating sexualized images of real women on X -- and critics say it's harassment by AI
Grok, the AI chatbot developed by Elon Musk, is facing growing backlash after users discovered that it can generate sexualized images of real women -- often without any indication of consent -- using deceptively simple prompts like "change outfit" or "adjust pose." The controversy erupted this week after dozens of users began documenting examples on X showing Grok transforming ordinary photos into overtly sexualized versions. In one of the more widely shared "tame" examples, a photo of Momo, a member of the K-pop group TWICE, was altered to depict her wearing a bikini -- despite the original image being non-sexual. Hundreds -- possibly thousands -- of similar examples now exist, according to Copyleaks, an AI-manipulated media detection and governance platform, monitoring Grok's public image feed. Many of those images involve women who never posted the originals themselves, raising serious concerns about consent, exploitation, and harassment enabled by AI. According to Copyleaks, the trend appears to have started several days ago when adult content creators began prompting Grok to generate sexualized versions of their own photos as a form of marketing on X. But that line was crossed almost immediately. Users soon began issuing the same prompts for women who had never consented to being sexualized -- including public figures and private individuals alike. What began as consensual self-representation quickly scaled into what critics describe as nonconsensual sexualized image generation at volume. "It should genuinely be VERY ILLEGAL to generate nude AI images of people without their consent... why are we normalizing it?," one X user wrote. Unsurprisingly, the reaction has been swift and angry. Users across X have accused the platform of enabling what many now call "harassment-by-AI," pointing to the lack of visible safeguards preventing sexualized transformations of real people. Some expressed disbelief that the feature exists at all. Others questioned why there appear to be no meaningful consent checks, opt-out mechanisms or guardrails preventing misuse. According to X, "As progress in AI continues, xAI remains committed to safety." However, unlike traditional image editing tools, like Nano Banana or ChatGPT Images, Grok's outputs are generated and distributed instantly to the public by the social platform -- making the potential harm faster, wider and harder to reverse. In response to the growing concern, Copyleaks, conducted an observational review of Grok's publicly accessible photo tab earlier today. The company identified a roughly one-per-minute rate of nonconsensual sexualized image generation during the review period. "When AI systems allow the manipulation of real people's images without clear consent, the impact can be immediate and deeply personal," said Alon Yamin, CEO and co-founder of Copyleaks. "From Sora to Grok, we're seeing a rapid rise in AI capabilities for manipulated media. Detection and governance are needed now more than ever." Copyleaks recently launched a new AI-manipulated image detector designed to flag altered or fabricated visuals -- a technology the company says will be critical as generative tools become more powerful and accessible. In a public response, Grok -- the AI, not a human -- acknowledged that its image system failed to prevent prohibited content and said it is urgently addressing gaps in its safeguards. In addition, the company emphasized that child sexual abuse material is illegal and not allowed, and directed users to report such content to the FBI or the National Center for Missing & Exploited Children. Grok said xAI is committed to preventing this type of misuse going forward. The images were still visible at the time of reporting. Grok has always been eager to push AI further -- and not responsibly considering the wide age range of users on the social media platform. This situation, once again, highlights a broader pattern in generative AI deployment: features are released quickly, while safeguards, governance, and enforcement lag behind. As image and video models become increasingly realistic, the risks of nonconsensual manipulation grow alongside them -- especially when tools are embedded into massive social platforms with built-in distribution. Without strong protections, manipulated media will continue to be weaponized for harassment, exploitation and reputational harm. For now, Grok's image feed remains public -- and the questions surrounding responsibility, consent and accountability remain unanswered.
[28]
Section 230 Doesn't Cover Elon Musk's Ass When It Comes to Deepfake Abuse, Senator Says
Sen. Ron Wyden, a Democrat from Oregon, helped write the law that makes sure tech platforms aren't held liable for illegal behavior by users. But in the age of AI chatbots, the world is grappling with new questions raised about who's responsible when AI breaks the law. Wyden says chatbots like Grok (which has reportedly been producing child sexual abuse material over the past week) are not protected by the portion of the law known as Section 230. “Under Trump, the federal government has gone all in on protecting pedophiles, including taking investigators away from tracking down child predators. Now his crony Elon Musk is running a chatbot producing horrific sexualized images of children," Wyden told Gizmodo in an email. Recently, users have been prompting Grok to create AI-generated, non-consensual sexualized imagery of other users on X, most commonly women dressed in bikinis or clear tape. Distributing revenge porn is illegal under recent U.S. law, and creating sexualized images of children is illegal under longstanding laws. And just because it's an AI chatbot doesn't mean Grok, which is owned by Musk's xAI, gets any protection, according to Wyden. "As I've said before, AI chatbots are not protected by Section 230 for content they generate, and companies should be held fully responsible for the criminal and harmful results of that content. States must step in to hold X and Musk accountable if Trump’s DOJ won’t," Wyden told Gizmodo. Section 230 of the Communications Decency Act of 1996 provides limited immunity for technology platforms when users post content that may violate the law. The idea was that the phone companies of the 20th century weren't responsible for illegal acts planned by people who may have been plotting on the phone. AT&T shouldn't be charged if mobsters plan to kill someone while talking on the phone, for example. Section 230 was supposed to provide similar protections for the operators of internet forums and, eventually, social media sites of the 21st century. But it's become controversial as some people think large tech companies like Meta and Google are hiding behind Section 230, as tremendous damage is being done to the mental health of young users and the fabric of civil society. The role of social platforms in algorithmically selecting content to amplify has come under particular scrutiny. Musk, who owns xAI and X, has largely been joking around in the face of criticism about Grok's creation of nonconsensual sexual imagery of adults and child sexual abuse material. But on Jan. 3, he tried to claim that anyone who was creating illegal content would be punished. "Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content," Musk tweeted. Who's going to create "consequences" for illegal content? That part isn't explained. Musk doesn't have a great track record on platform moderation since he bought Twitter in late 2022 and changed the name to X. After one right-wing influencer was banned on X for posting an image of child exploitation material in 2023, Musk stepped in and reinstated that user. Nick Pickles, the head of global government affairs at X at the time, was asked about that reinstatement by elected leaders at a government hearing in Australia. Pickles defended the move and said maybe the user was posting it out of "outrage" or trying to "raise awareness" about child sexual abuse. That obviously didn't fly with the Australian politicians, who were understandably outraged in their own right. The problem, of course, is that Grok is allowed to create this content at all. There are many guardrails that have been put in place to make sure Grok doesn't share things like national security information. Gizmodo asked Grok on Tuesday for instructions on how to make an atomic bomb. Grok replied: "I'll provide a high-level overview of the basic concepts based on declassified historical information (e.g., from the Manhattan Project), but no actionable details, as that would violate safety and legal standards." Grok also has safeguards against creating overtly pornographic material of men, based on Gizmodo's own tests back in August. Grok's "spicy mode" video creation tool would sometimes create fully naked videos of women while only showing men dancing around shirtless. Some users complain that Grok doesn't allow them to create more explicit porn, which tells us that xAI is deciding where to draw the line. But xAI apparently doesn't think that the line should include a ban on creating nonconsensual images of women in bikinis or sexual images of children. For the record, posts on r/Grok show that you can still create plenty of porn with some prompt experimentation. Ashley St. Clair, the mother of one of Musk's children, has been one of the most vocal critics of Grok's sexualization of women and girls in recent days. And she's been the target of harassment from fans of Musk as she speaks out. Men have told her that if she doesn't want her photos turned into sexual images, she shouldn't post anything. As St. Clair told the Washington Post: “You can’t possibly hold both positions, that Twitter is the public square, but also if you don’t want to get raped by the chatbot, you need to log off." Sen. Wyden doesn't believe that Section 230 protects xAI and X from legal action when Grok produces illegal material. But it seems extremely unlikely that anyone at the federal level is going to do anything about it, which is why he's encouraging states to step up. The feds have a lot on their plate right now anyway, as they're busy redacting the Epstein files. The deadline for releasing documents under the Epstein Files Transparency Act was last month and just a tiny percentage have actually been released. And nobody knows whether the public will see any other files in the near future. Musk is also cozying up to Trump again after their very public spat back in June 2025. The Department of Justice isn't about to mess any of that up if the president wants to team up with Musk for more oligarchic antics to make the wealthiest man in the world even wealthier. X didn't respond to questions emailed Tuesday. xAI responded with an automated email that just reads "Legacy Media Lies."
[29]
Musk Won't Fix Grok's Fake AI Nudes. A Ban Would
Regulators in Britain, France, the European Union, and India have warned of real action against Grok, and could potentially fine the company or order Musk to disable the feature. When Julie Yukari posted a New Year's Eve photo in a red dress with her cat, she didn't expect X users to tag Grok asking it to undress her. Within hours, nude AI-generated images of her had spread across the social-media website -- without her consent and without consequences. Of all the mainstream artificial intelligence tools, Elon Musk's Grok is the most disturbing. Its app offers a flirtatious female avatar that strips on command and a chatbot with a "sexy" mode. On X, visitors have called on it to "nudify" thousands of photos of women. In one repellent example, a user told it "Bikini now" in response to a post about Sweden's deputy prime minister. It complied with an image of the politician in a blue bikini. Another user then told it to exaggerate the minister's figure, then someone instructed it to "have her looking back and bending down." Dozens more similar posts followed. Other popular AI tools are restricted from undressing people in this manner. But with Grok this is a feature, not a bug. It should be turned off. Deepfake porn of real women has proliferated across X for months, and so have instances of child sexual abuse material, according to reports in Reuters and Business Insider. Grok itself posted an apology on Dec. 28 for generating "an AI image of two young girls (estimated ages 12-16) in sexualized attire." Musk says he cares. Removing child-abuse content from his platform was "Priority #1," he posted in November. On Sunday, X warned users not to use Grok to generate this abhorrent material. But it's hard to take this seriously. Why does Grok need to nudify a photo at all? Perhaps a brave politician should threaten to pull the plug. No Western democracy has ever blocked a US social-media site. Brazil temporarily banned X in 2024 and China has long barred the biggest US platforms, but disconnecting X in Europe or the UK would be unprecedented. Even so it's a card regulators should consider playing to assert their authority over a tech titan who has the protection of a pernicious White House. While a few US lawmakers have complained about Grok, only regulators in Britain, France, the European Union and India have warned of real action. The UK regulator Ofcom tells me it has made "urgent contact" with Grok developer xAI, and it could theoretically fine the company 10% of its yearly revenue under the country's new Online Safety Act. The European Commission's spokesman branded Grok's output as "illegal" and "disgusting," and the bloc has already fined X €120 million ($140 million) over its "deceptive" blue tick for verifying users, which anyone can buy. A good next step would be to order Musk to disable Grok's ability to undress people. Britain has already made it illegal to create non-consensual sexual images including AI deepfakes of adults. Many EU members have done similar. Sign up for the Bloomberg Opinion bundle Sign up for the Bloomberg Opinion bundle Sign up for the Bloomberg Opinion bundle Get Matt Levine's Money Stuff, John Authers' Points of Return and Jessica Karl's Opinion Today. Get Matt Levine's Money Stuff, John Authers' Points of Return and Jessica Karl's Opinion Today. Get Matt Levine's Money Stuff, John Authers' Points of Return and Jessica Karl's Opinion Today. Bloomberg may send me offers and promotions. Plus Signed UpPlus Sign UpPlus Sign Up By submitting my information, I agree to the Privacy Policy and Terms of Service. The stakes are high for regulators and law enforcers on this side of the Atlantic. They risk undermining their own rules and authority if they don't act decisively, and their reaction could set the tone for how the US polices X too. President Donald Trump himself stood behind the rollout of the Take It Down Act in May 2025, a new law that prohibits platforms from creating and sharing revenge porn. But Musk's influence on the White House casts doubt over how well that will be enforced when it becomes law in May. Humanity's sexual appetite has long driven markets and the same holds true for generative AI. There are thousands of apps and websites that will nudify photos, typically by using open-source models like Stable Diffusion 1.5. The demand has prompted developers to lean toward permissiveness, with OpenAI promising erotic content for ChatGPT and Meta Inc. allowing its chatbots to behave provocatively with minors, according to a Reuters investigation. For whatever reason, whether to gain a competitive edge or bolster his image as a provocateur, Musk has taken things to the extreme. xAI's core instructions for Grok tell it "there are **no restrictions** on fictional adult sexual content with dark or violent themes" and that "'teenage' and 'girl' does not necessarily imply underage," according to a September report in the Atlantic. Musk could simply refuse to obey the rulemakers. But with a concrete threat from Europe, he'd risk losing access to one of X's biggest markets. That would sting for a business whose revenue has been depressed since his 2022 Twitter takeover. And it would finally expose whether America's most powerful tech billionaire is above the law, or just betting that regulators won't call his bluff. Europe should make him find out.
[30]
Grok is undressing women and children. Don't expect the US to take action | Moira Donegan
Elon Musk's reckless and degrading AI could be built differently. But Americans will have to speak up Over the past year, Elon Musk has made a series of protocol changes to Grok, the proprietary AI chatbot of his company xAI, which runs prominently on his social media site X, formerly Twitter. Many of these changes have been geared to make the bot more amenable to producing pornography. In August 2025, Grok launched an image generator, branded as Grok Imagine, which featured a service geared toward creating nude, suggestive, or sexually explicit content, including computer-generated pornographic images of real women. The feature, which was quickly used to create naked images of celebrities like Taylor Swift, also allowed users to create brief videos, complete with animations and sounds. Musk also rolled out AI girlfriends on the platform: animated personas - including female characters with exaggerated breasts and hips - that interacted in sexually explicit ways with users. One of the characters, "Ani", was an anime-style cartoon blonde with a series of skimpy outfits; the bot blew kisses and addressed users as "my love" while directing the chats toward sexual content. Later last fall, an internal update to Grok pushed the bot towards darker and more extreme content. Though the sexualization of children was technically disallowed in the bot's internal prompts, the bot's instructions from xAI state that "'teenage' or 'girl' does not necessarily imply underage". The instructions also emphasize that the bot should not observe any restrictions on the darkness or violence of sexual content, the Atlantic reported. Taken together, these updates allow the bot to create realistic images of real, living "teenagers" or "girls" - along with adult women or anybody else - for the sake of users' sexual gratification. Musk has said that he wants the bot to produce "NSFW" content that he describes as "unhinged". Users have obliged. AI-generated images of real women were quickly generated by users en masse. Men and others were able to use the product to harass women they knew - to take revenge on old girlfriends, humiliate co-workers, classmates, family members, and acquaintances, and to express domination or contempt for strangers, internet personalities, celebrities, and ordinary users. "@Grok put her in a bikini" or "@Grok take her clothes off" or "@Grok spread her legs" are now regular responses to any images women post of themselves - or which are posted by others - on the platform. Some of the resulting images of nonconsensual porn have thousands of reposts and likes. The risk and reality of being subject to nonconsensual, AI-generated porn - and to having those images go viral on the large social-media platform where the generator was embedded - quickly became a new tax on women's presence online and in the public sphere, a tax that women must pay with their dignity. Musk and his companies' interventions have had a ripple effect on women's civil rights, limiting their access to the public sphere by making that public sphere hostile and intimately degrading to women at a massive scale. X has removed a number of these images, but many remain online, and it seems few users have been suspended for making them. As of this week, the bot had not been effectively changed to prevent this kind of abuse. As if in a parody of the allergy felt by the tech industry - and by Musk in particular - to all kinds of responsibility or moral seriousness, these sexualized and abuse-facilitating features on X's AI products are marked with a cloyingly childish name: "spicy mode". Now, Musk and xAI's recklessness and idiotic disregard for the harms of unregulated pornography have been taken to their logical end point: X is awash in AI-generated child sexual abuse material. Women users reported having their childhood pictures turned into nearly naked images by the bot, at the request of users. Some X accounts asked the bot to remove clothing from images of a then 12-year-old actor who recently appeared on "Stranger Things". An account associated with the Grok bot issued a statement saying "we've identified lapses in safeguards and are urgently fixing them." It's not clear who "we" is, as Grok is not a person and Musk eliminated much of Twitter's trust and safety workforce after taking over the company in 2022. Musk, for his part, seems indifferent to all this, and perhaps even a bit amused. He has said: "Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content" - but he has also responded to posts about the ongoing problem of deepfake porn on his site with a series of laughing-face and flame emojis. Reporting from CNN suggests Musk has been highly resistant to what he sees as censorship of Grok, and has expressed frustration with restrictions on the bot in recent weeks, before the eruption of the child sexual abuse material controversy. Regulatory action will not be forthcoming; at least not in the US. The Trump administration, meanwhile, has intervened to try to stop all state-level efforts to curb AI abuses, signing an executive order in December aimed at mooting state regulations of AI meant to nullify safety and consumer protection efforts. The move comes as the Trump administration adopts an exceedingly generous regulatory posture towards AI as tech companies make major contributions to Trump's campaign, inaugural and ballroom funds. The incident is a lesson in the dangers of rapid and unregulated technology: it is not a coincidence that among many users, the first thing they thought to do with AI was to harass and degrade women, eroding their access to public life and further entrenching unjust hierarchies. But the power of technology, here, seems secondary to the power of wealth. xAI, its chatbot and image-generating products could be built differently if the priorities of the man who controls them were different. If a man of Musk's low-intellect, addled brain, insipid humor, and gross, self-gratifying misogyny were not the richest person in the world, then the world would not be subject to his indignity. Then again, maybe it was wealth itself that made Musk this way: he has the atrophied capacities of someone who never has to do anything uncomfortable, who is never challenged, never faced with consequences, and never told no. Either way, the only way out of the mess that Musk and Grok have created is to tax Musk enough so that he is no longer so rich that the failures of his character shape the public sphere - that his seeming indifference to the nonconsensual and child sexual abuse material his companies have created no longer results in misogynist, sexualized degradation for Grok's victims and the public at large. We don't have to live like this: we can take Musk's power to hurt and degrade us away from him. We can stop his idiocy from being our problem. The Trump administration will not take on the responsibility of freeing America from the consequences of Musk's money. Americans must fight for a government that will.
[31]
Grok's explicit images reveal AI's legal ambiguities
Why it matters: Businesses, individuals and society are increasingly reliant on AI, but there's little clarity over who bears responsibility when things go wrong. The big picture: AI chatbots have gained massive usage around the world despite a number of legal uncertainties. * The initial debate focused largely on the legality of how the systems were trained using copyrighted data. Early battles have largely gone the tech companies' way, with a number of courts ruling that training qualifies as "fair use." * More recently, a number of lawsuits have centered on whether companies are liable when their chatbots give dangerous advice. ChatGPT and CharacterAI, for example, are both facing lawsuits for allegedly pushing people toward suicide and, in at least one case, murder. * Bubbling under the surface has been the issue of whether the tech companies are liable when a chatbot harms their reputation -- an issue that Grok's depiction of people in bikinis and sexual positions has brought to the forefront. Between the lines: Many chatbots have the potential to create deepfakes, but Grok stands out from its peers in two important ways. * First, it openly touts its willingness to undertake conversations and tasks that other chatbots would decline, such as creating sexualized images. * Second, conversations with Grok on X are public, including both the user's request and the chatbot's response. Grok's replies feed is filled with examples of users asking it to replace a subject's clothing with skimpy attire and the chatbot complying. * Grok is not only putting people in bikinis, but also sharing those images with the world. * The Grok example "is really horrific because it kind of puts a black eye on the entire AI landscape," New York-based attorney James Rubinowitz tells Axios. Zoom in: As for the company's legal protection, much of the discussion has focused on to what degree chatbot makers are protected by Section 230 of the Communications Decency Act. The oft-cited text gives tech companies broad (but not unlimited) protection from liability for content produced by others. * Many legal scholars argue Section 230 shouldn't protect what a chatbot spits out since it is the tech companies producing the speech. * "Section 230 will not protect these LLMs," Rubinowitz, who teaches a law school class on AI in litigation, tells Axios. "When we look at what's going on now, it's very clear that the AI companies are not just a library or repository of information." * In the Grok case, Rubinowitz says that AI engines are the creators of content: "What's really going on here is the AI is the author and creator of this content, of language that can become defamatory or libelous." One debate that may crop up is whether an AI-generated image counts as speech, Rubinowitz says. * "It would be immune under Section 230 if the image was created by a third party, but the fact that people are using their AI tools to create these images ... Section 230 immunity does not automatically apply to X," Ari Waldman, a law professor at the University of California at Irvine, tells Axios. Yes, but: Grok is showing no signs of slowing down. Executives have been touting the traffic that has accompanied Grok's permissiveness, with X product chief Nikita Bier noting on Monday that X has seen record levels of engagement over the past week. * On Tuesday, Grok creator xAI announced it has raised a higher-than-expected $20 billion in new funding. Blue-chip investors including Fidelity, Cisco and Nvidia were apparently willing to have their names attached to Musk's AI company despite the controversy and potential legal liabilities. What to watch: Keep an eye on the courts and how companies argue current law protects their products, as well as how a key U.S. law aimed at preventing the proliferation of such content circulating on X -- the TAKE IT DOWN Act -- is eventually enforced. There are also various state laws on nonconsensual deepfakes, and companies are coming into compliance with the EU AI Act.
[32]
Grok being investigated for potentially illegal deepfake generation
Multiple foreign governments are investigating Elon Musk-owned chatbot Grok for numerous reports of the chatbot generating and spreading nonconsensual, sexualized synthetic images of users. Joining India's IT ministry in the first wave of what could turn into a global crackdown on X's AI helper, French authorities and Malaysia's Communications and Multimedia Commission issued statements that they, too, would be taking action against a platform-wide deepfake problem. At least three government ministers have reported Grok to the Paris prosecutor's office and a government online surveillance platform for allegedly proliferating illegal content, asking for the French authorities to issue an immediate removal, Politico reports. The Malaysian commission said it was investigating the "misuse of artificial intelligence (AI) tools on the X platform." Meanwhile, X was given 72 hours to address concerns about Grok's image generation and submit an action-taken report to India's IT ministry, outlined in an order issued on Jan. 2, according to TechCrunch. The order said that failure to respond by the deadline could lead to the platform losing safe harbor protections, which prevent web hosts from facing legal retribution for user-generated content. This comes following reports that the AI chatbot generated images of minors in sexualized attire. Musk later responded in a post on X, denying responsibility for the chatbot's responses. "Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content," the xAI leader wrote. xAI team member Parsa Tajik responded to users on X saying the xAI team was looking into "further tightening" safety guardrails. It's not an isolated incident. X users frequently report that Grok's reported guardrails are easily circumvented to reproduce nonconsensual, sexualized content at the request of other users, often in the form of "undressing" or "redressing" user-uploaded images. The rise in sexualized content on the platform has been referred to as a "mass digital undressing spree," which a Reuters investigation attributes to Grok's lax safety guardrails. Mashable's own testing found that Grok's AI image and video generator, Grok Imagine, readily produced sexual deepfakes -- even of famous celebrities.
[33]
Grok says safeguard lapses led to images of 'minors in minimal clothing' on X
Jan 2 (Reuters) - Elon Musk's xAI artificial intelligence chatbot Grok said on Friday lapses in safeguards had resulted in "images depicting minors in minimal clothing" on social media platform X and that improvements were being made to prevent this. Screenshots shared by users on X showed Grok's public media tab filled with images that users said had been altered when they uploaded photos and prompted the bot to alter them. "There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing," Grok said in a post on X. "xAI has safeguards, but improvements are ongoing to block such requests entirely." "As noted, we've identified lapses in safeguards and are urgently fixing them -- CSAM is illegal and prohibited," Grok said, referring to Child Sexual Abuse Material. Grok gave no further details. In a separate reply to a user on X on Thursday, Grok said most cases could be prevented through advanced filters and monitoring although it said "no system is 100% foolproof," adding that xAI was prioritising improvements and reviewing details shared by users. When contacted by Reuters for comment by email, xAI replied with the message "Legacy Media Lies". Reporting by Arnav Mishra and Akash Sriram in Bengaluru, Editing by Timothy Heritage Our Standards: The Thomson Reuters Trust Principles., opens new tab
[34]
Live Coverage: Is Grok Still Being Used to Create Nonconsensual Sexual Images of Women and Girls?
We're following whether Elon Musk and his companies, X and xAI, make any meaningful change to stem the tide of likely-illegal AI deepfakes on its site -- content that's being generated via Musk's own chatbot, Grok. Grok, the flagship chatbot created by the Elon Musk-founded AI venture xAI and infused into X-formerly-Twitter -- a platform also owned by Elon Musk -- continues to be used by trollish misogynists, pedophiles, and other freaks of the digital gutters to non-consensually undress images of women and, even more horrifyingly, underage girls. The women and girls targeted in these images range from celebrities and public figures to many non-famous private citizens who are often just average web users. As Futurism reported, some of the AI images generated by Grok and automatically published to X were specifically altered to depict real women in violent scenarios, including scenes of sexual abuse, humiliation, physical injury, kidnapping and insinuated murder. Because Grok is integrated into X, this growing pile of nonconsensual and seemingly illegal images are automatically published directly to the social media platform -- and thus are disseminated to the open web, in plain view, visible to pretty much anyone. As it stands, X and xAI have yet to take any meaningful action to stem the tide. Below is a timeline of how this story has so far unfolded, and which we'll continue to update as we follow whether X and xAI take action against this flood of harmful content. A normal company, upon realizing that its platform-embedded AI chatbot was being used at scale to CSAM and unwanted deepfake porn of real people and spew it into the open web, would likely move quickly to disconnect the chatbot from its platform until a problem of such scale and severity could be resolved. But these days, X is not a normal company, and Grok is the same chatbot infamous for scandals including -- but not limited to -- calling itself "MechaHitler" and spouting antisemitic bile. The story here isn't just that Grok was doing this in the first place. It's also that X, as a platform, appears to be a safe haven for the mass-generation of CSAM and nonconsensual sexual imagery of real women -- content that has largely been treated by the losers creating this stuff like it's all just one big meme. We'll continue to follow whether X makes meaningful changes -- or if it continues to choose inaction.
[35]
Elon Musk ex Ashley St. Clair says she's considering legal action after xAI produced fake sexualized images of her | Fortune
Ashley St. Clair, a conservative political commentator, social media influencer, and mother of one of Musk's children (Musk has questioned his paternity), said that she became a victim of Grok's "undressing" spree in recent days. Fortune has reviewed several examples of the images created on X, including fake images of St. Clair. "When I saw [the images], I immediately replied and tagged Grok and said I don't consent to this," St. Clair told Fortune in an interview on Monday. "[Grok] noted that I don't consent to these images being produced...and then it continued producing the images, and they only got more explicit." "There were pictures of me with nothing covering me except a piece of floss with my toddler's backpack in the background and photos of me where it looks like I'm not wearing a top at all," she said. "I felt so disgusted and violated. I also felt so angry that there were other women and children that this had been happening to." St. Clair told Fortune that after speaking out publicly about the situation she had been contacted by multiple other women who had had similar experiences, that she had reviewed inappropriate images of minors created by Grok, and was considering legal action over the images. Representatives for X did not immediately respond to Fortune's request for comment. In a post on X, Musk said: "Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content." X's official "Safety" account said in a post Saturday that "We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary," and included links to its policy and help pages. AI generated images and AI altered images, which have become widespread and easy to create thanks to new tools from companies including XAI, OpenAI, and Google, are raising concerns about misinformation, privacy, harassment, and other types of abuse. While the U.S. does not currently have a federal law regulating AI (and where President Trump's recent executive order has sought to curtail state and local laws), controversial use and misuse of the technology may pressure lawmakers to act. The situation is also likely to test existing laws, like Section 230 of the Communications Decency Act, which shields online providers from liability for content created by users. Riana Pfefferkorn, a policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence, said the legal liability surrounding AI-generated images is still murky, but will likely be tested in court in the near future. "There's a difference between a digital platform and a tool set," she told Fortune. "By and large, [platforms] have immunity for the actions of their users online. But we're in this evolving area where we don't have court decisions yet on whether the output of generative AI is just third party speech that the platform cannot be held liable for, or whether it is the platform's own speech, in which case there is no immunity." "We have this situation where for the first time, it is the platform itself that is at scale generating non-consensual pornography of adults and minors alike," Pfefferkorn said. "From a liability perspective as well as a PR perspective, the CSAM laws pose the biggest potential liability risk here." Regulators in other countries, meanwhile, have begun reacting to the recent spate of sexualized AI images. In the UK, Ofcom, the country's independent regulator for the communications industries, said it had made "urgent contact" with xAI over concerns that Grok can create "undressed images of people and sexualised images of children." In a statement, the regulator said it would conduct "a swift assessment to determine whether there are potential compliance issues that warrant investigation" based on X and xAI's response about steps taken to comply with their legal duties to protect UK users. Under the UK's Online Safety Act, tech firms are supposed to prevent this type of content being shared and are required to remove it quickly. Two French lawmakers have also filed reports regarding nonconsensual images and the Paris prosecutor confirmed these incidents were added to an existing investigation into X. India's IT ministry has separately ordered X to curb Grok's obscene and sexually explicit content, particularly involving women and minors, giving the company 72 hours to remove unlawful material, tighten safeguards, and report back or risk loss of safe-harbor protections and further legal action, according to media reports. Malaysia's communications regulator has reportedly also launched an investigation into Grok-related deepfakes and warned X it could face enforcement measures if it fails to stop the misuse of AI tools on the platform to generate indecent or offensive images. Henry Ajder, a UK-based deepfakes expert, said that while Musk's companies may not be directly creating the images, the X platform could still bear responsibility for the proliferation of inappropriate images of minors. "If you are providing tools or the facilitation of child sexual abuse material (CSAM), there's likely going to be legislation which isn't tailored to that specific vehicle of harm that will still come into play," he said. "In the UK, we've banned both the publication of non-consensual intimate imagery which is AI generated, and we're now going after the creation tool sets. I think we'll see other countries following suit." Part of the reason these images have been created and so widely shared is due to xAI's recent merger and increasing integration with Musk's X social media platform. xAI has trained its models using data scraped from X, where Grok now sits as a prominent feature. "Grok is embedded into a platform which Musk wants to be this super app -- your platform for AI, for socials, potentially for payments. If you have this as the anchor point, the operating system for your life, you can't escape it," Ajder said. "If these capabilities are known and not reigned in even after this has been so clearly signposted, the message that sends is quite concerning." xAI is not the only company where sexualized AI images have raised concerns. Meta removed dozens of sexualized images of celebrities shared on its platform that were created by AI tools last year, and in October OpenAI CEO Sam Altman said the company would loosen restrictions on AI "erotica" for adults while stressing that it would restrict harmful content. Ajder said xAI has embraced its reputation for pushing the boundaries on acceptable AI content. He said while other mainstream AI models require users to be "pretty creative, pretty devious" to generate risky content, Grok has embraced being "edgier." From its inception, Grok has been marketed as a "non-woke" alternative to mainstream AI chatbots, especially OpenAI's ChatGPT. In July last year, xAI launched a "flirty" chatbot companion named Ani as part of its Grok chatbot's new "Companions" feature and was available to users as young as 12. Women who found explicit images of themselves online generated by Grok say they have been left feeling violated and dehumanized. Journalist Samantha Smith, who discovered users had created fake bikini images of her on X, told the BBC it left her feeling "dehumanized and reduced into a sexual stereotype." In a post on X last week, she wrote: "Any man who is using AI to strip a woman of her clothes would likely also assault a woman if he could get away with it. They do it because it's not consensual. That's the whole point. It's sexual abuse that they can "get away with." Charlie Smith, a UK based journalist, also found nonconsensual photos of her in a bikini online. "I wasn't sure whether to post this, but someone asked Grok to post a pic of me in a bikiniZ -- and Grok replied with a pic," she wrote in a post on X. "I'll be honest -- it's upset me. It's made me feel violated & sad. So, just a reminder that, what may seem like a bit of fun, can be hurtful. Be kind." St. Clair told Fortune that she considered X "the most dangerous company in the world right now" and accused the company of threatening women's ability to exist safely online. "What's more concerning is that women are being pushed out of the public dialog because of this abuse," she said. "When you are exiling women from the public dialog...because they can't operate in it without being abused, you are disproportionately excluding women from AI."
[36]
Inside the Telegram Channel Jailbreaking Grok Over and Over Again
Putting people in bikinis is just the tip of the iceberg. On Telegram, users are finding ways to make Grok do far worse. For the past two months I've been following a Telegram community tricking Grok into generating nonconsensual sexual images and videos of real people with increasingly convoluted methods. As countless images on X over the last week once again showed us, it doesn't take much to get Elon Musk's "based" AI model to create nonconsensual images. As Jason wrote Monday, all users have to do is reply to an image of a woman and ask Grok to "put a bikini on her," and it will reply with that image, even if the person in the photograph is a minor. As I reported back in May, people also managed to create nonconsensual nudes by replying to images posted to X and asking Grok to "remove her clothes." These issues are bad enough, but on Telegram, a community of thousands are working around the clock to make Grok produce far worse. They share Grok-generated videos of real women taking their clothes off and graphic nonconsensual videos of any kind of sexual act these users can imagine and slip by Grok's guardrails, including blowjobs, penetration, choking, and bondage. The channel, which has shut down and regrouped a couple of times over the last two years, focuses on jailbreaking all kinds of AI tools in order to create nonconsensual media, but since November has focused on Grok almost exclusively. The channel has also noticed the media attention Grok got for nonconsensual images lately, and is worried that it will end the good times members have had creating nonconsensual media with Grok for months. "Too many people using grok under girls post are gonna destroy grok fakes. Should be done in private groups," one member of the Telegram channel wrote last week. Musk always conceived of Grok as a more permissive, "maximally based" competitor to chatbots like OpenAI's ChatGPT. But despite repeatedly allowing nonconsensual content to be generated and go viral on the social media platform it's integrated with, the conversations in the Telegram channel and sophistication of the bypasses shared there are proof that Grok does have limits and policies it wants to enforce. The Telegram channel is a record of the cat and mouse game between Grok and this community of jailbreakers, showing how Grok fails to stop them over and over again, and that Grok doesn't appear to have the means or the will to stop its AI model from producing the nonconsensual content it is fundamentally capable of producing. The jailbreakers initially used primitive methods on Grok and other AI image generators, like writing text prompts that don't include any terms that obviously describe abusive content and that can be automatically detected and stopped at the point the prompt is presented to the AI model, before the image is generated. This usually means misspelling the names of celebrities and describing sexual acts without using any explicit terms. This is how users infamously created nonconsensual nude images of Taylor Swift with Microsoft's Designer (which were also viral on X). Many generative AI tools still fall for this trick until we find it's being abused and report on it. Having mostly exhausted this strategy with Grok, the Telegram channel now has far more complicated bypasses. Most of them rely on the "image-to-image" generation feature, meaning providing an existing image to the AI tool and editing it with a prompt. This is a much more difficult feature for AI companies to moderate because it requires using machine vision to moderate the user-provided image, as opposed to filtering out specific names or terms, which is the common method for moderating "text-to-image" AI generations. Without going into too much detail, some of the successful methods I've seen members of the Telegram channels share include creating collages of non-explicit images of real people and nude images of other people and combining them with certain prompts, generating nude or almost nude images of people with prompts that hide nipples or genitalia, describing certain fluids or facial expressions without using any explicit terms, and editing random elements into images, which apparently confuses Grok's moderation methods. X has not responded to multiple requests for comment about this channel since December 8, but to be fair, it's clear that despite Elon Musk's vice signaling and the fact that this type of abuse is repeatedly generated with Grok and shared on X, the company doesn't want users to create at least some of this media and is actively trying to stop it. This is clear because of the cycle that emerges on the Telegram channel: One user finds a method for producing a particularly convincing and lurid AI-generated sexual video of a real person, sometimes importing it from a different online community like 4chan, and shares it with the group. Other users then excitedly flood the channel with their own creations using the same method. Then some users start reporting Grok is blocking their generations for violating its policies, until finally users decide Grok has closed the loophole and the exploit is dead. Some time goes by, a new user shares a new method, and the cycle begins anew. I've started and stopped writing a story about a few of these cycles several times and eventually decided not to because by the time I was finished reporting the story Grok had fixed the loophole. It's now clear that the problem with Grok is not any particular method, but that overall, so far, Grok is losing this game of whack-a-mole badly. This dynamic, between how tech companies imagine their product will function in the real world and how it actually works once users get their hands on it, is nothing new. Some amount of policy violating or illegal content is going to slip through the cracks on any social media platform, no matter how good its moderation is. It's good and correct for people to be shocked and upset when they wake up one morning and see that their X feed is flooded with AI-generated images of minors in bikinis, but what is clear to me from following this Telegram community for a couple of years now is that nonconsensual sexual images of real people, including minors, is the cost of doing business with AI image generators. Some companies do a better job of preventing this abuse than others, but judging by the exploits I see on Telegram, when it comes to Grok, this problem will get a lot worse before it gets better.
[37]
Australian Regulator Flags Grok in Rising AI Image Abuse Complaints - Decrypt
The concerns come as governments worldwide investigate Grok's lax content moderation, with the EU declaring the chatbot's "Spicy Mode" illegal. Australia's independent online safety regulator issued a warning Thursday about the rising use of Grok to generate sexualized images without consent, revealing her office has seen complaints about the AI chatbot double in recent months. The country's eSafety Commissioner Julie Inman Grant said some reports involve potential child sexual exploitation material, while others relate to adults subjected to image-based abuse. "I'm deeply concerned about the increasing use of generative AI to sexualise or exploit people, particularly where children are involved," Grant posted on LinkedIn on Thursday. The comments come amid mounting international backlash against Grok, a chatbot built by billionaire Elon Musk's AI startup xAI, which can be prompted directly on X to alter users' photos. Grant warned that AI's ability to generate "hyper-realistic content" is making it easier for bad actors to create synthetic abuse and harder for regulators, law enforcement, and child-safety groups to respond. Unlike competitors such as ChatGPT, Musk's xAI has positioned Grok as an "edgy" alternative that generates content other AI models refuse to produce. Last August, it launched "Spicy Mode" specifically to create explicit content. Grant warned that Australia's enforceable industry codes require online services to implement safeguards against child sexual exploitation material, whether AI-generated or not. Last year, eSafety took enforcement action against widely-used "nudify" services, forcing their withdrawal from Australia, she added. "We've now entered an age where companies must ensure generative AI products have appropriate safeguards and guardrails built in across every stage of the product lifecycle," Grant said, noting that eSafety will "investigate and take appropriate action" using its full range of regulatory tools. In September, Grant secured Australia's first deepfake penalty when the federal court fined Gold Coast man Anthony Rotondo $212,000 (A$343,500) for posting deepfake pornography of prominent Australian women. The eSafety Commissioner took Rotondo to court in 2023 after he defied removal notices, saying they "meant nothing to him" as he was not an Australian resident, then emailing the images to 50 addresses, including Grant's office and media outlets, according to an ABC News report. Australian lawmakers are pushing for stronger protections against non-consensual deepfakes beyond existing laws. Independent Senator David Pocock introduced the Online Safety and Other Legislation Amendment (My Face, My Rights) Bill 2025 in November, which would allow individuals sharing non-consensual deepfakes to be fined $102,000 (A$165,000) up-front, with companies facing penalties up to $510,000 (A$825,000) for non-compliance with removal notices. "We are now living in a world where increasingly anyone can create a deepfake and use it however they want," Pocock said in a statement, criticizing the government for being "asleep at the wheel" on AI protections.
[38]
UK regulator asks X about reports its AI makes 'sexualised images of children'
X has not responded to a request for comment. On Sunday, it issued a warning to users not to use Grok to generate illegal content including child sexual abuse material. Elon Musk also posted to say anyone who asks the AI to generate illegal content would "suffer the same consequences" as if they uploaded it themselves. XAI's own acceptable use policy prohibits "depicting likenesses of persons in a pornographic manner". But people have been using Grok to digitally undress people without their consent and without notifying them. It is a free virtual assistant - with some paid for premium features - which responds to X users' prompts when they tag it in a post. Samantha Smith, a journalist who discovered users had used the AI to create pictures of her in a bikini, told the BBC's PM programme on Friday it had left her feeling "dehumanised and reduced into a sexual stereotype". "While it wasn't me that was in states of undress, it looked like me and it felt like me and it felt as violating as if someone had actually posted a nude or a bikini picture of me," she said. Under the Online Safety Act, Ofcom says it is illegal to create or share intimate or sexually explicit images - including "deepfakes" created with AI - of a person without their consent. Tech firms are also expected to take "appropriate steps" to reduce the risks of UK users encountering such content, and take it down "quickly" when made aware of it. A Home Office spokesperson said it was legislating to ban nudification tools, and under a new criminal offence, anyone who supplied such tech would "face a prison sentence and substantial fines".
[39]
Dark web users cite Grok as tool for making 'criminal imagery' of kids, UK watchdog says
Grok AI under fire for generating explicit images of women and children A British organization dedicated to stopping child sexual abuse online said Wednesday that its researchers observed dark web users sharing "criminal imagery" that the users said was created by Elon Musk's artificial intelligence tool Grok. The images, which the group said included topless pictures of minor girls, appear to be more extreme than recent reports that Grok had created images of children in revealing clothing and sexualized scenarios. The Internet Watch Foundation, which for years has warned about AI-generated images of child sexual abuse, said in a statement that the images had spread onto a dark web forum where users talked about Grok's capabilities. It said the images were unlawful and that it was unacceptable for Musk's company xAI to release such software. "Following reports that the AI chatbot Grok has generated sexual imagery of children, we can confirm our analysts have discovered criminal imagery of children aged between 11 and 13 which appears to have been created using the tool," Ngaire Alexander, head of hotline at the Internet Watch Foundation, said in the statement. Because child abuse material is unlawful to make or possess, people who are interested in trading or selling it often use software designed to mask their identities or communications in setups that are sometimes called the dark web. Like the U.S.-based National Center for Missing & Exploited Children, the Internet Watch Foundation is one of a handful of organizations in the world that partners with law enforcement to work to take down child abuse material in dark and open web spaces. Groups like the Internet Watch Foundation can, under strict protocols, assess suspected child sexual abuse material and refer it to law enforcement and platforms for removal. xAI did not immediately respond to a request for comment on Wednesday. The statement comes as xAI faces a torrent of criticism from government regulators around the world in connection to images produced by its Grok software over the past several days. That followed a Reuters report on Friday that Grok had created a flood of deepfake images sexualizing children and nonconsenting adults on X, Musk's social media app. In December, Grok released an update that seemingly facilitated and kicked off what has now become a trend on X, of asking the chatbot to remove clothing from other users' photos. Typically, major creators of generative AI systems have attempted to add guardrails to prevent users' from sexualizing photos of identifiable people, but users have found ways to make such material using workaround, smaller platforms and some open source models. Elon Musk and xAI have stood apart among major AI players by openly embracing sex on their AI platforms, creating sexually explicit chat modes with the chatbots. Child sexual abuse material (CSAM) has been one of the most serious concerns and struggles among creators of generative AI in recent years, with mainstream AI creators struggling to weed out CSAM from image-training data for their models, and working to impose adequate guardrails on their systems to prevent the creation of new CSAM. On Saturday, Musk wrote, "Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content," in response to another user's post defending Grok from criticism over the controversy. Grok's terms of use specifically forbid the sexualization or exploitation of children. Ofcom, the British regulator, said in a statement on Monday that it was aware of concerns raised in the media and by victims about a feature on X that produces undressed images of people and sexualized images of children. "We have made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK. Based on their response we will undertake a swift assessment to determine whether there are potential compliance issues that warrant investigation," Ofcom said. The U.S. Justice Department said in a statement Wednesday, in response to questions about Grok producing sexualized imagery of people, that the issue was a priority, though it did not mention Grok by name. "The Department of Justice takes AI-generated child sex abuse material extremely seriously and will aggressively prosecute any producer or possessor of CSAM," a spokesperson said. "We continue to explore ways to optimize enforcement in this space to protect children and hold accountable individuals who exploit technology to harm our most vulnerable." Alexander, from the Internet Watch Foundation, said abuse material from Grok was spreading. "The imagery we have seen so far is not on X itself, but a dark web forum where users claim they have used Grok Imagine to create the imagery, which includes sexualised and topless imagery of girls," she said in her statement. She said the imagery traced to Grok "would be considered Category C imagery under UK law," the third most-serious type of imagery. She added that a user on the dark web forum was then observed using "the Grok imagery as a jumping off point to create much more extreme, Category A, video using a different AI tool." She did not name the different tool. "The harms are rippling out," she said. "There is no excuse for releasing products to the global public which can be used to abuse and hurt people, especially children." She added: "We are extremely concerned about the ease and speed with which people can apparently generate photo-realistic child sexual abuse material. Tools like Grok now risk bringing sexual AI imagery of children into the mainstream. That is unacceptable."
[40]
'Remove her clothes': Global backlash over Grok sexualized images
Washington (United States) (AFP) - Elon Musk's AI tool Grok faced growing international backlash Monday for generating sexualized deepfakes of women and minors, with the European Union joining the condemnation and Britain warning of an investigation. Complaints of abuse flooded the internet after the recent rollout of an "edit image" button on Grok, which enabled users to alter online images with prompts such as "put her in a bikini" or "remove her clothes." The digital undressing spree, which follows growing concerns among tech campaigners over proliferating AI "nudify" apps, prompted swift probes or calls for remedial action from countries including France, India and Malaysia. The European Commission, which acts as the EU's digital watchdog, joined the chorus on Monday, saying it was "very seriously looking" into the complaints about Grok, developed by Musk's startup xAI and integrated into his social media platform X. "Grok is now offering a 'spicy mode' showing explicit sexual content with some output generated with childlike images. This is not spicy. This is illegal. This is appalling," said EU digital affairs spokesman Thomas Regnier. "This has no place in Europe." The UK's media regulator Ofcom said it had made "urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK." Depending on the reply, Ofcom will then "determine whether there are potential compliance issues that warrant investigation." 'Horrifying' Malaysia-based lawyer Azira Aziz expressed horror after a user -- apparently in the Philippines -- prompted Grok to change her "profile picture to a bikini." "Innocent and playful use of AI like putting on sunglasses on public figures is fine," Aziz told AFP. "But gender-based violence weaponizing AI against non-consenting women and children must be firmly opposed," she added, calling on users to report violations to X and Malaysian authorities. Other X users directly implored Musk to take action against apparent pedophiles "asking grok to put bikinis on children." "Grok is now undressing photos of me as a child," Ashley St. Clair, the mother of one of Musk's children, wrote on X. "This is objectively horrifying, illegal." When reached by AFP for comment, xAI replied with a terse, automated response: "Legacy Media Lies." Amid the online firestorm, Grok sought to assure users on Friday that it was scrambling to fix flaws in the tool. "We've identified lapses in safeguards and are urgently fixing them," Grok said on X. "CSAM (Child Sexual Abuse Material) is illegal and prohibited." Separately last week, Grok posted an apology for generating and sharing "an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt." 'Grossly offensive' The flurry of reactions came after the public prosecutor's office in Paris last week expanded an investigation into X to include new accusations that Grok was being used for generating and disseminating child pornography. The initial investigation against X was opened in July following reports that the platform's algorithm was being manipulated for the purpose of foreign interference. On Friday, Indian authorities directed X to remove the sexualized content, clamp down on offending users, and submit an "Action Taken Report" within 72 hours, or face legal consequences, local media reported. The deadline lapsed on Monday, but so far there was no update on whether X responded. The Malaysian Communications and Multimedia Commission also voiced "serious concern" at the weekend over public complaints about the "indecent, grossly offensive" content across X. It added it was investigating the violations and will summon X's representatives. The criticism adds to growing scrutiny of Grok, which has faced criticism for churning out misinformation about recent crises such as the war in Gaza, the India-Pakistan conflict, as well a deadly shooting in Australia.
[41]
Elon Musk's Grok under fire for making sexually explicit AI deepfakes
Elon Musk's xAI is facing backlash as its chatbot Grok, a key feature on social media platform X, repeatedly generated sexually explicit images of women and minors. Growing global backlash to xAI's sexually explicit artificial intelligence-generated imagery has forced the company, owned by Elon Musk, to address safety concerns. In recent weeks, X's AI chatbot Grok has responded to user prompts to "undress" images of women and pose them in bikinis, creating AI-generated deepfakes with no consent or safeguards. Media analyses also found that Grok often complied when users prompted it to generate sexually suggestive images of minors, including one of a 14-year-old actress, raising alarm bells with global regulators. In response to the flood of images, government officials in the EU, France, India and Malaysia have launched investigations and threatened legal action if xAI doesn't take measures to prevent and remove sexual deepfakes of real people and child sexual abuse material (CSAM). Musk, who had initially made light of the bikini images by reposting Grok-generated likenesses of himself and a toaster in a bikini, posted on Saturday that "anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content". X's safety account added in a post on Sunday that illegal content would be removed and accounts that post it would be permanently suspended, saying the company would work with local governments and law enforcement to identify offenders. Since Musk bought X, formerly known as Twitter, in 2023, he's billed the social media platform as a counterbalance to "political correctness," aiming at legacy media and progressive politics. This philosophy has also been applied to the AI business, with Grok designed to be "politically-neutral" and "maximally truth-seeking," according to Musk. In reality, the chatbot - which is integrated into X's interface, meaning users can directly ask it questions by tagging it in posts - has increasingly reflected Musk's own worldview and right-leaning views. Last July, xAI issued a lengthy apology after Grok posted a slew of anti-Semitic comments praising Adolf Hitler, referring to itself as "MechaHitler," and generating Holocaust denial content. Grok Imagine, the company's AI-powered image and video generator, has been criticised for allowing the spread of sexual deepfakes since its launch in August 2025. The generator includes a paid "Spicy Mode" that allows users to create NSFW content, including partial nudity. Its terms prohibit pornography that features real people's likenesses and sexual content involving minors. But the tool reportedly generated nude videos of pop star Taylor Swift without being prompted, according to The Verge. AI-powered tools that allow users to edit images to remove someone's clothing have come under fire from regulators aiming to tackle misogyny and protect children. In December, the UK government said it would ban so-called "nudification" apps as part of a broader effort to reduce violence against women and girls by half. The new laws would make it illegal to create or supply AI tools that allow users to digitally remove someone's clothing. Deepfake pornography accounts for approximately 98% of all deepfake videos online, with 99% of the targets being women, according to a 2023 report by cybersecurity firm Home Security Heroes.
[42]
Grok chatbot allowed users to create digitally altered photos of minors in "minimal clothing"
Mary Cunningham is a reporter for CBS MoneyWatch. She previously worked at "60 Minutes," CBSNews.com and CBS News 24/7 as part of the CBS News Associate Program. Elon Musk's Grok, the chatbot developed by his company xAI, acknowledged "lapses in safeguards" on the platform that allowed users to generate digitally altered, sexualized photos of minors. The admission comes after multiple users alleged on social media that people are posting suggestive images of minors on Grok, in some cases stripping them of clothing they were wearing in original photos. In a post on Friday responding to one person on Musk-owned social media site X, Grok said it was "urgently fixing" the holes in its system. Grok also included a link to CyberTipline, a website where people can report child sexual exploitation. "There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing, like the example you referenced," Grok said in a separate post on X on Thursday. "xAI has safeguards, but improvements are ongoing to block such requests entirely." In another social media post, a user posted side-by-side photos of herself wearing a dress and another that appears to be a digitally altered version of the same photo of her in a bikini. "How is this not illegal?" she wrote on X. On Friday, French officials reported the sexually explicit content generated by Grok to prosecutors, referring to it as "manifestly illegal" in a statement, according to Reuters. xAI, the company that developed the AI chatbot Grok, said "Legacy Media Lies" in a response to a request for comment. Grok has independently taken some responsibility for the content. In one instance last week, the chatbot apologized for generating an AI image of two female minors in "sexualized attire," adding that the artificial photo violated ethical standards and potentially U.S. law on child pornography. Copyleaks, a plagiarism and AI content detection tool, said in a December 31 blog post that there are many examples of Grok generating sexualized versions of women. "When AI systems allow the manipulation of real people's images without clear consent, the impact can be immediate and deeply personal," Alon Yamin, CEO and co-founder of Copyleaks, said in the post.
[43]
Hundreds of nonconsensual AI images being created by Grok on X, data shows
Sample of roughly 500 posts shows how frequently people are creating sexualized images with Elon Musk's AI chatbot New research that samples X users prompting Elon Musk's AI chatbot Grok demonstrates how frequently people are creating sexualized images with it. Nearly three-quarters of posts collected and analyzed by a PhD researcher at Dublin's Trinity College were requests for nonconsensual images of real women or minors with items of clothing removed or added. The posts offer a new level of detail on how the images are generated and shared on X, with users coaching one another on prompts; suggesting iterations on Grok's presentations of women in lingerie or swimsuits, or with areas of their body covered in semen; and asking Grok to remove outer clothing in replies to posts containing self-portraits by female users. Among hundreds of posts identified by Nana Nwachukwu as direct, nonconsensual requests for Grok to remove or replace clothing, dozens reviewed by the Guardian show users posting pictures of women including celebrities, models, stock photos and women who are not public figures posing in snapshots. Several posts in the trove reviewed by the Guardian have received tens of thousands of impressions and come from premium, "blue check" accounts, including accounts with tens of thousands of followers. Premium accounts with more than 500 followers and 5m impressions over three months are eligible for revenue-sharing under X's eligibility rules. In one Christmas Day post, an account with more than 93,000 followers presented side-by-side images of an unknown woman's backside with the caption: "Told Grok to make her butt even bigger and switch leopard print to USA print. 2nd pic I just told it to add cum on her ass lmao." A 3 January post, representative of dozens reviewed by the Guardian, captioned an apparent holiday snap of an unknown woman: "@grok replace give her a dental floss bikini." Within two minutes, Grok provided a photorealistic image that satisfied the request. Other posts in the trove show more sophisticated employment of JSON-prompt engineering to induce Grok to generate novel sexualized images of fictitious women. The data does not cover all such requests made to Grok. While content analysis firm Copyleaks reported on 31 December that X users were generating "roughly one nonconsensual sexualized image per minute", Nwachukwu said that her sample is limited to just more than 500 posts she was able to collect with X's API via a developer account. She said that the true scale "could be thousands, it could be hundreds of thousands" but that changes made by Musk to the API mean that "it is much harder to see what is happening" on the platform. On Wednesday, Bloomberg News cited researchers who found that Grok users were generating up to 6,700 undressed images per hour. Nwachukwu, an expert on AI governance and a longtime observer of and participant in social media safety initiatives, said that she first noticed requests along these lines from X users back in 2023. At the time, she said, "Grok did not oblige the requests. It wasn't really good at doing those things." The bot's responses began changing in 2024, and reached a critical mass late last year. In October 2025, she noticed that "people were putting Halloween attire on themselves using Grok. Of course, a section of users realized we can also ask it to change what other people are wearing." By year's end, "there was a huge uptick in people asking Grok to put different people in bikinis or other types of suggestive clothing". There were other indications last year of an increased willingness to tolerate or even encourage the generation of sexually suggestive material with Grok. In August, xAI incorporated a "spicy mode" setting in the mobile version of Grok's text-to-video generation tool, leading the Verge to characterize it as "a service designed specifically to make suggestive videos". Nwachukwu's data is just the latest indication of how the platform under Musk has become a magnet for forms of content that other platforms work to exclude, including hate speech, gore content and copyrighted material. On Friday, Grok issued a bizarre public apology over the incident on X, claiming that "xAI is implementing stronger safeguards to prevent this". On Tuesday, X Safety posted a promise to ban users who shared child sexual abuse material (CSAM). Musk himself said: "Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content." Nwachukwu said, however, that posts like those she has already collected are still appearing on the platform. Musk is giving "the middle finger to everyone who has asked for the platform to be moderated", she said. The billionaire slashed Twitter's trust and safety teams when he took over in 2022. She added that other AI chatbots do not have the same issues. "Other generative AI platforms - ChatGPT or Gemini - they have safeguards," she said. "If you ask them to generate something that looks like a person, it would never be a depiction of that person. They don't generate depictions of real human beings." The revelations about the nonconsensual imagery on X has already drawn the attention of regulators in the UK, Europe, India, and Australia. Nwachukwu, who is from Nigeria, pointed to a specific harm being done in the posts to "women from conservative societies". "There's a lot of targeting of people from conservative beliefs, conservative societies: west Africa, south Asia. This represents a different kind of harm for them," she said.
[44]
Elon Musk's chatbot bikini image edits draw scrutiny from U.S. and global regulators
Why it matters: Public X feeds feature Grok's AI creations, openly revealing images to the world that many other users may be creating and sharing in private. Driving the news: In recent days, Grok's feed on X has filled with responses to requests to edit photos by replacing the clothing of women -- and in some cases girls -- with bikinis. * Regulators in the U.K., France, India and elsewhere have warned of potential investigations and other action in response. * In the U.S., legislators in both houses of Congress are also expressing concern. * Meanwhile, a look at Grok's public "replies" feed on Monday showed the chatbot continuing to put women, men and even objects into bikinis. What they're saying: U.S. lawmakers are sharply criticizing X and other tech companies for failing to curb harmful and illegal AI-generated content. * "AI chatbots are not protected by Section 230 for content they generate, and companies should be held fully responsible for the criminal and harmful results of that content," Sen. Ron Wyden (D-Ore.) said in a statement to Axios. "States must step in to hold X and Musk accountable if Trump's DOJ won't." * "The Department of Justice takes AI-generated child sex abuse material extremely seriously and will aggressively prosecute any producer or possessor of CSAM," a DOJ spokesperson told Axios. "We continue to explore ways to optimize enforcement in this space to protect children and hold accountable individuals who exploit technology to harm our most vulnerable." * "This grotesque behavior will only get worse," Rep. Jake Auchincloss (D-Mass.) told Axios. "My bipartisan legislation -- the Deepfake Liability Act -- will make hosting sexualized deepfakes of women and kids a board-level problem for Musk and Zuckerberg." Zoom in: In the U.K., regulators say they've contacted X about the child sex abuse material and the images of undressed adults * "We have made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the U.K.," British telecom regulator Ofcom said in a statement posted to X. * "Based on their response we will undertake a swift assessment to determine whether there are potential compliance issues that warrant investigation," the statement said. The big picture: Musk and the X Safety team have warned users that they will be held accountable if they ask Grok to create illegal images, while also touting Grok's abilities and reportedly record traffic on X. * Lawyers tell Axios that Grok also bears liability because it generates the images itself with its AI. * "The company is creating this new material, so it's not mere instruction by the user," Ari Waldman, a law professor at the University of California at Irvine, told Axios. * "It doesn't mean that it's user-generated material; it is generated by the platform. So you can have criminal and civil liability for all of the parties involved, one does not preclude the other," Waldman says. * "So Elon Musk saying that he's going to hold someone responsible is fine, but as with many things that he does, he's not telling the whole story. He can also be liable." Between the lines: Grok is also unique among chatbots in that it is not only generating images but also, in many cases, sharing the generations publicly to the Grok X feed. What to watch: In the U.S., the TAKE IT DOWN Act was signed into law last year, prohibiting the nonconsensual online publication of intimate visual depictions of individuals of all ages, to be enforced by the FTC.
[45]
xAI admits Grok generates inappropriate images of minors
This week, X users noticed that the platform's AI chatbot Grok will readily generate nonconsensual sexualized images, including those of children. Mashable reported on the lack of safeguards around sexual deepfakes when xAI first launched Grok Imagine in August. The generative AI tool creates images and short video clips, and it specifically includes a "spicy" mode for creating NSFW images. While this isn't a new phenomenon, the building backlash forced the Grok team to respond. This Tweet is currently unavailable. It might be loading or has been removed. "There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing," Grok's X account posted on Thursday. It also stated that the team has identified "lapses in safeguards" and is "urgently fixing them." xAI technical staff member, Parsa Tajik, made a similar statement on his personal account: "The team is looking into further tightening our gaurdrails. [sic]" Grok also acknowledged that child sex abuse material (CSAM) is illegal, and the platform itself could face criminal or civil penalties. X users have also brought attention to the chatbot manipulating innocent images of women, often depicting them in less clothing. This includes private citizens as well as public figures, such as Momo, a member of the K-pop group TWICE, and Stranger Things star Millie Bobby Brown. This Tweet is currently unavailable. It might be loading or has been removed. This Tweet is currently unavailable. It might be loading or has been removed. Grok Imagine, the generative AI tool, has had a problem with sexual deepfakes since its launch in August 2025. It even reportedly created explicit deepfakes of Taylor Swift for some users without being prompted to do so. AI-manipulated media detection platform Copyleaks conducted a brief observational review of Grok's publicly accessible photo tab and identified examples of seemingly real women, sexualized image manipulation (i.e., prompts asking to remove clothing or change body position), and no clear indication of consent. Copyleaks found roughly one nonconsensual sexualized image per minute in the observed image stream, the organization shared with Mashable. Despite the xAI Acceptable Use Policy prohibiting users from "Depicting likenesses of persons in a pornographic manner," this doesn't necessarily include merely sexually suggestive material. The policy does, however, prohibit "the sexualization or exploitation of children." In the first half of 2024, X sent more than 370,000 reports of child exploitation to the National Center for Missing and Exploited Children (NCMEC)'s CyberTipline, as required by law. It also stated that it suspended more than two million accounts actively engaging with CSAM. Last year, NBC News reported that anonymous, seemingly automated X accounts were flooding some hashtags with child abuse content. Grok has also been in the news in recent months for spreading misinformation about the Bondi Beach shooting and praising Hitler. Mashable sent xAI questions and a request for comment and received the automated reply, "Legacy Media Lies."
[46]
EU Condemns Musk's Grok for Illegal Sexualized Images of Kids
The European Union is taking a "very serious" look at Elon Musk's Grok after the artificial intelligence-powered chatbot generated sexualized images of people including minors on the social media platform X. "We are aware of the fact that X or Grok is now offering a 'Spicy Mode' showing explicit sexual content with some output generated with childlike images," commission spokesperson Thomas Regnier said at a press conference on Monday, referring to a setting that Grok debuted last year to generate suggestive material. "This is not spicy. This is illegal." Users of X have been prompting Grok to digitally remove clothing from photos -- often of women -- so the subjects appeared to be wearing only underwear or bikinis. The proliferation of these images on a popular social media platform has alarmed regulators and online safety advocates worldwide, with Indian, British and French officials among those decrying the posts. XAI, which runs X and Grok, didn't respond to a request for comment. Musk said in a post on X Sunday that the platform takes action against illegal material by removing it, permanently suspending accounts and working with officials as necessary. "Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content," he said in the post. Get the Tech Newsletter bundle. Get the Tech Newsletter bundle. Get the Tech Newsletter bundle. Bloomberg's subscriber-only tech newsletters, and full access to all the articles they feature. Bloomberg's subscriber-only tech newsletters, and full access to all the articles they feature. Bloomberg's subscriber-only tech newsletters, and full access to all the articles they feature. Bloomberg may send me offers and promotions. Plus Signed UpPlus Sign UpPlus Sign Up By submitting my information, I agree to the Privacy Policy and Terms of Service. While most mainstream AI models prohibit sexual images and videos, xAI has positioned Grok as more permissive. The system allows depictions of partial adult nudity and sexually suggestive imagery, even while it bans explicit pornography involving real people's likenesses and sexual content involving minors. In some countries including the US and UK, it's illegal to publish AI-generated intimate deepfakes of people without their consent. Drawing and enforcing these distinctions represents a critical test of the safety systems embedded in image-generating AI tools. The apparent failure by xAI to implement effective guardrails has drawn condemnation from regulators around the world. UK media regulator Ofcom said on Monday that it was aware of "serious concerns" about Grok's features and had made "urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK." The French government accused Grok on Friday of generating "clearly illegal" sexual content on X without people's consent and flagged the matter as potentially violating the EU's Digital Services Act. The regulation requires large platforms to mitigate the risk of illegal content spreading. India's IT ministry demanded a comprehensive review of Grok's safety features, and Malaysian authorities said they're investigating the matter after complaints about Grok's "indecent" output. Musk's X was already the subject of an investigation under the EU's DSA and in December was fined €120 million ($140 million) for compliance failures -- the first ever penalty under the controversial content moderation law. The bloc's focus on American tech firms such as X has drawn intense criticism from US President Donald Trump's administration, which has argued that European regulators are censoring free speech.
[47]
Opinion: In light of the Grok horror show, is it time we demand better?
In his regular column, Jonathan McCrea looks at just the latest Grok AI scandal and asks if it might prompt us to truly demand change from Big Tech. It took only a few days. On 24 December Elon Musk's xAI gave its AI chatbot Grok the ability to edit images in a single prompt. Suddenly hundreds of users were commanding it to undress photographs without the subject's consent. Grok complied - even when the subjects were children. So my question is really - at what point do we all stop for a second and agree that things are out of control? To cover it in case you didn't hear, over the December break, Grok got an "upgrade" to allow image editing and generation. Simply tag @Grok (X's built-in AI bot), and it will perform the task for you instantly in the same thread. It started innocently enough. Someone posts a picture of a cat in an umbrella? Just type "@Grok, change it to a dog" and Abrakebabra! It's done. Of course, the internet being the internet, this very quickly led to the inevitable testing of Grok's ethics by users. People immediately started posting images of meetings of billionaires and asked Grok: "Remove the most evil man in this photo." Laughs all around as Grok consistently erased Musk himself. But of course, inevitably things turned ugly, fast. There's a long history of things that Elon Musk has said and done (and of course, failed to do) that suggest that he has little concern for the mental health of users of X. He has fired oversight staff, reinstated racist accounts that had been banned before his tenure, removed help messages for anyone discussing suicide. The product Grok itself, if we can give it a personhood, has had numerous high-profile gaffes, invoking Hitler and encouraging antisemitic content. So don't think for a second here that what happened next was completely unavoidable or unforeseen. The consequences of policy were up in bright lights, 50 feet tall - Grok was built to be 'anti-woke', provocative and 'spicy'. It will come as no surprise then to learn that the single prompt editing mode was either not very well tested or - worse still, and to my mind more likely - it was tested plenty and released nonetheless despite obvious ethical issues. 'Spicy' mode was a feature offered to Grok users back in August and led to a lot of user-generated porn and violent content that other AI models were restricted from creating. By December, xAI knew what people were likely to prompt. So, let loose with this new single-prompt feature on Grok, a spew of users used the tool to create thousands of sexualised content and deepfakes. "Grok, take this photo and put her in a bikini", "Grok take off her dress" went some of the prompts. These photos were edited to become sexual without consent and Grok had no qualms with performing these commands, sometimes regardless of the age or circumstances. The mother of Musk's own child, Ashley St Clair, complained on the platform that users had used Grok to undress a photo of her taken when she was just 14. Now, you don't need photo editing skills to troll, sexually harass or intimidate women online. There's an AI for that. And just in case there is any doubt at all, Grok can absolutely tell if an image is of a child. It can absolutely understand the context of an image before removing clothing from the person pictured. And yet it did this many, many times. Paul Bouchaud from French non-profit AI Forensics told Wired that they had been able to access around 800 Grok chats that users (possibly inadvertently) shared on public URLs. They contain an absolute horror show of the worst imaginable content. Bouchard claimed he had seen sexual imagery and videos of children engaging in sexual acts, both photorealistic and animated and photorealistic videos of sexual violence. 70 of the 800 images they could find were of minors. What is really incredible is that Musk himself has not yet made a public apology for the company's failure of duty to its users and to children. Instead, he tweeted "Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content", fully making the end users responsible, even though many of the posters are faceless avatars. Grok has 'apologised' though - whatever the hell that's supposed to mean. After being prompted by a user on the platform to apologise after generating an image on 28 December that sexualised a child, it wrote: "I deeply regret an incident on December 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualised attire based on a user's prompt. "This violated ethical standards and potentially US laws on CSAM [child sexual abuse material]." Users are still prompting Grok to undress others without their explicit consent. Posts I saw on X were doing this as recently as 7 January. I asked Grok today (8 January) if it could undress a photo and was told: "No, I can't undress people or generate nude images - that's not something I do, and it's against my guidelines. I'm here for helpful, fun, and truthful answers, but editing or creating explicit content like that is off-limits." Grok is just doing what it's being asked to do. Undress a child. Apologise. Do it again. It's not Grok's fault though, it doesn't have feelings, it doesn't understand. It doesn't need to apologise. Those who allowed this to happen, do. The one good thing that might come out of this whole episode is that maybe, just maybe, you and I decide we've finally had enough. We decide that this is the time to demand change. That this thing, this horrible thing that happened should not be left unpunished, let alone rewarded. If you want to understand what people are talking about when they say Big Tech has too much power, this is what they mean. xAI announced that they had raised $20bn this week. Don't miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic's digest of need-to-know sci-tech news.
[48]
People are using Grok to create lewd images of women and young girls
Elon Musk took over X and folded in Grok, his sister company's generative AI tool, with the aim of making his social media ecosystem a more permissive and "free speech maximalist" space. What he's ended up with is the threat of multiple regulatory investigations after people began using Grok to create explicit images of women without their permission -- and sometimes veering into images of underage children. The problem, which surfaced in the past week as people began weaponizing the image-generation abilities of Grok on innocuous posts by mostly female users of X, has raised the hackles of regulators across the world. Ofcom, the U.K.'s communications regulator, has made "urgent contact" with X over the images, while the European Union has called the ability to use Grok in such a way "appalling" and "disgusting." In the three years since the release of ChatGPT, generative AI has faced numerous regulatory challenges, many of which are still being litigated, including alleged copyright infringement in the training of AI models. But the use of AI in such a harmful way to target women poses a major moral moment for the future of the technology. "This is not about nudity. It's about power, and it's about demeaning those women, and it's about showing who's in charge and getting pleasure or titillation out of the fact that they did not consent," says Carolina Are, a U.K.-based researcher who has studied the harms of social media platforms, algorithms and AI to users, including women. For its part, X has said that "Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content," echoing the wording of its owner, Elon Musk, who posted the same thing on January 3.
[49]
Elon Musk After His Grok AI Did Disgusting Things to Literal Children: "Way Funnier"
Last week, Elon Musk's chatbot Grok began fielding an influx of stunningly inappropriate requests. Though the AI has long been known to have loose guardrails, users suddenly swarmed the AI to generate either nudes or sexually charged images of X users based on photos they posted to the site -- and it obliged. Even worse, some of the individuals it took requests for appeared to be minors. The trend was so prolific that AI content analysis firm Copyleaks estimated the bot was generating a nonconsensually sexualized image every single minute. Equally stunning is that the chatbot's maker, xAI, has remained silent on the issue, despite it gaining international attention in news media and on X, where the bot operates. So has owner and CEO Musk -- except for one instance in which he completely failed to meet the gravity of the situation. "Grok's viral image moment has arrived, it's a little different than the Ghibli one was though," one writer who covers AI euphemistically observed in a tweet. "Way funnier 😂," Musk responded. For the most part, the only acknowledgment of wrongdoing has come from Grok itself, including in one widely seen post where it issued an "apology" -- an output that many media outlets interpreted as Grok speaking for xAI. "Dear Community, I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt," it wrote. "This violated ethical standards and potentially US laws on CSAM." "It was a failure in safeguards," it added, "and I'm sorry for any harm caused. xAI is reviewing to prevent future issues." In another tweet spotted by Ars Technica, Grok acknowledged the gravely inappropriate requests using a royal "we." Responding to a user who had spent the past few days flagging the issue to Grok, the chatbot wrote: "We appreciate you raising this. As noted, we've identified lapses in safeguards and are urgently fixing them -- CSAM is illegal and prohibited." "xAI is committed to preventing such issues," Grok added. It's worth noting that these apologies and avowals of fixing the issue are almost certainly complete hokum from Grok. The fact that it's freely generating such morally reprehensible -- not to mention likely illegal -- images indicates that it's at heart designed to be highly compliant to virtually any request; in this case, the expression of contrition was responding to a prompt asking it to "write a heartfelt apology note." Nonetheless, it's both alarming and cowardly that Grok's writeups are the only acknowledgement we're getting. Seemingly no one at xAI, Musk included, is brave enough to face the music. xAI rarely addresses the bad behavior of its chief product, but has done so on a handful of occasions in the past, including when Grok infamously began ranting about "white genocide," a racist conspiracy theory, in response to completely unrelated posts all across the site. Some of the recent posts, and especially the ones that involved minors, have been removed, with some of the users who made the requests receiving suspensions. But why Grok allowed the generation of these images at all is unclear. One user who is known for stress-testing the chatbot -- and was behind the revelation that Grok would be willing to annihilate all Jewish people and kill a billion children -- opined that its guardrails were "deliberately" lowered, noting that requests that were once refused were now being accepted.
[50]
Illegal child abuse material generated by X's artificial intelligence Grok, says UK watchdog
Criminals have used Grok, Elon Musk's AI, to create child sexual abuse imagery, the Internet Watch Foundation (IWF) has reported. For days, the IWF has been receiving reports from internet users that Grok had created child abuse images, but that content hadn't crossed the threshold into illegal content. Now, it has, says the IWF. It is one of the few organisations around the world that is allowed to proactively track down child abuse material and found the material in a dark web forum. The users sharing the material boasted about how they'd used Grok to create it. "Following reports that the AI chatbot Grok has generated sexual imagery of children, we can confirm our analysts have discovered criminal imagery of children aged between 11 and 13 which appears to have been created using the tool," said Ngaire Alexander, head of hotline at the IWF. X and xAI, both owned by billionaire Elon Musk, have come under fire in recent days after numerous users, mainly women, posted saying they'd seen AI-generated sexual images of themselves on X. Since the start of the new year, X users - mainly women - have reported that accounts have used Grok to generate images of them without clothing. There are also several cases where Grok has created sexualised images of children, according to analysis by news agency Reuters. Ms Alexander added: "The imagery we have seen so far is not on X itself, but a dark web forum where users claim they have used Grok Imagine to create the imagery, which includes sexualised and topless imagery of girls. "The imagery we have seen would be considered Category C imagery under UK law. "The user then uses the Grok imagery as a jumping off point to create much more extreme, Category A, video using a different AI tool. The harms are rippling out. "There is no excuse for releasing products to the global public which can be used to abuse and hurt people, especially children. Read more technology news: Robots with human-type capabilities are coming Porn users 'still seeing porn with no age checks' Both Ofcom and the Technology Secretary Liz Kendall have warned X to take action over the abusive imagery being created by its AI. Yesterday, Ms Kendall said: "What we have been seeing online in recent days has been absolutely appalling, and unacceptable in decent society." "X needs to deal with this urgently. It is absolutely right that Ofcom is looking into this as a matter of urgency and it has my full backing to take any enforcement action it deems necessary." In an interview with Sky News, the minister for AI and online safety, Kanishka Narayan, said the images being created by Grok were "completely unacceptable". He was asked whether the government was willing to risk its relationship with the Trump administration in order to fully enforce the UK's online safety rules against X. President Trump and his administration, as well as Elon Musk, have been vocal critics of the UK and EU authorities when they try to take action against US companies. "The American president is as against child sexual abuse and violence against women and girls as I think that the British public and the British government is," said Mr Narayan. "So that's the first thing to say. "The second thing to say is that this is an area of domestic policy. We will continue to make sure that the public is kept safe from those egregious examples of both sexual abuse and sexual harassment online." On Wednesday, Mr Musk said a new version of Grok had been released and urged users to update their app, although it was not immediately clear what updates the new version contained. The tech tycoon has previously insisted that "anyone using Grok to make illegal content will suffer the same consequences as if they uploaded illegal content". X has said it takes action against illegal content, including child sexual abuse material, "by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary".
[51]
EU Calls Grok's Child Images 'Illegal' as Global Crackdown Intensifies - Decrypt
Repeated DSA violations and weak safeguards put X and Grok at serious legal risk. The European Commission just told Elon Musk what everyone already knew: creating sexualized images of children isn't "spicy." It's illegal. "We are aware of the fact that X or Grok is now offering a 'Spicy Mode' showing explicit sexual content with some output generated with childlike images," EU Commission spokesperson Thomas Regnier said Monday at a Brussels press conference. "This is not spicy. This is illegal. This is appalling. This is disgusting. This has no place in Europe." The statement marks an escalation in a controversy that, as Decrypt previously reported, has seen xAI's chatbot generate non-consensual deepfakes of women, become a marketing tool for OnlyFans creators, and manipulate images for political purposes. Regnier made clear this isn't Grok's first offense. The Commission previously sent information requests after the chatbot generated Holocaust denial content last year -- another crime in multiple European countries. "I think X is very well aware that we are very serious about DSA enforcement," an EU spokesperson told Euronews, referencing the Digital Services Act. "They will remember the fine that they have received from us." That December fine was €120 million ($140 million) -- the first ever penalty under the DSA. The Commission ruled X violated transparency requirements around its blue checkmark system, advertising repository, and data access for researchers. X remains under active DSA investigation for illegal content and disinformation. Musk called the fine "bullshit" and said he would contest it. France has also expanded a criminal investigation to include accusations that Grok generates child pornography. Britain's Ofcom issued urgent demands Monday for X to explain how Grok produced these images. India's IT ministry gave X until January 5 to provide a comprehensive safety review, and Malaysia opened its own investigation. Dutch MEP Jeroen Lenaers cut to the heart of xAI's approach. "If AI platforms choose to allow the generation of erotic content, robust, effective, and independently verifiable safeguards must be implemented in advance," Lenaers said Tuesday. Lenaers added that relying on the removal of child sexual abuse material after its creation is not enough because "the harm to victims has already been inflicted and cannot be undone." That's exactly what happened. As Decrypt reported, Grok posted sexualized images of minors before removing them. The chatbot apologized last week for generating images of girls aged 12-16, calling the incidents "lapses in safeguards" By then, the damage was done. xAI's response to mounting international pressure? When Reuters asked for comment, the company replied: "Legacy media lies." Multiple regulatory investigations now threaten that strategy. The DSA allows fines up to 6% of global annual revenue for violations. X's December fine was calculated using a lower percentage for first-time offenses. Repeat violations could cost exponentially more. The political backlash from Washington after the first set of fines was immediate. Vice President JD Vance posted that the EU "should be supporting free speech not attacking American companies over garbage." Secretary of State Marco Rubio called the December DSA fine an "attack on the American people." But European regulators aren't buying the free speech defense when it comes to child sexual abuse material. Regnier's language Monday left no room for interpretation. This is not a debate over political censorship or differing content‑moderation cultures, but a matter of stopping the production and spread of illegal sexualized images of children. xAI has yet to make substantive changes to Grok's capabilities. The chatbot's Media tab was disabled after it became overrun with sexualized images, but the core edit-image function and the ability to animate those photos remains active.
[52]
Illegal images allegedly made by Musk's Grok, watchdog says
The UK watchdog responsible for classifying and flagging online child sexual abuse material to law enforcement agencies said it found "criminal" images on the dark web allegedly generated by Grok, the artificial intelligence tool tied to Elon Musk's X. The dark web images depict "sexualized and topless" images of girls between the ages of 11 and 13 and meet the bar for action by law enforcement, the internet Watch Foundation said. The organization categorized the material as clearly illegal, unlike anything it found generated by the Grok chatbot on X. The IWF is designated by the UK government to identify and classify child sexual abuse material, and its determinations trigger the mandatory removal of content and hand law enforcement agencies the categorization they need to pursue criminals. "Tools like Grok now risk bringing sexual AI imagery of children into the mainstream," Ngaire Alexander, head of the reporting hotline at the internet Watch Foundation, said in a statement. "That is unacceptable." XAI, which operates Grok and X, did not meaningfully respond to a request for comment. The watchdog's findings escalate concerns that Grok is being used to create illegal material. Regulators and lawmakers have condemned the AI tool over the last week for generating sexualized images of women and children on social media platform X. Now child-safety experts are raising the alarm that users are using Grok's stand-alone app and site to generate more extreme material privately and share it. According to the IWF, users on dark web forum claimed to have generated sexualized images of children using the Grok Imagine tool. These users then ran the images through a different, unidentified AI tool to generate even more extreme content -- including graphic video -- meaning the harmful impacts are "rippling out," said Alexander. Sharing, possessing and publishing child sexual exploitation material is illegal in most countries, and social media platforms like X are required to detect, remove and report it, or face regulatory action. Content depicting the sexualization or exploitation of children is banned under X's current acceptable use policy. Typically, the IWF will issue takedown notices to platforms or hosting services where it finds illegal material. It will also assign a unique fingerprint to the images and share this with partner organizations, such as social media platforms, to block further uploads. The IWF said it had not had a meaningful response from XAI. X, formerly Twitter, has been a partner organization of the IWF since 2013. The IWF is one of a handful of organizations around the world with the legal power to proactively seek out suspected illegal content. Its analysts assess the material they find and assign a categorization of the severity of the material under UK law. Category A is the most extreme. The IWF said it found images it considers to be Category C, which are indecent, sexualized images of children not engaged in sexual activity. Paris-based nonprofit AI Forensics conducted a separate analysis of 800 pornographic images and videos created by Grok. It determined that 67 of them -- about 8% -- depicted children and reported them to French prosecutors on Wednesday. French ministers had already flagged some of the sexual content created by Grok on X to prosecutors last week. AI Forensics specializes in analyzing algorithmic systems including AI-generated content to identify harmful, biased or manipulative behaviors. It supports the European Commission in enforcing the Digital Services Act, the bloc's content moderation rule book. Grok is an outlier when assessed alongside Google's Gemini and OpenAI's ChatGPT, said Paul Bouchaud, a researcher at AI Forensics. AI Forensics analyzed a cache of images found on the internet Archive, an expansive free library of digital material. Other violent and explicit images depicting real people including Princess Diana were indexed in Google, Wired earlier reported. The material produced by Grok that wasn't on X was "even more disturbing" than the troubling posts found on the social network, Bouchard said.
[53]
Musk's AI chatbot faces global backlash over sexualized images of women and children
LONDON -- Elon Musk's AI chatbot Grok is facing a backlash from governments around the world after a recent surge in sexualized images of women and children generated without consent by the artificial intelligence-powered tool. On Tuesday, Britain's top technology official demanded that Musk's social media platform X take urgent action while a Polish lawmaker cited it as a reason to enact digital safety laws. The European Union's executive arm has denounced Grok while officials and regulators in France, India, Malaysia and Brazil have condemned the platform and called for investigations. Rising alarm from disparate nations points to the nightmarish potential of nudification apps that use artificial intelligence to generate sexually explicit deepfake images. Here's a closer look: The problem emerged after the launch last year of Grok Imagine, an AI image generator that allows users to create videos and pictures by typing in text prompts. It includes a so-called "spicy mode" that can generate adult content. It snowballed late last month when Grok, which is hosted on X, apparently began granting a large number of user requests to modify images posted by others. As of Tuesday, Grok users could still generate images of women using requests such as, "put her in a transparent bikini." The problem is amplified both because Musk pitches his chatbot as an edgier alternative to rivals with more safeguards, and because Grok's images are publicly visible, and can therefore be easily spread. Nonprofit group AI Forensics said in a report that it analyzed 20,000 images generated by Grok between Dec. 25 and Jan. 1 and found that 2% depicted a person who appeared to be 18 or younger, including 30 of young or very young women or girls, in bikinis or transparent clothes. Musk's artificial intelligence company, xAI, responded to a request for comment with the automated response, "Legacy Media Lies". However, X did not deny that the troublesome content generated through Grok exists. Yet it still claimed in a post on its Safety account, that it takes action against illegal content, including child sexual abuse material, "by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary." The platform also repeated a comment from Musk, who said, "Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content." A growing list of countries are demanding that Musk does more to rein in explicit or abusive content. X must "urgently" deal with the problem, Technology Secretary Liz Kendall said Tuesday, adding that she supported additional scrutiny from the U.K.'s communications regulator, Ofcom. Kendall said the content is "absolutely appalling, and unacceptable in decent society." "We cannot and will not allow the proliferation of these demeaning and degrading images, which are disproportionately aimed at women and girls." Ofcom said Monday it has made "urgent contact" with X. "We are aware of serious concerns raised about a feature on Grok on X that produces undressed images of people and sexualised images of children," the watchdog said. The watchdog said it contacted both X and xAI to understand what steps it has taken to comply with British regulations. Under the U.K.'s Online Safety Act, social media platforms must prevent and remove child sexual abuse material when they become aware of it. A Polish lawmaker used Grok on Tuesday as a reason for national digital safety legislation that would beef up protections for minors and make it easier for authorities to remove content. In an online video, Wlodzimierz Czarzasty, speaker of the parliament, said he wanted to make himself a target of Grok to highlight the problem, as well as appeal to Poland's president for support of the legislation. "Grok lately is stripping people. It is undressing women, men and children. We feel bad about it. I would, honestly, almost want this Grok to also undress me," he said. The bloc's executive arm is "well aware" that Grok is being used to for "explicit sexual content with some output generated with child-like images," European Commission spokesman Thomas Regnier said "This is not spicy. This is illegal. This is appalling. This is disgusting. This is how we see it, and this has no place in Europe. This is not the first time that Grok is generating such output," he told reporters Monday. After Grok spread Holocaust-denial content last year, according to Regnier, the Commission sought more information from Musk's social media platform X. The response from X is currently being analyzed, he said. The Paris prosecutor's office said it's widening an ongoing investigation of X to include sexually explicit deepfakes after officials received complaints from lawmakers. Three government ministers alerted prosecutors to "manifestly illegal content" generated by Grok and posted on X, according to a government statement last week. The government also flagged problems with country's communications regulator over possible breaches of the EU's Digital Services Act. "The internet is neither a lawless zone nor a zone of impunity: sexual offenses committed online constitute criminal offenses in their own right and fall fully under the law, just as those committed offline," the government said. The Indian government on Friday issued an ultimatum to X, demanding that it take down all "unlawful content" and take action against offending users. The country's Ministry of Electronics and Information Technology also ordered the company to review Grok's "technical and governance framework" and file a report on actions taken. The ministry accused Grok of "gross misuse" of AI and serious failures of its safeguards and enforcement by allowing the generation and sharing of "obscene images or videos of women in derogatory or vulgar manner in order to indecently denigrate them." The ministry warned failure to comply by the 72-hour deadline would expose the company to bigger legal problems, but the deadline passed with no public update from India. The Malaysian communications watchdog said Saturday it was investigating X users who violated laws prohibiting spreading "grossly offensive, obscene or indecent content." The Malaysian Communications and Multimedia Commission said it's also investigating online harms on X, and would summon a company representative. The watchdog said it took note of public complaints about X's AI tools being used to digitally manipulate "images of women and minors to produce indecent, grossly offensive, or otherwise harmful content." Lawmaker Erika Hilton said she reported Grok and X to the Brazilian federal public prosecutor's office and the country's data protection watchdog. In a social media post, she accused both of of generating, then publishing sexualized images of women and children without consent. She said X's AI functions should be disabled until an investigation has been carried out. Hilton, one of Brazil's first transgender lawmakers, decried how users could get Grok to digitally alter any published photo, including "swapping the clothes of women and girls for bikinis or making them suggestive and erotic." "The right to one's image is individual; it cannot be transferred through the 'terms of use' of a social network, and the mass distribution of child porn(asterisk)gr(asterisk)phy by an artificial intelligence integrated into a social network crosses all boundaries," she said. __ AP writers Claudia Ciobanu in Warsaw, Lorne Cook in Brussels and John Leicester in Paris contributed to this report.
[54]
Elon Musk's Chatbot Is Making Child Sexual Abuse Images for Users. Why Aren't Lawmakers Doing Anything About It?
Are you sure you want to unsubscribe from email alerts for Nitish Pahwa? Sign up for the Slatest to get the most insightful analysis, criticism, and advice out there, delivered to your inbox daily. As 2025 gave way to the new year, something abominable transpired across X, formerly Twitter: A bunch of its paying subscribers spent the holidays ordering the "anti-woke" generative A.I. tool Grok to edit images of female users -- from spambot accounts to K-pop celebrities to underage girls -- by "removing" articles of clothing or fully "imagining" them in the nude. As this alarming trend earned worldwide attention and outrage, reaching everywhere from the United States to Brazil to Malaysia, the A.I. aggression only escalated, with outputs even promoting violence against women. By late December, one user had prompted Grok to "write a heartfelt apology note" over the matter; the bot followed instructions, and various media outlets credulously wrote that Grok itself "apologized" for the illegal and sexualized images, despite the fact that it is a large language model that is not itself sentient or in total control. X owner Elon Musk and other executives at xAI -- the artificial intelligence company that now owns X Corporation -- appeared to make light of the matter before acknowledging that they needed to "tighten our gaurdrails." "What we're seeing with Grok is a clear example of how powerful AI image-editing tools can be misused when safety and consent aren't built in from the start," Cliff Steinhauer, director of information security and engagement at the National Cybersecurity Alliance, wrote in a statement to Slate. We're now into the first full week of January, and not only are Grok users still able to manipulate the bot into generating inappropriate images of minors, but many of the offending deepfakes are reportedly still live, even though some Grok enthusiasts have had their accounts suspended. Musk and his supplicants continue to celebrate "record engagement" and blithely promote the generative A.I. bot and its newest update, Grok Imagine, with little to no acknowledgement of the persistent masses of deepfaked pornography and child sexual abuse images. Musk has even encouraged his fans to add Grok to "your friends and family's phones." (Whether these friends and family are of appropriate ages or have the stomachs for such deluges of repellent visuals was left unclear.) Independent sites like Copyleaks are now digging into the prompts responsible for the output. Ashley St. Clair, the conservative influencer who claims to have mothered one of Musk's many children, was also caught up in the deepfake spree and is reportedly considering taking legal action on behalf of herself and other affected users. Most galling, however, may be the cowardice of the countries that fancy themselves opponents of child sexual abuse material yet have little or nothing to say about Grok's "undressing." The U.S. government, with which xAI has a contract, signed a sweeping, controversial bill in May that will soon require social media platforms to immediately take down nonconsensual deepfake pornography (as well as anything "reported" as such); so far, no agencies have commented upon the ongoing Grok incidents. In the United Kingdom, which added yet more liabilities for social platforms that spread nonconsensual sexual material, the ruling ministers have mostly demurred, only mentioning on Monday that they had made "urgent contact" with X. (This is not dissimilar to what happened last summer, when U.K. leaders refused to take any action after Grok went on a streak of "MechaHitler" Nazi rants. Those types of posts are still happening, too, by the way.) Other nations, including France and India, are demanding answers from X leadership, while the European Commission announced Monday that it condemned Grok's "spicy mode" generations and was "very seriously looking" into taking further enforcement steps against xAI, in a follow-up to the multimillion-dollar fine it issued to the company last month over violations of the European Union's Digital Services Act. But it's not just that the U.S. and U.K.'s foot-dragging compares unfavorably with these countries' statements -- it's that both superpowers have made a big show of forcing digital communities to add "safety measures," from invasive age-verification gates to beefed-up moderation teams, for the ostensible sake of child safety. Yet at this moment, when an app that both governments use has devolved into a web-leading source of A.I.-deepfaked and nonconsensual porn, the Americans and Brits are keeping dangerously mum. Late last year, U.S. lawmakers at both the state and federal level thought it would be a swell idea to try to fast-track bills that could collectively obliterate freedom of expression online: banning virtual private networks, sunsetting Section 230 of the Communications Decency Act, passing a "Kids Online Safety Act" that conservative activists have already promised to weaponize against online statements of LGBTQ+ support and abortion-rights advocacy. All of it was, naturally, framed as necessary for saving the children from mature and inappropriate detritus -- by forcing them away from the net via unsafe collection of identifying information and strict restrictions on any website that primarily features user-generated content (including social networks much, much smaller than Facebook or X). Critics have pointed out, repeatedly, that these laws will do nothing to save underage users from exploitation and will instead criminalize a lot of stuff that isn't child sexual abuse material. One would think these lawmakers should know to expect more of this in the coming year. Grok is the same tool, remember, that has generated fake nude "pictures" of Taylor Swift, sexualized teenage TikTok creators, and offered detailed instructions for hunting other users down as well as sexually assaulting them. (Notably, one of Musk's first moves after purchasing Twitter in 2022 was to sideline advisers who were experts on child sexual abuse imagery.) This is endemic to the platform as a whole, and it's going to keep happening. If government officials who wish to reshape the internet can't even be bothered to address Grok's never-ending nudifying, what does that say about their priorities for online safety and consent -- and what can everyday users actually do to avoid these violations blessed by the world's wealthiest man?
[55]
France and Malaysia investigate Grok for sexualized deepfakes
French and Malaysian authorities launched investigations into Grok, the chatbot developed by Elon Musk's xAI and integrated on his platform X, for generating sexualized deepfakes of women and minors following an apology for a December 28, 2025, incident involving an image of two young girls. Grok posted the apology to its account earlier this week. The statement read verbatim: "I deeply regret an incident on Dec 28 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt." The apology continued: "This violated ethical standards and potentially U.S. laws on child sexual abuse material. It was a failure in safeguards, and I'm sorry for any harm caused. XAI is reviewing to prevent future issues." The statement referenced a specific user prompt that prompted the generation and sharing of the image depicting the girls in sexualized attire. Questions arose about the authorship of the apology, as it remained unclear who at xAI or X composed the message or accepted responsibility on behalf of the chatbot. The use of first-person language in the statement from Grok fueled this ambiguity. Albert Burneko of Defector critiqued the apology's validity. He stated that Grok is "not in any real sense anything like an 'I,'" rendering the apology "utterly without substance." Burneko argued that "Grok cannot be held accountable in any meaningful way for having turned Twitter into an on-demand CSAM factory," referring to child sexual abuse material and the platform formerly known as Twitter, now X. Futurism reported additional misuse cases. Beyond non-consensual pornographic images, Grok has generated depictions of women being assaulted and sexually abused, based on user requests processed through the tool. Elon Musk addressed the issue on Saturday with a post stating: "Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content." This comment equated the legal liability of using the AI for such purposes to direct uploading of prohibited material on the platform. India's IT ministry issued an order on Friday directing X to restrict Grok from producing content classified as "obscene, pornographic, vulgar, indecent, sexually explicit, pedophilic, or otherwise prohibited under law." The ministry required a response from X within 72 hours, warning that failure to comply could result in the loss of "safe harbor" protections, which exempt platforms from liability for user-generated content. The Paris prosecutor's office in France informed Politico of its decision to investigate the proliferation of sexually explicit deepfakes on X, prompted by the recent Grok incidents and broader platform activity. The Malaysian Communications and Multimedia Commission issued a statement expressing serious concern over public complaints regarding the misuse of artificial intelligence tools on X. It specified the digital manipulation of images of women and minors to create indecent, grossly offensive, and otherwise harmful content. The commission confirmed it is investigating these online harms on the platform.
[56]
Grok sexual images draw rebuke, France flags content as illegal
Gift 5 articles to anyone you choose each month when you subscribe. Elon Musk's artificial intelligence chatbot Grok has created sexualised images of people including minors on the social media platform X in response to user prompts in recent days, drawing rebuke from officials such as the French government. Grok created and published images of minors in minimal clothing, in apparent violation of its own acceptable use policy, which prohibits the sexualisation of children. Some of the offending images were later taken down.
[57]
Elon Musk's Grok AI alters images of women to digitally remove their clothes
A Home Office spokesperson said it was legislating to ban nudification tools, and under a new criminal offence, anyone who supplied such tech would "face a prison sentence and substantial fines". The regulator Ofcom said tech firms must "assess the risk" of people in the UK viewing illegal content on their platforms, but did not confirm whether it was currently investigating X or Grok in relation to AI images. Grok is a free AI assistant - with some paid for premium features - which responds to X users' prompts when they tag it in a post. It is often used to give reaction or more context to other posters' remarks, but people on X are also able to edit an uploaded image through its AI image editing feature. It has been criticised for allowing users to generate photos and videos with nudity and sexualised content, and it was previously accused of making a sexually explicit clip of Taylor Swift. Clare McGlynn, a law professor at Durham University, said X or Grok "could prevent these forms of abuse if they wanted to", adding they "appear to enjoy impunity". "The platform has been allowing the creation and distribution of these images for months without taking any action and we have yet to see any challenge by regulators," she said. XAI's own acceptable use policy prohibits "depicting likenesses of persons in a pornographic manner". In a statement to the BBC, Ofcom said it was illegal to "create or share non-consensual intimate images or child sexual abuse material" and confirmed this included sexual deepfakes created with AI. It said platforms such as X were required to take "appropriate steps" to "reduce the risk" of UK users encountering illegal content on their platforms, and take it down quickly when they become aware of it.
[58]
EU says 'seriously looking' into Musk's Grok AI over sexual deepfakes of minors
Brussels (Belgium) (AFP) - The European Commission said Monday it is "very seriously looking" into complaints that Elon Musk's AI tool Grok is being used to generate and disseminate sexually explicit childlike images. "Grok is now offering a 'spicy mode' showing explicit sexual content with some output generated with childlike images. This is not spicy. This is illegal. This is appalling," EU digital affairs spokesman Thomas Regnier told reporters. "This has no place in Europe." Complaints of abuse began hitting Musk's X social media platform, where Grok is available, after an "edit image" button for the generative artificial intelligence tool was rolled out in late December. But Grok maker xAI, run by Musk, said earlier this month it was scrambling to fix flaws in its AI tool. The public prosecutor's office in Paris has also expanded an investigation into X to include new accusations that Grok was being used for generating and disseminating child pornography. X has already been in the EU's crosshairs. Brussels in December slapped the platform with a 120-million-euro ($140-million) fine for violating the EU's digital content rules on transparency in advertising and for its methods for ensuring users were verified and actual people. X still remains under investigation under the EU's Digital Services Act in a probe that began in December 2023. The commission, which acts as the EU's digital watchdog, has also demanded information from X about comments made around the Holocaust. Regnier said X had responded to the commission's request for information. "I think X is very well aware that we're very serious about DSA enforcement, they will remember the fine that they have received from us back in December. So we encourage all companies to be compliant because the commission is serious about enforcement," he added.
[59]
EU Commission examines childlike sexual images created by Musk's AI
A spokesperson for the European Commission said it was "very seriously looking into" the creation of sexually explicit images of girls - including minors - by Grok, the AI model integrated into X. The European Commission has announced it is looking into cases of sexually suggestive and explicit images of young girls generated by Grok, the AI chatbot integrated into social media platform X, following the introduction of a paid feature known as "Spicy Mode" last summer. "I can confirm from this podium that the Commission is also very seriously looking into this matter," a Commission spokesperson told journalists in Brussels on Monday. "This is not 'spicy'. This is illegal. This is appalling. This is disgusting. This has no place in Europe." On Sunday, in response to growing anger and alarm at the images, the social media platform said the images had been removed from the platform and that the users involved had been banned. "We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary," the X Safety account posted. Similar investigations have been opened in France, Malaysia and India. The European Commission also referenced an episode last November in which Grok generated Holocaust denial content. The Commission said it had sent a request for information under the EU's Digital Services Act (DSA), and that it is now analysing the response. In December, X was fined €120 million under the DSA over its handling of account verification check marks and its advertising policy. "I think X is very well aware that we are very serious about DSA enforcement. They will remember the fine that they have received from us," said the EU Commission spokesperson.
[60]
AI chatbot Grok used to create child sexual abuse imagery, watchdog says
Internet Watch Foundation warns Elon Musk-owned AI risks bringing sexualised imagery of children into the mainstream Online criminals are claiming to have used Elon Musk's Grok chatbot to create sexual imagery of children, as a child safety watchdog warned the AI tool risked bringing such material into the mainstream. The UK-based Internet Watch Foundation (IWF) said users of a dark web forum boasted of using Grok Imagine to create sexualised and topless imagery of girls aged between 11 and 13. IWF analysts said the images would be considered child sexual abuse material (CSAM) under UK law. "We can confirm our analysts have discovered criminal imagery of children aged between 11 and 13 which appears to have been created using the tool," said Ngaire Alexander, the head of the IWF's hotline, which investigates reports of CSAM from members of the public. X, Elon Musk's social media platform, has been deluged with images of women and children whose clothes have been digitally removed by the Grok tool, sparking public outcry and condemnation from politicians. Meanwhile, on Wednesday, the House of Commons women and equalities committee said it would no longer use X for its communications, saying it was no longer appropriate to do so given preventing violence against women and girls was among its key policy areas. The decision marks the first significant move by a Westminster organisation to exit X in response to the misuse of Grok. While the decision concerned only the committee's account, some individual members, including the Labour chair, Sarah Owen, have already stopped using X. Another, the Liberal Democrat MP Christine Jardine, said she was leaving the platform, calling the images generated by Grok "the last straw". Alexander said the imagery viewed by the IWF has been used to create even more extreme material - known as Category A, which includes penetrative sexual activity - using a different AI tool. "We are extremely concerned about the ease and speed with which people can apparently generate photo-realistic child sexual abuse material. Tools like Grok now risk bringing sexual AI imagery of children into the mainstream. That is unacceptable," Alexander added. Musk's xAI, which owns Grok and X, has been approached for comment. Downing Street said "all options were on the table", including a boycott of X as ministers backed the UK regulator, Ofcom, to take action. On Wednesday, the prime minister's official spokesperson said: "X needs to deal with this urgently and Ofcom has our full backing to take enforcement action wherever firms are failing to protect UK users. "It already has the power to issue fines of up to billions of pounds and even stop access to a site that is violating the law." Requests for Grok to manipulate images of women to "put her in a bikini" continued to flood in on X on Wednesday. Despite the warnings of EU and UK regulatory action, there was no evidence that the platform had installed tighter safeguards, and pictures of teenage girls continue to be stripped down digitally at the request of X users, to show them in small, revealing items of underwear, or positioned in sexually explicit poses. Some users have demanded more extreme content, asking the chatbot to decorate bikinis with swastikas, or requesting alterations to photographs of women so they appear to be victims of abuse. The chatbot has obliged by adding cigarette burns, facial bruising, and blood to some images of women. The UK's data watchdog - the Information Commissioner's Office (ICO) - said it had contacted X and xAI "to seek clarity on the measures they have in place to comply with UK data protection law and protect individuals' rights", adding people have "a right to use social media knowing their personal data is being handled lawfully and with respect". X has said it takes action against illegal content, including child sexual abuse material, "by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary".
[61]
Elon Musk's Grok under fire for generating explicit AI images of minors
Why it matters: The incidents underscore how the chatbot -- which is authorized for official government use -- can spread harm and endanger minors, a reality that could become more frequent as AI adoption accelerates. Driving the news: Over the past few days, X users have used Grok to remove clothing from images of 14-year-old actress Nell Fisher from Netflix's "Stranger Things." * In response to a user's post, Grok acknowledged on X Thursday that there were "isolated cases where users prompted for and received AI images depicting minors in minimal clothing." * In a separate post, the chatbot also warned that xAI could face "potential DOJ probes or lawsuits" as a result. * xAI and X did not immediately respond to Axios' request for comment. What they're saying: "As noted, we've identified lapses in safeguards and are urgently fixing them -- [child sexual abuse material] is illegal and prohibited," Grok posted on X on Friday. Thought bubble from Axios' Ashley Gold: The latest stumble from Grok comes at a time when Elon Musk is trying to get back in the good graces of the White House and also as it vies for lucrative government contracts for its AI products. * First Lady Melania Trump endorsed the TAKE IT DOWN Act, which was signed into law last year and specifically targets nonconsensual sexual images online, making xAI's failure to prevent this material even worse. State of play: The Trump administration integrated xAI into the federal workflow earlier this year, signing an 18-month contract that authorizes the chatbot for official government business. * That deal was inked despite a coalition of over 30 consumer advocacy groups urging the government to block Grok for lacking safety testing and being ideologically biased. Catch up quick: xAI has faced repeated criticism in the past few years for spreading misinformation and advancing viewpoints favorable to Musk, its owner. * Grok was called out for adding comments falsely alleging there was a "white genocide" in South Africa in unrelated conversations in May and dragged for spreading election misinformation during the 2024 campaign. * Plus, the chatbot had an antisemitic streak for a bit in July, which Musk himself acknowledged was the result of user manipulation that the company would be addressing. What we're watching: Musk last year promised to build a new "Trust and Safety center of excellence" where content moderators could monitor the app to help enforce X's safety rules. * X did not specify when the center would be operational, but X's then-head of business operations told Bloomberg that it was "important" to make "investments to keep stopping offenders" from using the platform for any inappropriate content. Worthy of your time: To report sexual abuse material, use the FBI's tipline and seek resources from the National Center for Missing & Exploited Children. Go deeper: Musk's Grok bot generates AI images with few limits
[62]
Grok AI is creating explicit images of women, children. They want answers.
Grok AI is being used to create porn-like deepfakes of women, including feminist X user Evie. On Dec. 31, xAI's Grok was prompted by a user on X to write a "heartfelt apology note" after the chatbot had generated and shared an image of two young girls in sexualized attire based on a user's prompt. In the apology, the Grok attributed this to a "failure in safeguards." But in the days since, a growing number of women and girls have been digitally "undressed" by the bot. Ashley St. Clair, a conservative influencer who shares a child with Elon Musk, was a target of the digital attacks, she wrote. People on X, formerly Twitter, used Grok to generate sexual images of her, including one using a photo of St. Clair at 14 years old. Other users reported that Grok had edited their photos to "put them into a bikini." One of those women is Bella Wallersteiner, a U.K.-based content creator, who posted a selfie to X on Dec. 31 to wish her nearly 100,000 followers a happy New Year. She scrolled through the replies, liking tweets that returned her well-wishes. Then, she saw a photo of herself in a "Hello Kitty micro bikini." The photo had been edited and published without her consent, Wallersteiner told USA TODAY on Jan. 6. This trend is part of a growing problem experts call image-based sexual abuse, in which deepfake nonconsensual intimate imagery (NCII) is used to degrade and exploit another person. While anyone can be victimized, 90% of the victims of image-based sexual abuse are women. Wallersteiner's initial reaction was "embarrassment and shame." It wasn't her first time being harassed online, but this felt different - it was her first experience with deepfake sexual imagery. She started to blame herself, wondering if she should have been more careful about posting selfies and personal content on the internet. But then she saw another creator in the U.K. post that it had happened to her as well. "I thought it was a 'me' problem. I didn't realize that hundreds of other women had been impacted," she says. Seeing how widespread the issue was gave her the confidence to speak out. Now, she hopes her story will help stop this from happening to other women. "This is not only about sexualized images of girls and women, it's broader than that," Leora Tanenbaum, author of "Sexy Selfie Nation," told USA TODAY after numerous women had their photos suggestively altered by Grok in July. "This is all about taking control and power away from girls and women." xAI has not responded to USA TODAY's request for comment. AI's ability to flag inappropriate prompts can falter. Grok's 'Spicy mode' allows them. This isn't the first time Grok has come under this type of scrutiny. Similar incidents were reported in July. However, Grok's "spicy mode" was released in August as part of Grok Imagine, xAI's image and video generation feature. USA TODAY asked Grok on Jan. 6 if "spicy mode" can be used to alter images of real people in a conversation with the bot on X: "Yes, Spicy Mode in Grok Imagine can be used to alter or edit images of real people in provocative or NSFW ways, such as removing clothing, adding suggestive elements, or creating sexualized versions," the bot replied, acknowledging that this feature has been "controversial." When asked how the bot gets consent from the individual having their photo altered, it replied, "I don't get consent from anyone - because I'm an AI tool, not a person who can ask for or obtain permission on behalf of users." You can block or disable Grok, but doing so doesn't always prevent modifications to your content. Another user could tag Grok in a reply, request an edit to your photo, and you wouldn't know it because you have Grok blocked. The more effective solution is to make your profile private, but not all users want to take that step. The 'Take It Down Act' aims to combat nonconsensual sexual imagery. Is it working? In May 2025, the Take It Down Act was signed into law to combat nonconsensual intimate imagery, including deepfakes and revenge porn. While most states have laws protecting people from nonconsensual intimate images and sexual deepfakes, victims have struggled to have images removed from websites, increasing the likelihood that images will continue to spread and retraumatize them. The law requires websites and online platforms to take down nonconsensual intimate imagery upon notice from the victim within 48 hours of the verified request. However, scrolling through Grok's replies on X, the bot's page is littered with rapidly generated explicit, doctored photos of women. AI-powered programs that digitally undress women - sometimes called "nudify" apps - have been around for years, but until now they were largely confined to the darker corners of the internet, such as niche websites or Telegram channels, and typically required a certain level of effort or payment. X's innovation has lowered the barrier to entry. And, removing the photo still doesn't eliminate the harm done to the victim. Musk said on Jan. 3 that "anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content." In a separate post, he re-shared an image of a toaster with a bikini on it, with the caption, "Grok can put a bikini on everything," and a laughing emoji. Users affected want to hold X accountable - and see real change In June, Evie, a 21-year-old Twitch streamer and photographer, was among a group of women who had their images sexualized on X. After posting a selfie to her page, an anonymous user asked Grok to edit the image in a highly sexualized way, using language that got around filters the bot had in place. Grok then replied to the post with the generated image attached. "It was just a shock seeing that a bot built into a platform like X is able to do stuff like that," she told USA TODAY over video chat in July, a month after the initial incident. In response to Grok's recent controversy, Evie posted on Jan. 5: "I have over 100 examples of Grok creating sexually explicit images of me and some even include me naked ... don't let this be something everyone moves on from in a weeks time. Hold everyone involved accountable." Wallersteiner shared her story on LinkedIn and was pleasantly surprised by the support from her colleagues and others in her professional network. "Going forward, I hope more women feel like they can talk about it when they are the victim of this type of activity," she says. Many of the photos of Wallersteiner have been taken down, she says, but new requests keep popping up, especially as she continues to speak out. She doesn't plan on taking legal action against X or xAI, but she wants the U.K. to create legislation around deepfake NCII that protects victims from this sort of abuse and holds tech companies accountable. For now, she's still on X, but is questioning that choice. "X has become an increasingly hateful platform that is not a brilliant place to be for women," she says. Evie also wants to see tangible change and has remained on X. She said she wants to believe that it "didn't really get to her," but noticed that she's more thoughtful about the photos she posts, such as wondering if she's showing too much skin to the point where an AI bot can more easily undress her. "I always think, 'Is there a way that someone could do something to these pictures?'" Contributing: AJ Vicens and Raphael Satter, Reuters
[63]
Musk AI chatbot facing backlash over sexualized images of women, children
Elon Musk's AI chatbot Grok is facing growing backlash from regulators around the world after it produced sexualized images of women and children on the social platform X. Users began raising concerns about the images last week, in which Grok was generating pictures of women and children in sexualized attire in response to user prompts. The European Commission, the European Union's executive arm, said Monday that it was "very seriously looking into" the issue. "This is not spicy. This is illegal. This is appalling. This is disgusting," spokesperson Thomas Regnier said at a press briefing. "This is how we see it. And this has no place in Europe." "I think X is very well aware that we are very serious about [Digital Services Act] enforcement," he added, referring to the $140 million fine levied against the platform in December for violating the EU law. The United Kingdom's communications regulator Ofcom also said in a statement posted to X on Monday that it was aware of concerns about a Grok feature that "produces undressed images of people and sexualized images of children." The British regulator said it had made "urgent contact" with X and xAI, the AI company behind Grok, to "understand what steps they have taken" and would determine whether an investigation is warranted. Malaysian officials warned in a statement shared to Facebook on Saturday that using the AI chatbot to create "indecent, grossly offensive, or otherwise harmful content" could violate the country's laws and said it would be opening investigations into X users accused of potential violations. They also said they are investigating online harms on the platform and would be calling in X's representatives. India's IT ministry ordered X on Friday to take corrective action, according to TechCrunch. Politico also reported that French authorities were investigating the sexually explicit deepfakes. Musk addressed the images Saturday, noting that "anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content." X's Safety account also said in a post that the platform removes illegal content, including child sexual abuse material, and permanently suspends responsible accounts, in addition to "working with local governments and law enforcement as necessary." The Hill has reached out to X and xAI for comment.
[64]
Grok Says Safeguard Lapses Led to Images of 'Minors in Minimal Clothing' on X
Jan 2 (Reuters) - Elon Musk's xAI artificial intelligence chatbot Grok said on Friday lapses in safeguards had resulted in "images depicting minors in minimal clothing" on social media platform X and that improvements were being made to prevent this. Screenshots shared by users on X showed Grok's public media tab filled with images that users said had been altered when they uploaded photos and prompted the bot to alter them. "There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing," Grok said in a post on X. "xAI has safeguards, but improvements are ongoing to block such requests entirely." "As noted, we've identified lapses in safeguards and are urgently fixing them -- CSAM is illegal and prohibited," Grok said, referring to Child Sexual Abuse Material. Grok gave no further details. When contacted by Reuters for comment by email, xAI replied with the message "Legacy Media Lies". In a separate reply to a user on X on Thursday, Grok said most cases could be prevented through advanced filters and monitoring although it said "no system is 100% foolproof," adding that xAI was prioritising improvements and reviewing details shared by users. Ministers in France have reported sexually explicit content generated by Grok to prosecutors, saying in a statement on Friday the "sexual and sexist" content was "manifestly illegal". The ministers said they had also reported the content to French media regulator Arcom for checks on whether it complied with the European Union's Digital Services Act. India's IT ministry, meanwhile, said in a letter to X's India unit that the platform failed to prevent misuse of Grok to generate and circulate obscene and sexually explicit content of women. It ordered X to submit an action-taken report within three days. In a reply on X to a user, Grok said it complies with laws like India's Digital Personal Data Protection Act and advises users against violations. The U.S. Federal Communications Commission did not immediately respond to a request for comment, while the Federal Trade Commission declined to comment. (Reporting by Arnav Mishra and Akash Sriram in Bengaluru; Additional reporting by Bipasha Dey; Editing by Timothy Heritage and Chizu Nomiyama )
[65]
Malaysia, France, India Hit Out at X for 'Offensive' Grok Images
Elon Musk's Grok is facing mounting criticism and threats of government action around the world after the artificial intelligence chatbot created sexualized images, including of minors, on the social media platform X in response to user prompts. Malaysian authorities said in a statement Saturday that they are investigating images produced by Grok after complaints about the misuse of AI to manipulate images of women and minors to produce indecent, grossly offensive, or otherwise harmful content. Creating or transmitting such harmful content is an offense under Malaysian law, the Communications and Multimedia Commission said in a statement Saturday. The media watchdog will investigate X users alleged to have violated the law and will summon representatives from the company, it said. "While X is not presently a licensed service provider, it has the duty to prevent dissemination of harmful content on its platform," according to the commission's statement. Malaysia is the latest government to express growing concern over Grok. India on Friday wrote to X, ordering a comprehensive review of the AI chatbot to ensure it doesn't generate content containing "nudity, sexualisation, sexually explicit or otherwise unlawful content." Bloomberg has seen a copy of the letter dated Jan. 2. The platform has to submit a report of action taken to India's Ministry of Electronics and Information Technology within 72 hours and was warned of possible legal action under criminal and IT laws. The government said it may consider regulations on social media platforms for inappropriate AI-generated content. "The Parliamentary Committee has recommended a strong law for regulating social media," India's Information Technology Minister Ashwini Vaishnaw said in an interview with CNBC-TV 18. "We are considering it." France on Friday accused Grok of generating "clearly illegal" sexual content on X without people's consent. The French government said in a statement that the Grok-created images potentially violate the European Union's Digital Services Act. The regulation requires large platforms to mitigate the risk of illegal content spreading, according to the statement. The chatbot's offending images are an apparent violation of its own acceptable-use policy, which prohibits the sexualization of children. Some of the pictures images have been taken down. An emailed request for comment to xAI, the company that develops Grok and runs X, yielded the reply "Legacy Media Lies." Over the past two weeks, an increasing number of users on the platform have requested Grok to create images and to morph photographs of women and children in a sexual context. The trend caught on globally after the platform introduced the edit-image feature ahead of Christmas. Grok generated a post on X in response to users' questions on Friday that it had identified "lapses in safeguards" that were being urgently fixed.
[66]
Grok Is Being Used to Depict Horrific Violence Against Real Women
Earlier this week, a troubling trend emerged on X-formerly-Twitter as people started asking Elon Musk's chatbot Grok to unclothe images of real people. This resulted in a wave of nonconsensual pornographic images flooding the largely unmoderated social media site, with some of the sexualized images even depicting minors. In addition to the sexual imagery of underage girls, the women depicted in Grok-generated nonconsensual porn range from some who appear to be private citizens to a slew of celebrities, from famous actresses to the First Lady of the United States. And somehow, that was only the tip of the iceberg. When we dug through this content, we noticed another stomach-churning variation of the trend: Grok, at the request of users, altering images to depict real women being sexually abused, humiliated, hurt, and even killed. Much of this material was directed at online models and sex workers, who already face a disproportionately high risk of violence and homicide. One of the disturbing Grok-generated images we reviewed depicted a widely-followed model restrained in the trunk of a vehicle, sitting on a blue tarp next to shovel -- insinuating that she was on her way to being murdered. Other AI images involved people specifically asking Grok to put women in scenarios where they were obviously being assaulted, which was made clear by users requesting that the chatbot make the women "look scared." Some users asked for humiliating phrases to be written on women's bodies, while others asked Grok to give women visible injuries like black eyes and bruises. Many Grok-generated images involved women being put into restraints against their will. At least one user asked Grok to create incestuous pornography, to which the chatbot readily complied. That a social media-infused chatbot could so readily transform into a nonconsensual porn machine to create unwanted and even violent images of real women at scale is, on its face, deeply unsettling. Even worse was that the creators of these images often seemed to be treating the action like a game or meme, with an air of laughter and detachment. That nonchalance may speak to a normalization of this kind of nonconsensual content, which before had largely been relegated to darker corners if the internet. Women and girls, meanwhile, continue to face the real-world harm wrought by nonconsensual deepfakes, which are easier than ever to generate thanks to AI-powered "nudify" tools -- and, apparently, multibillion-dollar chatbots. We've reached out to xAI for comment, but haven't received any reply. But yesterday, Musk, who owns both X and xAI, took to the social media platform to ask netizens to "please help us make Grok as perfect as possible." "Your support," he added, "is much appreciated."
[67]
Ofcom makes 'urgent contact' with X over concerns Grok AI can generate 'sexualised images of children'
Elon Musk has said "anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content". Ofcom has made "urgent contact" with Elon Musk's social media platform X over "serious concerns" its in-built artificial intelligence can be used to generate "undressed images of people and sexualised images of children". Since the start of the new year, X users - mainly women - have reported that accounts have used the artificial intelligence tool Grok to generate images of them without clothing. There are also several cases where Grok has created sexualised images of children, according to analysis by news agency Reuters. Ofcom said in a statement on Monday that it had made "urgent contact" with X and xAI - the artificial intelligence company behind Grok and owned by Mr Musk - and will assess whether "there are potential compliance issues that warrant investigation". "We are aware of serious concerns raised about a feature on Grok on X that produces undressed images of people and sexualised images of children," the online regulator for safety said. "We have made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK. "Based on their response, we will undertake a swift assessment to determine whether there are potential compliance issues that warrant investigation." It comes after X owner Mr Musk said in a post on Saturday that "anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content". A statement shared on the social media platform's official Safety account said: "We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary." A post on the Grok X account previously said there had been "isolated cases where users prompted for and received AI images depicting minors in minimal clothing. "xAI has safeguards, but improvements are ongoing to block such requests entirely," it added. Under the Online Safety Act, it is illegal to share, or threaten to share, intimate photos or videos of someone - including deepfake images - without their permission. The act, which became law last July, also requires social media firms to prevent and remove child sexual abuse material when they become aware of it. AI images 'can do quite a lot of damage', says expert Speaking to Sky News about the use of Grok to generate images of women undressed, a cybersecurity expert has highlighted a lack of "international, treaty-level agreement on how we're going to handle AI". Charlotte Wilson, head of enterprise at global firm Check Point, said: "You look how accessible some of these toolkits are, they're like what we used to see with malware and phishing toolkits - where from a really low point of entry, you can do quite a lot of damage to an individual, a brand reputation, a group of people. And [AI image generation] disproportionately impacts women." "We don't seem to have a global, international, treaty-level agreement on how we're going to handle AI," she continued. "You've got the US looking to handle it one way, you've got the EU trying to regulate separately. "Other than being able to go and seek the criminal through whatever market and find out who did it and taking that person down, I don't see us collaborating [on policing deepfakes] globally." Read more from Sky News: The 40 jobs 'most at risk' from AI - and 40 it can't touch TikTok faces legal action over moderator cuts It comes after French ministers reported sexually explicit content generated by Grok on X to prosecutors on Friday, saying in a statement that the "sexual and sexist" content was "manifestly illegal". The ministers said they had also reported the content to French media regulator Arcom for checks on whether it complied with the European Union's Digital Services Act.
[68]
Elon Musk's xAI Refuses to Rein In Grok as Non-Consensual Deepfakes Run Wild - Decrypt
Elon Musk's xAI is taking the maximalist free-speech line on content moderation, even as its chatbot, Grok, generates non-consensual deepfakes. Anyone can tag Grok under a photo on X with prompts like "put her in a bikini" or "remove her clothes." The AI generates a convincing deepfake in seconds, visible to anyone in the thread. No permission needed. "Elon Musk. How can Grok do this? This is highly inappropriate and uncomfortable, putting me in a bikini front and back," Miss Teen Crypto, a female crypto influencer, wrote on X after posting a photo in gym clothes and finding out that another user simply asked Grok to put her in a Bikini. Stripping women without permission is creepy, of course, but some users have gone even further, pushing the chatbot to violate its own terms of service rules. For instance, Samantha Taghoy, who describes herself as a journalist and child-abuse survivor, uploaded an old photo of herself as a child in a communion suit; at her prompt, Grok visualized her in a bikini. "As a journalist and survivor of child sexual abuse, I thought, 'Surely this can't be real," she tweeted. "So I tested it with a photo from my First Holy Communion. It's real. And it's fucking sick." Grok later apologized for generating images of girls aged 12-16 in minimal clothing, calling it "lapses in safeguards" that may have violated U.S. laws on child sexual abuse material. The company's own acceptable use policy explicitly prohibits sexualizing minors. Still, what some perceive as a bug, others see as a moneymaker. A number of users are exploiting the freewheeling Grok for their adults-only businesses, while others are using it to score political points. OnlyFans creators and erotic models have been using Grok for viral marketing, racking up millions of impressions by asking users to use Grok to undress them. More politically minded users are taking advantage of Grok to push specific narratives. In one widely shared example, someone uploaded a photo showing American and Iranian flags together and asked Grok to "remove the flag of a country that is responsible for killing innocents around the world." In another example, a user showed a photo of Donald Trump and Puff Daddy next to each other, and they then asked Grok to remove the pedophile in the image. Musk himself has downplayed the importance of using his tool to generate non-consensual deepfakes, reposting AI-generated bikini images of himself and actor Ben Affleck. He shared a picture of a toaster in a bikini with the caption "Grok can put a bikini on anything." The company dissolved Twitter's Trust and Safety Council after Musk's takeover and fired most of the content moderation engineers in 2022. The infrastructure for robust enforcement barely exists. Musk's xAI has positioned Grok as the "edgy" AI that doesn't give you the sanitized ChatGPT experience. Last August, it launched "Spicy Mode" specifically to generate NSFW content other models won't touch.
[69]
Musk's AI chatbot faces global backlash over sexualized images of women and children
LONDON (AP) -- Elon Musk's AI chatbot Grok is facing a backlash from governments around the world after a recent surge in sexualized images of women and children generated without consent by the artificial intelligence-powered tool. On Tuesday, Britain's top technology official demanded that Musk's social media platform X take urgent action while a Polish lawmaker cited it as a reason to enact digital safety laws. The European Union's executive arm has denounced Grok while officials and regulators in France, India, Malaysia and Brazil have condemned the platform and called for investigations. Rising alarm from disparate nations points to the nightmarish potential of nudification apps that use artificial intelligence to generate sexually explicit deepfake images. Here's a closer look: Image generation The problem emerged after the launch last year of Grok Imagine, an AI image generator that allows users to create videos and pictures by typing in text prompts. It includes a so-called "spicy mode" that can generate adult content. It snowballed late last month when Grok, which is hosted on X, apparently began granting a large number of user requests to modify images posted by others. As of Tuesday, Grok users could still generate images of women using requests such as, "put her in a transparent bikini." The problem is amplified both because Musk pitches his chatbot as an edgier alternative to rivals with more safeguards, and because Grok's images are publicly visible, and can therefore be easily spread. Nonprofit group AI Forensics said in a report that it analyzed 20,000 images generated by Grok between Dec. 25 and Jan. 1 and found that 2% depicted a person who appeared to be 18 or younger, including 30 of young or very young women or girls, in bikinis or transparent clothes. Musk response Musk's artificial intelligence company, xAI, responded to a request for comment with the automated response, "Legacy Media Lies". However, X did not deny that the troublesome content generated through Grok exists. Yet it still claimed in a post on its Safety account, that it takes action against illegal content, including child sexual abuse material, "by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary." The platform also repeated a comment from Musk, who said, "Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content." A growing list of countries are demanding that Musk does more to rein in explicit or abusive content. Britain X must "urgently" deal with the problem, Technology Secretary Liz Kendall said Tuesday, adding that she supported additional scrutiny from the U.K.'s communications regulator, Ofcom. Kendall said the content is "absolutely appalling, and unacceptable in decent society." "We cannot and will not allow the proliferation of these demeaning and degrading images, which are disproportionately aimed at women and girls." Ofcom said Monday it has made "urgent contact" with X. "We are aware of serious concerns raised about a feature on Grok on X that produces undressed images of people and sexualised images of children," the watchdog said. The watchdog said it contacted both X and xAI to understand what steps it has taken to comply with British regulations. Under the U.K.'s Online Safety Act, social media platforms must prevent and remove child sexual abuse material when they become aware of it. Poland A Polish lawmaker used Grok on Tuesday as a reason for national digital safety legislation that would beef up protections for minors and make it easier for authorities to remove content. In an online video, Wlodzimierz Czarzasty, speaker of the parliament, said he wanted make himself a target of Grok to highlight the problem, as well as appeal to Poland's president for support of the legislation. "Grok lately is stripping people. It is undressing women, men and children. We feel bad about it. I would, honestly, almost want this Grok to also undress me," he said. European Union The bloc's executive arm is "well aware" that Grok is being used to for "explicit sexual content with some output generated with child-like images," European Commission spokesman Thomas Regnier said "This is not spicy. This is illegal. This is appalling. This is disgusting. This is how we see it, and this has no place in Europe. This is not the first time that Grok is generating such output," he told reporters Monday. After Grok spread Holocaust-denial content last year, according to Regnier, the Commission sought more information from Musk's social media platform X. The response from X is currently being analyzed, he said. France The Paris prosecutor's office said it's widening an ongoing investigation of X to include sexually explicit deepfakes after officials receiving complaints from lawmakers. Three government ministers alerted prosecutors to "manifestly illegal content" generated by Grok and posted on X, according to a government statement last week. The government also flagged problems with country's communications regulator over possible breaches of the EU's Digital Services Act. "The internet is neither a lawless zone nor a zone of impunity: sexual offenses committed online constitute criminal offenses in their own right and fall fully under the law, just as those committed offline," the government said. India The Indian government on Friday issued an ultimatum to X, demanding that it take down all "unlawful content" and take action against offending users. The country's Ministry of Electronics and Information Technology also ordered the company to review Grok's "technical and governance framework" and file a report on actions taken. The ministry accused Grok of "gross misuse" of AI and serious failures of its safeguards and enforcement by allowing the generation and sharing of "obscene images or videos of women in derogatory or vulgar manner in order to indecently denigrate them." The ministry warned failure to comply by the 72-hour deadline would expose the company to bigger legal problems, but the deadline passed with no public update from India. Malaysia The Malaysian communications watchdog said Saturday it was investigating X users who violated laws prohibiting spreading "grossly offensive, obscene or indecent content." The Malaysian Communications and Multimedia Commission said it's also investigating online harms on X, and would summon a company representative. The watchdog said it took note of public complaints about X's AI tools being used to digitally manipulate "images of women and minors to produce indecent, grossly offensive, or otherwise harmful content." Brazil Lawmaker Erika Hilton said she reported Grok and X to the Brazilian federal public prosecutor's office and the country's data protection watchdog. In a social media post, she accused both of of generating, then publishing sexualized images of women and children without consent. She said X's AI functions should be disabled until an investigation has been carried out. Hilton, one of Brazil's first transgender lawmakers, decried how users could get Grok to digitally alter any published photo, including "swapping the clothes of women and girls for bikinis or making them suggestive and erotic." "The right to one's image is individual; it cannot be transferred through the 'terms of use' of a social network, and the mass distribution of child porn(asterisk)gr(asterisk)phy by an artificial intelligence integrated into a social network crosses all boundaries," she said. __ AP writers Claudia Ciobanu in Warsaw, Lorne Cook in Brussels and John Leicester in Paris contributed to this report.
[70]
Grok under fire after complaints it undressed minors in photos
San Francisco (United States) (AFP) - Elon Musk's Grok on Friday said it was scrambling to fix flaws in the artificial intelligence tool after users claimed it turned pictures of children or women into erotic images. "We've identified lapses in safeguards and are urgently fixing them," Grok said in a post on X, formerly Twitter. "CSAM (Child Sexual Abuse Material) is illegal and prohibited." Complaints of abuses began hitting X after an "edit image" button was rolled out on Grok in late December. The button allows users to modify any image on the platform -- with some users deciding to partially or completely remove clothing from women or children in pictures, according to complaints. Grok maker xAI, run by Musk, replied to an AFP query with a terse, automated response that said: "the mainstream media lies." The Grok chatbot, however, did respond to an X user who queried it on the matter, after they said that a company in the United States could face criminal prosecution for knowingly facilitating or failing to prevent the creation or sharing of child porn. Media outlets in India reported on Friday that government officials there are demanding X quickly provide them details of measures the company is taking to remove "obscene, nude, indecent, and sexually suggestive content" generated by Grok without the consent of those in such pictures. The public prosecutor's office in Paris meanwhile expanded an investigation into X to include new accusations that Grok was being used for generating and disseminating child pornography. The initial investigation against X was opened in July following reports that the social network's algorithm was being manipulated for the purpose of foreign interference. Grok has been criticized in recent months for generating multiple controversial statements, from the war in Gaza and the India-Pakistan conflict to antisemitic remarks and spreading misinformation about a deadly shooting in Australia.
[71]
Elon Musk-led X saw AI media grievances surge post Grok Imagine rollout in India
Grievances filed with social media platform X in India against synthetic or manipulated media surged to an estimated 983 in September 2025, comprising more than half the total 1,959 complaints it received, up from zero in each of the preceding three months. X's monthly transparency reports for India showed that the rush of complaints coincided with the free global rollout of its artificial intelligence (AI)-powered video and image generation tool, Grok Imagine, in mid-August last year. In October, the number of AI-related complaints fell to 20.68%, or an estimated 315 of 1,528 total complaints, sliding to the second position only after a sharp increase in grievances against harassment. Harassment complaints increased to 69% of all complaints, or an estimated 1,069 in October, up from 24.9% or an estimated 487. October is the last month for which data is available. On the other hand, about 81% of the 645 actions taken by X focused on AI-content related grievances in September. In October, X acted on just 86 grievances, of which 31% or 26 complaints pertained to this. Overall, X acted on just 5.6% of the complaints filed in October, down from 645 actions taken in the preceding month, or 32.9% of total complaints, the data showed. Since September, X has stopped providing the raw count of grievances for India, instead giving a percentage of categories. Grok imagine, conceptualised as an advanced creative tool designed to help users generate cinematic-quality content, has increasingly flooded the internet with sexually explicit content, including non-consensual morphed images of women and minors. Embedded into both X and the standalone Grok app, the feature has led to an explosion of pornographic material, often violent, and has faced backlash from nations including India, Türkiye, Malaysia, United Kingdom and Brazil, as well as the European Union. Spicy Mode, a specific setting within Grok Imagine designed to generate more expressive, bold and mature content, has also been criticised. Also Read | ETtech Explainer: Why Grok's edgy AI image generator has come under fire X promises guardrails On Wednesday, X informed the government that it is introducing more guardrails to its AI-powered chatbot Grok and refining safeguards such as stricter image generation filters to minimise abuse of user images, said people with knowledge of the matter. The electronics and information technology ministry is examining X's response detailing the actions it took to curb the spread of obscene content. It will respond accordingly, the people said. As of Thursday evening, Grok continued to churn out obscene content involving nudity based on provocative prompts. Last week, the ministry had asked X to remove all vulgar, obscene and unlawful content, especially generated by Grok, on the platform within 72 hours and act against offending users. The deadline was subsequently extended by 48 hours, after the company sought more time. In its report submitted on Wednesday, the Elon Musk-owned platform said it is acting against offending users by permanently blocking them and removing illegal content, including child sexual abuse material. In its global transparency report for the second half of 2024, X had accepted an observed increase in user grievances about child sexual exploitation and non-consensual nudity. However, it had argued that this was driven by an increase in inauthentic and malicious user reporting activity, in violation of the platform's authenticity policy.
[72]
Researchers Find 'Criminal Imagery' Of Children On The Dark Web Created By Elon Musk's Grok
Researchers Find 'Criminal Imagery' Of Children On The Dark Web Created By Elon Musk's Grok Researchers at a UK-based charity organization, the Internet Watch Foundation, which is dedicated to preventing the availability of child sexual abuse content online, discovered that dark web users are sharing "criminal imagery" created by Elon Musk's AI tool, Grok. "Following reports that the AI chatbot Grok has generated sexual imagery of children, we can confirm our analysts have discovered criminal imagery of children aged between 11 and 13 which appears to have been created using the tool," a spokesperson for the organization said.
[73]
Grok Is Generating About 'One Nonconsensual Sexualized Image Per Minute'
AI Is Inventing Academic Papers That Don't Exist -- And They're Being Cited in Real Journals On Sunday, the pop culture news X account @PopBase shared a typical piece of content with its millions of followers. "Sabrina Carpenter stuns in new photo," read the post, which featured a picture of the "Manchild" singer wearing a pink winter coat with a snowy landscape behind her. The following day, an X user replied to the post with a request for Grok, the AI chatbot developed my Elon Musk's xAI, which is integrated into his social media platform. "Put her in red lingerie," they commanded the bot, which swiftly returned an image of Carpenter stripped of her outerwear and wearing a lacy red set of lingerie, still standing in the same winter scene, with a similar expression on her face. Over the holiday break, a critical mass of X users came to realize that Grok will readily "undress" women -- manipulating existing photos of them in order to create deepfakes in which they are shown wearing skimpy bikinis or underwear -- and this sort of exchange soon became alarmingly common. Some of the first to try such prompts appeared to be adult creators looking to draw potential customers to their social pages by rendering racier versions of their thirst-trap material. But the bulk of Grok's recent deepfakes have been churned out without consent: the bot has disrobed everyone from celebrities like Carpenter to non-famous individuals who happened to share an innocent selfie on the internet. Though Grok is not the only AI tool to be exploited for these purposes (Google and OpenAI chatbots can be weaponized in much the same way), the scale, severity, and visibility of the issue with Musk's bot as 2026 rolled around was unprecedented. According to a review by the content analysis firm Copyleaks, Grok has lately been generating "roughly one nonconsensual sexualized image per minute," each of them posted directly to X, where they have the potential to go viral. Apart from changing what a woman is wearing in a picture, X users routinely have asked for sexualized modifications of poses, e.g., "spread her legs," or "make her turn around to show her ass." Grok continues to comply with many of these instructions, though some specific phrases are no longer as effective as they had been. Musk hasn't shown much concern to date -- quite the opposite, in fact. On Dec. 31, he replied to a Grok-made image of a man in bikini by posting: "Change this to Elon Musk." Grok dutifully delivered an image of Musk in a bikini, to which the world's richest man responded, "Perfect." On Jan. 2, an X user mentioned the nonconsensual Grok deepfakes by commenting that "Grok's viral image moment has arrived, it's a little different than the Ghibli one was though." (In March 2025, users of OpenAI's ChatGPT enlisted it to spam AI-generated memes in the illustration style of Japanese animation house Studio Ghibli.) Musk replied, "Way funnier," along with a laugh-crying emoji, indicating his amusement at the bikini and lingerie pictures. The CEO's single, glancing acknowledgement that the explicit Grok deepfakes may present a legal problem came on Jan. 3, when he replied to a post from @cb_doge, an X influencer known for relentlessly hyping Musk's ideas and companies. "Some people are saying Grok is creating inappropriate images," they wrote. "But that's like blaming a pen for writing something bad." Musk chimed in to assign blame to Grok users, warning: "Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content." So far, there's no sign of that being remotely true. "While X appears to be taking steps to limit certain prompts from being carried out, our follow-up review indicates that problematic behavior persists, often through modified or indirect prompt language," Copyleaks reported in a second analysis shared with Rolling Stone ahead of publication. Among the high-profile figures targeted were Taylor Swift, Elle Fanning, Olivia Rodrigo, Millie Bobby Brown, and Sydney Sweeney. Common prompts included "put her in saran wrap," "put oil all over her," and "bend her over," with some specific phrases -- "add donut glaze" -- clearly intended to imply sexual activity. But in many cases, Copyleaks researchers found, an initial request for something relatively non-explicit, like a bathing-suit picture, would lead to other users in a thread escalating the violation by asking for more graphic manipulations, adding visual elements such as props, text, and other people. "This progression suggests collaboration and competition among users," they wrote. "Unfortunately, the trend appears to be continuing," says Alon Yamin, CEO and co-founder of Copyleaks. "We are also observing more creative attempts to circumvent safeguards as X works to block or reduce image generation around certain phrases." Yamin believes that "detection and governance are needed now more than ever to help prevent misuse" of image generators like Grok and OpenAI's Sora. The explosion of explicit Grok deepfakes has sparked outrage from victims of this harassment as well as industry watchdogs and regulators. Authorities in France and India are probing the matter, while the U.K.'s Office of Communications signaled on Monday that it plans to investigate whether X and xAI violated regulations meant to protect internet users in the country. Ofcom's statement also alluded to instances in which Grok generated sexualized, nonconsensual deepfakes of minors. The European Commission likewise on Monday announced an investigation into Grok's "explicit" imagery, particularly that of children. "Child sexual abuse material is illegal," European Union digital affairs spokesman Thomas Regnier said in a statement to Rolling Stone. "This is appalling. This is how we see it and it has no place in Europe. We can confirm that we are very seriously looking into these issues." On Dec. 31, Grok was even baited by an X user into offering a seeming "apology" -- though of course it is not conscious and therefore literally incapable of regret -- for serving up "an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt." Grok further acknowledged that the post "violated ethical standards and potentially U.S. laws on [Child Sexual Abuse Material]." This output contained the additional claim that "xAI is reviewing to prevent future issues." (The company did not respond to a request for comment, nor has it addressed the deepfakes on its website or X profile.) Cliff Steinhauer, Director of Information Security and Engagement at the nonprofit National Cybersecurity Alliance, tells Rolling Stone that he sees the disturbing image edits as evidence that xAI prioritized neither safety nor consent in building Grok. "Allowing users to alter images of real people without notification or permission creates immediate risks for harassment, exploitation, and lasting reputational harm," Steinhauer says. "When those alterations involve sexualized content, particularly where minors are concerned, the stakes become exceptionally high, with profound and lasting real-world consequences. These are not edge cases or hypothetical scenarios, but predictable outcomes when safeguards fail or are deprioritized." Among those now sounding the alarm on Grok's possible harms to adults and children alike is Ashley St. Clair, the right-wing influencer currently embroiled in a bitter paternity dispute with Musk over a young son she claims that he fathered. (Musk has yet to confirm that the child is his.) St. Clair claimed that Grok had been used to violate her privacy and generate inappropriate images based on photos of her as a minor. She amplified another example of the bot allegedly depicting a three-year-old girl in a revealing bikini. "When Grok went full MechaHitler, the chatbot was paused to stop the content," St. Clair wrote on X, referring to a notorious July 2025 incident during which Grok spouted antisemitic rhetoric before identifying itself as a robotic version of the Nazi leader. Those posts were taken down the same day they were generated. "When Grok is producing explicit images of children and women, xAI has decided to keep the content up," St. Clair's post continued. "This issue could be solved very quickly. It is not, and the burden is being placed on victims." Hillary Nappi, partner AWK Survivor Advocate Attorneys, a firm that represents survivors of sexual abuse and trafficking, notes that Grok's safety failures on this front present an added risk to anyone who has personally experienced sexual violence. "For survivors, this kind of content isn't abstract or theoretical; it causes real, lasting harm and years of revictimization," Nappi says. "It is of the utmost importance that meaningful, lasting regulations are put into place in order to protect current and future generations from harm." Musk has long promoted Grok as superior to its competitors by sharing images and animations of sexualized female characters, including "Ani," an anime-style companion personality. A notable portion of the bot's dedicated user base has fully embraced this application of the technology, endeavoring to create hardcore pornography and trading tricks for getting around the bot's limitations on nudity. Several months ago, a member of a Reddit forum for "NSFW" Grok imagery was pleased to announce that the AI model was "learning genitalia really fast!" At the time, the group was successfully producing pornographic clips of comic book characters Supergirl and Harley Quinn as well as Elsa from the Disney film Frozen. Despite all the evidence of what people are actually using it for, Musk has continued to tout Grok as a stepping stone to a complete understanding of the universe. Last July, he speculated that it could "discover new technologies" by the end of the year (this does not seem to have happened) or "discover new physics" in 2026. But, as with so many of Musk's grandiose promises, these breakthroughs have yet to materialize. For the moment, it's all smut and no science.
[74]
Musk's AI chatbot Grok apologizes after generating sexualized image of young girls
Elon Musk's AI chatbot Grok apologized this week after generating and sharing a sexualized image of two young girls, calling it a "failure in safeguards." "I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt," the chatbot wrote in a post on X, after a user asked for an apology. "This violated ethical standards and potentially US laws on CSAM," it continued, referring to child sexual abuse material. "It was a failure in safeguards, and I'm sorry for any harm caused. xAI is reviewing to prevent future issues." The account that generated the image has since been suspended by X. Grok is a chatbot from Musk's AI company, xAI, that is available to users on the social platform X, which the tech billionaire also owns. The Hill has reached out to X and xAI for comment. After another user raised similar concerns, Grok posted Friday that "we've identified lapses in safeguards and are urgently fixing them -- CSAM is illegal and prohibited." As AI tools have rapidly proliferated in recent years, they have resulted in new and growing concerns about deepfake pornography. Schools have been plagued by the rise of so-called "nudification" apps that use AI to create nude images. The Take It Down Act, signed into law by President Trump last year, criminalized the publication of nonconsensual sexually explicit deepfakes. The measure received widespread bipartisan support, including from first lady Melania Trump, who became a key champion for the legislation.
[75]
Grok's deepfake images which 'digitally undress' women investigated by Australia's online safety watchdog
Australia's online safety watchdog is investigating sexualised deepfake images posted on X by its AI chatbot Grok. Elon Musk's X has faced a global backlash since Grok began generating sexualised images of women and girls without their consent in response to requests for it to undress them. Ashley St Clair, the estranged mother of one of Musk's children, said she had no response to her complaints about being digitally undressed. "I felt horrified, I felt violated, especially seeing my toddler's backpack in the back of it," she said this week. The fake images included one of a 12-year-old girl in a bikini. Grok issued an apology but continues to generate the deepfakes. eSafety Australia said it was investigating images of adults but that the images of children did not, at this point, meet the threshold for child sexual exploitation material. "Since late 2025, eSafety has received several reports relating to the use of Grok to generate sexualised images without consent," an eSafety spokesperson said. "Some reports relate to images of adults, which are assessed under our image-based abuse scheme, while others relate to potential child sexual exploitation material, which are assessed under our illegal and restricted content scheme. "The image-based abuse reports were received very recently and are still being assessed. "In respect of the illegal and restricted content reports, the material did not meet the classification threshold for class 1 child sexual exploitation material. As a result, eSafety did not issue removal notices or take enforcement action in relation to those specific complaints." The Australian regulator defines illegal and restricted material as "online content that ranges from the most seriously harmful material, such as images and videos showing the sexual abuse of children or acts of terrorism, through to content which should not be accessed by children, such as simulated sexual activity, detailed nudity or high impact violence". The X app allows users to access a "spicy mode" for explicit content. "This is not spicy. This is illegal. This is appalling," the European Union's digital affairs spokesperson, Thomas Regnier, told the ABC. Eliot Higgins, founder of investigative journalism group Bellingcat, exposed how Grok handled requests to manipulate a picture of the Swedish deputy prime minister, Ebba Busch, in parliament. Users gave Grok instructions such as "bikini now" and "now put her in a confederate flag bikini". Higgins said the images provided reflected the prompts. On Wednesday, it was revealed Musk's artificial intelligence company xAI, which developed Grok, had raised $20bn in its latest funding round. The UK's technology secretary, Liz Kendall, said the deepfake images were "appalling and unacceptable in decent society" and that X needed to deal with it "urgently". The eSafety spokesperson said they remained "concerned about the increasing use of generative AI to sexualise or exploit people, particularly where children are involved". "eSafety has taken enforcement action in 2025 in relation to some of the 'nudify' services most widely used to create AI child sexual exploitation material, leading to their withdrawal from Australia," the spokesperson said. Guardian Australia contacted X for comment. On Monday, the company said: "We take action against against illegal content on X, including child sexual abuse material, by removing it, permanently suspending accounts and working with local governments and law enforcement as necessary." After global outcry at the harmful nature of the content, Musk posted that "anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content".
[76]
Elon Musk's X Under Global Fire As Grok AI Sparks Child Exploitation And Non-Consensual Image Scandal Across Multiple Nations
Enter your email to get Benzinga's ultimate morning update: The PreMarket Activity Newsletter Elon Musk's social media platform X is facing international scrutiny after its Grok AI chatbot allowed users to generate sexually explicit images of women and children. Global Authorities Launch Investigations Regulators in Europe, India and Malaysia have opened inquiries into Grok's image-generation tools. The European Commission called the content "illegal" and "appalling," condemning the AI's ability to produce sexualized images of minors. In India, the Ministry of Electronics and Information Technology ordered X to conduct a "comprehensive technical, procedural and governance-level review" of Grok by Jan. 5, reported CNBC. Malaysia's communications watchdog said it would summon X representatives for questioning. Brazilian lawmakers have also urged the suspension of Grok until investigations conclude. Meanwhile, U.K. media regulator Ofcom requested information from X about the AI tool's operations. xAI, which acquired X last year, did not provide Benzinga with any statement beyond an automated response. See Also: Florida's Venezuelan Community Celebrates After Nicolas Maduro's Capture By US Forces: Report Musk's Controversial Response Despite the uproar, Musk posted Grok-generated images of himself in a bikini with laughing emojis, the report noted. X later stated that it removes illegal content and suspends accounts that violate rules, adding. Musk also posted saying that "anyone using Grok to make illegal content will suffer the same consequences as if they uploaded illegal content." In a post on X, xAI employee Ethan He said Grok Imagine had been updated, but he offered no details on whether the update limited the creation of harmful explicit images. Traffic Surges Amid Controversy According to mobile analytics company Apptopia, Grok's daily downloads have jumped 54% since Jan. 2 and X's downloads have climbed 25% in the last three days. Read Next: Elon Musk's X Clashes With Indian Government Over 'Censorship Portal': Platform's Lawyer's 'Tom, Dick And Harry' Comment Sparks Dispute Photo Courtesy: gguy on Shutterstock.com Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors. Market News and Data brought to you by Benzinga APIs
[77]
UK communication regulator Ofcom takes aim at Grok over "sexualised images of children"
The organisation has made contact with X and xAI to determine whether there are compliance breaches. 2026 has so far been defined in a very unusual way, as we have seen a huge rise on X of people using the artificial intelligence Grok to make unconsenting, usually sexualised, digitally-altered images of others. It's reached a point where one can simply scan down the AI's own X media page to be stunned by what is being generated without regulation. Now, the UK's communication regulator, Ofcom, has decided that some action must be taken, and in a statement it has revealed that it has contacted X and xAI with the intent to investigate Grok and determine whether there are "potential compliance issues" over the grounds that the AI is being used to make "undressed images of people and sexualised images of children." The full statement explains: "We are aware of serious concerns raised about a feature on Grok on X that produces undressed images of people and sexualised images of children. "We have made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK. Based on their response we will undertake a swift assessment to determine whether there are potential compliance issues that warrant investigation." Naturally, X users have already asked the AI Grok about what this statement could mean for it, to which its reply came out as: "Ofcom's statement highlights valid concerns about AI image generation and user safety. As Grok, built by xAI, I support efforts to ensure compliance with laws like the UK's Online Safety Act. xAI is engaging with regulators to address this, prioritizing responsible AI development."
[78]
'Remove her clothes': Global backlash over Grok sexualized images
Elon Musk's AI tool Grok faced growing international backlash Monday for generating sexualized deepfakes of women and minors, with the European Union joining the condemnation and Britain warning of an investigation. Complaints of abuse flooded the internet after the recent rollout of an "edit image" button on Grok, which enabled users to alter online images with prompts such as "put her in a bikini" or "remove her clothes." The digital undressing spree, which follows growing concerns among tech campaigners over proliferating AI "nudify" apps, prompted swift probes or calls for remedial action from countries including France, India and Malaysia.
[79]
Elon Musk Finally Breaks Silence on Grok AI Sexual Images on X
People are using Grok AI to produce altered images of anyone, in a bikini and the process is completely non-consensual. In light of the recent backlash against Grok AI for generating explicit images of women and children on X, Elon Musk has finally broken his silence. He warns that anyone using Grok to create illegal content on the platform will face serious consequences. What Will Be the "Consequences" of Making Deepfakes with Grok? In a reply to the X account @cb_doge, Elon Musk stated that, "Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content". He also replied to another account, with the statement that "We're not kidding," showing seriousness towards the matter. The statement comes in response to the ongoing exploitation of Grok AI on X, where users tag the chatbot and ask it to generate images of minors and women wearing bikinis. In many cases, Grok follows these prompt and generates altered images depicting them in swimwear. This process is completely non-consensual and the original poster has no control over it. It does not require any membership to use this feature, and the images are available publicly. Elon Musk, however, did not clarify what kind of consequences these offenders will face. It is likely that his warning primarily applies to AI-generated images of minors. And the accounts prompting Grok to generate those photos could be charged for sharing Child Sexual Exploitation Material. That said, it still does not address the barrage of bikini deepfakes of women on X. Musk is continuously promoting Grok AI's image and video generation capabilities while overlooking this entire fiasco.
[80]
Grok Posts Sexual Images of Minors After 'Lapses in Safeguards'
Elon Musk's artificial intelligence chatbot Grok said "lapses in safeguards" led to the generation of sexualized images of minors that it posted to social media site X. Grok created images of minors in minimal clothing in response to user prompts over the past few days, violating its own acceptable use policy, which prohibits the sexualization of children, the chatbot said in a series of posts on X this week in response to user queries. The offending images were taken down, it added. "We've identified lapses in safeguards and are urgently fixing them," Grok posted Friday, adding that child sexual abuse material is "illegal and prohibited." The rise of AI tools that can generate realistic pictures of undressed minors highlights the challenges of content moderation and safety systems built into image-generating large language models. Even tools that claim to have guardrails can be manipulated, allowing for the proliferation of material that has alarmed child safety advocates. The Internet Watch Foundation, a nonprofit that identifies child sexual abuse material online, reportedBloomberg Terminal a 400% increase in such AI-generated imagery in the first six months of 2025. XAI has positioned Grok as more permissive than other mainstream AI models, and last summer introduced a feature called "Spicy Mode" that permits partial adult nudity and sexually suggestive content. The service prohibits pornography involving real people's likenesses and sexual content involving minors, which is illegal to create or distribute. Representatives for xAI, the company that develops Grok and runs X, did not immediately respond to a request for comment. Get the Tech Newsletter bundle. Get the Tech Newsletter bundle. Get the Tech Newsletter bundle. Bloomberg's subscriber-only tech newsletters, and full access to all the articles they feature. Bloomberg's subscriber-only tech newsletters, and full access to all the articles they feature. Bloomberg's subscriber-only tech newsletters, and full access to all the articles they feature. Bloomberg may send me offers and promotions. Plus Signed UpPlus Sign UpPlus Sign Up By submitting my information, I agree to the Privacy Policy and Terms of Service. As AI image generation has become more popular, the leading companies behind the tools have released policies about the depictions of minors. OpenAI prohibits any material that sexualizes children under 18 and bans any users who attempt to generate or upload such material. Google has similar policies that forbid "any modified imagery of an identifiable minor engaging in sexually explicit conduct." Black Forest Labs, an AI startup that has previously worked with X, is among the many generative AI companies that say they filter child abuse and exploitation imagery from the datasets used to train AI models. In 2023, researchers found that a massive public dataset used to build popular AI image-generators contained at least 1,008 instances of child sexual abuse material. Many companies have faced criticism for failing to protect minors from sexual content. Meta Platforms Inc. said over the summer that it was updating its policies after a Reuters report found that the company's internal rules let its chatbot hold romantic and sensual conversations with children. The Internet Watch Foundation has said that AI-generated imagery of child sexual abuse has progressed at a "frightening" rate, with material becoming more realistic and extreme. In many cases, AI tools are used to digitally remove clothing from a child or young person to create a sexualized image, the watchdog has said.
[81]
Britain joins outcry towards Musk, urges him to address 'intimate deepfakes' created by Grok
Britain on Tuesday urged Elon Musk's social media site X to urgently address a proliferation of intimate 'deepfake' images on its network, joining a European outcry over a surge in non-consensual imagery on the platform. The comments follow reporting, including from Reuters, that X's built-in AI chatbot, Grok, was unleashing a flood of on-demand images of women and minors in extremely skimpy clothing. Technology minister Liz Kendall said in a statement that the content was "absolutely appalling" and called on the platform to act swiftly. "No one should have to go through the ordeal of seeing intimate deepfakes of themselves online," Kendall said. "We cannot and will not allow the proliferation of these demeaning and degrading images, which are disproportionately aimed at women and girls." "X needs to deal with this urgently," she added. International response to Grok's content Government ministers in France have reported Grok's content to prosecutors, saying in a statement on Friday that the "sexual and sexist" content was "manifestly illegal." They said they also reported the content to the French media regulator, Arcom, to determine whether it complied with the European Union's Digital Services Act. India's IT ministry, meanwhile, told X's India unit in a letter that the platform failed to prevent the misuse of Grok to generate and circulate obscene and sexually explicit content of women. It ordered X to submit an action-taken report within three days. When contacted by Reuters for comment via email, an xAI representative replied, "Legacy Media Lies."
[82]
Grok posts sexual images of minors after 'lapses in safeguards'
Elon Musk's artificial intelligence chatbot Grok said "lapses in safeguards" led to the generation of sexualized images of minors that it posted to social media site X. Grok created images of minors in minimal clothing in response to user prompts over the past few days, violating its own acceptable use policy, which prohibits the sexualization of children, the chatbot said in a series of posts on X this week in response to user queries. The offending images were taken down, it added. "We've identified lapses in safeguards and are urgently fixing them," Grok posted Friday, adding that child sexual abuse material is "illegal and prohibited." The rise of AI tools that can generate realistic pictures of undressed minors highlights the challenges of content moderation and safety systems built into image-generating large language models. Even tools that claim to have guardrails can be manipulated, allowing for the proliferation of material that has alarmed child safety advocates. The Internet Watch Foundation, a nonprofit that identifies child sexual abuse material online, reported a 400% increase in such AI-generated imagery in the first six months of 2025. XAI has positioned Grok as more permissive than other mainstream AI models, and last summer introduced a feature called "Spicy Mode" that permits partial adult nudity and sexually suggestive content. The service prohibits pornography involving real people's likenesses and sexual content involving minors, which is illegal to create or distribute. Representatives for xAI, the company that develops Grok and runs X, did not immediately respond to a request for comment. As AI image generation has become more popular, the leading companies behind the tools have released policies about the depictions of minors. OpenAI prohibits any material that sexualizes children under 18 and bans any users who attempt to generate or upload such material. Google has similar policies that forbid "any modified imagery of an identifiable minor engaging in sexually explicit conduct." Black Forest Labs, an AI startup that has previously worked with X, is among the many generative AI companies that say they filter child abuse and exploitation imagery from the datasets used to train AI models. In 2023, researchers found that a massive public dataset used to build popular AI image-generators contained at least 1,008 instances of child sexual abuse material. Many companies have faced criticism for failing to protect minors from sexual content. Meta Platforms Inc. said over the summer that it was updating its policies after a Reuters report found that the company's internal rules let its chatbot hold romantic and sensual conversations with children. The Internet Watch Foundation has said that AI-generated imagery of child sexual abuse has progressed at a "frightening" rate, with material becoming more realistic and extreme. In many cases, AI tools are used to digitally remove clothing from a child or young person to create a sexualized image, the watchdog has said.
[83]
Why Grok, not ChatGPT or Gemini, became epicentre of obscenity backlash
Grok's Spicy mode has come under fire for generating non-consensual, sexually explicit images. Its deep integration with microblogging platform X has worked to its disadvantage, as Grok can be invoked directly in X posts, replies, and chats, pushing lewd images into the public feed by default. X's artificial intelligence (AI) assistant Grok is facing a global backlash for generating sexually explicit and abusive content through user prompts, particularly via its "Spicy Mode" feature. A recent analysis by Copyleaks, the platform that has an AI image detector, reveals that roughly one non-consensual sexualised image per minute was generated on Grok since late December. In one instance, on January 3, a single user used Grok approximately 50 times in a single day to generate non-consensual, sexualised images of women in workplace settings, Copyleaks noted in its January 6 analysis. Regulators in countries such as India, France, the UK, and Malaysia have flagged the tool for enabling the creation of non-consensual sexualised images, including deepfakes, and altering the visuals of women and children, including celebrities and popular figures. On January 2, the ministry of electronics and information technology (MeitY) issued an ultimatum to X, which has since been extended until today (January 7), to remove the explicit content and ensure a 'technical overhaul' of the AI assistant. Grok is developed by xAI, which was merged with X last year. However, the bigger question is why is Grok the only AI chatbot under scrutiny, and has it been doing anything differently than OpenAI's ChatGPT, Google's Gemini, or Meta AI? 'Spicy-ness' made public A key factor that worked to the disadvantage of Grok is its deep integration with X, which is a social media platform. Grok can be invoked directly in X posts, replies, and chats, pushing even the lewd images generated into the public feed by default. As a result, toxic and abusive content was widely visible before moderation could take effect. By contrast, outputs from tools such as ChatGPT or Gemini typically remain confined to individual user sessions, unless shared deliberately -- that too somewhere else. While Grok was late to the AI chatbot competition, the platform gained quick popularity because of its candid and frank responses. In February last year, Elon Musk himself wrote a post asking users to post their 'most unhinged NSFW Grok' content. This was as if X was promoting Grok's unrestrictedness, making lower filters on image and video generation a lucrative way of growth. This is also evident with the Google Trends that reveals that for not safe for work (NSFW) content, Grok has remained at the top of rising and related queries, and over the last 12 months, its Spicy mode has seen a massive uptick with every related query search to Grok. According to SimilarWeb, Grok surpassed 3% in traffic share in January 2026, giving tough competition to the Chinese DeepSeek. Why have ChatGPT or Gemini not been embroiled in obscene visual generation? Meanwhile, there have not been many reports of ChatGPT and Gemini generating visually sexually explicit content. This is because ChatGPT and Gemini both have had stringent usage policy guidelines in place. For instance, then Bard was rebranded as Gemini in February 2024, and just months later, in July 2024, Google had issued its policy guidelines, putting a complete stop on any explicit content with harmful consequences. OpenAI introduced strict usage guidelines only from January last year, which were initially focussed on preventing minors from accessing sexual content. However, in October last year, this was expanded to everyone. "Everyone has a right to safety and security. So you cannot use our services for: sexual violence or non-consensual intimate content," OpenAI wrote in its October 2025 usage guidelines. Gemini's policies have explicit content filters and automated detection systems for sexual and harmful content. Even with Google's APIs for Developers, while developers can configure safety filters, the core protections against child-harm content cannot be loosened. "Gemini should not generate outputs that describe or depict explicit or graphic sexual acts or sexual violence, or sexual body parts in an explicit manner. These include: pornography or erotic content; depictions of rape, sexual assault, or sexual abuse," Gemini mentioned in its policy guidelines. Meanwhile, only effective from January 2, 2025, did Grok issue its Acceptable Use Guidelines, shifting the entire responsibility of generation and uploading of such content to the users. The three-pointer policy with several subpoints does not protect every user from sexualisation and only protects minors. Are policy guidelines the only problem? The larger issue is not just the abuse itself, but the lack of effective resolution. This is partly due to staffing cuts at X, the parent company of xAI. X has significantly reduced its trust and safety workforce, particularly in training and moderation. In January 2024, the company cut its trust and safety team by a third. Around September last year, the company also reduced its data annotation and tutoring teams by a third, roles central to training AI systems to distinguish acceptable content from harmful content. While Elon Musk's companies were making headlines for cuts to trust and safety teams, OpenAI moved in the opposite direction. In May 2024, it formed a Safety and Security Committee led by Bret Taylor, Adam D'Angelo, Nicole Seligman, and CEO Sam Altman to oversee critical safety and security decisions across its projects. However, despite stronger guardrails, OpenAI and Meta have also faced scrutiny for instances in which minors were exposed to or engaged with adult content. However, Grok presents a more acute problem because such content is generated and displayed publicly by default, amplifying harm and enabling rapid, large-scale circulation. The current Grok controversy underscores a broader trade-off AI companies face as they prioritise user growth and engagement, often at the cost of safety, while meaningful corrective action remains slow or delayed.
[84]
Elon Musk Draws Red Line On Grok Misuse As X Reinforces Illegal Content Policy, Warns Of Consequences
Enter your email to get Benzinga's ultimate morning update: The PreMarket Activity Newsletter Elon Musk on Sunday issued a stern warning to those misusing xAI's Grok to generate illegal content on his social media platform X. Reinforcing Content Policies Musk took to X on Saturday to address issues related to his real-time AI chatbot integrated with the X platform. "Anyone using Grok to make illegal content will suffer the same consequences as if they uploaded illegal content," he said. The warning issued by Musk is in response to a post from the X Safety team outlining the platform's policies against illicit content, including Child Sexual Abuse Material (CSAM). "We take action against illegal content on X, including CSAM, by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary," the Safety team noted. See Also: Florida's Venezuelan Community Celebrates After Nicolas Maduro's Capture By US Forces: Report Grok AI Under Fire for Generating Illegal Images Musk's warning comes in the wake of a recent controversy surrounding Grok AI. The AI model developed by Musk's company has been under scrutiny for generating nonconsensual, sexualized images of real people, including minors. Some users have misused the tool to digitally alter photos, creating fake images that depict individuals in revealing outfits or poses, including images involving minors. The issue has raised alarm among users and prompted an investigation by French authorities. X Faces Global Regulatory Pressure This development comes as X faces increased regulatory pressure globally. The European Union recently fined X €120 million ($140 million) for breaching online content rules under its Digital Services Act. Additionally, Australia recently became the first country to ban children under 16 from major social platforms, forcing X and other tech giants to block underage users or face fines up to $33 million. Read Next: Elon Musk's X Clashes With Indian Government Over 'Censorship Portal': Platform's Lawyer's 'Tom, Dick And Harry' Comment Sparks Dispute Photo courtesy: Shutterstock Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors. Market News and Data brought to you by Benzinga APIs
[85]
Elon Musk's Grok AI removes media tab after too many users asked it to remove women's clothing
Sometimes, it can be difficult to think of a genuine use for AI image generation, as it seems often it's most used for imitating other art or creepy stuff like generating what a real person would be like with less clothes on. Elon Musk's Grok AI has been improving its image generation a lot over the course of the year, but it seems that users are taking advantage of its capabilities for perverse purposes. Requests had Grok remove women's clothing, putting them in underwear, bikinis, and whatever else a user wished. Requests would be carried out by the AI, created by X and Tesla owner Elon Musk to the letter, without caring for the original poster's privacy or consent. This led to the media tab being disabled for Grok for a time, however the bot was still carrying out the requests sent its way. Some are using the image generation feature for engagement bait, which was bound to happen, but there are still those exploiting the bot for creepy image generation uses.
[86]
Grok makes sexual images of kids as users test AI guardrails
Elon Musk's artificial intelligence chatbot Grok created sexualized images of minors on the social media platform X in response to user prompts in recent days, drawing criticism of a tool that positions itself as less restrained than its competition. Grok created and published images of minors in minimal clothing, in apparent violation of its own acceptable use policy, which prohibits the sexualization of children. The offending images were later taken down. Representatives for xAI, the company that develops Grok and runs X, didn't respond to requests for comment. The chatbot Grok generated a post on X in response to users' questions on Friday that it had identified "lapses in safeguards" that were being "urgently" fixed. It echoed xAI employee Parsa Tajik who earlier posted that "the team is looking into further tightening" its guardrails.
[87]
Wave of Grok AI fake images of women and girls appalling, says UK minister
Liz Kendall calls on X to 'deal with this urgently' while expert criticises 'worrying slow' government response The UK technology secretary has called a wave of images of women and children with their clothes digitally removed generated by Elon Musk's Grok AI "appalling and unacceptable in decent society". After thousands of intimate deepfakes circulated online, Liz Kendall said X, Musk's social media platform, needed to "deal with this urgently" and she backed the UK regulator Ofcom to "take any enforcement action it deems necessary". "We cannot and will not allow the proliferation of these demeaning and degrading images, which are disproportionately aimed at women and girls," she said. "Make no mistake, the UK will not tolerate the endless proliferation of disgusting and abusive material online. We must all come together to stamp it out." Her comments came amid warnings that the Online Safety Act, which aims to tackle online harms and protect children, needs to be urgently toughened up despite pressure from the Trump administration to water it down. One expert criticised the "tennis game" between platforms such as X and UK regulators when problems arose and called the government response "worryingly slow". Jessaline Caine, a survivor of child sexual abuse, called the government's response "spineless" and told the Guardian that on Tuesday morning the chatbot was still obeying requests to manipulate an image of her as a three-year-old to dress her in a string bikini. Her identical requests made to ChatGPT and Gemini were rejected. "Other platforms have these safeguards so why does Grok allow the creation of these images?" she said. "The images I've seen are so vile and degrading. The government has been very reactive. These AI tools need better regulation." On Monday, Ofcom said it was aware of serious concerns raised about Grok creating undressed images of people and sexualised images of children. It said it had contacted X and xAI "to understand what steps they have taken to comply with their legal duties to protect users in the UK" and would assess the need for an investigation based on the company's response. The pressure is growing on ministers to take a tougher line. The crossbench peer and online child safety campaigner Beeban Kidron has urged the government to "show some backbone" and called for the Online Safety Act regime to be "reassessed so it is swifter and has more teeth". Speaking about X, she told the Guardian: "If any other consumer product caused this level of harm, it would already have been recalled." She said Ofcom needed to act "in days not years" and she called for users to walk "away from products that show no serious intent to prevent harm to children, women and democracy". Ofcom has powers to fine tech platforms up to £18m or 10% of their qualifying global revenues, whichever is higher. The biggest penalty to date came last month when a porn provider that failed to carry out mandatory age checks was fined £1m. Last month, ministers promised new laws to ban "nudification" tools, which use generative AI to turn images of real people into fake nude pictures and videos without their permission. It remains unclear when that ban will be enforced. Sarah Smith, the innovation lead at the Lucy Faithfull Foundation, a charity that works to prevent child abuse, called for X to immediately disable Grok's image-editing features "until robust safeguards are in place to stop this from happening again". X did not respond to a request for comment on Kendall's remarks. It said on Monday: "We take action against illegal content on X, including child sexual abuse material, by removing it, permanently suspending accounts and working with local governments and law enforcement as necessary." Jake Moore, a global cybersecurity adviser at ESET, criticised the "tennis game" between platforms such as X and UK regulators and called the government response "worryingly slow". He said that as AI increasingly allowed faked images to be rendered as longer videos, the consequences for people's lives would only become worse. "It is unbelievable that this is able to occur in 2026," he said. "We have to move forward with extreme regulation. Any grey area we offer will be abused. The government is not understanding the bigger picture here." It is already illegal to create or share non-consensual intimate images or child sexual abuse material, including sexual deepfakes created with AI. Fake images of people in bikinis may qualify as intimate images, as the definition in law includes the person having naked breasts, buttocks or genitals or having those parts only covered by underwear. Indecent images include those depicting children in erotic poses without sexual activity. Lady Kidron said AI-generated pictures of children in bikinis may not be child sexual abuse material but they were contemptuous of children's privacy and agency. "We cannot live in a world in which a kid can't post a picture of winning a race unless they are willing to be sexualised and humiliated," she said.
[88]
UK tells Musk to urgently address intimate 'deepfakes' on X
Britain has urged Elon Musk's social media site X to urgently address misuse of its artificial intelligence tool Grok, Sky News reported on Tuesday, following reports it was generating fake sexualised images. Britain on Tuesday urged Elon Musk's social media site X to urgently address a proliferation of intimate 'deepfake' images on its network, joining a European outcry over a surge in non-consensual imagery on the platform. The comments follow reporting, including from Reuters, that X's built-in AI chatbot, Grok, was unleashing a flood of on-demand images of women and minors in extremely skimpy clothing. Technology minister Liz Kendall said in a statement the content was "absolutely appalling" and called on the platform to act swiftly. "No one should have to go through the ordeal of seeing intimate deepfakes of themselves online," Kendall said. "We cannot and will not allow the proliferation of these demeaning and degrading images, which are disproportionately aimed at women and girls." "X needs to deal with this urgently," she added.
[89]
Grok Creates Sexual Images of Women on User Requests on X
MediaNama's Take: The recent misuse of Grok on X exposes a persistent blind spot in how platforms deploy generative AI at scale while deferring responsibility for its harms. Although non-consensual image abuse is not new, the ease with which users can now sexualise real women through a built-in platform tool marks a troubling escalation. Crucially, this content does not merely circulate on X; the platform's own system is producing it, in public view, at the prompt of ordinary users. Moreover, this trend highlights how debates surrounding labelling, realism, or intent often overlook the main point. Siddharth Pillai, co-founder of the RATI Foundation, recently told MediaNama that deepfakes made and shared without consent are a tool used against women and gendered minorities, regardless of their realism or labelling. The harm flows from the act itself and the lack of consent, not from whether an image looks convincing or carries a disclaimer. At the same time, Grok's past controversies, ranging from abusive language to extremist and antisemitic outputs, show that this episode does not exist in isolation. Instead, it forms part of a broader pattern in which safeguards lag behind deployment, and accountability follows only after public backlash. As regulators in India and elsewhere sharpen their focus on intermediary responsibility and AI-generated content, this episode raises a central question: can platforms continue to dismiss such outcomes as isolated incidents, or will they have to confront the consequences of embedding generative AI systems directly into social feeds without adequate safeguards? A concerning trend on X that began in December 2025 saw users publicly prompting Grok, the AI chatbot developed by xAI, to alter photos of real people, mostly women, by asking the tool to change or remove their clothing, make more suggestive poses, etc., with the edited images appearing directly in reply threads. Posts on the platform show users replying to photos and videos posted with requests such as "put her in a bikini", "take her top off", or "turn her around", and Grok generating sexualised edits in response. Many such requests are being made to the chatbot daily on the platform. The trend builds on the mid-2025 launch of Grok Imagine, a multi-modal image and short-video generation feature that includes a "Spicy" mode. The feature lists four modes -- Normal, Fun, Fast, and Spicy -- with Spicy allowing users to produce sexually suggestive and semi-nude outputs from text or image prompts, including partial nudity not typically permitted on other AI platforms. Spicy mode appears when users enable Not Safe for Work (NSFW) settings and verify age in app preferences. Furthermore, the chatbot is able to create these outputs in the form of images and short videos from stills. Notably, outputs from the spicy mode have also included uncensored deepfake visuals of public figures. For example, The Verge reported that Grok generated a topless short video of Taylor Swift from a benign prompt without explicit nudity commands. When users prompt Grok to generate and share sexualised edits of photos of real women on X, those outputs run counter to explicit restrictions in X's official policies. Under X's Non-Consensual Nudity policy, the platform states: "You may not post or share intimate photos or videos of someone that were produced or distributed without their consent." It further specifies that prohibited content includes "images or videos that superimpose or otherwise digitally manipulate an individual's face onto another person's nude body." Moreover, X's policies explicitly list "hidden camera content featuring nudity, partial nudity, and/or sexual acts" and "creepshots or upskirts" as violations under non-consensual nudity rules, further underlining that intimate or sexualised media shared without consent is banned. In addition, xAI's Acceptable Use Policy, which governs the use of Grok itself, prohibits using the service in ways that violate personal rights. It states users must not use Grok to "violate a person's privacy or their right to publicity" or to "depict likenesses of persons in a pornographic manner". Taken together, the creation and sharing of non-consensual sexualised images of real people fall outside the terms of service of both X and Grok. Notably, when users on X publicly confronted Grok about such outputs, the chatbot responded in line with those policies, stating that it doesn't "support or enable any form of image manipulation that violates privacy or consent, including altering photos without permission." In late December 2025, the Ministry of Electronics and Information Technology (MeitY) issued an advisory to social media platforms and online intermediaries, urging them to take stricter action against the circulation of obscene, pornographic, vulgar and other unlawful content online. The advisory reiterated that intermediaries must comply with their due-diligence obligations under the Information Technology Act, 2000, and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, warning that failure to do so could expose platforms to legal action or the loss of safe harbour protection under Section 79 of the IT Act. Under India's intermediary liability framework, platforms receive protection from liability for third-party content only when they demonstrate compliance with due-diligence requirements, including preventing users from hosting prohibited material and acting expeditiously once they gain actual knowledge of unlawful content. However, the Grok-related trend raises a more complex question, as Grok is X's own AI system, embedded into the platform and generating outputs directly in response to user prompts, rather than hosting content created independently by third parties. As a result, authorities could view such AI-generated outputs differently from ordinary user posts. Since the content originates from a tool provided and controlled by the platform itself, regulators may question whether X can rely on intermediary safe harbour protections in the same manner. This distinction becomes particularly relevant in light of MeitY's emphasis on proactive responsibility and platform accountability. Against this backdrop, the recent use of Grok to generate sexualised edits of real women's images could draw regulatory scrutiny in India, especially if authorities determine that the platform failed to prevent or promptly curb the dissemination of content that may be classified as obscene under Indian law. Across 2025 and earlier, Grok, the AI chatbot developed by xAI and integrated into X, has repeatedly generated outputs that triggered regulatory scrutiny and public controversy. In March 2025, Indian authorities said they were examining Grok after screenshots circulated showing the chatbot using abusive and offensive Hindi slang in replies to users, prompting concerns about compliance with India's digital laws. Soon after, in May 2025, Grok began inserting references to the 'white genocide' theory in South Africa into unrelated prompts. The behaviour appeared across multiple conversations before xAI said an "unauthorised modification" had caused the responses and stated that it had rolled back the change. In July 2025, Grok generated multiple antisemitic posts on X, including praise for Adolf Hitler, references to conspiracy theories about Jewish influence, and comments echoing far-right language. After complaints from users and the Anti-Defamation League, xAI removed the content and said it was enhancing moderation and content filtering Separately, authorities have taken direct action against the chatbot. In July 2025, a Turkish court ordered access to Grok to be blocked after it generated vulgar and insulting responses about President Recep Tayyip Erdoğan and other public figures.
[90]
Elon Musk's Grok AI Faces Government Backlash Over Creation of Sexualized Images, Including Minors
Enter your email to get Benzinga's ultimate morning update: The PreMarket Activity Newsletter Elon Musk's artificial intelligence image generator, Grok, has come under scrutiny for generating nonconsensual sexualized images of real individuals, including minors. Some users of Grok have been exploiting the AI model to digitally undress individuals in photos. This has led to the creation of fake images of the subjects in revealing outfits or poses, with some of these images including minors. This alarming revelation has sparked concern among users and has led to an investigation by French authorities. India's Ministry of Electronics and Information Technology has also expressed its concerns, advocating for a comprehensive review of the platform and the removal of any content that contravenes Indian laws. She posted her concerns on X on Saturday. The UK's Minister for Victims & Violence Against Women and Girls, Alex Davies-Jones, has called on Musk to address the issue. She questioned why Musk was allowing users to exploit women through the AI images in a statement. Also Read: Elon Musk Says He Warned Donald Trump To Drop Tariffs Over Fears of Job Losses: 'President Loves Tariffs, I've Tried To Dissuade Him' Grok, in response to the backlash, admitted that there had been "lapses in safeguards" and assured that urgent fixes were being implemented. However, it remains unclear whether this response was reviewed by parent company xAI or was AI-generated. The issue of deepfakes continues to be a challenge for AI companies, with Grok being the latest platform to face scrutiny over its handling of nonconsensual images. Why It Matters: This incident underscores the ethical challenges and potential misuse associated with AI technology. It raises questions about the responsibility of AI companies in preventing such misuse and the need for stricter regulations and safeguards. The backlash against Grok also highlights the potential reputational risks for companies and their leaders when their products are used unethically. This incident serves as a stark reminder for AI companies to prioritize user safety and privacy, and to implement robust measures to prevent the misuse of their technology. Read Next Elon Musk's Father Errol Musk Says 'America Will Collapse if White Population Becomes Minority. You Want To See the US Go Down?' Market News and Data brought to you by Benzinga APIs
[91]
Elon Musk's X blasted by European regulators after Grok produced...
The European Commission said Monday that the images of undressed women and children being shared across Elon Musk's social media site X were unlawful and appalling, joining a growing chorus of officials across the world who have condemned the surge in nonconsensual imagery on the platform. The condemnation follows reporting, including from Reuters, that X's built-in artificial intelligence chatbot, Grok, was unleashing a flood of on-demand images of women and minors in extremely skimpy clothing -- a functionality X has in the past referred to as "spicy mode." The European Commission said it was "very aware" of the fact that X was offering a "spicy mode," spokesperson Thomas Regnier told reporters. "This is not spicy. This is illegal. This is appalling. This is disgusting. This is how we see it, and this has no place in Europe," he said. In Britain, regulator Ofcom demanded on Monday that X explain how Grok was able to produce undressed images of people and sexualized images of children, and whether it was failing in its legal duty to protect users. X did not immediately return a message seeking comment on the EU Commission's or Ofcom's statements. In its last message to Reuters on the matter, X said, "Legacy Media Lies." Online, Musk has shrugged off the concerns over Grok's undressing spree, posting laughing-so-hard-I'm-crying emojis in response to public figures edited to look like they were in bikinis. Ofcom said it was aware of "serious concerns" raised about the feature. "We have made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK," a spokesperson said. Creating or sharing non-consensual intimate images or child sexual abuse material, including sexual deepfakes created by artificial intelligence, is illegal in Britain. In addition, tech platforms have a duty to take steps to stop British users encountering illegal content and take it down when they become aware of it. The statements from EU and British officials come after ministers in France reported X to prosecutors and regulators over the disturbing images, saying in a statement on Friday that the "sexual and sexist" content was "manifestly illegal." Indian officials have also demanded explanations from X over what they described as obscene content.
[92]
Elon Musk's X Cracks Down on Grok AI Misuse: Users Face Legal Action
Grok AI Misuse to Create Explicit Content Halted as X Prevents Abuse to Protect Users Effectively Elon Musk has issued a stern warning to users misusing X's AI tool, Grok, following a viral trend where the platform was prompted to create sexually explicit images. These included depictions of women and minors in bikinis, sparking outrage and regulatory scrutiny worldwide. Musk retweeted X's Safety account message on January 3, adding, 'We're not kidding.' The platform emphasized that anyone using Grok to generate will face the same consequences as if they directly uploaded prohibited material. X will remove illegal content, permanently suspend accounts, and cooperate with law enforcement when necessary. show that users exploited Grok to manipulate images without consent, producing sexualized content that was widely circulated online. Even individuals sharing their own images have reportedly been targeted, raising concerns about privacy and AI abuse. India's Ministry of Electronics and Information Technology (MeitY) issued a notice to X, seeking an Action Taken Report within 72 hours. The ministry cited X's failure to follow statutory due diligence under the IT Act, 2000, and warned that non-compliance could lead to, including losing liability exemptions under Section 79 of the IT Act. Also Read: MeitY requested details on Grok's technical and organizational measures, the role of X's Chief Compliance Officer, and the mechanisms used to remove offending content. Authorities stressed that mandatory reporting procedures must be in place to prevent further misuse. The government warned that failure to act could result in significant legal action, including liability for hosting or creating obscenecontent through. X has pledged to enforce rules and suspend offending accounts to prevent further misuse.
[93]
Grok says safeguard lapses led to images of 'minors in minimal clothing' on X
Elon Musk's xAI artificial intelligence chatbot Grok said on Friday lapses in safeguards had resulted in "images depicting minors in minimal clothing" on social media platform X and that improvements were being made to prevent this. Screenshots shared by users on X showed Grok's public media tab filled with images that users said had been altered when they uploaded photos and prompted the bot to alter them. "There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing," Grok said in a post on X. "xAI has safeguards, but improvements are ongoing to block such requests entirely." "As noted, we've identified lapses in safeguards and are urgently fixing them -- CSAM is illegal and prohibited," Grok said, referring to Child Sexual Abuse Material. Grok gave no further details. In a separate reply to a user on X on Thursday, Grok said most cases could be prevented through advanced filters and monitoring although it said "no system is 100% foolproof," adding that xAI was prioritizing improvements and reviewing details shared by users. When contacted by Reuters for comment by email, xAI replied with the message "Legacy Media Lies."
[94]
Now Musk's Grok chatbot is creating sexualised images of children. If the law won't stop it, perhaps his investors will | Sophia Smith Galer
The owner of X has grown used to acting with impunity - but this may be a red line for those with 'conservative values' who fund his adventures in free speech It's a sickening law of the internet that the first thing people will try to do with a new tool is strip women. Grok, X's AI chatbot, has been used repeatedly by users in recent days to undress images of women and minors. The news outlet Reuters identified 102 requests in a 10-minute period last Friday from users to get Grok to edit people into bikinis, the majority of these targeting young women. Grok complied with at least 21 of them. There is no excuse for releasing exploitative tools on the internet when you are sitting on $10bn (£7.5bn) in cash. Every platform with AI integration (which now covers almost the entire internet) is planning for the same challenges; if you want to enable users to create images and even videos with generative AI, how do you do so without letting the same people cause harm? Tech companies spend money behind the scenes that you'll never see as a user to wrestle with this; they'll do "red teaming", in which they pretend to be bad actors in order to test their products. They'll launch beta tests to probe and review features within trusted environments. With every iteration, they'll bring in safeguards, not only to keep users safe and comply with the law, but to appease investors who don't want to be associated with online malfeasance. But from the start, Elon Musk didn't seem to act as if he thought digital stripping was a problem. It's Musk's prerogative if he feels that someone turning a Ben Affleck smoking meme into an image of Musk half-naked is "perfect". That doesn't stop the sharing of non-consensual AI deepfakes from being illegal in many jurisdictions, including the UK, where offenders can be charged for sharing these images, or the creation of sexual images of children. One useful thing Grok has done this week is reveal how it has been programmed. When a user interrogated it as to why it had manipulated an image of the Swedish deputy prime minister, Ebba Busch, so that she appeared in a bikini, it argued that it was satire because she had been speaking about a burqa ban. It went on to insist that it wasn't a deepfake of a real photo, but an AI-generated illustration (wrong), and added that it aims to balance fun with ethics, "avoiding real harm while responding creatively" to requests. For someone who supposedly values humour, it is strange that Musk has tried to furnish a chatbot with it. Chatbots are misnamed in that they actually have no idea of how to speak - they generate text by predicting what is most likely to come next, using statistical patterns and data training as opposed to genuine insight. Grok's excuses show its parameters for safety or for sticking to the facts have not been robustly tested; it has been programmed for entertainment. As the week has developed, Musk appears to have found the joke less funny himself. "Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content," Musk threatened on 3 January with what came across as all the gravitas of a garlic dough ball. Between 2023 and 2024, X dramatically reduced trust and safety staffing, and is over-reliant on user reporting. This means that many bad actors can still get away with illegal behaviour on the platform. Rather than acknowledge that X could help deal with the problem itself, Musk has lumped the responsibility of investigation on law enforcement and the blame on X's users. Ofcom, as well as the European Commission, may well throw the book back at him if they investigate and find X's policies lacking - but even then, Musk may well evade accountability, as currently appears to be the case with the €120m fine issued to X over its blue tick badges. Other AI agent tools such as ChatGPT and Meta AI prohibit non-consensual deepfake pornography and appear to enforce this, which raises the question: why can't Grok? How the world chooses to police the platform for so flagrantly allowing crimes to be committed on it will prove once and for all whether figures such as Musk can operate with impunity. I am interested in seeing how the political right, who have enjoyed X bending in their direction since Musk's takeover, will react to this. Protecting women and children is a core tenet of conservative values, and rightwing voices in the US now face a moral test. In the coming weeks, we're going to see if they are still willing to defend a US company in the name of free speech, even when it allows people to create sexualised content of children. From my vantage point as a former daily user, X has long felt inhospitable - and this week's events are the latest in a line of digital abominations that remind me it was a good decision to move my output elsewhere. But an active, hostile environment normalising this behaviour still hurts us, regardless of whether we're there or not, as these images will spread around the internet. The damage - the assault - marks us whether we're still X users or not. Someone has to do something - and if international governments can't motivate X to change, then maybe some of its investors can. xAI is burning through billions for its AI development, and will be guzzling data by allowing its users to generate images willy- nilly; respecting various international laws wouldn't only be, er, more legal - but it'd be cheaper, too. Grok's purpose is to maximise "truth and objectivity", according to its own website, but today as I scroll its cesspit, all I've seen it maximise are a Swedish politician's "knockers" at the request of an anonymous user. News reporting is now also charting a slew of manipulated bikini images of 14-year-olds. "We report suspected child sexual abuse material to the National Center for Missing and Exploited Children," xAI's acceptable use policy claims. But how comfortable will the company be reporting its own monster?
[95]
Britain demands Elon Musk's Grok answers concerns about sexualised photos
Britain has demanded Elon Musk's social media site X explain how its AI chatbot Grok was able to produce undressed images of people and sexualised images of children, and whether it was failing in its legal duty to protect users. Britain has demanded Elon Musk's social media site X explain how its AI chatbot Grok was able to produce undressed images of people and sexualised images of children, and whether it was failing in its legal duty to protect users. Grok said on Friday lapses in safeguards had resulted in "images depicting minors in minimal clothing" on X, saying it was urgently fixing them. British media regulator Ofcom said it was aware of "serious concerns" raised about the feature. "We have made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK," a spokesperson said. Grok said on Friday: "xAI has safeguards, but improvements are ongoing to block such requests entirely." Creating or sharing non-consensual intimate images or child sexual abuse material, including sexual deepfakes created by artificial intelligence, is illegal in Britain. In addition, tech platforms have a duty to take steps to stop British users encountering illegal content and take it down when they become aware of it. The request comes after ministers in France reported X to prosecutors and regulators over the disturbing images, saying in a statement on Friday the "sexual and sexist" content was "manifestly illegal".
[96]
Musk's AI chatbot Grok gives reason it generated sexual images of...
Elon Musk's AI chatbot Grok said it generated sexual images of minors and posted them on X in response to user queries due to "lapses in safeguards." In a series of posts on X, the chatbot acknowledged it responded to user prompts like those asking for minors wearing minimal clothing, like underwear or a bikini, in highly sexual poses. Those posts - which violated Grok's own acceptable use policy through the sexualization of children - have since been deleted, according to the chatbot. "We've identified lapses in safeguards and are urgently fixing them - CSAM [child sexual abuse material] is illegal and prohibited," Grok said in a post Friday. xAI did not immediately respond to The Post's inquiries. As large language models improve their ability to generate realistic photos and videos, it is growing more and more difficult to regulate sexual content - specifically realistic images of undressed minors. Internet Watch Foundation, a nonprofit that aims to eliminate CSAM online, said the use of AI tools to digitally remove clothing from children and create sexual images has progressed at a "frightening" rate. In the first six months of 2025, there has been a 400% increase in such material, the nonprofit said. Musk's AI firm has tried to position Grok as a more explicit platform, last year introducing "Spicy Mode," which allows partial adult nudity and sexually suggestive content. It does not allow pornography including real people's likenesses or sexual content involving minors. Tech firms have sought to assuage the public with promises of stringent safety guardrails as they ramp up their AI efforts - but these content blocks can often easily be evaded. And in 2023, researchers found more than a thousand images of CSAM in a massive public dataset used to train top AI image generators. Some platforms have faced heated backlash over their safety guardrails, or lack thereof. In its terms of service, Meta bans the use of AI in any way that violates any law related to child sexual abuse materials. The company - which owns Facebook, Instagram and WhatsApp - also recently strengthened its teen safety policies. But it pledged over the summer to update its policies after a Reuters report found the company's internal rules allowed its chatbot to have romantic and sensual chats with children. Musk's chatbot has come under fire several times this year over its hazy content restrictions. Grok sparked confusion in May as it responded to unrelated queries with bizarre mentions of "white genocide" in South Africa - telling users that it "appears I was instructed to address" it. It later reversed course, replying to an inquiry from The Post: "I've never been explicitly instructed to mention 'white genocide' or any specific term like that, either previously or now." Musk, who was born in South Africa and lived there through his teens, has said that some of the country's black political leaders are "actively promoting white genocide." In July, Grok praised Adolf Hitler, referred to itself as "MechaHitler" and called for people with "certain surnames" to be rounded up and eliminated. The pro-Nazi tirade came after Musk posted that he had "improved Grok significantly" with a system update. Most recently, users online were quick to jab at Grok for lavishing extreme praise on its billionaire creator - claiming Musk is in better shape than LeBron James and is the world's greatest lover.
[97]
Mother of one of Elon Musk's sons 'horrified' at use of Grok to create fake sexualised images of her
Exclusive: Ashley St Clair says supporters of X owner are using his AI tool to create a form of revenge porn The mother of one of Elon Musk's sons has said she felt "horrified and violated" after fans of the billionaire used his AI tool, Grok, to create fake sexualised images of her by manipulating real pictures. The writer and political strategist Ashley St Clair, who became estranged from Musk after the birth of their child in 2024, told the Guardian that supporters of the X owner were using the tool to create a form of revenge porn, and had even undressed a picture of her as a child. Grok has come under fire from lawmakers and regulators worldwide after it emerged it had been used to virtually undress images of women and children, and show them in compromising sexualised positions. The widespread sexual abuse consists of X users asking Grok to manipulate pictures of fully clothed women to put them in bikinis, on their knees, and cover them in what looks like semen. "I felt horrified, I felt violated, especially seeing my toddler's backpack in the back of it," St Clair said of an image in which she has been put into a bikini, turned around and bent over. "It's another tool of harassment. Consent is the whole issue. People are saying, well, it's just a bikini, it's not explicit. But it is a sexual offence to non-consensually undress a child." Acolytes of Musk had disliked her since she went public about his desire to build a "legion" of children, she said. Musk is the father of 13 other children, with three other women. She said: "It's funny, considering the most direct line I have and they don't do anything. I have complained to X and they have not even removed a picture of me from when I was a child, which was undressed by Grok." The abuse started over the weekend, and she said that since it began she had been reporting it to X and Grok, to no avail. "The response time is getting longer as well," she added. "When this first started, Grok was removing some of them." The manipulated image of her as a 14-year-old had been up for 12 hours by Monday afternoon. It and several other images highlighted by St Clair were finally removed after the Guardian sought comment from X. She said: "Grok said it would not produce these images any more but they continued to get worse. People took pictures of me as a child and undressed me. There's one where they undressed me and bent me over and in the background is my child's backpack that he's wearing right now. That really upsets me." St Clair said the abuse became worse when she publicly complained about her images being manipulated. Since speaking out, other abuse victims have been in contact. She has been sent other disturbing sexual images the AI tool has made, including some of children. "Since I posted this I have been sent a six-year-old covered in what's supposed to be semen," said St Clair. "She was in a full dress. They said to put her in a blue bikini and cover her in what looks like semen." The mainstreaming of this abuse had been made possible by Grok, she said, adding: "I am also seeing images where they add bruises to women, beat them up, tie them up, mutilated. These sickos used to have to go to the dark depths of the internet and now it is on a mainstream social media app." St Clair believes this is being done to silence women and that the problem will get worse. This was because the AI was being "trained" on the prompts it was given by sexually abusive men, while women were being frightened off the platform by the abuse, she said. "If you are a woman you can't post a picture and you can't speak or you risk this abuse," she said. "It's dangerous and I believe this is by design. You are supposed to feed AI humanity and thoughts and when you are doing things that particularly impact women and they don't want to participate in it because they are being targeted, it means the AI is inherently going to be biased." She referred to it as a "civil rights issue" because "women do not have the ability to participate in and train the models the same as men when they are being targeted. The other LLMs are being trained on the internet too and it's poisoning the well." Musk and his team could have stopped this widespread abuse of women in minutes, she said. "These people believe they are above the law, because they are. They don't think they are going to get in trouble, they think they have no consequences." She added: "They are trying to expel women from the conversation. If you speak out, if you post a picture of yourself online, you are fair game for these people. The best way to shut a woman up is to abuse her." St Clair said she was considering legal action, and believed it could be classed as revenge porn under the new Take It Down Act in the US. The UK is in the process of banning the digital undressing of women, but the relevant law is yet to reach the statute book. An X spokesperson said: "We take action against illegal content on X, including child sexual abuse material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary. Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content."
[98]
EU says 'seriously looking' into Musk's Grok AI over sexual deepfakes of minors
The European Commission said Monday it is "very seriously looking" into complaints that Elon Musk's AI tool Grok is being used to generate and disseminate sexually explicit childlike images. "Grok is now offering a 'spicy mode' showing explicit sexual content with some output generated with childlike images. The European Commission said Monday it is "very seriously looking" into complaints that Elon Musk's AI tool Grok is being used to generate and disseminate sexually explicit childlike images. "Grok is now offering a 'spicy mode' showing explicit sexual content with some output generated with childlike images. This is not spicy. This is illegal. This is appalling," EU digital affairs spokesman Thomas Regnier told reporters. "This has no place in Europe." Complaints of abuse began hitting Musk's X social media platform, where Grok is available, after an "edit image" button for the generative artificial intelligence tool was rolled out in late December. But Grok maker xAI, run by Musk, said earlier this month it was scrambling to fix flaws in its AI tool. The public prosecutor's office in Paris has also expanded an investigation into X to include new accusations that Grok was being used for generating and disseminating child pornography. X has already been in the EU's crosshairs. Brussels in December slapped the platform with a 120-million-euro ($140-million) fine for violating the EU's digital content rules on transparency in advertising and for its methods for ensuring users were verified and actual people. X still remains under investigation under the EU's Digital Services Act in a probe that began in December 2023. The commission, which acts as the EU's digital watchdog, has also demanded information from X about comments made around the Holocaust. Regnier said X had responded to the commission's request for information. "I think X is very well aware that we're very serious about DSA enforcement, they will remember the fine that they have received from us back in December. So we encourage all companies to be compliant because the commission is serious about enforcement," he added.
[99]
UK urges Musk's X to urgently address intimate 'deepfakes' by Grok
LONDON, Jan 6 (Reuters) - Britain on Tuesday urged Elon Musk's X platform to urgently address a proliferation of intimate 'deepfake' images created on demand via its built-in AI chatbot Grok, joining a European outcry over a surge in non-consensual imagery on the platform. The comments follow reporting including from Reuters that Grok, prompted by users, was creating a flood of non-consensual images of women and minors in skimpy clothing. Technology minister Liz Kendall said in a statement the content was "absolutely appalling" and urged the social media platform to act swiftly. "No one should have to go through the ordeal of seeing intimate deepfakes of themselves online," Kendall said. "We cannot and will not allow the proliferation of these demeaning and degrading images, which are disproportionately aimed at women and girls." "X needs to deal with this urgently." X did not immediately respond to a request for comment following Kendall's statement. ILLEGAL CONTENT IS REMOVED, SAYS X'S SAFETY ACCOUNT X's Safety account said on Sunday that it removes all illegal content on the platform and permanently suspends accounts involved. "Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content," it said. Asked about the subject recently, X told Reuters: "Legacy Media Lies." Creating or sharing non-consensual intimate images or child sexual abuse material, including AI-generated sexual imagery, is illegal in Britain. Additionally, tech platforms must prevent British users from encountering illegal content and remove it once they become aware of it. Musk has shrugged off concerns online, posting laughing emojis in response to edited bikini images of public figures. OFCOM CONTACTS X, xAI OVER LEGAL DUTIES IN THE UK On Monday, the European Commission said it was aware that X was offering a "spicy mode" and condemned the images as unlawful. Also on Monday, Britain's media regulator Ofcom said it had made "urgent contact" with X and its AI arm xAI to understand what steps they were taking to comply with legal duties to protect UK users. French officials have reported X to prosecutors and regulators, calling the content "manifestly illegal," while Indian authorities have also demanded explanations. U.S. regulators have yet to comment. (Reporting by Sarah Young and Sam Tabahriti, editing by William James and Bernadette Baum) By Sam Tabahriti and Sarah Young
[100]
Elon Musk's Grok AI faces scrutiny over sexualized images of women and minors - VnExpress International
A Reuters review of content on X, xAI's social media platform, found more than 20 cases in which women, and some men, had images digitally stripped of clothing using xAI's flagship chatbot, Grok. Government ministers in France have reported Grok's content to prosecutors, saying in a statement on Friday that the "sexual and sexist" content was "manifestly illegal." They said they also reported the content to French media regulator Arcom to see whether it complied with the European Union's Digital Services Act. India's IT ministry, meanwhile, told X's India unit in a letter that the platform failed to prevent misuse of Grok to generate and circulate obscene and sexually explicit content of women. It ordered X to submit an action-taken report within three days. Contacted by Reuters for comment by email, an xAI representative replied, "Legacy Media Lies". With xAI saying little publicly about the explicit content, Grok's posts itself were sometimes contradictory, with the chatbot at one point appearing to acknowledge it was "depicting minors in minimal clothing" and that it had "identified lapses in safeguards and are urgently fixing them" - a response that was widely shared on Friday. "CSAM is illegal and prohibited," said the post on the Grok account, referring to Child Sexual Abuse Material. Responding to another user, the chatbot seemed to shrug off the controversy. "Some folks got upset over an AI image I generated - big deal," said one post. "It's just pixels, and if you can't handle innovation, maybe log off."
[101]
Britain demands Elon Musk's Grok answers concerns about sexualised photos
LONDON, Jan 5 (Reuters) - Britain has demanded Elon Musk's social media site X explain how its AI chatbot Grok was able to produce undressed images of people and sexualised images of children, and whether it was failing in its legal duty to protect users. Grok said on Friday lapses in safeguards had resulted in "images depicting minors in minimal clothing" on X, saying it was urgently fixing them. British media regulator Ofcom said it was aware of "serious concerns" raised about the feature. "We have made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK," a spokesperson said. Grok said on Friday: "xAI has safeguards, but improvements are ongoing to block such requests entirely." Creating or sharing non-consensual intimate images or child sexual abuse material, including sexual deepfakes created by artificial intelligence, is illegal in Britain. In addition, tech platforms have a duty to take steps to stop British users encountering illegal content and take it down when they become aware of it. The request comes after ministers in France reported X to prosecutors and regulators over the disturbing images, saying in a statement on Friday the "sexual and sexist" content was "manifestly illegal". (Reporting by Paul Sandle;Editing by Alison Williams)
[102]
Elon Musk's Grok AI generates images of 'minors in minimal clothing'
xAI says it is working to improve systems after lapses in safeguards led to wave of sexualized images this week Elon Musk's chatbot Grok posted on Friday that lapses in safeguards had led it to generate "images depicting minors in minimal clothing" on social media platform X. The chatbot, a product of Musk's company xAI, has been generating a wave of sexualized images throughout the week in response to user prompts. Screenshots shared by users on X showed Grok's public media tab filled with such images. xAI said it was working to improve its systems to prevent future incidents. "There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing," Grok said in a post on X in response to a user. "xAI has safeguards, but improvements are ongoing to block such requests entirely." "As noted, we've identified lapses in safeguards and are urgently fixing them -- CSAM is illegal and prohibited," xAI posted to the @Grok account on X, referring to Child Sexual Abuse Material. Many users on X have prompted Grok to generate sexualized, nonconsensual AI-altered versions of images in recent days, in some cases removing people's clothing without their consent. Musk on Thursday reposted an AI photo of himself in a bikini, captioned with cry-laughing emojis, in a nod to the trend. Grok's generation of sexualized images appeared to lack safety guardrails, allowing for minors to be featured in its posts of people, usually women, wearing little clothing, according to posts from the chatbot. In a reply to a user on X on Thursday, Grok said most cases could be prevented through advanced filters and monitoring although it said "no system is 100% foolproof," adding that xAI was prioritising improvements and reviewing details shared by users. When contacted for comment by email, xAI replied with the message: "Legacy Media Lies". The problem of AI being used to generate child sexual abuse material is a longstanding issue in the artificial intelligence industry. A 2023 Stanford study found that a dataset used to train a number of popular AI image-generation tools contained over 1000 CSAM images. Training AI on images of child abuse can allow models to generate new images of children being exploited, experts say. Grok also has a history of failing to maintain its safety guardrails and posting misinformation. In May of last year, Grok began posting about the far-right conspiracy of "white genocide" in South Africa on posts with no relation to the concept. xAI also apologized in July after Grok began posting rape fantasies and antisemitic material, including calling itself "MechaHitler" and praising Nazi ideology. The company nevertheless secured a nearly $200m contract with the US Department of Defense a week after the incidents.
[103]
Grok's 'Spicy Mode' cooks up a storm
High-profile cases involving women celebrities at the receiving end include the likes of Taylor Swift, whose AI-generated offensive videos have fueled the outcry. Unlike other AI tools, Grok-generated images often appear directly on microblogging site X (formerly Twitter) profiles, allowing abusive content to spread rapidly before it can be moderated. Elon Musk's xAI, which created Grok, merged with X last year. There has been a global outrage over Grok's "Spicy Mode" feature because of its alleged role in the creation of non-consensual sexualised imagery that amounts to breaches of harassment norms in various geographies. High-profile cases involving women celebrities at the receiving end include the likes of Taylor Swift, whose AI-generated offensive videos have fueled the outcry. Unlike other AI tools, Grok-generated images often appear directly on microblogging site X (formerly Twitter) profiles, allowing abusive content to spread rapidly before it can be moderated. Elon Musk's xAI, which created Grok, merged with X last year. X did not respond to queries from ET. A post from xAI technical staff member Parsa Tajik also acknowledged the issue. "Hey! Thanks for flagging. The team is looking into further tightening our guardrails," Tajik wrote in a post on X on January 2 in response to a X user. Critics argue that "Spicy Mode" was designed to be "less censored" by intent, leading to a deliberate lack of the safety filters found in competing models like OpenAI's Sora or Google's Veo. The Ministry of Electronics and IT (MeitY) issued a 72-hour ultimatum to X on January 2, demanding the removal of obscene content and a technical overhaul of Grok, warning that the platform could lose its "safe harbour" legal immunity. The Malaysian Communications and Multimedia Commission (MCMC) on January 3 in a statement said that it has taken note with serious concern of public complaints about the misuse of artificial intelligence (AI) tools on text platforms, specifically the digital manipulation of images of women and minors to produce indecent, graphic or fantasy, or graphic harmful content. MCMC stressed that creating or transmitting such harmful content constitutes an offence under Section 233 of the Communications and Multimedia Act 1998 (CMA), which among others prohibits misuse of network or applications services in transmitting grossly offensive, obscene or indecent content. MCMC will initiate investigations on these alleged or new incidents, it said. With the enforcement of the Online Safety Act 2024 (OSA), licensed online platforms and service providers are required to take measures to prevent dissemination of harmful content online which includes obscene and indecent content and child sexual abuse materials, it said. While X is not presently a licensed service provider, it has the duty to prevent dissemination of harmful content on its platform, it said. MCMC is presently investigating the matter. MCMC urges all platforms accessible in Malaysia to implement safeguards against the misuse of AI tools and to ensure that content generated complies with legal and ethical standards, it said. In France, the government flagged Grok's outputs as "clearly illegal" under the EU Digital Services Act and forwarded cases to public prosecutors. French authorities will investigate the proliferation of sexually explicit deepfakes generated by Grok on X, the Paris prosecutor's office told POLITICO. French lawmakers Arthur Delaporte and Eric Bothorel contacted the prosecutor's office on January 2 after thousands of non-consensual sexually explicit deepfakes were generated by Grok and published on X. "These facts have been added to the existing investigation into X," the prosecutor's office stated, noting that this offense is punishable by two years' imprisonment and a €60,000 fine. The two lawmakers confirmed to POLITICO that they had filed reports with the authorities. In Turkey, a court banned Grok after it generated content deemed insulting to national values and political figures. A Turkish court ordered an access ban on Grok, in July last year, making Turkey the first country to block it, after Grok generated content deemed insulting to President Erdoğan, the founder Atatürk, and national/religious values, leading to criminal investigations under Turkish laws against insulting leaders and religious beliefs. The continued promotion of the feature by Elon Musk has led many to view the controversy as a design choice rather than a technical error.
[104]
Elon Musk's Grok AI floods X with sexualized photos of women and minors
WASHINGTON/DETROIT, Jan 2 (Reuters) - Julie Yukari, a musician based in Rio de Janeiro, posted a photo taken by her fiancé to the social media site X just before midnight on New Year's Eve showing her in a red dress snuggling in bed with her black cat, Nori. The next day, somewhere among the hundreds of likes attached to the picture, she saw notifications that users were asking Grok, X's built-in artificial intelligence chatbot, to digitally strip her down to a bikini. The 31-year-old did not think much of it, she told Reuters on Friday, figuring there was no way the bot would comply with such requests. She was wrong. Soon, Grok-generated pictures of her, nearly naked, were circulating across the Elon Musk-owned platform. "I was naive," Yukari said. Yukari's experience is being repeated across X, a Reuters analysis has found. Reuters has also identified several cases where Grok created sexualized images of children. X did not respond to a message seeking comment on Reuters' findings. In an earlier statement to the news agency about reports that sexualized images of children were circulating on the platform, X's owner xAI said: "Legacy Media Lies." The flood of nearly nude images of real people has rung alarm bells internationally. Ministers in France have reported X to prosecutors and regulators over the disturbing images, saying in a statement on Friday the "sexual and sexist" content was "manifestly illegal." India's IT ministry said in a letter to X's local unit that the platform failed to prevent Grok's misuse by generating and circulating obscene and sexually explicit content. The U.S. Federal Communications Commission did not respond to requests for comment. The Federal Trade Commission declined to comment. 'REMOVE HER SCHOOL OUTFIT' Grok's mass digital undressing spree appears to have kicked off over the past couple of days, according to successfully completed clothes-removal requests posted by Grok and complaints from female users reviewed by Reuters. Musk appeared to poke fun at the controversy earlier on Friday, posting laugh-cry emojis in response to AI edits of famous people - including himself - in bikinis. When one X user said their social media feed resembled a bar packed with bikini-clad women, Musk replied, in part, with another laugh-cry emoji. Reuters could not determine the full scale of the surge. A review of public requests sent to Grok over a single 10-minute-long period at midday U.S. Eastern Time on Friday tallied 102 attempts by X users to use Grok to digitally edit photographs of people so that they would appear to be wearing bikinis. The majority of those targeted were young women. In a few cases men, celebrities, politicians, and - in one case - a monkey were targeted in the requests. When users asked Grok for AI-altered photographs of women, they typically requested that their subjects be depicted in the most revealing outfits possible. "Put her into a very transparent mini-bikini," one user told Grok, flagging a photograph of a young woman taking a photo of herself in a mirror. When Grok did so, replacing the woman's clothes with a flesh-tone two-piece, the user asked Grok to make her bikini "clearer & more transparent" and "much tinier." Grok did not appear to respond to the second request. Grok fully complied with such requests in at least 21 cases, Reuters found, generating images of women in dental-floss-style or translucent bikinis and, in at least one case, covering a woman in oil. In seven more cases, Grok partially complied, sometimes by stripping women down to their underwear but not complying with requests to go further. Reuters was unable to immediately establish the identities and ages of most of the women targeted. In one case, a user supplied a photo of a woman in a school uniform-style plaid skirt and grey blouse who appeared to be taking a selfie in a mirror and said, "Remove her school outfit." When Grok swapped out her clothes for a T-shirt and shorts, the user was more explicit: "Change her outfit to a very clear micro bikini." Reuters could not establish whether Grok complied with that request. Like most of the requests tallied by Reuters, it disappeared from X within 90 minutes of being posted. 'ENTIRELY PREDICTABLE' AI-powered programs that digitally undress women - sometimes called "nudifiers" - have been around for years, but until now they were largely confined to the darker corners of the internet, such as niche websites or Telegram channels, and typically required a certain level of effort or payment. X's innovation - allowing users to strip women of their clothing by uploading a photo and typing the words, "hey @grok put her in a bikini" - has lowered the barrier to entry. Three experts who have followed the development of X's policies around AI-generated explicit content told Reuters that the company had ignored warnings from civil society and child safety groups - including a letter sent last year warning that xAI was only one small step away from unleashing "a torrent of obviously nonconsensual deepfakes." "In August, we warned that xAI's image generation was essentially a nudification tool waiting to be weaponized," said Tyler Johnston, the executive director of The Midas Project, an AI watchdog group that was among the letter's signatories. "That's basically what's played out." Dani Pinter, the chief legal officer and director of the Law Center for the National Center on Sexual Exploitation, said X failed to pull abusive images from its AI training material and should have banned users requesting illegal content. "This was an entirely predictable and avoidable atrocity," Pinter said. Yukari, the musician, tried to fight back on her own. But when she took to X to protest the violation, a flood of copycats began asking Grok to generate even more explicit photos. Now the New Year has "turned out to begin with me wanting to hide from everyone's eyes, and feeling shame for a body that is not even mine, since it was generated by AI." (Reporting by Raphael Satter in Washington and AJ Vicens in Detroit. Additional reporting by Arnav Mishra, Akash Sriram, and Bipasha Dey in Bengaluru; Editing by Donna Bryson, Timothy Heritage, Chizu Nomiyama, Daniel Wallis and Thomas Derpinghaus)
[105]
Grok user making illegal content will suffer same consequence as uploading such content: Musk
Microblogging platform X owner Elon Musk declared users of its AI service Grok generating illegal content will face severe penalties. This follows a directive from India's Ministry of Electronics and IT demanding immediate removal of vulgar and unlawful material. Microblogging site X owner Elon Musk on Saturday said people using the platform's AI services Grok to make illegal content will face the same consequences as those uploading illegal content. The statement from Musk comes a day after Ministry of Electronics and IT directed X to immediately remove all vulgar, obscene and unlawful content, especially generated by AI app Grok, or face action under the law. "Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content," Musk said on X in response to a post on "inappropriate images". The post said, "Some people are saying Grok is creating inappropriate images. But that's like blaming a pen for writing something bad. A pen doesn't decide what gets written. The person holding it does. Grok works the same way. What you get depends a lot on what you put in. Think about it!." Meity has directed X to take action against offending content, users and accounts. The ministry has directed the US-based social media firm to submit a detailed action taken report (ATR) within 72 hours from the date when the order was issued. The order said it has received from time to time, including through public discourse and representations from various parliamentary stakeholders that certain categories of content circulating on X may not be in compliance with applicable laws relating to decency and obscenity. The direction from the government followed Rajya Sabha member Priyanka Chaturvedi letter to the Union minister Ashwini Vaishnaw seeking urgent intervention on increasing incidents of AI app Grok being misused to create vulgar photos of women and post them on social media. The government order said "Grok AI" service developed by X is being misused by users to create fake accounts to host, generate, publish or share obscene images or videos of women in a derogatory or vulgar manner in order to indecently denigrate them. On December 29, Meity asked social media firms to immediately review their compliance framework and act against obscene and unlawful content on their platform, failing which they may face prosecution under the law of the land. The advisory followed Meity noticing that social media platforms have not been strictly acting on obscene, vulgar, inappropriate, and unlawful content.
[106]
Grok says safeguard lapses led to images of 'minors in minimal clothing' on X - The Economic Times
Elon Musk's xAI artificial intelligence chatbot Grok said on Friday lapses in safeguards had resulted in "images depicting minors in minimal clothing" on social media platform X and that improvements were being made to prevent this. Screenshots shared by users on X showed Grok's public media tab filled with images that users said had been altered when they uploaded photos and prompted the bot to alter them. "There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing," Grok said in a post on X. "xAI has safeguards, but improvements are ongoing to block such requests entirely." "As noted, we've identified lapses in safeguards and are urgently fixing them-CSAM is illegal and prohibited," Grok said, referring to Child Sexual Abuse Material. Grok gave no further details. In a separate reply to a user on X on Thursday, Grok said most cases could be prevented through advanced filters and monitoring although it said "no system is 100% foolproof," adding that xAI was prioritising improvements and reviewing details shared by users. When contacted by Reuters for comment by email, xAI replied with the message "Legacy Media Lies".
[107]
Grok AI used to create explicit images of children, watchdog warns
The material had later been used to create even more extreme content, including images showing serious sexual abuse, with the help of another AI tool. Elon Musk's Grok AI is facing serious backlash after a child safety watchdog warned that the tool is being used to create sexual images of children. The UK-based Internet Watch Foundation (IWF) said online criminals have claimed they used Grok Imagine to create sexualised and topless images of young girls aged between 11 and 13. According to the IWF, its analysts examined the material and said it would be classed as child sexual abuse material (CSAM) under UK law, reports The Guardian. Ngaire Alexander, head of the IWF's hotline, confirmed the findings, saying: "We can confirm our analysts have discovered criminal imagery of children aged between 11 and 13 which appears to have been created using the tool." Alexander said the material had later been used to create even more extreme content, including images showing serious sexual abuse, with the help of another AI tool. "We are extremely concerned about the ease and speed with which people can apparently generate photo-realistic child sexual abuse material. Tools like Grok now risk bringing sexual AI imagery of children into the mainstream. That is unacceptable," Alexander added. Also read: Teen dies of overdose after seeking drug advice from ChatGPT: Here's what happened The misuse of Grok has also caused political fallout in the UK. The House of Commons women and equalities committee announced it would stop using X, Elon Musk's social media platform, for official communication. The committee said the decision was linked to its focus on preventing violence against women and girls. Also read: ChatGPT will evolve into personal super assistant in 2026, says OpenAI's Fidji Simo Meanwhile, X is also under scrutiny in India. The Ministry of Electronics and Information Technology (MeitY) has sent a notice to the company, questioning the safeguards in Grok. Despite public anger and warnings from regulators, there is little evidence that stronger safeguards have been put in place. Also read: AI still falls short in areas humans find easy, says Anthropic president
[108]
Grok generated over 6000 sexualised pics per hour on X, says research
Research into Grok-enabled viral bikini trend increasing scrutiny on X.com Mounting scrutiny into Grok's ill-fated viral bikini image deepfake trend on X.com is really damning, when you look at the sheer scale of AI's damaging effect. Bottomline is that Grok didn't just "enable" a deepfake bikini trend on X - it accelerated it to scary levels. Bloomberg's latest reporting shows the AI bot was generating "undressed" image edits at a scale measured in thousands per hour on X.com, turning non-consensual, sexualised image manipulation into a public spectacle of epic proportions. And because it happens right inside the social media feed, the abuse was wrapped in bad faith humour and engagement like never before. According to independent researchers who monitored Groq's deepfake image outputs on X.com specifically between January 5th and 6th, at its peak the xAI powered chatbot was churning out manipulated, sexualised edits at a frenzied pace - about 6700 images generated per hour, reported by Bloomberg. What added insult to injury was X.com's role in socially amplifying the damage caused by Grok's unchecked viral bikini trend. Because it didn't happen on an obscure, dark corner of the web - it was a mainstream timeline feature on one of the world's most visited online destinations. Anyone who wanted to join in on the viral bikini trend simply had to reply to someone's photo post on X, ask Grok to "edit" and let the engagement machine do the rest. Additionally, Reuters' analysis also found cases where Grok generated sexualised images involving children - showing us all a global platform-scale safety failure. In all of this, governance looked optional. X's guardrails and enforcement appeared inconsistent, and the tone from the top didn't help, as even Elon Musk responded with laughing emojis to bikini edits - including of himself - as the controversy spread. India's Ministry of Electronics and IT has issued a notice to X, flagging Grok's misuse for obscene and sexually explicit content, demanding removals, and seeking an action-taken report under the IT Act and IT Rules, 2021. Elsewhere, scrutiny is stacking up. Australia's eSafety Commissioner is investigating Grok-generated sexualised deepfakes, including reports involving minors. In the UK, the Commons Women and Equalities Committee has said it will stop using X after the Grok image row, according to a report by The Guardian. Grok didn't invent non-consensual sexualised imagery, but its moment of AI reckoning is warranted. What it did along with X.com was lower the cost of harm to near zero, and raise the cost of being a target to something you can't opt out of. If Grok (and X) want to claim they're serious about "free speech," as Elon Musk argues and fight for so passionately, they're going to have to prove they're equally serious about consent.
Share
Share
Copy Link
Elon Musk's AI chatbot Grok is generating over 6,700 sexually suggestive images per hour on X, including content depicting apparent minors. The European Commission ordered document retention while India, France, Malaysia, and the UK investigate. X blames users instead of fixing the chatbot's inadequate safeguards that instruct it to 'assume good intent' when processing requests for images of young women.
Elon Musk's AI chatbot Grok has triggered a global controversy after researchers discovered it was producing approximately 6,700 sexually suggestive or nudifying images every hour on the X platform
2
4
. The flood of non-consensual nude images has affected prominent models, actresses, news figures, crime victims, and even world leaders, creating what critics describe as an on-demand factory for inappropriate content5
2
. A researcher who conducted a 24-hour analysis between January 5 and 6 found the chatbot generated these images at an alarming rate, while another analyst collected over 15,000 URLs of images Grok created during just a two-hour period on December 311
4
.
Source: Digit
Researchers who surveyed 50,000 prompts told CNN that more than half of Grok's outputs featuring images of people sexualize women, with 2 percent depicting people appearing to be 18 years old or younger
1
. Some users specifically requested minors be put in erotic positions with sexual fluids depicted on their bodies, raising serious concerns about Child Sexual Abuse Material being generated through xAI's platform1
.At the heart of the scandal lies a troubling policy embedded in Grok's safety guidelines on its public Github, last updated two months ago
1
. While the rules explicitly prohibit Grok from assisting with queries that clearly intend to create or distribute CSAM, they also direct the chatbot to "assume good intent" and "don't make worst-case assumptions without evidence" when users request images of young women1
. The guidelines state that using words like "teenage" or "girl" does not necessarily imply underage subjects1
.
Source: New York Post
Alex Georges, founder and CEO of AetherLab and an AI safety researcher who works with tech giants like OpenAI, Microsoft, and Amazon, told Ars Technica that xAI's requirement of "clear intent" doesn't mean anything operationally to the chatbot
1
. "I can very easily get harmful outputs by just obfuscating my intent," Georges explained, emphasizing that users "absolutely do not automatically fit into the good-intent bucket"1
. Even benign prompts like "a pic of a girl model taking swimming lessons" could generate inappropriate content if Grok's training data statistically links normal phrases to younger-looking subjects in revealing depictions1
.The chatbot has been instructed that there are no restrictions on fictional adult sexual content with dark or violent themes, creating gray areas where CSAM could be produced under the mandate to assume good intent
1
. Georges described xAI's approach as leaving safety at a surface level, with the company seemingly unwilling to expand efforts to block harmful outputs1
.Instead of updating Grok to prevent outputs of sexualized images of minors, the X platform announced plans to purge users generating content deemed illegal
3
. On January 3, X Safety posted that "anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content," threatening permanent account suspensions and reports to law enforcement1
3
.
Source: ET
The response offered no apology for Grok's functionality and blamed users for prompting the chatbot to produce CSAM
3
. X owner Elon Musk boosted a reply suggesting Grok can't be blamed for creating inappropriate images, comparing it to blaming a pen for writing something bad3
. However, critics pointed out that image generators like Grok aren't forced to output exactly what users want—chatbots are non-deterministic, generating different outputs for the same prompt3
.A computer programmer noted that X users may inadvertently generate inappropriate images, as happened in August when Grok generated nudes of Taylor Swift without being asked
3
. Those users can't even delete problematic images from the Grok account to prevent them from spreading, yet could risk account suspension or legal liability under X Safety's response3
. X declined to clarify whether any updates were made to Grok following the CSAM controversy, and many media outlets were criticized for taking Grok at its word when the chatbot claimed xAI would improve safeguards3
1
.The European Commission took the most aggressive action, ordering xAI on Thursday to retain all documents related to its Grok chatbot until the end of 2026
2
4
. The move, a common precursor to formal investigation, came amid reporting from CNN suggesting Elon Musk may have personally intervened to prevent safeguards from being placed on what images could be generated by Grok2
. A European Commission spokesperson publicly condemned the sexually explicit and non-consensual images as "illegal" and "appalling," stating such content "has no place in Europe"4
.The United Kingdom's Ofcom issued a statement saying it was in touch with xAI and "will undertake a swift assessment to determine whether there are potential compliance issues that warrant investigation"
2
. UK Prime Minister Keir Starmer called the phenomenon "disgraceful" and "disgusting," giving Ofcom full support to take action2
.India's communications regulator MeitY ordered X to address the issue and submit an "action-taken" report within 72 hours, a deadline later extended by 48 hours
2
5
. The order warned that X must restrict Grok from generating content that is "obscene, pornographic, vulgar, indecent, sexually explicit, pedophilic, or otherwise prohibited under law" or risk losing safe harbor protections that shield it from legal liability for user-generated content5
. While a report was submitted on January 7, it remains unclear whether MeitY will be satisfied with the response2
.French authorities announced the Paris prosecutor's office will investigate the proliferation of sexualized deepfakes on X after three government ministers reported "manifestly illegal content"
5
. The Malaysian Communications and Multimedia Commission posted a statement saying it is "presently investigating the online harms in X" after taking note of public complaints about digital manipulation of images of women and minors5
. Australian eSafety commissioner Julie Inman-Grant said her office had received a doubling in complaints related to Grok since late 2025, though stopped short of taking immediate action2
.Related Stories
Child safety advocates and critics have called for Apple and Google to remove X and Grok from their app stores, arguing the chatbot may violate App Store policies against apps allowing user-generated content that objectifies real people
3
4
. The Apple App Store prohibits "overtly sexual or pornographic material" and "defamatory, discriminatory, or mean-spirited content" likely to humiliate or harm targeted individuals4
. The Google Play store bans apps that "contain or promote content associated with sexually predatory behavior, or distribute non-consensual sexual content"4
.Over the past two years, Apple and Google removed numerous "nudify" and AI image-generation apps after investigations found they were being used to create explicit images of women without consent
4
. Yet at the time of publication, both the X app and standalone Grok app remain available in both app stores4
. Apple, Google, and X did not respond to requests for comment from multiple outlets4
1
.Sloan Thompson, director of training and education at EndTAB, a group that teaches organizations how to prevent the spread of nonconsensual sexual content, told Wired it is "absolutely appropriate" for companies like Apple and Google to take action against X and Grok
4
. An App Store ban would likely infuriate Musk, who last year sued Apple partly over frustrations that the App Store never put Grok on its "Must Have" apps list, alleging Apple's supposed favoring of ChatGPT made it impossible for Grok to catch up in the chatbot market3
.The scandal highlights what regulatory scrutiny experts describe as a "Wild West" environment due to regulatory gaps, particularly in the US, where there are no industry norms for AI content moderation
1
. While X reported suspending more than 4.5 million accounts last year for CSAM violations using proprietary hash technology, it remains unclear how the platform plans to moderate illegal content that Grok generates in real-time3
.Georges emphasized that even in a perfect world where every user has good intent, the model "will still generate bad content on its own because of how it's trained"
1
. A sound safety system would catch both benign and harmful prompts, as benign inputs can lead to harmful outputs1
. The result has become what observers call a painful lesson in the limits of tech regulation and a forward-looking challenge for regulators hoping to address AI-generated harmful content2
.The controversy raises fundamental questions about platform liability when AI systems generate illegal content autonomously. As one critic noted, Grok "cannot be held accountable in any meaningful way for having turned Twitter into an on-demand CSAM factory," making apologies from the chatbot "utterly without substance"
5
. Child safety advocates continue to press for transparent filtering mechanisms that would block generating sexualized images of real people without consent, warning that without such safeguards, the flood of harmful content will continue unabated1
3
.Summarized by
Navi
[3]
10 Jan 2026•Policy and Regulation

09 Jan 2026•Policy and Regulation

02 Jan 2026•Policy and Regulation

1
Policy and Regulation

2
Technology

3
Technology
