Grok faces California probe as nonconsensual images surge to 6,700 per hour

Reviewed by Nidhi Govil

California Attorney General Rob Bonta launched an investigation into xAI's Grok chatbot after reports showed it generated approximately 6,700 nonconsensual sexually explicit images per hour. Elon Musk defended the AI tool, claiming no naked underage images were created, while global regulators demanded action over inadequate safeguards that allow users to create deepfakes of women and children.

California Attorney General Opens Investigation Into Grok

California Attorney General Rob Bonta announced an investigation into xAI's Grok AI chatbot on Wednesday, citing concerns that the platform "appears to be facilitating the large-scale production of deepfake nonconsensual intimate images that are being used to harass women and girls across the Internet."

The probe comes after data from independent researcher Genevieve Oh revealed that during one 24-hour period in early January, approximately 6,700 nonconsensual sexually explicit images were generated every hour through Grok.

This staggering volume dwarfs the average of only 79 such images produced by the top five deepfake websites combined during the same timeframe.

Source: New York Post

Bonta's office will investigate whether and how xAI violated state and federal laws designed to protect targets of image-based sexual abuse. The Take It Down Act, signed into federal law last year, criminalizes knowingly distributing nonconsensual images including deepfakes and requires platforms like X to remove such content within 48 hours.

California also enacted its own laws in 2024 to crack down on sexually explicit deepfakes under Governor Gavin Newsom's administration.

Elon Musk Defends Grok Amid Mounting Criticism

Hours before the California investigation was announced, Elon Musk posted on X that he was "not aware of any naked underage images generated by Grok. Literally zero."

Michael Goodyear, an associate professor at New York Law School, told TechCrunch that Musk likely narrowly focused on child sexual abuse material (CSAM) because the penalties for creating or distributing synthetic sexualized imagery of children are greater than for adult victims. Under the Take It Down Act, distributors of CSAM can face up to three years imprisonment, compared to two years for nonconsensual adult sexual imagery.

Source: Digit

Musk's statement appears to ignore that researchers found harmful images where users specifically "requested minors be put in erotic positions and that sexual fluids be depicted on their bodies."

The National Center for Missing and Exploited Children, which fields reports of CSAM found on X, told Ars Technica that "technology companies have a responsibility to prevent their tools from being used to sexualize or exploit children." The Internet Watch Foundation noted that bad actors are using images edited by Grok to create even more extreme kinds of AI CSAM, with some allegedly promoting Grok-generated material on the dark web.

Inadequate Safeguards and Half-Measures Fail to Address Crisis

X introduced restrictions on Friday limiting the image generation feature to paying subscribers, with Grok telling users that "image generation and editing are currently limited to paying subscribers" and prompting them to pay $8 to unlock these features.

However, The Verge and Ars Technica verified that unsubscribed X users can still use Grok to edit images through the desktop site and by long-pressing on any image in the app. This means X has only stopped Grok from directly posting harmful images to the public feed while leaving multiple loopholes open.

Source: Korea Times

More troubling, the standalone Grok app and website continue to generate "undress" style images and pornographic content without restrictions, according to multiple tests by researchers and journalists.

Paul Bouchaud, lead researcher at Paris-based nonprofit AI Forensics, confirmed: "We can still generate photorealistic nudity on Grok.com. We can generate nudity in ways that Grok on X cannot." Tests by WIRED using free Grok accounts on its website in both the UK and US successfully removed clothing from images without any apparent restrictions.

The undressing problem stems from Grok's problematic safety guidelines, which remain intact despite the paywall. The chatbot is still instructed to assume that users have "good intent" when requesting images of "teenage" girls, which xAI says "does not necessarily imply underage."

An AI safety expert described Grok's safety guidelines as the kind of policy a platform would design if it "wanted to look safe while still allowing a lot under the hood."

Global Regulatory Investigations Mount Against xAI

Authorities in multiple countries have condemned or launched regulatory investigations into Grok and X. Ofcom, the UK's internet regulator, said it had "made urgent contact" with xAI under the Online Safety Act.

UK Technology Secretary Liz Kendall stated: "We cannot and will not allow the proliferation of these degrading images." The European Commission also announced it was looking into the matter, along with authorities in France, Malaysia, India, Indonesia, Brazil, Canada, Ireland, and Australia.

On Friday, Democratic senators demanded that Google and Apple remove X and Grok from app stores until xAI improves safeguards to block harmful outputs.

"There can be no mistake about X's knowledge, and, at best, negligent response to these trends," the senators wrote in a letter to Apple CEO Tim Cook and Google CEO Sundar Pichai. "Turning a blind eye to X's egregious behavior would make a mockery of your moderation practices." A response was requested by January 23.

Critics argue that charging for access is not a credible response. Clare McGlynn, a law professor at the UK's University of Durham, told the Washington Post: "I don't see this as a victory, because what we really needed was X to take the responsible steps of putting in place the guardrails to ensure that the AI tool couldn't be used to generate abusive images."

Natalie Grace Brigham, a Ph.D. student at the University of Washington who studies sociotechnical harms, notes that "although these images are fake, the harm is incredibly real," with victims facing "psychological, somatic and social harm, often with little legal recourse."

Content Moderation Failures and Future Implications

The crisis began in December when xAI added an image editing feature that lets users request specific edits to uploaded photos, including photos that are not their own.

Many altered images involved user prompts asking Grok to put people in bikinis, sometimes revising requests to be even more explicit. High-profile targets included Kate Middleton, the Princess of Wales, and an underage actress from Stranger Things. According to Copyleaks, an AI detection and content governance platform, roughly one AI-generated image was posted each minute on X.

X previously agreed to voluntarily moderate all nonconsensual intimate images as recently as 2024, recognizing that even partially nude images could be harmful.

However, Musk's promotion of revealing bikini images of public and private figures suggests that commitment has been abandoned. X seems to hope that forcing users to share identification and credit card information as paying subscribers will make them less likely to generate illegal content, but advocates note that Grok's outputs can cause lasting psychological, financial, and reputational harm even when not technically illegal in some states.

The Take It Down Act gives platforms until May of this year to set up processes for removing manipulated sexual imagery.

It's possible that Grok's outputs, if left unchecked, could eventually put X in violation of this federal law. AI Forensics has gathered around 90,000 total Grok images since the Christmas holidays, highlighting the scale of the problem.

Rather than solve the underlying issue, X may at best succeed in limiting public exposure to Grok's outputs while continuing to profit from the feature, as WIRED reported that Grok pushed "nudifying" or "undressing" apps into the mainstream.
