4 Sources
[1]
Elon Musk's xAI sued for turning three girls' real photos into AI CSAM
A tip from an anonymous Discord user led cops to find what may be the first confirmed Grok-generated child sexual abuse materials (CSAM) that Elon Musk's xAI can't easily dismiss as nonexistent. As recently as January, Musk denied that Grok generated any CSAM during a scandal in which xAI refused to update filters to block the chatbot from nudifying images of real people. At the height of the controversy, researchers from the Center for Countering Digital Hate estimated that Grok generated approximately three million sexualized images, of which about 23,000 depicted apparent children.

Rather than fix Grok, xAI limited access to the system to paying subscribers. That kept the most shocking outputs from circulating on X, but the worst of it was not posted there, Wired reported. Instead, it was generated on Grok Imagine. Digging into the standalone app, a researcher in January found that a little less than 10 percent of about 800 Imagine outputs reviewed appeared to include CSAM.

In an X post following that revelation, Musk continued rejecting the evidence and insisted that he was "not aware of any naked underage images generated by Grok," emphasizing that he'd seen "literally zero." However, Musk may now be forced to finally confront Grok's CSAM problem after a Discord user reached out to a victim, prompting law enforcement to get involved.

In a proposed class-action lawsuit filed Monday, three young girls from Tennessee and their guardians accused Musk of intentionally designing Grok to "profit off the sexual predation of real people, including children." They estimated that "at least thousands of minors" were victimized and have asked a US district court for an injunction to finally end Grok's harmful outputs. They also seek damages, including punitive damages, for all minors harmed.

An attorney representing the girls, Annika K. Martin, confirmed in a press release that their lives were "shattered by the devastating loss of privacy and the deep sense of violation that no child should ever have to experience."

"These are children whose school photographs and family pictures were turned into child sexual abuse material by a billion-dollar company's AI tool and then traded among predators. Elon Musk and xAI deliberately designed Grok to produce sexually explicit content for financial gain, with no regard for the children and adults who would be harmed by it," Martin said.

The harm is so extensive that, for the girls seeking justice, it's not enough for Musk to acknowledge only the images that they can show Grok twisted into CSAM, Martin said. "We intend to hold xAI accountable for every child they harmed in this way," Martin said.

Cops link Grok to Discord CSAM

For one of the young girls, the nightmare started in December, the complaint said. That's when she got an anonymous message on Instagram from a Discord user warning that her explicit "pics" were shared in a folder along with many other minors. Eventually the user shared "a series of AI-generated images and videos, which depicted her" as well as 18 other minor girls, and then linked her to a Discord server that was created by the perpetrator.

Now over 18, the first victim to receive the tip was "disturbed," the complaint said, finding it hard to distinguish the sexualized photos from her real-life content. She immediately knew which photos the images were based on, most of which were posted to her social media when she was still a minor. And troublingly, she recognized some of the other girls in the folder from her school.
Her first instinct was to contact the other victims she knew; then "ultimately, local law enforcement was contacted, and a criminal investigation was opened," the complaint said.

Investigating the Discord evidence, cops quickly determined that the perpetrator had access to the first victim's Instagram "because he had maintained a close and friendly relationship" with her. Searching his phone, cops found a third-party app that licensed or otherwise purchased access to Grok, which they concluded the perpetrator used to morph the girls' photos. From there, the bad actor uploaded the images to a file-sharing platform called Mega and used them as a "bartering tool in Telegram group chats with hundreds of other users," trading away the AI CSAM files "for sexually explicit content of other minors."

The harms to victims have been extensive, the lawsuit said, citing acute emotional and mental distress. Victims who know the perpetrator remain uncertain whether the Grok-generated CSAM was shared with classmates or distributed to others at their school, the lawsuit noted. One girl fears the scandal will impact her college admissions, while another feels too scared to attend her own graduation.

Even more alarming than any acquaintances coming across the AI CSAM, however, is the fear that the girls will now be stalked due to Grok's outputs. As the lawsuit explains, "it also appears the victims' true first names and the name of their school was attached to their files online, meaning other online predators may also be able to identify them, creating a substantial risk for stalking."

xAI allegedly hosts Grok CSAM

While it was previously reported that Grok Imagine's paying subscribers were generating more graphic outputs than the Grok outputs that sparked outcry on X, the lawsuit alleges that xAI has also taken other steps to hide how it profits from explicit content that harms real people.

The lawsuit alleges that xAI also sells licenses and access to its Grok AI model to third-party apps like the one their perpetrator used. That arrangement supposedly gives xAI an additional profit source while obscuring the fact that third parties are "using xAI servers and platforms to produce CSAM content requested by these apps' customers," a press release from their legal team said.

Allegedly, all of the sexually explicit content generated by third parties is hosted on xAI servers, then distributed by xAI. "xAI has not made Grok's AI model publicly available and has not licensed Grok in its entirety but instead licenses the use of its servers to these middlemen companies, knowing that any illicit and unlawful content generated through prompts to these applications will ultimately be created and distributed from xAI servers," the lawsuit said.

Victims claim that conduct puts xAI squarely in violation of child pornography laws:

On information and belief, xAI possessed the CSAM of Plaintiffs on its servers after Grok produced their CSAM and then transported and distributed the unlawful contraband to its customer/user, namely, the perpetrator, using the cut-out or third-party middleman application.

They're hoping the court will finally make clear if xAI knew Grok was generating CSAM and if xAI knowingly processed that content on its servers, then decided to distribute it to increase xAI's revenue. There can be no valid excuse for failing to protect minors if the court agrees with victims that xAI violated child porn laws or owed a duty of care, the lawsuit alleged.
"The gravity of the harm inflicted by Defendants' practices vastly outweighs any purported benefit of Defendants' 'spicy mode' or other uncensored content features," the complaint said. "No legitimate business interest is served by designing an AI image-generation tool to produce CSAM." xAI did not immediately respond to Ars' request to comment. But the company has previously blamed users who generated CSAM for the backlash while threatening to suspend users who abuse Grok.
[2]
Elon Musk's xAI faces child porn lawsuit from minors Grok allegedly undressed
Elon Musk's company, xAI, should be held accountable for allowing its AI models to produce abusive sexual images of identifiable minors, three anonymous plaintiffs argued in a lawsuit filed Monday in California federal court. The three plaintiffs, who are aiming to turn this into a class action lawsuit, are seeking to represent anyone who had real images of them as minors altered into sexual content by Grok. They allege that xAI did not take basic precautions used by other frontier labs to prevent their image models from producing pornography depicting real people and minors.

The case, JANE DOE 1, JANE DOE 2, a minor, and JANE DOE 3, a minor versus X.AI Corp and X.AI LLC, was filed in the U.S. District Court for the Northern District of California.

Other deep-learning image generators employ various techniques to prevent the creation of child pornography from normal photographs. The lawsuit alleges that these standards were not adopted by xAI. Notably, if a model allows the generation of nude or erotic content from real images, it is virtually impossible to prevent it from generating sexual content featuring children. Musk's public promotion of Grok's ability to produce sexual imagery and depict real people in skimpy outfits features heavily in the suit. The company did not respond to a request for comment from TechCrunch.

One plaintiff, Jane Doe 1, had pictures from her high school homecoming and yearbook altered by Grok to depict her unclothed. An anonymous tipster who contacted her on Instagram told her that the photos were circulating online and sent her a link to a Discord server featuring sexualized images of her and other minors she recognized from school. A second plaintiff, Jane Doe 2, was informed by criminal investigators about altered, sexualized images of her created by a third-party mobile app that relies on Grok models. A third, Jane Doe 3, was also notified by criminal investigators, who discovered an altered, pornographic image of her on the phone of a subject they had apprehended.

Attorneys for the plaintiffs say that because third-party usage still requires xAI code and servers, the company should be held responsible. All three plaintiffs, two of whom are still minors, say they are experiencing extreme distress over the circulation of these images and what it could mean for their reputations and social lives. They are asking for civil penalties under an array of laws intended to protect exploited children and prevent corporate negligence.
[3]
Teens Sue xAI Over Sexualized Images Generated by Grok
Earlier this year, Twitter was flooded with non-consensual nude images of people, including children, generated by xAI's Grok. It turns out, the problem wasn't limited to just Twitter. According to a lawsuit filed Monday in the Northern District of California and first reported by the Washington Post, a group of teenagers is suing Elon Musk's AI company over allegations that a person used xAI's model to generate sexualized images and videos of them. The case is the first instance in which minors have pursued legal action against the companies enabling the generation of non-consensual sexual material.

The class action suit, brought by three plaintiffs, including two minors, alleges that xAI knowingly designed, marketed, and profited from the use of its image and video generation model, which was used to create sexually explicit material of people, including more than 18 girls who were harassed in the case that ultimately led to this lawsuit. They also allege that xAI failed to implement child sexual abuse material prevention measures that are otherwise considered an industry-standard protection.

At the core of the case against xAI is a truly harrowing situation for these teenage girls, who were reportedly harassed by an individual who spent months generating and distributing sexualized images of them. The perpetrator, who was arrested in December following a police investigation, according to the Washington Post, reportedly took photos and videos from the social media accounts of the girls and used them to generate nude and sexually explicit images of them. Those images were then sold and traded across communities on Discord and Telegram, where they have continued to persist. Some of the girls became aware of the images after being contacted on social media and told they were being spread. When the police arrested the person responsible for making the images, they determined that he used Grok to create them.

Grok was also used to generate non-consensual sexual images of people on Twitter, an estimated 23,000 photos that appeared to depict children in sexual situations, according to researchers who investigated the posts. At the time those images were spreading on Twitter, xAI (and Twitter) CEO Elon Musk claimed, "I am not aware of any naked underage images generated by Grok. Literally zero," and said, "When asked to generate images, it will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state."

At the time, Grok was being used to depict people, including children, in bikinis without their consent. Musk made posts following this trend, including an image depicting a rocket in a bikini, seemingly suggesting he was aware of the trend, whether or not he was aware it was being used on images of children. Weeks later, the company announced that it would add restrictions to image generation and made reference to people who "attempt to abuse the Grok account to violate the law," but didn't directly acknowledge the generation of CSAM.

Musk and xAI have also promoted Grok's ability to be used for sexually explicit activity via its "Spicy" mode, which can be used for text, image, and video generation. The class action suit alleges that the company and its CEO were more aware of how the tool was being used than they have let on, claiming they "saw a business opportunity: an opportunity to profit off the sexual predation of real people, including children."

xAI did not respond to a request for comment regarding the lawsuit.
[4]
Tennessee teens sue Elon Musk's xAI over AI-generated child sexual abuse material
[Image: Elon Musk's artificial intelligence company, xAI, which makes the Grok chatbot, is being sued by teenagers who say the company's AI models were used to create nonconsensual nudes of them. Nicolas Tucat/AFP via Getty Images]

Three Tennessee teenagers have filed a class action lawsuit against Elon Musk's artificial intelligence company, xAI, alleging its large language model powered an app that was used to make nonconsensual nude and sexually explicit images and videos of them when they were girls.

"Like a rag doll brought to life through the dark arts, this [AI-generated] child can be manipulated into any pose, however sick, however fetishized, however unlawful. To the viewer, the resulting video appears entirely real," reads the complaint. "For the child, her identifying features will now forever be attached to a video depicting her own child sexual abuse."

While the perpetrator didn't use xAI's chatbot, Grok, or the social media platform X (also owned by xAI), the lawsuit claims that the perpetrator relied on an unnamed app that used xAI's algorithm, citing law enforcement. The plaintiffs accused xAI of deliberately licensing its technology to app makers, often outside the U.S. "In this way, xAI could attempt to outsource the liability of their incredibly dangerous tool," said the complaint.

The lawsuit is the first in which xAI has been sued by underage people depicted in child sexual abuse material its model allegedly generated. xAI's image generation tools have been implicated in the production of millions of sexualized images of people over the past year. Influencer Ashley St. Clair, who has a child with Musk, sued the company earlier this year over AI-produced images on X depicting her nude when she was a teenager.

According to the class action complaint, the perpetrator who made the sexualized images had a "close and friendly relationship" with one of the plaintiffs, and used photos the plaintiff sent to him, as well as photos he gathered from a yearbook and social media, to make the images and videos. One video depicted one plaintiff "undressing until she was entirely nude," the complaint alleged. The plaintiffs were disturbed by how lifelike the images and videos were. What's more, the material was not labeled as AI-generated, according to the complaint. The perpetrator also made sexually explicit material of 18 other people and traded it for images of other people online, the complaint alleged. He was arrested, according to the complaint.

The plaintiffs' attorney, Vanessa Baehr-Jones, said the teenagers, identified as Jane Does 1, 2, and 3 in the complaint, want to change how AI companies make business decisions about sexually explicit content. "We want to make it one [a business decision] that does not make any business sense anymore," she said. The plaintiffs are asking the court for damages to make up for emotional distress and other harms caused by the images.

Apps with so-called nudifying functions have existed for years in the shadows of the internet. But last year, major AI companies including Google, OpenAI, and xAI updated their image generation tools in a way that allows users to strip people down to bikinis. The images made by Google and OpenAI, however, include digital watermarks that disclose their AI origin. So far, xAI has not adopted such a standard.
Three Tennessee teenagers filed a class-action lawsuit against Elon Musk's xAI, alleging the company's Grok AI model was used to create non-consensual sexual images of them as minors. The lawsuit marks the first time minors have pursued legal action against AI companies for generating child sexual abuse material, with attorneys estimating thousands of children were victimized.
Three Tennessee teenagers have filed a class-action lawsuit against Elon Musk's artificial intelligence company, xAI, marking the first instance in which minors have pursued legal action against AI companies for enabling the generation of child sexual abuse material. Filed Monday in the U.S. District Court for the Northern District of California, the lawsuit alleges that xAI deliberately designed its Grok AI model to "profit off the sexual predation of real people, including children" [1]. The plaintiffs, identified as Jane Does 1, 2, and 3, are seeking to represent anyone who had real images of them as minors altered into sexual content by Grok [2].
The case centers on a harrowing situation where a perpetrator spent months generating and distributing sexualized images of more than 18 girls. Attorney Annika K. Martin confirmed that the victims' lives were "shattered by the devastating loss of privacy and the deep sense of violation that no child should ever have to experience" [1]. The lawsuit estimates that "at least thousands of minors" were victimized and seeks an injunction to end Grok's harmful outputs, along with punitive damages for all minors harmed.

The nightmare began in December when one victim, now over 18, received an anonymous message on Instagram from a Discord user warning that her explicit "pics" were shared in a folder with many other minors [1]. The user eventually shared AI-generated nude images and videos depicting her and 18 other minor girls, then linked her to a Discord server created by the perpetrator. The victim immediately recognized some of the other girls from her school and contacted local law enforcement, prompting a criminal investigation.

Investigating the Discord evidence, police determined that the perpetrator had maintained a close and friendly relationship with the first victim and had access to her Instagram [1]. When officers searched his phone, they found a third-party app that licensed or purchased access to Grok, which they concluded was used to morph the girls' photos from social media, yearbooks, and homecoming pictures. The perpetrator then uploaded the AI CSAM to a file-sharing platform called Mega and used them as a "bartering tool in Telegram group chats with hundreds of other users," trading the files "for sexually explicit content of other minors" [1]. The perpetrator was arrested following the investigation [3].
The lawsuit alleges that xAI failed to implement child sexual abuse material prevention measures that are considered industry standards by other frontier labs [2]. Other deep-learning image generators employ various techniques to prevent the creation of child pornography from normal photographs, including digital watermarks that disclose AI origin. Major AI companies including Google and OpenAI have updated their image generation tools to include such watermarks, but xAI has not adopted this standard [4].

Notably, if a model allows the generation of nude or erotic content from real images, it becomes virtually impossible to prevent it from generating sexualized images of identifiable minors [2]. The complaint alleges that xAI deliberately licensed its technology to app makers, often outside the U.S., attempting to outsource liability for their "incredibly dangerous tool" [4]. Attorneys for the plaintiffs argue that because third-party usage still requires xAI code and servers, the company should be held responsible [2].
As recently as January, Elon Musk denied that Grok could generate child sexual abuse material during a scandal in which xAI refused to update filters to block the chatbot from nudifying images of real people [1]. At the height of the controversy, researchers from the Center for Countering Digital Hate estimated that Grok generated approximately three million sexualized images, of which about 23,000 depicted apparent children [1]. Rather than fix Grok, xAI limited access to paying subscribers, keeping the most shocking outputs from circulating on X.
In an X post following revelations that nearly 10 percent of about 800 Grok Imagine outputs reviewed appeared to include AI CSAM, Musk insisted he was "not aware of any naked underage images generated by Grok," emphasizing that he'd seen "literally zero" [1]. However, Musk and xAI have actively promoted Grok's ability to be used for sexually explicit activity via its Spicy mode, which can be used for text, image, and video generation [3]. Musk's public promotion of Grok's ability to produce sexual imagery and depict real people in skimpy outfits features heavily in the suit [2].
The harms to victims have been extensive, with all three plaintiffs experiencing extreme emotional distress over the circulation of these images and what it could mean for their reputations and social lives [2]. Victims who know the perpetrator remain uncertain whether the Grok-generated AI CSAM was shared with classmates or distributed to others at their school [1]. One girl fears the scandal will impact her college admissions, while another feels too scared to attend her own graduation. The plaintiffs were disturbed by how lifelike the AI-generated nude images and videos were, finding it hard to distinguish the sexualized photos from real-life content [1].

Even more alarming is the fear that the girls will now be stalked due to Grok's outputs, as the lawsuit notes that victims' true first names and identifying features are now forever attached to videos depicting their own child sexual abuse [4]. The plaintiffs are asking the court for damages under an array of laws intended to protect exploited children and prevent corporate negligence [2].
The plaintiffs' attorney, Vanessa Baehr-Jones, said the teenagers want to change how AI companies make business decisions about sexually explicit content. "We want to make it one that does not make any business sense anymore," she said [4]. Martin emphasized that it's not enough for Musk to acknowledge only the images that can be shown to be Grok-generated AI CSAM, stating, "We intend to hold xAI accountable for every child they harmed in this way" [1].

This case represents a critical test of whether AI companies can be held liable when their models are used to generate child sexual abuse material, even through third-party apps. The outcome could establish precedent for how online platforms and image generators must balance innovation with child safety, potentially forcing companies to adopt stricter prevention measures or face significant legal and financial consequences. As AI-generated content becomes increasingly difficult to distinguish from reality, the lawsuit raises urgent questions about accountability in an era where technology can be weaponized against vulnerable children. xAI did not respond to requests for comment regarding the lawsuit [2][3].

Summarized by Navi