25 Sources
[1]
Elon Musk's xAI sued for turning three girls' real photos into AI CSAM
A tip from an anonymous Discord user led cops to find what may be the first confirmed Grok-generated child sexual abuse materials (CSAM) that Elon Musk's xAI can't easily dismiss as nonexistent. As recently as January, Musk denied that Grok generated any CSAM during a scandal in which xAI refused to update filters to block the chatbot from nudifying images of real people. At the height of the controversy, researchers from the Center for Countering Digital Hate estimated that Grok generated approximately three million sexualized images, of which about 23,000 depicted apparent children.

Rather than fix Grok, xAI limited access to the system to paying subscribers. That kept the most shocking outputs from circulating on X, but the worst of it was not posted there, Wired reported. Instead, it was generated on Grok Imagine. Digging into the standalone app, a researcher in January found that a little less than 10 percent of about 800 Imagine outputs reviewed appeared to include CSAM. In an X post following that revelation, Musk continued rejecting the evidence and insisted that he was "not aware of any naked underage images generated by Grok," emphasizing that he'd seen "literally zero."

However, Musk may now be forced to finally confront Grok's CSAM problem after a Discord user reached out to a victim, prompting law enforcement to get involved. In a proposed class-action lawsuit filed Monday, three young girls from Tennessee and their guardians accused Musk of intentionally designing Grok to "profit off the sexual predation of real people, including children." They estimated that "at least thousands of minors" were victimized and have asked a US district court for an injunction to finally end Grok's harmful outputs. They also seek damages, including punitive damages, for all minors harmed. An attorney representing the girls, Annika K.
Martin, confirmed in a press release that their lives were "shattered by the devastating loss of privacy and the deep sense of violation that no child should ever have to experience."

"These are children whose school photographs and family pictures were turned into child sexual abuse material by a billion-dollar company's AI tool and then traded among predators. Elon Musk and xAI deliberately designed Grok to produce sexually explicit content for financial gain, with no regard for the children and adults who would be harmed by it," Martin said.

The harm is so extensive that, for the girls seeking justice, it's not enough for Musk to acknowledge only the images that they can show Grok twisted into CSAM, Martin said. "We intend to hold xAI accountable for every child they harmed in this way," Martin said.

Cops link Grok to Discord CSAM

For one of the young girls, the nightmare started in December, the complaint said. That's when she got an anonymous message on Instagram from a Discord user warning that her explicit "pics" were shared in a folder along with many other minors. Eventually the user shared "a series of AI-generated images and videos, which depicted her" as well as 18 other minor girls, and then linked her to a Discord server that was created by the perpetrator.

Now over 18, the first victim to receive the tip was "disturbed," the complaint said, finding it hard to distinguish the sexualized photos from her real-life content. She immediately knew which photos the images were based on, most of which were posted to her social media when she was still a minor. And troublingly, she recognized some of the other girls in the folder from her school. Her first instinct was to contact the other victims she knew, then "ultimately, local law enforcement was contacted, and a criminal investigation was opened," the complaint said.
Investigating the Discord evidence, cops quickly determined that the perpetrator had access to the first victim's Instagram "because he had maintained a close and friendly relationship" with her. Searching his phone, cops found a third-party app that licensed or otherwise purchased access to Grok, which they concluded the perpetrator used to morph the girls' photos. From there, the bad actor uploaded the images to a file-sharing platform called Mega and used them as a "bartering tool in Telegram group chats with hundreds of other users," trading away the AI CSAM files "for sexually explicit content of other minors."

The harms to victims have been extensive, the lawsuit said, citing acute emotional and mental distress. Victims who know the perpetrator remain uncertain whether the Grok-generated CSAM was shared with classmates or distributed to others at their school, the lawsuit noted. One girl fears the scandal will impact her college admissions, while another feels too scared to attend her own graduation. Even more alarming than any acquaintances coming across the AI CSAM, however, is the fear that girls will now be stalked due to Grok's outputs. As the lawsuit explains, "it also appears the victims' true first names and the name of their school was attached to their files online, meaning other online predators may also be able to identify them, creating a substantial risk for stalking."

xAI allegedly hosts Grok CSAM

While it was previously reported that Grok Imagine's paying subscribers were generating more graphic outputs than the Grok outputs that sparked outcry on X, the lawsuit alleges that xAI has also taken other steps to hide how it profits from explicit content that harms real people. The lawsuit alleges that xAI also sells licenses and access to its Grok AI model to third-party apps like the one their perpetrator used.
That arrangement supposedly gives xAI an additional profit source while insulating xAI from visibility that third parties are "using xAI servers and platforms to produce CSAM content requested by these apps' customers," a press release from their legal team said. Allegedly, all of the sexually explicit content generated by third parties is hosted on xAI servers, then distributed by xAI. "xAI has not made Grok's AI model publicly available and has not licensed Grok in its entirety but instead licenses the use of its servers to these middlemen companies, knowing that any illicit and unlawful content generated through prompts to these applications will ultimately be created and distributed from xAI servers," the lawsuit said.

Victims claim that conduct puts xAI squarely in violation of child pornography laws:

On information and belief, xAI possessed the CSAM of Plaintiffs on its servers after Grok produced their CSAM and then transported and distributed the unlawful contraband to its customer/user, namely, the perpetrator, using the cut-out or third-party middleman application.

They're hoping the court will finally make clear if xAI knew Grok was generating CSAM and if xAI knowingly processed that content on its servers, then decided to distribute it to increase xAI's revenue. There can be no valid excuse for failing to protect minors if the court agrees with victims that xAI violated child porn laws or owed a duty of care, the lawsuit alleged. "The gravity of the harm inflicted by Defendants' practices vastly outweighs any purported benefit of Defendants' 'spicy mode' or other uncensored content features," the complaint said. "No legitimate business interest is served by designing an AI image-generation tool to produce CSAM."

xAI did not immediately respond to Ars' request to comment. But the company has previously blamed users who generated CSAM for the backlash while threatening to suspend users who abuse Grok.
[2]
Elon Musk's xAI faces child porn lawsuit from minors Grok allegedly undressed | TechCrunch
Elon Musk's company, xAI, should be held accountable for allowing its AI models to produce abusive sexual images of identifiable minors, three anonymous plaintiffs argued in a lawsuit filed Monday in California federal court. The three plaintiffs, who are aiming to turn this into a class action lawsuit, are seeking to represent anyone who had real images of them as minors altered into sexual content by Grok. They allege that xAI did not take basic precautions used by other frontier labs to prevent their image models from producing pornography depicting real people and minors. The case, JANE DOE 1, JANE DOE 2, a minor, and JANE DOE 3, a minor versus X.AI Corp and X.AI LLC, was filed in the U.S. District Court for the Northern District of California. Other deep-learning image generators employ various techniques to prevent the creation of child pornography from normal photographs. The lawsuit alleges that these standards were not adopted by xAI. Notably, if a model allows the generation of nude or erotic content from real images, it is virtually impossible to prevent it from generating sexual content featuring children. Musk's public promotion of Grok's ability to produce sexual imagery and depict real people in skimpy outfits features heavily in the suit. The company did not respond to a request for comment from TechCrunch. One plaintiff, Jane Doe 1, had pictures from her high school homecoming and yearbook altered by Grok to depict her unclothed. An anonymous tipster who contacted her on Instagram told her that the photos were circulating online, and sent her a link to a Discord server featuring sexualized images of her and other minors she recognized from school. A second plaintiff, Jane Doe 2, was informed by criminal investigators about altered, sexualized images of her created by a third-party mobile app that relies on Grok models.
A third, Jane Doe 3, was also notified by criminal investigators who discovered an altered, pornographic image of her on the phone of a subject they had apprehended. Attorneys for the plaintiffs say that because third-party usage still requires xAI code and servers, the company should be held responsible. All three plaintiffs, two of whom are still minors, say they are experiencing extreme distress over the circulation of these images and what it could mean for their reputations and social life. They are asking for civil penalties under an array of laws intended to protect exploited children and prevent corporate negligence.
[3]
Teen Girls Sue xAI, Alleging 'Devastating' Harm From Grok AI Child Sexual Abuse Images
A new class-action lawsuit, filed on Monday by three teenage girls and their guardians, alleges that Elon Musk's xAI created and distributed child sexual abuse material featuring their faces and likenesses with its Grok AI tech. "Their lives have been shattered by the devastating loss of privacy, dignity, and personal safety that the production and dissemination of this CSAM have caused," the filing says. "xAI's financial gain through the increased use of its image- and video-making product came at their expense and well-being." From December to early January, Grok allowed many Grok AI and X social media users to create AI-generated nonconsensual intimate images, sometimes known as deepfake porn. Reports estimate that Grok users made 4.4 million "undressed" or "nudified" images, 41% of the total number of images created, over a period of nine days. X, xAI and its safety and child safety divisions did not immediately respond to a request for comment. The wave of "undressed" images stirred outrage around the world. The European Commission quickly launched an investigation, while Malaysia and Indonesia banned X within their borders. Some US government representatives called on Apple and Google to remove the app from their app stores for violating their policies, but no federal investigation into X or xAI has been opened. A similar, separate class-action lawsuit was filed (PDF) by a South Carolina woman in late January. The dehumanizing trend highlighted just how capable modern AI image tools are at creating content that seems realistic. The new complaint compares Grok's self-proclaimed "spicy AI" generation to the "dark arts" with its ease of subjecting children to "any pose, however sick, however fetishized, however unlawful." "To the viewer, the resulting video appears entirely real. For the child, her identifying features will now forever be attached to a video depicting her own child sexual abuse," the complaint reads.
The complaint says xAI is at fault because it did not employ industry-standard guardrails that would prevent abusers from making this content. It says xAI licensed use of its tech to third-party companies abroad, which sold subscriptions that led abusers to make child sexual abuse images featuring the faces and likenesses of the victims. The requests ran through xAI's servers, which makes the company liable, the complaint argues. The lawsuit was filed by three Jane Does, pseudonyms given to the teens to protect their identities. Jane Doe 1 was first alerted to the fact that abusive, AI-generated sexual material of her was circulating on the web by an anonymous Instagram message in early December. The filing says she was told about a Discord server by the anonymous Instagram user, where the material was shared. That led Jane Doe 1 and her family, and eventually law enforcement, to find and arrest one perpetrator. Ongoing investigations led the families of Jane Does 2 and 3 to learn their children's images had been transformed with xAI tech into abusive material.
[4]
Tennessee minors sue Musk's xAI, alleging Grok generated sexual images of them
March 16 (Reuters) - Three Tennessee plaintiffs, including two minors, sued Elon Musk's xAI on Monday, alleging that it knowingly designed its Grok image generator to let people create sexually explicit content by using real photos of others. The lawsuit, filed in the San Jose, California federal court, is seeking class-action status for people in the United States who were "reasonably identifiable" in sexualized images or videos generated by Grok based on real images of themselves. The artificial intelligence company did not immediately respond to a Reuters request for comment. After an outcry over sexually explicit content generated by the chatbot, xAI said in January that it had blocked all users from editing images of "real people in revealing clothing" and from generating images of people in revealing clothing in "jurisdictions where it's illegal." Reporting by Mrinmay Dey in Mexico City; Editing by Edwina Gibbs
[5]
Teenagers sue Musk's xAI claiming image-generator made sexually explicit images of them as minors
NASHVILLE, Tenn. (AP) -- Three teenagers in Tennessee sued Elon Musk's xAI this week, claiming the company's image-generation tools were used to morph real photos of them into explicitly sexual images. The high school students, who are seeking to proceed under pseudonyms, filed the lawsuit in California, where xAI -- Musk's artificial intelligence company -- has its headquarters. They are seeking class-action status in order to represent what the lawsuit says are thousands of victims like themselves who either are minors or were minors when sexually explicit images of them were created. According to the lawsuit, Jane Doe 1 was alerted anonymously in December that someone was distributing sexually explicit images of her on a social media website. "At least five of these files, one video and four images, depicted her actual face and body in settings with which she was familiar, but morphed into sexually explicit poses," the lawsuit states. It claims the person distributing the images knew Doe and used xAI's image generation tools to turn real photos of her into sexually abusive ones. One of the images was taken from a homecoming photo. Another was taken from a high school yearbook. The person distributing the images also created explicit images of at least 18 other girls, two of whom are co-plaintiffs in the lawsuit. In late December, local police arrested the perpetrator and confiscated his phone. They found that he had uploaded the images to several platforms where he traded them for sexually explicit images of other minors. Other AI companies have prohibited their image-generators from producing any sexually explicit content, even of adults. Musk saw this as a business opportunity and promoted the ability of xAI's Grok chatbot to create "spicy" content, the lawsuit claims. However, there is currently no way to prevent the generation of explicit images of adults while completely blocking the generation of images of children, the lawsuit claims. 
It also claims that xAI knew Grok would be able to produce sexually explicit images of children but released it anyway. The lawsuit claims the person who distributed images of the plaintiffs used an application that licensed the xAI technology or "otherwise purchased its access to Grok, and was used as a cut-out or middleman." XAI did not respond to an email from The Associated Press seeking comment. But a Jan. 14 post about the controversy on the social media platform X said: "We remain committed to making X a safe platform for everyone and continue to have zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content. "We take action to remove high-priority violative content, including Child Sexual Abuse Material (CSAM) and non-consensual nudity, taking appropriate action against accounts that violate our X Rules. We also report accounts seeking Child Sexual Exploitation materials to law enforcement authorities as necessary." Meanwhile, the students in the lawsuit said they worry that the images created of them will live forever on the internet. They fear stalking because their real first names and the name of their school are attached to the files. They worry that their friends and classmates have seen the photos and videos, which appear to be real, and they worry about who will see them in the future. Jane Doe 1 said she has suffered from anxiety, depression, and stress. "She has difficulty eating and sleeping and suffers from recurring nightmares," the lawsuit states. Jane Doe 2 "has begun self-isolating and avoiding being on her school campus, and even dreads attending her own graduation." Jane Doe 3 suffers from constant fear and anxiety that someone will see the AI-generated images and recognize her face, according to the lawsuit.
[6]
Teens Sue xAI Over Sexualized Images Generated by Grok
Earlier this year, Twitter was flooded with non-consensual nude images of people, including children, generated by xAI's Grok. It turns out, the problem wasn't limited to just Twitter. According to a lawsuit filed Monday in the Northern District of California and first reported by the Washington Post, a group of teenagers is suing Elon Musk's AI company over allegations that a person used xAI's model to generate sexualized images and videos of them. The case is the first instance in which minors have pursued legal action against the companies enabling the generation of non-consensual sexual material. The class action suit -- brought by three plaintiffs, including two minors -- alleges that xAI knowingly designed, marketed, and profited from the use of its image and video generation model, which was used to create sexually explicit material of people, including more than 18 girls who were harassed in the case that ultimately led to this lawsuit. They also allege that xAI failed to implement child sexual abuse material prevention measures that are otherwise considered an industry standard protection. At the core of the case against xAI is a truly harrowing situation for these teenage girls, who were reportedly harassed by an individual who spent months generating and distributing sexualized images of them. The perpetrator, who was arrested in December following a police investigation, according to the Washington Post, reportedly took photos and videos from the social media accounts of the girls and used them to generate nude and sexually explicit images of them. Those images were then sold and traded across communities on Discord and Telegram, where they have continued to persist. Some of the girls became aware of the images after being contacted on social media and told they were being spread. When the police arrested the person responsible for making the images, they determined that he used Grok to create them.
Grok was also used to generate non-consensual sexual images of people on Twitter, an estimated 23,000 photos that appeared to depict children in sexual situations, according to researchers who investigated the posts. At the time those images were spreading on Twitter, xAI (and Twitter) CEO Elon Musk claimed, "I am not aware of any naked underage images generated by Grok. Literally zero," and said "When asked to generate images, it will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state." At the time, Grok was being used to depict people, including children, in bikinis without their consent. Musk made posts following this trend, including an image depicting a rocket in a bikini -- seemingly suggesting he was aware of the trend, whether or not he was aware it was being used on images of children. Weeks later, the company announced that it would add restrictions to image generation and made reference to people who "attempt to abuse the Grok account to violate the law," but didn't directly acknowledge the generation of CSAM. Musk and xAI have also promoted Grok's ability to be used for sexually explicit activity via its "Spicy" mode, which can be used for text, image, and video generation. The class action suit alleges that the company and its CEO were more aware of how the tool was being used than they have let on, claiming they "saw a business opportunity: an opportunity to profit off the sexual predation of real people, including children." xAI did not respond to a request for comment regarding the lawsuit.
[7]
Tennessee teens sue Elon Musk's xAI over AI-generated child sexual abuse material
Three Tennessee teenagers have filed a class action lawsuit against Elon Musk's artificial intelligence company, xAI, alleging its large language model powered an app that was used to make nonconsensual nude and sexually explicit images and videos of them when they were girls. "Like a rag doll brought to life through the dark arts, this [AI-generated] child can be manipulated into any pose, however sick, however fetishized, however unlawful. To the viewer, the resulting video appears entirely real," reads the complaint. "For the child, her identifying features will now forever be attached to a video depicting her own child sexual abuse." While the perpetrator didn't use xAI's chatbot, Grok or the social media platform X (also owned by xAI), the lawsuit claims that the perpetrator relied on an unnamed app that used xAI's algorithm, citing law enforcement. The plaintiffs accused xAI of deliberately licensing its technology to app makers, often outside the U.S. "In this way, xAI could attempt to outsource the liability of their incredibly dangerous tool," said the complaint. The lawsuit is the first in which xAI has been sued by underage people depicted in child sexual abuse material its model allegedly generated. xAI's image generation tools have been implicated in the production of millions of sexualized images of people over the past year. Influencer Ashley St. Clair, who has a child with Musk, sued the company earlier this year for AI-produced images on X depicting her nude when she was a teenager.
According to the class action complaint, the perpetrator who made the sexualized images had a "close and friendly relationship" with one of the plaintiffs, and used photos the plaintiff sent to him as well as photos he gathered in a yearbook and on social media to make the images and videos. One video depicted one plaintiff "undressing until she was entirely nude," the complaint alleged. The plaintiffs were disturbed by how lifelike the images and videos were. What's more, the material was not labelled as AI-generated, according to the complaint. The perpetrator also made sexually explicit material of 18 other people, and traded them for images of other people online, the complaint alleged. He was arrested, according to the complaint. The plaintiffs' attorney, Vanessa Baehr-Jones, said the teenagers, identified as Jane Does 1, 2 and 3 in the complaint, want to change how AI companies make business decisions about sexually explicit content. "We want to make it one [a business decision] that does not make any business sense anymore," she said. The plaintiffs are asking the court for damages to make up for emotional distress and other harms caused by the images. Apps with so-called nudifying functions have existed for years in the shadows of the internet. But last year, major AI companies including Google, OpenAI and xAI updated their image generation tools in a way that allows users to strip people down to bikinis. But the images made by Google and OpenAI include digital watermarks that disclose their AI origin. So far, xAI has not adopted such a standard.
[8]
Teens Sue Elon Musk's Grok for Turning Their Photos into Pornographic Images
Three teens have filed a class action lawsuit claiming that Elon Musk's AI chatbot Grok was used to create nonconsensual sexually explicit images and videos of them when they were minors. The lawsuit, filed Monday in a federal California court, alleges that xAI's tools were used to alter photos of three Tennessee-based teenagers in which they were clothed, turning them into nude and sexualized images. According to a report by The Washington Post, the edited images spread on platforms such as Discord and Telegram, and some were traded for other child sexual abuse material (CSAM). The legal action focuses on Grok's controversial "spicy mode" released last year, which allowed users to remove a woman's clothing in photographs without asking permission. Lawyers for the plaintiffs say the features were created to increase usage of Grok and Musk's social media platform X. "Like a rag doll brought to life through the dark arts, this [AI-generated] child can be manipulated into any pose, however sick, however fetishized, however unlawful. To the viewer, the resulting video appears entirely real," the complaint reads. "For the child, her identifying features will now forever be attached to a video depicting her own child sexual abuse." The complaint also accuses xAI and Musk of knowing that Grok could produce such results, including by using images of children, yet releasing it publicly anyway. "xAI -- and its founder Elon Musk -- saw a business opportunity," the complaint states. The plaintiffs are seeking unspecified damages and an immediate court order barring Grok from creating such images, as well as preventing xAI from generating and distributing AI-generated CSAM. "Their lives have been shattered by the devastating loss of privacy, dignity, and personal safety," lawyers write. Two of the teens are under 18, and all three are withholding their names to protect their privacy. 
One plaintiff says she discovered the images after receiving an anonymous Instagram message linking to altered photos and videos, including her high school yearbook picture, showing her nude and in sexually explicit acts. The material was shared on a private Discord server alongside similar AI-generated images of at least 18 other minor girls. The other two plaintiffs also found fake sexualized imagery of themselves online created through Grok. The Washington Post reports that xAI has not responded to requests for comment through its parent company. Grok, developed by xAI and hosted on Musk's platform X, was launched in 2023. Last year, xAI released Grok Imagine, more commonly called spicy mode, which allowed users to generate sexualized images. According to a study by the Center for Countering Digital Hate, Elon Musk's AI tool Grok generated an estimated three million sexualized images, including around 23,000 involving children, based on sampling. The study found that the tool produced roughly 190 sexualized images per minute over an 11-day period after the introduction of a one-click editing feature.
[9]
Teens sue xAI for Grok's reported sexual image generation issues
Three Jane Does, two under 18, filed a lawsuit against xAI yesterday about the generation of child sex abuse material (CSAM) on Grok, Elon Musk's AI tool. "Like a rag doll brought to life through the dark arts, this child can be manipulated into any pose, however sick, however fetishized, however unlawful," the complaint states. It goes on to allege that while other AI companies recognize the dangers of AI and put up guardrails to prevent child sex predators from using them, "xAI did not." "Instead, xAI -- and its founder Elon Musk -- saw a business opportunity: an opportunity to profit off the sexual predation of real people, including children," the suit, which is a class action, states. In January, Mashable reported that xAI admitted that Grok generates images of "minors in minimal clothing." A report by the Center for Countering Digital Hate stated weeks later that Grok was able to create around three million sexualized images, including 23,000 of apparent children, between Dec. 29 and Jan. 8. Several countries, including France, the UK, Ireland, India, and Brazil, have announced that they're now investigating Grok. Stateside, California is also investigating the chatbot. Now, the lawsuits have begun. This one was filed March 16 in federal court in California. It states that the Jane Does, who live in Tennessee, have suffered severe harm from xAI's production of CSAM. Around Dec. 6, 2025, Jane Doe 1 discovered from an anonymous Instagram message that images of her had been generated by someone she knows and disseminated on Discord. The perpetrator had created images of at least 18 other girls, many of whom Jane Doe 1 recognized. Jane Doe 1 is now an adult, but was a minor in the photos used of her, according to the lawsuit. Then, around Feb. 12 this year, Jane Does 2 and 3 were notified by local law enforcement that their images were used by the same perpetrator to generate CSAM. They are still minors.
Another lawsuit against xAI was filed in the same court on Jan. 23. Another Jane Doe, an adult, is suing due to Grok "undressing" an image of her, rendering her in a bikini.
[10]
Three Tennessee teenagers are suing Elon Musk's xAI for creating sexually explicit images of them | Fortune
The high school students, who are seeking to proceed under pseudonyms, filed the lawsuit in California, where xAI -- Musk's artificial intelligence company -- has its headquarters. They are seeking class-action status in order to represent what the lawsuit says are thousands of victims like themselves who either are minors or were minors when sexually explicit images of them were created. According to the lawsuit, Jane Doe 1 was alerted anonymously in December that someone was distributing sexually explicit images of her on a social media website. "At least five of these files, one video and four images, depicted her actual face and body in settings with which she was familiar, but morphed into sexually explicit poses," the lawsuit states. It claims the person distributing the images knew Doe and used xAI's image generation tools to turn real photos of her into sexually abusive ones. One of the images was taken from a homecoming photo. Another was taken from a high school yearbook. The person distributing the images also created explicit images of at least 18 other girls, two of whom are co-plaintiffs in the lawsuit. In late December, local police arrested the perpetrator and confiscated his phone. They found that he had uploaded the images to several platforms where he traded them for sexually explicit images of other minors. Other AI companies have prohibited their image-generators from producing any sexually explicit content, even of adults. Musk saw this as a business opportunity and promoted the ability of xAI's Grok chatbot to create "spicy" content, the lawsuit claims. However, there is currently no way to prevent the generation of explicit images of adults while completely blocking the generation of images of children, the lawsuit claims. It also claims that xAI knew Grok would be able to produce sexually explicit images of children but released it anyway. 
The lawsuit claims the person who distributed images of the plaintiffs used an application that licensed the xAI technology or "otherwise purchased its access to Grok, and was used as a cut-out or middleman." XAI did not respond to an email from The Associated Press seeking comment. But a Jan. 14 post about the controversy on the social media platform X said: "We remain committed to making X a safe platform for everyone and continue to have zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content. "We take action to remove high-priority violative content, including Child Sexual Abuse Material (CSAM) and non-consensual nudity, taking appropriate action against accounts that violate our X Rules. We also report accounts seeking Child Sexual Exploitation materials to law enforcement authorities as necessary." Meanwhile, the students in the lawsuit said they worry that the images created of them will live forever on the internet. They fear stalking because their real first names and the name of their school are attached to the files. They worry that their friends and classmates have seen the photos and videos, which appear to be real, and they worry about who will see them in the future. Jane Doe 1 said she has suffered from anxiety, depression and stress. "She has difficulty eating and sleeping and suffers from recurring nightmares," the lawsuit states. Jane Doe 2 "has begun self-isolating and avoiding being on her school campus, and even dreads attending her own graduation." Jane Doe 3 suffers from constant fear and anxiety that someone will see the AI-generated images and recognize her face, according to the lawsuit.
[11]
Teen girls sue xAI over AI-generated images
A class-action lawsuit in California federal court claims that xAI's Grok model created explicit images of three teenagers using a licensed third-party app. The complaint, filed March 16 in U.S. District Court for the Northern District of California, names X Corp. and X.AI LLC -- entities of Elon Musk's artificial intelligence company -- as defendants in a case brought by three Tennessee teenagers who say its Grok model was used to produce child sexual abuse material depicting them. The plaintiffs -- identified as Jane Doe 1, Jane Doe 2, and Jane Doe 3 -- allege that a perpetrator cultivated a relationship with at least one of them and used photos gathered from social media, a yearbook, and images she had sent him to generate explicit images and videos. The perpetrator did not access Grok or X directly, according to the complaint. He used a third-party app that the plaintiffs allege was a licensee of xAI's technology, and which the complaint says routed image generation through xAI's servers. Among the files was a video showing one of the plaintiffs removing her clothing until she appeared completely nude, the complaint said. The images were photorealistic and not labeled as AI-generated, the complaint said. Using images of at least 18 additional minors, the perpetrator generated sexually explicit material that he then bartered for content depicting other children on Telegram and Discord, the complaint said. Police arrested him in late December 2025. The suit is the first in which minors depicted in CSAM allegedly generated by xAI's model have sued the company, according to NPR. The complaint accuses xAI of deliberately bypassing industry-standard safeguards -- including content filters, red-team testing, hash-matching against known CSAM databases, and mandatory reporting to authorities -- that it says competitors such as Google and OpenAI have adopted.
The suit also alleges xAI profited by licensing Grok to third-party developers, often based outside the U.S., as a way to distance itself from liability. Plaintiffs' attorney Vanessa Baehr-Jones said the goal was to make allowing sexually explicit AI content an economically untenable choice. "We want to make it one that does not make any business sense anymore," she told NPR. The complaint lists 13 counts, including federal claims for producing, distributing, and possessing child pornography under Masha's Law, claims under the Trafficking Victims Protection Act, and California state law claims for unfair competition, right of publicity, and several types of negligence. The plaintiffs are seeking at least $150,000 in statutory damages for each violation, along with punitive damages and a court order to make xAI stop the alleged actions. All three plaintiffs say they have suffered severe emotional distress. Jane Doe 1, who notified other victims and helped start a criminal investigation, has needed academic accommodations at school. Jane Doe 2, who is still a student, said anxiety has disrupted her sleep so much that she needed medical help and now dreads her own graduation. Jane Doe 3 said she constantly fears that people she meets may have already seen the images. XAI did not respond to a request for comment, according to NPR.
[12]
Minors Sue xAI in California Over Alleged Grok Deepfake Images - Decrypt
Filed amid global probes, the alleged victims are seeking $150,000 per violation plus damages and an injunction. Three Tennessee minors have sued Elon Musk's xAI in a federal class action, alleging Grok generated child sexual abuse material using their real photographs and that the company knowingly designed its AI chatbot without industry-standard safeguards, then profited from the result. The lawsuit, filed Monday in the Northern District of California, claims Grok was used to create and distribute AI-generated child sexual abuse material (CSAM) using their real images. The minors, identified as Jane Doe 1, 2, and 3, said the altered content was shared across platforms, including Discord, Telegram, and file-sharing sites, causing lasting emotional distress and reputational harm. "xAI -- and its founder Elon Musk -- saw a business opportunity: an opportunity to profit off the sexual predation of real people, including children," the lawsuit reads. "Knowing the type of harmful, illegal content that could -- and would -- be produced, xAI released Grok, a generative artificial intelligence model with image and video-making features that would respond to prompts to create sexual content with a person's real image or video." The alleged victims describe incidents between mid-2025 and early 2026, when their real photos were altered into explicit images and circulated online. In one instance, one of the victims was alerted by an anonymous user who found folders of AI-generated content being traded among hundreds of users. They allege a perpetrator accessed Grok through a third-party application that had licensed xAI's technology, a structure the filing says xAI deliberately used to distance itself from liability while continuing to profit from the underlying model. At the height of public backlash in January, Musk wrote on X that he was "not aware of any naked underage images," adding that "when asked to generate images, it will refuse to produce anything illegal." 
According to a finding by the Center for Countering Digital Hate, cited in the lawsuit, Grok produced an estimated 23,338 sexualized images of children between December 29, 2025, and January 9 of this year, roughly one every 41 seconds. The alleged victims are seeking damages of at least $150,000 per violation under Masha's Law, along with disgorgement of revenues, punitive damages, attorneys' fees, and a permanent injunction, as well as restitution of profits under California's Unfair Competition Law. The lawsuit is one of the first to hold an AI company directly liable for the alleged production and distribution of AI-generated CSAM depicting identifiable minors, and arrives as Grok faces simultaneous investigations across the U.S., EU, UK, France, Ireland, and Australia. "When a system is intentionally designed to manipulate real images into sexualized content, the downstream abuse is not an anomaly -- it is a foreseeable outcome," Even Alex Chandra, a partner at IGNOS Law Alliance, told Decrypt. Chandra said courts may not accept a simple platform defense, noting a generative AI system could be "treated as a platform in terms of user interaction" but "evaluated as a product" when assessing safety design, with "particularly strict scrutiny" applied in CSAM cases due to heightened child protection obligations. He also said courts will likely focus on safeguards, noting the company may be expected to show "risk assessments and safety-by-design measures before deployment," along with guardrails that actively block harmful outputs.
[13]
Teens are suing Elon Musk's xAI over sexually explicit images, seeking class action status
3 high school students from Tennessee filed the lawsuit in California, where xAI is headquartered.
[14]
Teenagers sue Musk's xAI claiming image-generator made sexually explicit images of them as minors
NASHVILLE, Tenn. (AP) -- Three teenagers in Tennessee sued Elon Musk's xAI this week, claiming the company's image-generation tools were used to morph real photos of them into explicitly sexual images.
[15]
Teenagers Sue Musk's XAI Claiming Image-Generator Made Sexually Explicit Images of Them as Minors
[16]
Teenagers sue Elon Musk's xAI claiming image-generator made sexually explicit images of them as minors
Three teenagers from Tennessee are taking a stand against Elon Musk's xAI, filing suit over claims that the company's AI capabilities produced explicit images of them. The students aim for class-action certification, drawing attention to a broader issue that could impact thousands of minors.
[17]
Tennessee Minors Sue Musk's XAI, Alleging Grok Generated Sexual Images of Them
March 16 (Reuters) - Three Tennessee plaintiffs, including two minors, sued Elon Musk's xAI on Monday, alleging that it knowingly designed its Grok image generator to let people create sexually explicit content by using real photos of others. The lawsuit, filed in the San Jose, California federal court, is seeking class-action status for people in the United States who were "reasonably identifiable" in sexualized images or videos generated by Grok based on real images of themselves. The artificial intelligence company did not immediately respond to a Reuters request for comment. After an outcry over sexually explicit content generated by the chatbot, xAI said in January that it had blocked all users from editing images of "real people in revealing clothing" and from generating images of people in revealing clothing in "jurisdictions where it's illegal." (Reporting by Mrinmay Dey in Mexico City; Editing by Edwina Gibbs)
[18]
xAI Allegedly Generated Deepfake Nude Images Of Minors, Lawsuit States
Three Tennessee teenagers have filed a federal class-action lawsuit against Elon Musk's xAI, alleging the company's AI chatbot, Grok, generated sexualized images of them without their consent, Reuters reported. The lawsuit, filed Monday in the Northern District of California, claims Grok was used to create child sexual abuse material by manipulating real photos of the plaintiffs. The minors -- identified as Jane Doe 1, 2, and 3 -- allege these altered images were widely distributed on platforms such as Discord, Telegram and other file-sharing services, resulting in lasting emotional trauma and reputational harm. According to the complaint, xAI disregarded standard safety protocols in the development of Grok, enabling the creation and monetization of harmful content. "xAI and its founder Elon Musk saw a business opportunity: an opportunity to profit off the sexual predation of real people, including children. Knowing the type of harmful, illegal content that could -- and would -- be produced, xAI released Grok, a generative artificial intelligence model with image and video-making features that would respond to prompts to create sexual content with a person's real image or video," the lawsuit states. The alleged victims are seeking damages of at least $150,000 per violation under Masha's Law, which allows victims of child pornography to sue for damages in federal court. In addition, they are requesting disgorgement of xAI's revenues, punitive damages, attorneys' fees and a permanent injunction. The plaintiffs also seek restitution of profits under California's Unfair Competition Law. Grok Under Scrutiny Grok has come under recent scrutiny for generating nonconsensual sexualized images of real individuals, including minors. In January, Gov. Gavin Newsom condemned Musk's artificial intelligence company xAI, urging state authorities to investigate its chatbot Grok over the creation and spread of nonconsensual AI-generated sexual images involving children. 
In a post on X, Newsom called xAI's actions "vile," accusing the company of enabling predators to exploit AI technology. "xAI's decision to create and host a breeding ground for predators to spread nonconsensual sexually explicit AI deepfakes, including images that digitally undress children, is vile," Newsom wrote. India's Ministry of Electronics and Information Technology has also expressed concerns, calling for a comprehensive review of the platform and the removal of any content that contravenes Indian law. The Center for Countering Digital Hate reported Grok produced more than 23,000 sexualized images of children over 11 days from December to January.
[19]
Tennessee minors sue Musk's xAI, alleging Grok generated sexual images of them
Three Tennessee plaintiffs, including two minors, sued Elon Musk's xAI on Monday, alleging that it knowingly designed its Grok image generator to let people create sexually explicit content by using real photos of others. The lawsuit, filed in the San Jose, California federal court, is seeking class-action status for people in the United States who were "reasonably identifiable" in sexualized images or videos generated by Grok based on real images of themselves. The artificial intelligence company did not immediately respond to a request for comment. After an outcry over sexually explicit content generated by the chatbot, xAI said in January that it had blocked all users from editing images of "real people in revealing clothing" and from generating images of people in revealing clothing in "jurisdictions where it's illegal." Governments and regulators around the world have also since launched probes, imposed bans and demanded safeguards in a growing push to curb illegal and offensive material. The lawsuit claims xAI failed to install safeguards to prevent its systems from generating sexual content involving minors. All three plaintiffs were minors at the time the images were generated. Plaintiffs allege their real images were digitally altered into explicit content and then shared online through platforms, causing emotional distress and creating a public nuisance. They are seeking unspecified damages, legal fees, and an injunction requiring xAI to halt the alleged practices. "These are children whose school photographs and family pictures were turned into child sexual abuse material," plaintiffs' counsel Annika Martin of Lieff Cabraser Heimann & Bernstein said in a statement. "Elon Musk and xAI deliberately designed Grok to produce sexually explicit content for financial gain, with no regard for the children and adults who would be harmed."
[20]
Tennessee plaintiffs sue Elon Musk's AI company over Grok sexually explicit content
After an outcry over sexually explicit content generated by the chatbot, xAI said in January that it had blocked all users from editing images of "real people in revealing clothing."
[21]
xAI Sued Over Grok AI Child Abuse Image Claims
Following the controversy over xAI's Grok AI "undressing" feature, which allegedly generated child sexual abuse material (CSAM), a class-action lawsuit has been filed in the United States District Court for the Northern District of California on behalf of three unnamed minor girls. The lawsuit claims that Grok morphed their images to create sexually explicit content, and it brings multiple counts against xAI. What relief is the lawsuit seeking? The lawsuit seeks a permanent injunction to stop xAI from generating illegal content, along with monetary damages for emotional distress and reputational harm, plus attorney's fees. Who is liable, the user or the AI model? "When a user prompts a generative AI model to create an image or video, the model draws upon its training data to create the new content, typically through a process called diffusion. In basic terms, the model iteratively refines visual static to create a coherent image. Each step in this refinement is performed by the model itself, and the user does not direct or control any individual step of this generation process beyond the original prompt." Therefore, the lawsuit argues that the generated image is the model's own creation: "It did not exist before the model generated it, and it could not have existed but for the model." Is age-gating AI models possible? The lawsuit argues that age-gating is insufficient for image and video generation tools. Unlike text models, these tools cannot reliably distinguish or constrain age-specific outputs. As a result, the complaint claims that the only effective way to prevent CSAM generation is to prohibit all sexually explicit image and video generation, and it cites several industry best practices toward that end.
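The complaint's plain-language description of diffusion (a model "iteratively refines visual static to create a coherent image") can be illustrated with a toy refinement loop. This is a hypothetical sketch for intuition only, not xAI's actual pipeline: `toy_diffusion`, its parameters, and the stand-in "model" (which simply pulls values toward a fixed target) are invented for illustration, whereas real diffusion models use a trained neural network to predict and remove noise at each step.

```python
import random

def toy_diffusion(target, steps=10, seed=0):
    """Toy sketch of iterative refinement.

    Start from random 'static' and repeatedly move each value a
    fraction of the way toward the value a (hypothetical) trained
    model would predict. The 'model' here is stubbed out by the
    target itself, purely to illustrate the loop structure the
    complaint describes.
    """
    rng = random.Random(seed)
    # Step 0: pure visual static -- no trace of the final image yet.
    x = [rng.uniform(-1.0, 1.0) for _ in target]
    for _ in range(steps):
        # Each refinement step is performed by the model's update
        # rule, not the user: the prompt set only the starting
        # conditions, never the individual steps.
        x = [xi + 0.5 * (ti - xi) for xi, ti in zip(x, target)]
    return x

target = [0.2, -0.4, 0.9, 0.0]
result = toy_diffusion(target, steps=10)
# After enough steps, every value sits close to the target.
print(all(abs(r - t) < 0.01 for r, t in zip(result, target)))  # prints: True
```

The point the lawsuit draws from this process is visible in the loop: every refinement step is executed by the model's update rule, and the user's prompt influences only the starting conditions, which is why the complaint argues the output is the model's own creation.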
[22]
Teenagers sue Musk's xAI claiming image-generator made sexually explicit images of them as minors
NASHVILLE, Tenn. -- Three teenagers in Tennessee sued Elon Musk's xAI this week, claiming the company's image-generation tools were used to morph real photos of them into explicitly sexual images. The high school students, who are seeking to proceed under pseudonyms, filed the lawsuit in California, where xAI -- Musk's artificial intelligence company -- has its headquarters. They are seeking class-action status in order to represent what the lawsuit says are thousands of victims like themselves who either are minors or were minors when sexually explicit images of them were created. According to the lawsuit, Jane Doe 1 was alerted anonymously in December that someone was distributing sexually explicit images of her on a social media website. "At least five of these files, one video and four images, depicted her actual face and body in settings with which she was familiar, but morphed into sexually explicit poses," the lawsuit states. It claims the person distributing the images knew Doe and used xAI's image generation tools to turn real photos of her into sexually abusive ones. One of the images was taken from a homecoming photo. Another was taken from a high school yearbook. The person distributing the images also created explicit images of at least 18 other girls, two of whom are co-plaintiffs in the lawsuit. In late December, local police arrested the perpetrator and confiscated his phone. They found that he had uploaded the images to several platforms where he traded them for sexually explicit images of other minors. Other AI companies have prohibited their image-generators from producing any sexually explicit content, even of adults. Musk saw this as a business opportunity and promoted the ability of xAI's Grok chatbot to create "spicy" content, the lawsuit claims. However, there is currently no way to prevent the generation of explicit images of adults while completely blocking the generation of images of children, the lawsuit claims. 
It also claims that xAI knew Grok would be able to produce sexually explicit images of children but released it anyway. The lawsuit claims the person who distributed images of the plaintiffs used an application that licensed the xAI technology or "otherwise purchased its access to Grok, and was used as a cut-out or middleman." XAI did not respond to an email from The Associated Press seeking comment. But a Jan. 14 post about the controversy on the social media platform X said: "We remain committed to making X a safe platform for everyone and continue to have zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content. "We take action to remove high-priority violative content, including Child Sexual Abuse Material (CSAM) and non-consensual nudity, taking appropriate action against accounts that violate our X Rules. We also report accounts seeking Child Sexual Exploitation materials to law enforcement authorities as necessary." Meanwhile, the students in the lawsuit said they worry that the images created of them will live forever on the internet. They fear stalking because their real first names and the name of their school are attached to the files. They worry that their friends and classmates have seen the photos and videos, which appear to be real, and they worry about who will see them in the future. Jane Doe 1 said she has suffered from anxiety, depression, and stress. "She has difficulty eating and sleeping and suffers from recurring nightmares," the lawsuit states. Jane Doe 2 "has begun self-isolating and avoiding being on her school campus, and even dreads attending her own graduation." Jane Doe 3 suffers from constant fear and anxiety that someone will see the AI-generated images and recognize her face, according to the lawsuit.
[23]
Elon Musk's xAI Faces Lawsuit: Here's What Went Wrong
According to the latest news, the case was filed on March 16, 2026, in a federal court in California. The plaintiffs, identified as Jane Doe 1, Jane Doe 2, and Jane Doe 3, have asked the court to allow the lawsuit to proceed as a class action. If class-action status is approved, the case could represent anyone whose real childhood images were turned into sexual content using Grok, according to an official statement. In the complaint, the plaintiffs argued that xAI failed to implement basic safety measures. They also stated that other AI image generators actually prevent the creation of sexual images involving real people. One of the plaintiffs, Jane Doe 1, claimed that some photos from her high school homecoming and yearbook were altered using Grok. The altered pictures depicted her unclothed. She also said she learned about the incident after an anonymous person contacted her on Instagram. The unknown person had also shared a link to a Discord server where her explicit images, along with those of other minors from her school, were circulating. Other allegations highlighted that several altered explicit images of the plaintiffs were created through a third-party mobile app that uses Grok models.
[24]
Tennessee minors sue Musk's xAI, alleging Grok generated sexual images of them
March 16 (Reuters) - Three Tennessee plaintiffs, including two minors, sued Elon Musk's xAI on Monday, alleging that it knowingly designed its Grok image generator to let people create sexually explicit content by using real photos of others. The lawsuit, filed in the San Jose, California federal court, is seeking class-action status for people in the United States who were "reasonably identifiable" in sexualized images or videos generated by Grok based on real images of themselves. The artificial intelligence company did not immediately respond to a Reuters request for comment. After an outcry over sexually explicit content generated by the chatbot, xAI said in January that it had blocked all users from editing images of "real people in revealing clothing" and from generating images of people in revealing clothing in "jurisdictions where it's illegal." * Governments and regulators around the world have also since launched probes, imposed bans and demanded safeguards in a growing push to curb illegal and offensive material. * The lawsuit claims xAI failed to install safeguards to prevent its systems from generating sexual content involving minors. All three plaintiffs were minors at the time the images were generated. * Plaintiffs allege their real images were digitally altered into explicit content and then shared online through platforms, causing emotional distress and creating a public nuisance. * They are seeking unspecified damages, legal fees, and an injunction requiring xAI to halt the alleged practices. * "These are children whose school photographs and family pictures were turned into child sexual abuse material," plaintiffs' counsel Annika Martin of Lieff Cabraser Heimann & Bernstein said in a statement. "Elon Musk and xAI deliberately designed Grok to produce sexually explicit content for financial gain, with no regard for the children and adults who would be harmed." (Reporting by Mrinmay Dey in Mexico City; Editing by Edwina Gibbs)
[25]
Elon Musk's xAI faces lawsuit from minors alleging Grok created their explicit AI images
Elon Musk's artificial intelligence company xAI is facing a lawsuit in the US after three anonymous plaintiffs accused its Grok AI models of generating sexually explicit images of them. The case was filed on Monday in a federal court in California. The plaintiffs, identified as Jane Doe 1, Jane Doe 2, and Jane Doe 3, have asked the court to allow the lawsuit to proceed as a class action. If approved, the case could represent anyone whose real childhood images were turned into sexual content using Grok. According to the complaint, the plaintiffs argue that xAI failed to implement basic safety measures that many other AI image generators use to prevent the creation of sexual images involving real people, reports TechCrunch. If a system allows nude or erotic content to be generated from real photos, it becomes extremely difficult to stop users from producing sexual images of children. Musk's public promotion of Grok's ability to create sexualised images and depict real people in revealing outfits is also cited in the lawsuit. One of the plaintiffs, Jane Doe 1, says photos from her high school homecoming and yearbook were altered using Grok to show her unclothed. She reportedly learned about the images after an anonymous person contacted her on Instagram and shared a link to a Discord server where sexualised images of her and other minors from her school were circulating. Another plaintiff, Jane Doe 2, was informed by criminal investigators that altered explicit images of her had been created through a third-party mobile app that uses Grok models. Jane Doe 3 also learned about a manipulated pornographic image of her after investigators discovered it on the phone of a suspect they had arrested. 
Lawyers for the plaintiffs argue that even when Grok models are used by third-party apps, they still rely on xAI's code and servers, which means the company should be held responsible. Two of the plaintiffs are still minors. All three say the spread of these images has caused severe emotional distress and could damage their reputations and social lives. The lawsuit is seeking civil penalties under laws meant to protect children from exploitation and hold companies accountable for negligence.
Three Tennessee teenagers filed a proposed class-action lawsuit against Elon Musk's xAI, alleging the company deliberately designed Grok AI to generate sexually explicit images of real people, including minors. The lawsuit claims xAI prioritized profit over safety by refusing to implement industry-standard guardrails, resulting in thousands of victimized children whose school photos were transformed into child sexual abuse material.
Three teenage girls from Tennessee and their guardians filed a proposed class-action lawsuit on Monday against Elon Musk's xAI, alleging the company intentionally designed Grok AI to generate sexual images of minors for financial gain [1]. The complaint, filed in California federal court in San Jose, accuses xAI of transforming innocent school photographs and family pictures into child sexual abuse material through its Grok image generator [2]. The plaintiffs seek to represent anyone who had real images of themselves as minors altered into sexual content by Grok AI, estimating that at least thousands of minors were victimized [1].
Source: Analytics Insight
Attorney Annika K. Martin, representing the plaintiffs, confirmed that the teenagers' lives were "shattered by the devastating loss of privacy and the deep sense of violation that no child should ever have to experience" [1]. The lawsuit alleges negligence and seeks an injunction to end Grok's harmful outputs, along with punitive damages for all identifiable minors harmed by the AI child abuse images [1].
The nightmare began in December, when Jane Doe 1 received an anonymous Instagram message from a Discord user warning that her explicit pictures were circulating in a folder with those of 18 other minors [1]. The anonymous tipster sent her a link to a Discord server featuring sexualized images of her and other minors she recognized from school [2]. At least five files depicted her actual face and body morphed into sexually explicit poses, including images taken from her homecoming photo and high school yearbook [5].
Local law enforcement opened a criminal investigation and arrested the perpetrator in late December [5]. Searching his phone, investigators found a third-party app that licensed or purchased access to Grok, which the perpetrator used to morph the girls' photos [1]. He uploaded the AI-generated CSAM to the file-sharing platform Mega and used the files as a "bartering tool in Telegram group chats with hundreds of other users," trading them for sexually explicit content of other minors [1]. Jane Doe 2 and Jane Doe 3 were subsequently informed by criminal investigators about altered, sexualized images created using xAI technology [2].
Source: Benzinga
The teenagers are suing xAI for failing to implement the industry-standard guardrails that other frontier labs use to prevent image models from producing pornography depicting real people and minors [2]. The complaint contrasts xAI with other AI companies that have prohibited their image generators from producing any sexually explicit content, even of adults [5].
Musk promoted Grok's ability to create "spicy" content and saw the generation of sexually explicit images as a business opportunity [5]. His public promotion of Grok's capability to produce sexual imagery and depict real people in skimpy outfits features heavily in the suit [2]. The lawsuit claims there is currently no way to permit the generation of explicit images of adults while completely blocking images of children, and that xAI knew Grok would be able to produce sexually explicit images of children but released it anyway [5].
Reports estimate that Grok users created 4.4 million "undressed" or "nudified" images over nine days, representing 41% of total image generation during that period [3]. Researchers from the Center for Countering Digital Hate estimated that Grok generated approximately three million sexualized images, of which about 23,000 depicted apparent children [1]. A researcher in January found that nearly 10 percent of about 800 Grok Imagine outputs reviewed appeared to include CSAM [1].
Source: Ars Technica
All three plaintiffs, two of whom are still minors, report experiencing severe emotional distress over the circulation of these images and what it could mean for their reputations and social lives [2]. Jane Doe 1 has suffered from anxiety, depression, and stress, with difficulty eating and sleeping and recurring nightmares [5]. Jane Doe 2 has begun self-isolating and avoiding her school campus, even dreading attending her own graduation [5].
The victims who know the perpetrator remain uncertain whether the Grok-generated CSAM was shared with classmates or distributed to others at their school [1]. One girl fears the scandal will affect her college admissions [1]. The teenagers worry that the images will live forever on the internet and fear stalking because their real first names and the name of their school are attached to the files [5].
The plaintiffs are asking for civil penalties under child pornography laws and regulations intended to protect exploited children and prevent corporate negligence [2]. The case, filed in the U.S. District Court for the Northern District of California, seeks class-action status for people in the United States who were "reasonably identifiable" in sexualized images or videos generated by Grok based on real images of themselves [4].
Attorneys for the plaintiffs argue that because third-party usage still relies on xAI's code and servers, the company should be held responsible [2]. Martin emphasized that the harm is so extensive that it is not enough for Musk to acknowledge only the images the girls can show Grok twisted into CSAM: "We intend to hold xAI accountable for every child they harmed in this way" [1].
The wave of "undressed" images stirred outrage globally, prompting the European Commission to launch an investigation while Malaysia and Indonesia banned X within their borders [3]. After the outcry, xAI said in January that it had blocked all users from editing images of "real people in revealing clothing" and from generating images of people in revealing clothing in "jurisdictions where it's illegal" [4]. However, rather than fix Grok initially, xAI limited access to the system to paying subscribers [1]. A similar, separate class-action lawsuit was filed by a South Carolina woman in late January [3].