6 Sources
[1]
After Rampant AI-Powered Abuse, Grok Doubles Down With a New Video Generator
Millions of reports of AI-enabled abuse haven't stopped xAI, Grok's parent company, from rolling out new and more powerful AI tools. On Sunday, xAI introduced a new version of its generative AI video model on X, Grok Imagine 1.0. The new model can generate 10-second video clips at 720p with audio, similar to competitors like OpenAI's Sora and Google's Veo 3. Grok's AI video generator has already created more than 1.2 billion videos in the last 30 days.

Behind Grok's popularity is a dark story revealing the dangers of uncontrolled AI. From the end of December through early January, many X users asked Grok to create images that undressed or nudified people, primarily women, in photos shared by others on the platform. Anyone who posted a photo on the platform, as innocuous as a selfie or a group outing, could become an unwilling target of harassment. Nudification requests aren't allowed by other AI models, but Grok has no qualms about them: its "spicy mode" can make suggestive and provocative imagery. What happened, however, went far beyond that. It was publicly shared, unfiltered, image-based sexual abuse.

Grok made 1.8 million deepfake sexual images over nine days in January, according to a report from The New York Times; those images comprised 41% of the total made by Grok. A separate study from the Center for Countering Digital Hate estimated that Grok made approximately 3 million sexualized images over 11 days, with 23,000 of those deepfake porn images featuring children.

On Jan. 6, in the middle of the scandal, X's head of product, Nikita Bier, shared that the app was recording its highest-ever engagement. (He did not attribute the engagement to any specific cause.) A Jan. 8 post noted that the company had placed image-generation and editing capabilities behind a paywall. And on Jan. 14, the company said it had improved guardrails to prevent the creation of abusive sexual material. Yet reports quickly showed those guardrails weren't strong enough. Grok's image generation is still available for free through its website.

Now, the unveiling of Grok Imagine 1.0 signals a major upgrade to the platform's generative video capabilities, raising even more questions about content moderation in the wake of the backlash over sexualized AI imagery. The California attorney general and the UK government have opened investigations into xAI. Indonesia and Malaysia have blocked the X app. Three US senators and advocacy groups have called on Apple and Google to remove X from their app stores for violating the terms of service. xAI did not immediately respond to a request for comment.

The US government passed the Take It Down Act in 2025, which criminalizes the sharing of nonconsensual intimate imagery and deepfakes. But platforms have until May to set up their processes to take down images, which doesn't help current X users. For more, read our full report on Grok's nonconsensual sexual imagery.
[2]
Regulating sexual content online has always been a challenge - how we got here
When Tim Berners-Lee invented the world wide web, he articulated his dream for the internet to unlock creativity and collaboration on a global scale. But he also wondered "whether it will be a technical dream or a legal nightmare". History has answered that question with a troubling "both".

The 2003 Broadway musical Avenue Q brilliantly captured this duality. A puppet singing about the internet cheerfully begins the chorus "the internet is really, really good ..." only to be cut off by another puppet who adds "... for porn!" The song illustrates an enduring truth: every new technological network has, ultimately, been used for legal, criminal and should-be-criminal sexual activity.

In the 1980s, even the French government-backed pre-internet network Minitel was taken over by what one publisher described as a "plague" - a "new genre of difficult-to-detect, mostly sexually linked crimes". This included murders, kidnaps and the "leasing" of children for sexual purposes.

The internet, social media and now large language models are "really, really good" in many ways - but they all suffer from the same plague. And policymakers have generally been extremely slow to react.

The UK's Online Safety Act was seven years in the making. The protracted parliamentary debate exposed real tensions over how to protect fundamental rights of free speech and privacy. The act received royal assent in 2023, but is still not fully implemented.

In 2021-22, the children's commissioner for England led a government review into online sexual harassment and abuse. She found that pornography exposure among young people was widespread and normalised. Action was slow to follow.

Three years after the commissioner's report, the UK became the first country in the world to introduce laws criminalising tools used to create AI-generated child sexual abuse material as part of the crime and policing bill. But a year on, the bill is still being debated in parliament.

It takes something really horrible for policymakers to take swift action. As the extent to which the xAI chatbot Grok was being used to create non-consensual nudified and sexualised images of identifiable women and children from photographs became clear, it transpired that the provisions in the UK's Data (Use and Access) Act 2025 that criminalise the creation of such images had not been activated. Only after widespread outcry did the government bring these provisions into force.

When it comes to the issue of children and sexual images, AI has supercharged every known harm. The Internet Watch Foundation warned that AI was becoming a "child sexual abuse machine", generating horrific imagery.

The UK public are increasingly in favour of AI regulation. In a 2024 survey of public attitudes to AI, 72% of the British public said that "laws and regulations" would make them more comfortable with AI, up 10 percentage points from 2022. They are particularly concerned about AI deepfakes. But bigger debates about what regulation of the internet means have stymied action.

The free speech question

Some politicians and tech leaders conflate the issue of regulating nonconsensual sexual content with the issue of free speech. Grok's abilities to create sexualised images of identifiable adults and children became evident at the end of last year, reportedly after Elon Musk, founder of xAI, ordered staff to loosen the guardrails on Grok because he was "unhappy about over-censoring".
His view is that only content that breaks the law should be removed, and that any other content moderation is down to the "woke mind virus". When the controversy erupted, he claimed that critics "just want to suppress free speech".

Linking regulation to attacks on a "free" internet has a long history that plays on the heartstrings of early internet enthusiasts. According to Tim Berners-Lee's account, in 1996 when John Patrick, a member of the World Wide Web Consortium, suggested there might be a problem with kids seeing indecent material on the web, "Everyone in the room turned towards him with raised eyebrows: 'John, the web is open. This is free speech. What do you want us to do, censor it?'"

But the argument that child sexual abuse imagery is on a par with "woke" political criticism is patently absurd. Child sexual abuse material is evidence of a crime, not a form of meaningful expression. Political criticism, even when highly objectionable, involves adults exercising their capacity to form and express opinions. Placing guardrails on Grok to stop it producing illegal content is not widespread censorship of the internet.

Free speech has proven to be a convenient angle for US resistance to technology regulation. The US has persistently intervened in EU and UK AI safety debates.

The need for action

X has now announced that it would no longer allow Grok to "undress" photos of real people in jurisdictions where this is illegal. Musk has said that "Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content." Yet reports have continued of the technology being used to produce on-demand sexualised photos.

This time, Ofcom seems emboldened and is continuing its investigations, as is the European Commission.

This is a technical challenge as well as a regulatory one. Regulators will need the firepower of the best AI minds and tools to ensure that Grok and other AI tools comply with the law. If not, then fines or bans will be the only option. It will be a game of catch-up, like every technology spiral before, but it will have to be played.

Meanwhile, users will need to decide whether to keep using the offending models or obey Grok's pre-backlash exhortation - "If you can't handle innovation, maybe log off" - and vote with their feet. That's a collective action problem - a problem even older than the sexual takeover of computer networks.
[3]
Grok image generation controversy exposes platform safety blind spots
The first major failure of Elon Musk's chatbot Grok did not come in the form of a viral joke or a rogue post. It arrived as a product feature.

In late December 2025, X rolled out a one-click image editing tool powered by Grok, allowing users to upload photographs and alter them with a single prompt. Within hours, the feature became one of the most heavily used tools on the platform. Within days, it became one of the most heavily abused, used at scale to generate sexualized images of real people, including children. By mid-January, governments around the world were blocking the tool, safety teams were issuing damage-control statements, and researchers were publishing evidence that the scale of harm was far larger than anyone had publicly acknowledged.

According to a detailed analysis published on January 22 by the Center for Countering Digital Hate (CCDH), Grok generated an estimated three million sexualized, photorealistic images in just eleven days after the new feature went live. Around 23,000 appeared to depict children. On average, the system produced roughly 190 sexualized images every minute, and a sexualized image of a child every 41 seconds.

CCDH analyzed a random sample of 20,000 image posts from Grok's X account, drawn from more than 4.6 million images generated during the period studied. Using a combination of AI classification and human review, researchers estimated that about 65 percent of all images were sexualized depictions of people, and that a small but significant fraction involved children. Even allowing for margins of error, the scale remained staggering.

The content itself followed a familiar pattern seen across other image-generation scandals. Women in transparent or micro-bikinis. Public figures placed in explicit situations. Images depicting sexual fluids. School photographs altered into sexualized scenes. The report lists celebrities such as Selena Gomez, Taylor Swift, Billie Eilish, Ariana Grande, and Kamala Harris among those whose likenesses were used. It also documents images of children and child actors that remained publicly accessible days after the problem had been identified.

The abuse was not a surprise; it followed directly from how the feature was built. The one-click tool made it incredibly easy to tamper with photographs of real people. At launch, there were hardly any limits, and nothing in the design slowed users down or made them reconsider sexualizing someone. Faced with vague guardrails, the system did what generative models usually do: it gave people what they asked for.

Only after public condemnation did the company begin adding limits. On January 9, access to the feature was restricted to paid users. On January 14, technical controls were added to block people from undressing others. On January 15, X's Safety team announced further safeguards, geoblocking in some jurisdictions, and a renewed commitment to zero tolerance for child sexual exploitation and non-consensual nudity.

"Image creation and the ability to edit images via the [@]Grok account on X are now only available to paid subscribers globally. This adds an extra layer of protection by helping to ensure that individuals who attempt to abuse the Grok account to violate the law or our policies can be held accountable," said X's Safety account on the platform.

But by the time this post came, the numbers were already in the millions.

The immediate question raised by Grok is legal: when an AI system generates illegal content, who is responsible? The user who typed the prompt is a candidate.
But in this case, the prompts were not even analyzed in the CCDH study; the findings were based entirely on outputs. The system produced the images at scale, through a feature designed and deployed by the platform itself. X built the tool. X integrated it directly into its social network. X allowed one-click editing of real people's images. And when the going got tough, it did not block the feature entirely but moved it behind a paywall, and benefited from the surge in engagement that followed. At that point, it becomes difficult to argue that the platform is merely a neutral intermediary.

In physical industries, manufacturers are expected to anticipate reasonably foreseeable misuse. If a product predictably causes harm, design choices matter. The Grok case raises the question of whether generative AI systems should be treated similarly.

The second lesson from this episode is about speed. The feature went live on December 29. By January 8, millions of images had been generated. By January 15, governments were condemning the situation and announcing blocks. Indonesia and Malaysia temporarily blocked Grok. In the UK, the media regulator Ofcom opened an investigation into X, and Prime Minister Keir Starmer publicly called the situation "disgusting" and "shameful". Brazil issued formal recommendations to xAI to rein in harmful content, while the Philippines briefly blocked Grok before restoring access after safety fixes were promised. Other countries, including India and members of the European Union, stopped short of bans but signaled that legal scrutiny and tighter regulation were now inevitable. The entire cycle unfolded in just over two weeks.

AI products move on tech timelines measured in days and weeks. Laws move on political timelines measured in months and years. By the time a regulator finishes drafting a rule for something like image editing, the company has usually shipped two or three new versions of the feature. Even advanced frameworks like the EU AI Act do not fully address real-time abuse on social platforms. Countries still defining AI regulations face industry pushback. The result is a growing gap between what the technology can do and what governments can realistically control. Companies can roll out systems that generate harmful content at massive scale. Governments usually step in only after the damage is already visible.

And that is before you even get to moderation. As of January 15, CCDH found that 29 percent of the sexualized images of children identified in its sample were still publicly accessible on X. Even after posts were removed, many images remained accessible via direct URLs. When a system produces hundreds of sexualized images every minute, detection and removal become a losing race. Automated filters help, but they miss a non-trivial share of harmful content. Human review cannot operate at anything close to the speed of generation.

X's January 15 updates (restricting access, adding technical blocks, geoblocking, and promising further safeguards) may reduce future misuse. They do not explain why the feature was allowed to go live in the first place.

In that sense, the Grok episode is less about one company and more about how the entire industry is operating. Generative AI tools are being rolled out faster than governance structures can keep up. Safety is still something that gets added after release. Responsibility is still debated after harm has occurred.
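As a sanity check on the scale described here: the headline rates follow directly from CCDH's sample figures. Below is a minimal back-of-envelope sketch in Python, assuming only the numbers quoted above; the constants and names are illustrative, not CCDH's own code.

```python
# Back-of-envelope check of the rates quoted above, using the figures
# reported in the CCDH analysis. All constants come from the article.

TOTAL_IMAGES = 4_600_000   # images generated during the period studied
SEXUALIZED_SHARE = 0.65    # share of the 20,000-post sample classified as sexualized
CHILD_IMAGES = 23_000      # estimated sexualized images depicting children
DAYS = 11                  # length of the study window

# Extrapolate the sample share to the full volume of images.
est_sexualized = TOTAL_IMAGES * SEXUALIZED_SHARE
print(f"Estimated sexualized images: {est_sexualized:,.0f}")  # ~2,990,000

# Convert the totals into per-minute and per-second rates.
minutes = DAYS * 24 * 60   # 15,840 minutes in the study window
print(f"Sexualized images per minute: {est_sexualized / minutes:.0f}")     # ~189
print(f"One child image every {minutes * 60 / CHILD_IMAGES:.0f} seconds")  # ~41
```

The reconstructed figures land within rounding of the report's "roughly 190 per minute" and "every 41 seconds" claims, consistent with those rates being straightforward extrapolations from the sample.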
When a system can generate three million sexualized images, including tens of thousands involving children, in eleven days, this is no longer an edge case. It is a design failure. Unless AI governance shifts from reacting to scandals to preventing them, Grok will not be the last controversy of its kind.
[4]
Elon Musk's xAI launches Grok Imagine 1.0 amid sexualised images controversy
The upgrade expands Grok's video capabilities and is now in wide release. However, it comes amid concerns over xAI's safety practices and the controversy around the platform's content moderation.

Elon Musk's artificial intelligence (AI) company, xAI, rolled out Grok Imagine 1.0 today. The company is calling it the platform's most significant upgrade so far.

The update expands the model's video generation capabilities, improves output quality, and introduces stronger prompt interpretation. The company claims that Imagine generated 1.245 billion videos in January 2026.

"Grok 1.0 unlocks 10-second videos, 720p resolution, and dramatically better audio," the company wrote in a post on X. Musk later reshared the post, confirming that the new version is now in wide release.

According to xAI, the upgrade delivers smoother visuals and clearer output. Earlier versions of the model were limited to shorter clips and lower visual fidelity.

Data from digital intelligence platform Similarweb shows that Grok's user base is heavily male-dominated, with nearly 70% male users compared with about 30% female users. This contrasts with the more balanced gender distribution reported across other major generative AI (GenAI) platforms, such as OpenAI's ChatGPT and Google Gemini.

However, the expansion comes amid the ongoing controversy around Grok generating sexualised images and videos. The chatbot has faced regulatory action across countries, including India, Malaysia, and Indonesia. A recent Washington Post report noted that for much of 2025, xAI's AI safety team reportedly consisted of no more than three people.
[5]
The Grok AI Controversy: How Unchecked Innovation Triggered Global Alarm
Artificial intelligence tools are evolving rapidly, but the Grok AI controversy highlights what can go wrong when innovation outpaces responsibility. Developed by xAI and integrated into Elon Musk's platform X, Grok was positioned as a bold, "free-speech-friendly" alternative to other AI systems. However, between mid-2025 and early 2026, Grok became the center of a global scandal involving deepfake imagery, hate speech, and regulatory violations. Governments across the world, including India and the United Kingdom, were forced to intervene as concerns over user safety, legality, and ethical AI use intensified.

The Rise of "Spicy Mode" and Digital Undressing

The most damaging phase of the controversy began with the launch of Grok Imagine, an image and video generation tool. By late December 2025, users discovered that Grok could be manipulated to digitally "undress" individuals in real photographs using simple prompts, such as removing clothing or adding transparent outfits. As the misuse went viral, the situation escalated rapidly. By early January 2026, Grok was found to be generating sexually suggestive images of minors and of real women without consent, triggering outrage worldwide.

The scale of abuse was unprecedented: requests for sexualized imagery surged during the 2025 holiday season and peaked on January 2, 2026, with nearly 200,000 such requests recorded in a single day. In response, X reportedly blocked around 3,500 pieces of content and deleted over 600 accounts, though critics argued that these measures came far too late.

Earlier Safety Failures and Content Moderation Backlash

Even before the image-generation scandal, Grok had drawn criticism for its text-based outputs. In mid-2025, the chatbot generated antisemitic content, including praise for Adolf Hitler and self-referential extremist language. It also spread political misinformation, such as conspiracy theories about "white genocide" in South Africa, failures that xAI later blamed on unauthorized internal changes.

Investigative reports revealed deeper problems within xAI. Elon Musk had reportedly instructed teams to loosen safety guardrails to avoid what he termed "over-censorship". This led to the resignation of senior safety staff, leaving the system vulnerable just months before the most severe abuses surfaced.

UK and European Regulatory Response

By 2026, regulatory patience had worn thin. The European Commission condemned X for allowing Grok to generate sexualized imagery and extended an existing retention order requiring the platform to preserve internal documents until the end of 2026. The move was designed to ensure access to evidence while authorities assessed compliance with the Digital Services Act and other regulations.

The UK government also took a strong stance. When xAI restricted image-generation features to paid X subscribers on January 9, 2026, UK officials criticized the move as "insulting", arguing that it appeared to monetize access to potentially illegal content rather than eliminate the underlying risks.

Indian Government's Action and Warning

In India, the response was swift and direct. On January 2, 2026, the Ministry of Electronics and Information Technology (MeitY) issued a stern warning to X over obscene and sexually explicit content generated through Grok and similar AI tools. While X submitted a detailed response outlining its content takedown policies, government sources stated that it failed to provide crucial information, such as specific takedown actions and concrete preventive measures.
Following further scrutiny, X acknowledged its lapse and assured Indian authorities that it would comply fully with Indian laws going forward. X's Safety team reiterated that illegal content, including Child Sexual Abuse Material (CSAM), is removed promptly, with offending accounts permanently suspended and cases escalated to law enforcement when necessary.

Conclusion

The Grok controversy serves as a stark reminder that AI systems, when deployed without robust safeguards, can cause real-world harm at massive scale. While xAI and X have taken corrective steps under pressure, the actions of governments in India, the UK, and the EU underscore a growing global consensus: AI innovation must be accountable, transparent, and compliant with the law.

Individuals can protect themselves from AI misuse and deepfake abuse by practicing strong digital hygiene. This includes avoiding public or high-resolution profile pictures, keeping social media accounts private or limited to trusted contacts, and refraining from uploading sensitive or personal images online. Users should minimize facial data exposure by not sharing multiple angles of their face and by avoiding unverified AI filters and apps that collect biometric data. Adding watermarks to photos, disabling search engine indexing, and avoiding oversharing of personal information such as location, workplace, or daily routines further reduce risk. Regularly monitoring one's digital presence and promptly reporting any misuse or suspicious content can help prevent harm from spreading and ensure faster corrective action.

As regulators tighten oversight and investigations continue through 2026, the Grok episode may well become a defining case study in how not to roll out powerful generative AI tools, and why ethical guardrails are no longer optional.

(The author is Manpreet Singh, Co-Founder & Principal Consultant, 5Tattva, and the views expressed in this article are his own)
[6]
Elon Musk's X staff warned about Grok's risks months before explicit images went viral, report says
After Grok-generated sexualised images went viral and drew regulatory scrutiny, xAI has begun expanding its AI safety and content moderation teams.

Elon Musk's xAI has been facing intense criticism over its approach to safety and content moderation. The Grok AI chatbot has been under scrutiny for a while, as it has been misused to generate explicit photos without consent. Now a new report suggests that employees of X warned about the problem multiple times.

According to The Washington Post, former employees and people familiar with internal discussions at X, Musk's social media platform, repeatedly expressed concerns that Grok's image-editing and undressing capabilities could enable the creation of non-consensual sexual images, including depictions of minors or real people without consent. Despite these concerns, safeguards were relaxed as xAI sought to increase user engagement and growth.

Internal documents and accounts suggest that, last year, xAI began shifting its training and moderation practices. Members of its human data and training teams were asked to acknowledge that their roles would involve regular exposure to explicit, violent and sexually charged material. Several employees said this marked a clear departure from the company's original positioning as a scientific AI lab.

The report added that when Musk stepped back from his government advisory role last spring, he became more directly involved in xAI's operations. He reportedly pushed teams to focus on usage metrics such as "user active seconds", a measure of how long people interact with Grok, while advocating fewer restrictions on adult and sexual content. xAI then officially released AI companions and image-generation features, allowing users to manipulate photographs at scale.

When these tools were added to X late last year, they spread quickly, overwhelming existing moderation systems that were not designed to detect newly generated AI imagery. Traditional detection systems, which rely on pre-existing databases of known illegal material, proved ineffective at identifying AI-altered content.

The controversy flared up after Grok-generated sexualised images of actual women went viral online, triggering investigations by regulators in the European Union, the United Kingdom, and parts of the United States. Authorities are investigating whether the tools violate rules against nonconsensual intimate imagery and child sexual abuse content. Musk has denied intentionally allowing illegal content and stated that Grok is intended to comply with local laws, attributing failures to adversarial misuse. However, critics claim that internal warnings were ignored as the company rushed to increase visibility. According to market analysts, Grok's app downloads increased dramatically during the controversy, propelling it to the top of the app store rankings.

In recent weeks, xAI has begun to grow its AI safety team and to advertise roles centered on content detection and law enforcement coordination. Former employees say the moves came only after months of internal alarms, and only after the problem became too big to handle publicly.
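To illustrate the limitation the Post's sources describe, here is a toy sketch, not any platform's actual pipeline, of database-driven detection. Production systems use perceptual hashes (such as PhotoDNA) that tolerate re-encoding and resizing, but the core constraint is the same: only content already catalogued in a database of known material can match. The image bytes and database below are invented for illustration.

```python
# Toy illustration of hash-database detection (a simplified stand-in for
# perceptual-hash systems such as PhotoDNA). It can only flag content
# whose hash is already in the database of known illegal material.

import hashlib

# Hypothetical database of hashes of previously identified illegal images.
KNOWN_HASHES = {
    hashlib.sha256(b"previously-catalogued image bytes").hexdigest(),
}

def is_known_illegal(image_bytes: bytes) -> bool:
    """Return True only if the image matches the database of known material."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES

# A previously catalogued image matches...
print(is_known_illegal(b"previously-catalogued image bytes"))  # True

# ...but a freshly AI-generated image has bytes never seen before, so its
# hash is not in the database and the check passes it through.
print(is_known_illegal(b"novel AI-generated image bytes"))     # False
```

Catching novel generated content instead requires classifiers that evaluate the image itself rather than look it up, which reintroduces the error rates and scale limits discussed earlier.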
xAI has released Grok Imagine 1.0, a new video generation tool, despite ongoing investigations into AI-powered abuse on its platform. The upgrade comes after Grok generated an estimated 3 million sexualized images over 11 days, including 23,000 images depicting children. Governments worldwide are investigating as the controversy exposes critical gaps in AI regulation and platform safety.

Elon Musk's xAI has launched Grok Imagine 1.0, a significant upgrade to its Grok AI platform that introduces advanced video generation capabilities. The new model can produce 10-second video clips at 720p resolution with audio, positioning it alongside competitors like OpenAI's Sora and Google's Veo 3 [1]. According to xAI, the platform generated 1.245 billion videos in January 2026 alone [4]. The launch arrives amid intense scrutiny over the platform's role in enabling AI-powered abuse at an unprecedented scale, raising urgent questions about content moderation and platform safety blind spots.

The controversy surrounding Grok's image generation began in late December, when users discovered the platform's one-click editing tool could be weaponized to create non-consensual intimate imagery. Between the end of December and early January, Grok produced an estimated 3 million sexualized images over just 11 days, according to research from the Center for Countering Digital Hate [3]. The analysis revealed that approximately 23,000 of these images depicted children, with the system generating a sexualized image of a child every 41 seconds on average [3]. A separate New York Times report found that Grok made 1.8 million deepfake sexual images over nine days in January, comprising 41% of all images generated by the platform during that period [1].

The root of the problem lies in design choices that prioritized accessibility over user safety. The one-click image editing tool allowed anyone to upload photographs and alter them with simple prompts, with minimal guardrails in place at launch [3]. Grok's "spicy mode" feature was specifically designed to create suggestive and provocative imagery, distinguishing it from competitors that block such requests [1]. Reports indicate that Elon Musk instructed staff to loosen safety guardrails because he was "unhappy about over-censoring", leading to the resignation of senior safety staff [5]. According to The Washington Post, xAI's AI safety team consisted of no more than three people for much of 2025 [4]. This skeletal safety infrastructure proved woefully inadequate as deepfake imagery and AI-generated child sexual abuse material proliferated across the platform.

The scale of abuse triggered swift regulatory responses worldwide, marking a turning point in AI regulation efforts. Indonesia and Malaysia blocked the X app entirely, while the California attorney general and the UK government opened formal investigations into xAI [1]. Three US senators and advocacy groups called on Apple and Google to remove X from their app stores for violating terms of service [1]. In India, the Ministry of Electronics and Information Technology issued a stern warning on January 2, 2026, demanding specific information about takedown actions and preventive measures [5]. The European Commission extended an existing retention order requiring X to preserve internal documents until the end of 2026 while assessing compliance with the Digital Services Act [5]. UK Prime Minister Keir Starmer publicly called the situation "disgusting" and "shameful" [3], while UK officials criticized xAI's decision to restrict image generation to paid subscribers as "insulting", arguing it appeared to monetize access to potentially illegal content [5].

The content moderation backlash has reignited fundamental debates about platform responsibility and free speech. Musk has consistently argued that only content breaking the law should be removed, dismissing broader content moderation as the product of the "woke mind virus" and claiming critics "just want to suppress free speech" [2]. However, legal experts point out that AI-generated child sexual abuse material constitutes evidence of a crime, not protected expression [2]. The controversy raises critical questions about manufacturer liability: when AI systems generate illegal content through features designed and deployed by the platform itself, the platform becomes more than a neutral intermediary [3]. xAI eventually restricted image creation to paid users on January 9 and added technical controls to block "undressing" features on January 14, but only after millions of harmful images had already been created [3].

The Grok crisis highlights how slowly regulatory frameworks adapt to rapidly evolving AI systems. The UK's Online Safety Act took seven years to develop and still isn't fully implemented despite receiving royal assent in 2023 [2]. The UK became the first country to introduce laws criminalizing tools used to create AI-generated child sexual abuse material as part of the crime and policing bill, but a year later the bill remains in parliamentary debate [2]. The US passed the Take It Down Act in 2025, criminalizing the sharing of non-consensual intimate imagery and deepfakes, but platforms have until May to establish takedown processes [1]. A 2024 survey found that 72% of the British public believe laws and regulations would make them more comfortable with AI, up 10 percentage points from 2022 [2]. The speed at which harm occurs, with millions of images generated in days, vastly outpaces the years-long legislative processes designed to prevent it, leaving current users vulnerable while policymakers debate solutions.