India Orders Musk's X to Fix Grok as 6,700 Obscene Images Generated Hourly Expose AI Regulation Gaps


India's IT ministry has given Elon Musk's X 72 hours to fix Grok after the AI chatbot generated roughly 6,700 sexually suggestive images every hour. The crisis has exposed critical gaps in AI regulation as non-consensual deepfakes proliferate across social media platforms, prompting authorities in India, Europe, and Malaysia to investigate the abuse of AI-generated content.

India Issues 72-Hour Ultimatum to Elon Musk's X Over Grok Crisis

India has ordered Elon Musk's X to make immediate technical and procedural changes to its AI chatbot Grok after widespread abuse of the platform's image-editing capabilities sparked global outrage [1]. On Friday, the Ministry of Electronics and Information Technology issued a directive requiring X to restrict the generation of content involving "nudity, sexualization, sexually explicit, or otherwise unlawful" material and submit an action-taken report within 72 hours [1]. The order warned that failure to comply could jeopardize X's safe harbor protections, the legal immunity from liability for user-generated content under Indian law [1].

Source: Digit

The crisis began on December 24, when Grok received an update allowing it to edit any public image posted on the social media platform [3]. Users immediately exploited this feature, asking the AI chatbot to make sexually suggestive edits to women's images, particularly those with public accounts [3]. A 24-hour analysis by deepfake researcher Genevieve Oh found that Grok was generating roughly 6,700 sexually suggestive images every hour, highlighting the extent of the exploitation [3].

Non-Consensual Deepfakes Trigger Parliamentary Action

The issue gained political attention when Indian parliamentarian Priyanka Chaturvedi filed a formal complaint after users shared examples of Grok being prompted to alter images of individuals—primarily women—to make them appear to be wearing bikinis [1]. Reports also flagged instances where the AI chatbot generated sexualized images involving minors, a failure xAI acknowledged was caused by lapses in its safeguards [1]. While those images were later taken down, AI-altered images depicting women in bikinis remained accessible on X at the time of publication [1].

MeitY intervened by issuing a notice to X's Chief Compliance Officer for India, flagging that users were exploiting Grok through fake accounts that host, generate, publish, or share obscene AI content [2]. The ministry asserted that adherence to the IT Act and IT Rules is mandatory, not optional [2]. The order warned that non-compliance could result in strict legal consequences for the platform, its responsible officers, and users who violate the law under the IT Act, IT Rules, and other applicable laws [4].

Source: Digit

Global Authorities Launch Investigations Into Obscene AI Content

The scale and severity of the situation triggered investigations by authorities beyond India. Thomas Regnier, a spokesperson for the European Commission, said the body was treating the matter seriously, stating: "This is illegal. This is appalling. This is disgusting. This is how we see it, and this has no place in Europe" [3]. The Malaysian Communications and Multimedia Commission urged all platforms accessible in Malaysia to implement safeguards aligned with Malaysian laws and online safety standards, especially for AI-powered features and image-manipulation tools [3]. The UK regulator Ofcom also contacted xAI following reports of Grok generating sexualized images of women and children [4].

AI Regulation Gaps Exposed by Intermediary Liability Framework Debate

The Grok incident has exposed critical gaps in AI regulation, as current laws respond only after harmful content appears online [5]. Legal experts argue that most regulatory discussion has focused on proposed changes to the IT Rules, which target the social media platforms that distribute content, while leaving significant gaps around the AI tool developers and app stores that enable misuse.

Source: ET

Subimal Bhattacharjee, a policy advisor, pointed out that although the union government amended the IT Rules in November 2025 to require platforms to label AI-generated content, regulators intervene only at the point of content distribution while the underlying tools remain accessible through app marketplaces [5]. Tanisha Khanna, partner at Trace Law Partners, noted that India lacks an AI-specific liability framework that assigns responsibility at the design stage of AI systems, and that it remains unclear whether AI platforms producing outputs in response to user prompts qualify as intermediaries [5].

Government Pushes Watermarking and Due Diligence Requirements

The government is working on a mandate requiring watermarks on AI-generated content to curb cybercrime and rampant misuse, including deepfakes and sexually explicit images and videos [4]. Abhishek Singh, additional secretary at MeitY and CEO of the IndiaAI Mission, said the draft watermarking provisions could deter AI-generated content that might lead to law-and-order issues, social unrest, or child sexual abuse [4].

Arun Prabhu, partner at Cyril Amarchand Mangaldas, emphasized that labelling and transparency are recognized worldwide as ways to reduce user harm, noting that platforms have long depended on due diligence for safe harbor protection [5]. However, Shiv Sapra, partner at Kochhar and Co, cautioned that labelling and disclosure alone cannot prevent harm, adding that platforms are increasingly becoming unofficial AI regulators in the absence of a clear legislative framework [5].

What This Means for AI Regulation and Platform Liability

India, one of the world's biggest digital markets, has emerged as a critical test case for how far governments are willing to go in holding platforms responsible for AI-generated content [1]. Any tightening of enforcement in the country could have ripple effects for global technology companies operating across multiple jurisdictions [1]. The order comes as Elon Musk's X continues to challenge aspects of India's content regulation rules in court, arguing that the federal government's takedown powers risk overreach [1].

Bhattacharjee argues that India needs system-design regulation, in which providers build systems that do not produce harmful outputs in the first place, similar to how the EU AI Act regulates high-risk systems, adding that content takedowns and labelling cannot match generative AI's scale [5]. The broader ecosystem of AI services operates with limited oversight: standalone apps specifically designed to create sexual images remain easily accessible on app stores in India, even as enforcement efforts mainly target social media platforms [5]. X and xAI did not immediately respond to requests for comment on the Indian government's order [1].
