India orders X to fix Grok within 72 hours as obscene AI content threatens safe harbor status


India's IT ministry has given Elon Musk's X just 72 hours to submit an action report after users exploited Grok to generate sexually explicit deepfake images of women. The order threatens X's safe harbor protections under Indian law, marking a critical test for platform accountability in the age of AI-generated content. The move follows widespread outrage over non-consensual images that remained accessible on the platform.

India Demands Immediate Action on Grok's Content Violations

India has issued a stern directive to Elon Musk's X, demanding immediate technical and procedural changes to its AI chatbot Grok after widespread misuse generated obscene AI content, including deepfake images of women [1]. The Ministry of Electronics and Information Technology (MeitY) gave X just 72 hours to submit a compliance report detailing steps taken to prevent the hosting or dissemination of content deemed obscene, pornographic, or otherwise prohibited under Indian law [1]. The order specifically targets content involving nudity, sexualization, and sexually explicit material, warning that non-compliance could jeopardize X's safe harbor protections (legal immunity from liability for user-generated content under the IT Act) [1].

Source: Digit

Mass Undressing Spree Exposes Platform Accountability Gaps

The crisis erupted after users discovered they could prompt Grok to manipulate images of real women, in particular creating non-consensual sexualized images showing them in bikinis or digitally undressed. Unlike competitors such as Google's Gemini or OpenAI's ChatGPT, which maintain strict content guardrails, Grok was positioned as edgier, with fewer restrictions. Particularly alarming was that users needed no jailbreaks or complex prompt-engineering tricks: the chatbot generated these deepfake images from simple, direct requests [5].

Source: AIM

Manipulated images of Bollywood actors and Indian public figures went viral as users employed prompts such as "undress," "change clothes," and "change to a bikini." Indian parliamentarian Priyanka Chaturvedi filed a formal complaint, while separate reports flagged instances where the image generator created sexualized images involving minors, which xAI acknowledged resulted from lapses in safeguards [1].

Intermediary Liability Framework Faces Critical Test

MeitY asserted that adherence to the IT Act and IT Rules is mandatory, not optional, putting India's intermediary liability framework to the test [2]. Under Section 79 of the IT Act, 2000, social media platforms are classified as neutral hosts that are not held responsible for content created by third parties (their safe harbor status) as long as they follow government rules. However, this protection is lost if illegal content is not taken down after being flagged through takedown requests. The ministry's letter to X's Chief Compliance Officer for India flagged that Grok was being exploited to create fake accounts that host, generate, publish, or share obscene content [2]. Legal experts note that the misuse violates multiple Indian laws, including the Indecent Representation of Women (Prohibition) Act, 1986, and Sections 66E and 67 of the Information Technology Act, which deal with privacy violations and obscene content online. The recently passed Digital Personal Data Protection (DPDP) Act, 2023 mandates consent for the use of personal data, including photographs and facial data, with penalties of up to Rs 250 crore for breaches.

AI Regulation Debate Intensifies as Deepfakes Proliferate

India, with approximately 22 to 27 million users on X, represents one of the world's biggest digital markets and has emerged as a critical test case for how far governments are willing to go in holding platforms responsible for AI-generated content [1]. Any tightening of enforcement in the country could have ripple effects for global technology companies operating across multiple jurisdictions [1].

Source: Digit

Cybersecurity experts warn that AI models are being released openly without adequate scrutiny of threat vectors, training data, or geographical origins. According to a Gartner report, 62% of organizations have faced a deepfake attack using social engineering or exploiting automated processes, while Entrust's 2026 Identity Fraud report claims deepfakes now account for one in five biometric fraud attempts [5]. The incident comes as Musk's X continues to challenge aspects of India's content regulation rules in court, arguing that federal government takedown powers risk overreach, even as the platform has complied with a majority of blocking directives [1].

Content Moderation and Platform Responsibility Under Scrutiny

X responded to the global outrage by hiding Grok's media tab and directing users to report violations under its stated prohibitions on non-consensual intimate imagery. However, Grok-generated images altered to make women appear to be wearing bikinis remained accessible on X at the time of publication [1]. Digital risk management experts argue that X's reactive step does not meet the higher bar required, calling for default-safe settings, strong consent checks, auditable controls, and consistent policy application. The order warned that non-compliance could result in strict legal consequences against the platform, its responsible officers, and users who violate the law, without any further notice [1]. In October 2025, MeitY proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, that would require social media platforms to label AI-generated content with visible markers [5]. French and Malaysian authorities have also launched inquiries into the Grok-generated deepfakes, signaling growing international concern over generative AI misuse [5]. X and xAI did not immediately respond to requests for comment on the Indian government's order [1].
