India enforces three-hour deepfake takedown as new IT rules test global platforms

Reviewed by Nidhi Govil


India has introduced sweeping amendments to its IT Rules requiring social media platforms to remove deepfakes and AI-generated content within three hours, down from 36 hours. The new regulations, effective February 20, mandate labeling of all synthetic content and require platforms to deploy automated detection tools. With over 1 billion internet users, India's move could reshape global content moderation practices, though digital rights groups warn the compressed timelines may trigger automated censorship and eliminate meaningful human review.

India IT Rules Introduce Aggressive Deepfakes Compliance Timeline

India has ordered social media platforms to accelerate their policing of deepfakes and other AI-generated impersonations, implementing a three-hour takedown window that replaces the previous 36-hour deadline [1][3]. The changes, published as amendments to India's 2021 IT Rules, take effect on February 20 and bring deepfakes under a formal regulatory framework while mandating the labeling and content traceability of synthetic audio and visual content [1]. Certain urgent user complaints must be addressed within just two hours, creating what digital rights activist Nikhil Pahwa calls "automated censorship" [5].

Source: MediaNama

With over 1 billion internet users and a predominantly young population, India represents a critical market for platforms like Meta, Google, and X (formerly Twitter) [1]. Indian audiences on major platforms are vast: roughly 500 million YouTube users, 481 million Instagram users, 403 million Facebook users, and 213 million Snapchat users [2]. India's importance as a digital market amplifies the impact of these rules, making it likely that compliance measures adopted there will influence global product and moderation practices.

Source: ET

Mandatory Labeling and User Disclosures for AI Content

Under the amended India IT Rules, social media platforms that allow users to upload or share audio-visual content must require users to declare whether their material is synthetically generated [1]. Platforms must deploy automated tools to verify those claims and ensure that deepfakes are clearly labeled, with traceable provenance data embedded in the content [1][4]. Any AI-generated content that isn't blocked must include "permanent metadata or other appropriate technical provenance mechanisms," and platforms are barred from allowing these markers to be modified, hidden, or removed [2].
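The rules do not prescribe a specific metadata format. As a minimal sketch of what a machine-readable provenance marker could look like, the snippet below embeds and reads back a simple "synthetically generated" flag in a PNG's text metadata using Pillow; the field names are invented for illustration, and real compliance would more likely rely on signed, tamper-evident manifests such as C2PA, since plain text chunks can be trivially edited or removed.

```python
# Minimal sketch: embed and read back a machine-readable provenance marker.
# The field names ("ai_generated", "generator") are hypothetical and are not
# part of any mandated standard; C2PA uses signed manifests instead.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_as_synthetic(src_path: str, dst_path: str, generator: str) -> None:
    """Copy an image, adding simple 'synthetic content' metadata."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # declaration flag
    meta.add_text("generator", generator)   # tool that produced the content
    img.save(dst_path, pnginfo=meta)

def read_provenance(path: str) -> dict:
    """Return any text metadata stored in the PNG (empty if stripped)."""
    return dict(Image.open(path).text)

tag_as_synthetic("original.png", "labeled.png", "example-model-v1")
print(read_provenance("labeled.png"))
# {'ai_generated': 'true', 'generator': 'example-model-v1'}
```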

The requirement to label synthetic content aims to help users immediately identify AI-generated materials, such as adding verbal disclosures to AI audio or overlaying text on images identifying them as synthetic [2][4]. Certain categories of synthetic content, including deceptive impersonations, non-consensual intimate imagery, and material linked to serious crimes, are barred outright [1][4].
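The rules describe the goal, an immediately visible disclosure, rather than a specific rendering. As a rough sketch of the text-overlay approach, the snippet below stamps a visible "AI-generated content" notice onto an image with Pillow; the label wording, placement, and styling are assumptions, not requirements taken from the rules.

```python
# Sketch: overlay a visible "AI-generated" notice on an image (Pillow).
# Label text, position, and colours are illustrative assumptions.
from PIL import Image, ImageDraw, ImageFont

def add_visible_label(src_path: str, dst_path: str,
                      label: str = "AI-generated content") -> None:
    img = Image.open(src_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    # Draw a dark strip along the bottom edge, then the label text on top.
    strip_height = 28
    draw.rectangle([0, img.height - strip_height, img.width, img.height],
                   fill=(0, 0, 0))
    draw.text((10, img.height - strip_height + 6), label,
              fill=(255, 255, 255), font=font)
    img.save(dst_path)

add_visible_label("deepfake.png", "deepfake_labeled.png")
```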

Loss of Safe-Harbour Protections Raises Liability Stakes

Non-compliance with the new regulations, particularly in cases flagged by authorities or users, can expose companies to greater liability by putting their safe-harbour protections under Indian law at risk [1][4]. Rohit Kumar, founding partner at New Delhi-based policy consulting firm The Quantum Hub, noted that "the significantly compressed grievance timelines—such as the two- to three-hour takedown windows—will materially raise compliance burdens and merit close scrutiny, particularly given that non-compliance is linked to the loss of safe harbour protections" [1].

The rules lean heavily on automated systems to meet these obligations, expecting platforms to deploy technical tools to verify user disclosures, identify and label deepfakes, and prevent the creation or sharing of prohibited synthetic content [1]. According to transparency reports, more than 28,000 URLs or web links were blocked in 2024 following government requests [3].
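The amendments do not spell out how such automated checks would fit together in practice. Purely as a hypothetical sketch, the snippet below shows one way a platform might reconcile a user's disclosure with a detector score and decide whether to label, escalate, or block an upload; detect_synthetic_probability() and the thresholds are invented for illustration.

```python
# Hypothetical moderation sketch: reconcile a user's "is this synthetic?"
# declaration with an automated detector, then label, escalate, or block.
# detect_synthetic_probability() and the thresholds are invented here;
# the rules themselves do not prescribe any particular pipeline.
from dataclasses import dataclass

@dataclass
class Upload:
    content_id: str
    user_declared_synthetic: bool

def detect_synthetic_probability(content_id: str) -> float:
    """Placeholder for a real deepfake detector (returns a score in [0, 1])."""
    return 0.0

def moderate(upload: Upload, prohibited: bool = False) -> str:
    score = detect_synthetic_probability(upload.content_id)
    if prohibited:
        return "block"                   # e.g. non-consensual intimate imagery
    if upload.user_declared_synthetic or score >= 0.9:
        return "label_as_synthetic"      # visible label + provenance metadata
    if score >= 0.5:
        return "queue_for_human_review"  # uncertain: escalate, don't auto-remove
    return "publish"

print(moderate(Upload("abc123", user_declared_synthetic=True)))  # label_as_synthetic
```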

Digital Rights Concerns and Censorship Fears Mount

The Internet Freedom Foundation warned that the changes risk accelerating automated censorship: compressing takedown windows to as little as three hours leaves little scope for human review and pushes platforms toward over-removal [1][2]. "These impossibly short timelines eliminate any meaningful human review, forcing platforms toward automated over-removal," the group stated, warning that the changes could undermine free speech protections and due process [1][3].

Anushka Jain, a research associate at the Digital Futures Lab, acknowledged that the labeling requirement could improve transparency but cautioned that the three-hour deadline could push companies toward full automation [3]. "Companies are already struggling with the 36-hour deadline because the process involves human oversight. If it gets completely automated, there is a high risk that it will lead to censoring of content," she told the BBC [3]. Delhi-based technology analyst Prasanto K Roy described the new regime as "perhaps the most extreme content takedown regime in any democracy" [3].

Source: The Verge

C2PA Standards Face Critical Stress Test

The best methods currently available for detecting and labeling deepfakes online are about to face a critical stress test [2]. C2PA, also known as content credentials, is one of the leading systems for both detection and labeling; it works by attaching detailed metadata to images, videos, and audio at the point of creation or editing [2]. Meta, Google, Microsoft, and many other tech giants already use C2PA, yet it clearly isn't working as intended [2].

Interoperability is one of C2PA's biggest issues, and while India's new rules may encourage adoption, C2PA metadata is far from permanent: it is so easy to remove that some online platforms can unintentionally strip it during file uploads [2]. Social media platforms can't label anything that doesn't include provenance data to begin with, such as materials produced by open-source AI models or so-called "nudify apps" that refuse to embrace the voluntary C2PA standard [2]. Platforms like X that haven't implemented any AI labeling systems at all now have just nine days to comply [2].
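One way to see why provenance metadata is fragile: many upload pipelines re-encode images when resizing or converting them, and anything not explicitly carried over is lost. The sketch below reuses the simple text-chunk marker from the earlier snippet to demonstrate that failure mode; real C2PA manifests are stored differently, but they can be dropped in much the same way by a pipeline that does not preserve them.

```python
# Sketch: a naive re-encode drops embedded metadata.
# Plain PNG text chunks stand in for provenance data here; C2PA manifests
# are stored differently but can likewise be lost when an upload pipeline
# re-encodes a file without explicitly preserving them.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# 1. Create an image carrying a provenance marker.
meta = PngInfo()
meta.add_text("ai_generated", "true")
Image.new("RGB", (64, 64), (128, 128, 128)).save("tagged.png", pnginfo=meta)

# 2. Simulate a typical upload pipeline: open, resize, save again.
Image.open("tagged.png").resize((32, 32)).save("reuploaded.png")

print(dict(Image.open("tagged.png").text))      # {'ai_generated': 'true'}
print(dict(Image.open("reuploaded.png").text))  # {}  -- marker is gone
```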

Limited Consultation Process Sparks Industry Concerns

Two industry sources told TechCrunch that the amendments followed a limited consultation process, with only a narrow set of suggestions reflected in the final rules

1

. While the Indian government appears to have taken on board proposals to narrow the scope of information covered—focusing on AI-generated content rather than all online material—other recommendations were not adopted

1

. The scale of changes between the draft and final rules warranted another round of consultation to give companies clearer guidance on compliance expectations, the sources indicated

1

.

Meta, Google, Snap, X, and the Indian IT ministry did not respond to requests for comment [1][2]. The new regulations take effect on February 20, the final day of an international AI summit in New Delhi featuring leading global tech figures [5]. With widespread access to AI tools fueling a new wave of online hate built on photorealistic images and videos, the US-based Center for the Study of Organized Hate warned that the laws "may encourage proactive monitoring of content which may lead to collateral censorship" [5].
