3 Sources
[1]
India orders social media platforms to take down deepfakes faster | TechCrunch
India has ordered social media platforms to step up policing of deepfakes and other AI-generated impersonations, while sharply shortening the time they have to comply with takedown orders. It's a move that could reshape how global tech firms moderate content in one of the world's largest and fastest-growing markets for internet services.

The changes, published on Tuesday as amendments to India's 2021 IT Rules, bring deepfakes under a formal regulatory framework, mandating the labelling and traceability of synthetic audio and visual content, while also slashing compliance timelines for platforms, including a three-hour deadline for official takedown orders and a two-hour window for certain urgent user complaints.

India's importance as a digital market amplifies the impact of the new rules. With over a billion internet users and a predominantly young population, the South Asian nation is a critical market for platforms like Meta and YouTube, making it likely that compliance measures adopted in India will influence global product and moderation practices.

Under the amended rules, social media platforms that allow users to upload or share audio-visual content must require disclosures on whether material is synthetically generated, deploy tools to verify those claims, and ensure that deepfakes are clearly labelled and embedded with traceable provenance data. Certain categories of synthetic content -- including deceptive impersonations, non-consensual intimate imagery, and material linked to serious crimes -- are barred outright. Non-compliance, particularly in cases flagged by authorities or users, can expose companies to greater legal liability by jeopardising their safe-harbour protections under Indian law.

The rules lean heavily on automated systems to meet those obligations. Platforms are expected to deploy technical tools to verify user disclosures, identify and label deepfakes, and prevent the creation or sharing of prohibited synthetic content in the first place.

"The amended IT Rules mark a more calibrated approach to regulating AI-generated deepfakes," said Rohit Kumar, founding partner at New Delhi-based policy consulting firm The Quantum Hub. "The significantly compressed grievance timelines -- such as the two- to three-hour takedown windows -- will materially raise compliance burdens and merit close scrutiny, particularly given that non-compliance is linked to the loss of safe harbour protections."

Aprajita Rana, a partner at AZB & Partners, a leading Indian corporate law firm, said the rules now focus on AI-generated audio-visual content rather than all online information, while carving out exceptions for routine, cosmetic or efficiency-related uses of AI. However, she cautioned that the requirement for intermediaries to remove content within three hours once they become aware of it departs from established free-speech principles. "The law, however, continues to require intermediaries to remove content upon being aware or receiving actual knowledge, that too within three hours," Rana said, adding that the labelling requirements would apply across formats to curb the spread of child sexual abuse material and deceptive content.

New Delhi-based digital advocacy group Internet Freedom Foundation said the rules risk accelerating censorship by drastically compressing takedown timelines, leaving little scope for human review and pushing platforms toward automated over-removal.
In a statement posted on X, the group also raised concerns about the expansion of prohibited content categories and provisions that allow platforms to disclose the identities of users to private complainants without judicial oversight. "These impossibly short timelines eliminate any meaningful human review," the group said, warning that the changes could undermine free-speech protections and due process.

Two industry sources told TechCrunch that the amendments followed a limited consultation process, with only a narrow set of suggestions reflected in the final rules. While the Indian government appears to have taken on board proposals to narrow the scope of information covered -- focusing on AI-generated audio-visual content rather than all online material -- other recommendations were not adopted. The scale of changes between the draft and final rules warranted another round of consultation to give companies clearer guidance on compliance expectations, the sources said.

Government takedown powers have already been a point of contention in India. Social media platforms and civil-society groups have long criticized the breadth and opacity of content removal orders, and even Elon Musk's X challenged New Delhi in court over directives to block or remove posts, arguing that they amounted to overreach and lacked adequate safeguards. Meta, Google, Snap, X, and the Indian IT ministry did not respond to requests for comment.

The latest changes come just months after the Indian government, in October 2025, reduced the number of officials authorized to order content removals from the internet in response to a legal challenge by X over the scope and transparency of takedown powers.

The amended rules will come into effect on February 20, giving platforms little time to adjust compliance systems. The rollout coincides with India's hosting of the AI Impact Summit in New Delhi from February 16 to 20, which is expected to draw senior global technology executives and policymakers to the country.
[2]
Explained: As govt tightens AI content rules, what must social media platforms & others do
India's government has mandated that social media platforms clearly label all AI-generated or modified content. Amendments to the IT rules require intermediaries to use visible disclosures or embedded metadata for identification, with a strict three-hour window for takedown orders. These measures aim to regulate synthetically generated information, including deepfakes, following recent controversies. The central government issued guidelines today mandating social media platforms, among others, to clearly label all artificial intelligence-generated or modified content.

Which rules have changed?
The Ministry of Electronics and Information Technology (MeitY) made amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, on Tuesday. India's intermediary framework was first set out in 2011 and later replaced by the 2021 rules, which expanded due diligence obligations for major social media intermediaries and introduced regulation for digital news and curated audio-visual content. The latest directions build on these rules, bringing synthetically generated information (SGI), including deepfakes, into a stricter regulatory framework.

What has changed?
Per the new rules, intermediaries must ensure that AI-generated or modified content is labelled or identifiable, either through visible disclosures or embedded metadata. The rules permit the use of technical measures such as embedded metadata as identifiers, enabling flexible compliance while ensuring traceability. Further, the rules make such identifiers irreversible once they have been applied. Platforms must also warn users about the consequences of AI misuse at least once every three months. The government has additionally mandated the deployment of automated tools to detect and prevent the spread of illegal, sexually exploitative, or deceptive AI-generated content. Previously, intermediaries had a 36-hour window to comply with takedown orders. Under the stricter enforcement measures, platforms must now remove or disable access to AI-generated content within three hours of receiving an order from a court or the government.

What are intermediaries?
Entities that store or transmit data on behalf of end users are intermediaries. These include telecom service providers (such as Jio), online marketplaces (Amazon), search engines (Google), and social media platforms (Meta).

How will the rules be enforced?
The initial phase of enforcement focuses on large social media intermediaries with five million or more registered users in India. This means the rules will largely impact foreign players such as Meta and X (formerly Twitter).

Why now?
These measures come amid the recent Grok controversy, in which the AI chatbot generated non-consensual explicit deepfakes. The changes also reportedly follow the centre's recent consultations with industry bodies such as IAMAI and Nasscom. The rules will ensure that platforms inform users about SGI and even identify those involved in producing such content.
[3]
Social media platforms must detect, label AI-generated content under new rules
India has directed social media platforms to clearly label all AI-generated content and ensure that such synthetic material carries embedded identifiers, according to an official order. Platforms have also been barred from allowing the removal or suppression of AI labels or associated metadata once they have been applied, the order said.

To curb misuse, companies will be required to deploy automated tools to detect and prevent the circulation of illegal, sexually exploitative or deceptive AI-generated content. Platforms have also been asked to regularly warn users about the consequences of violating rules related to AI misuse. Such warnings must be issued at least once every three months, the government said.

In a stricter enforcement measure, the government has set a three-hour deadline for social media companies to take down AI-generated or deepfake content once it is flagged by the government or ordered by a court.
India has ordered social media platforms to label all AI-generated content and remove deepfakes within three hours of receiving takedown orders. The amendments to India's 2021 IT Rules introduce a formal regulatory framework for synthetic content, requiring embedded metadata, automated detection tools, and quarterly user warnings about AI misuse. With over a billion internet users, India's new compliance measures could reshape how global tech firms like Meta and YouTube moderate content worldwide.
India has introduced sweeping amendments to its 2021 IT Rules that fundamentally change how social media platforms must handle AI-generated content and deepfakes [1]. Published on Tuesday by the Ministry of Electronics and Information Technology (MeitY), the new regulations mandate that intermediaries clearly label AI-generated content using visible disclosures or embedded metadata, while slashing the compliance timeline for takedown orders from the previous 36-hour window to a three-hour deadline [2]. The changes bring synthetically generated information under a formal regulatory framework, requiring platforms to deploy automated detection tools and ensure traceability of all synthetic audio and visual content [1].
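Neither the amendments nor the sources above specify a technical format for these embedded identifiers; provenance standards such as C2PA are one plausible route. The Python sketch below is a loose illustration of the general idea only: binding a "synthetically generated" label to the exact media bytes so that stripping or altering the label is detectable. Every name in it (make_synthetic_label, SIGNING_KEY, the manifest fields) is hypothetical, not anything mandated by MeitY.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical platform signing key; a real system would use proper key
# management (e.g., an HSM), not a hard-coded secret.
SIGNING_KEY = b"example-signing-key"

def make_synthetic_label(media_bytes: bytes, generator: str) -> dict:
    """Build a provenance manifest binding a 'synthetically generated'
    label to the exact media bytes via a content hash."""
    manifest = {
        "synthetically_generated": True,
        "generator": generator,
        "labelled_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    # An HMAC over the manifest makes tampering or label-stripping
    # detectable, approximating the "irreversible identifier" requirement.
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_synthetic_label(media_bytes: bytes, manifest: dict) -> bool:
    """Check that the manifest is intact and matches the media bytes."""
    claimed_sig = manifest.get("signature", "")
    body = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(claimed_sig, expected)
        and body.get("content_sha256") == hashlib.sha256(media_bytes).hexdigest()
    )

if __name__ == "__main__":
    video = b"...raw media bytes..."
    label = make_synthetic_label(video, generator="example-image-model")
    print(verify_synthetic_label(video, label))      # True: label intact
    print(verify_synthetic_label(b"edited", label))  # False: hash mismatch
```

A signature alone does not survive re-encoding or screenshots, which is why the rules pair embedded identifiers with visible disclosures and automated detection rather than relying on any single mechanism.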
The amended IT Rules impose stringent requirements on social media platforms operating in the country. Platforms must now remove or disable access to flagged content within three hours of receiving government or court orders, with a two-hour window for certain urgent user complaints [1]. Once applied, the labels and embedded identifiers on AI-generated content cannot be removed or suppressed, making them irreversible. Platforms are also required to warn users about AI misuse at least once every three months [2]. The rules lean heavily on automated tools to verify user disclosures, identify and label deepfakes, and prevent the creation or sharing of prohibited synthetic content [1].
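For a concrete sense of what the compressed timelines mean operationally, here is a minimal, hypothetical compliance-clock sketch. The two windows (three hours for government or court orders, two hours for certain urgent user complaints) come from the rules as reported above; the TakedownOrder structure and all names are invented for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Compliance windows described in the amended rules (per the sources above);
# everything else in this sketch is a hypothetical illustration.
DEADLINES = {
    "government_or_court_order": timedelta(hours=3),
    "urgent_user_complaint": timedelta(hours=2),
}

@dataclass
class TakedownOrder:
    content_id: str
    kind: str              # key into DEADLINES
    received_at: datetime  # timezone-aware receipt time

    def deadline(self) -> datetime:
        """Latest moment the content may remain accessible."""
        return self.received_at + DEADLINES[self.kind]

    def time_remaining(self, now: datetime | None = None) -> timedelta:
        """Time left before the window closes (negative once breached)."""
        now = now or datetime.now(timezone.utc)
        return self.deadline() - now

if __name__ == "__main__":
    order = TakedownOrder(
        content_id="post-42",
        kind="government_or_court_order",
        received_at=datetime.now(timezone.utc),
    )
    remaining = order.time_remaining()
    print(f"Deadline: {order.deadline().isoformat()}")
    print(f"Remaining: {remaining}; breached: {remaining.total_seconds() < 0}")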
With over a billion internet users and a predominantly young population, India represents a critical market for platforms like Meta and YouTube, making compliance measures adopted there likely to influence global product and content moderation practices [1]. The initial phase of enforcement focuses on large intermediaries with five million or more registered users in India, which means the rules will largely impact foreign players [2]. Non-compliance can expose companies to greater legal liability by jeopardizing their safe-harbour protections under Indian law, particularly in cases flagged by authorities or users [1].
The new regulations have sparked concerns among legal experts and digital advocacy groups about potential censorship and erosion of free-speech principles. Rohit Kumar, founding partner at The Quantum Hub, noted that "the significantly compressed grievance timelines will materially raise compliance burdens and merit close scrutiny" [1]. Aprajita Rana, a partner at AZB & Partners, cautioned that requiring intermediaries to remove content within three hours of becoming aware of it departs from established free-speech principles [1]. The Internet Freedom Foundation warned that these impossibly short takedown timelines eliminate meaningful human review, pushing platforms toward automated over-removal and potentially undermining due process [1].

The measures follow recent controversies, including the Grok incident in which the AI chatbot generated non-consensual explicit deepfakes [2]. The rules bar certain categories of synthetic content outright, including deceptive AI-generated impersonations, non-consensual intimate imagery, and material linked to serious crimes [1]. Industry sources indicated that the amendments followed a limited consultation process, with only a narrow set of suggestions reflected in the final rules, and that the scale of changes warranted another round of consultation [1]. As platforms race to implement these requirements, observers will be watching whether the policing of deepfakes and accelerated takedown orders set a precedent for other jurisdictions grappling with similar challenges around synthetic media and content moderation.