12 Sources
[1]
India orders social media platforms to take down deepfakes faster | TechCrunch
India has ordered social media platforms to step up policing of deepfakes and other AI-generated impersonations, while sharply shortening the time they have to comply with takedown orders. It's a move that could reshape how global tech firms moderate content in one of the world's largest and fastest-growing markets for internet services.

The changes, published on Tuesday as amendments to India's 2021 IT Rules, bring deepfakes under a formal regulatory framework, mandating the labelling and traceability of synthetic audio and visual content, while also slashing compliance timelines for platforms, including a three-hour deadline for official takedown orders and a two-hour window for certain urgent user complaints.

India's importance as a digital market amplifies the impact of the new rules. With over a billion internet users and a predominantly young population, the South Asian nation is a critical market for platforms like Meta and YouTube, making it likely that compliance measures adopted in India will influence global product and moderation practices.

Under the amended rules, social media platforms that allow users to upload or share audio-visual content must require disclosures on whether material is synthetically generated, deploy tools to verify those claims, and ensure that deepfakes are clearly labelled and embedded with traceable provenance data. Certain categories of synthetic content -- including deceptive impersonations, non-consensual intimate imagery, and material linked to serious crimes -- are barred outright. Non-compliance, particularly in cases flagged by authorities or users, can expose companies to greater legal liability by jeopardising their safe-harbour protections under Indian law. The rules lean heavily on automated systems to meet those obligations.
Platforms are expected to deploy technical tools to verify user disclosures, identify and label deepfakes, and prevent the creation or sharing of prohibited synthetic content in the first place.

"The amended IT Rules mark a more calibrated approach to regulating AI-generated deepfakes," said Rohit Kumar, founding partner at New Delhi-based policy consulting firm The Quantum Hub. "The significantly compressed grievance timelines -- such as the two- to three-hour takedown windows -- will materially raise compliance burdens and merit close scrutiny, particularly given that non-compliance is linked to the loss of safe harbour protections."

Aprajita Rana, a partner at AZB & Partners, a leading Indian corporate law firm, said the rules now focus on AI-generated audio-visual content rather than all online information, while carving out exceptions for routine, cosmetic or efficiency-related uses of AI. However, she cautioned that the requirement for intermediaries to remove content within three hours of becoming aware of it departs from established free-speech principles. "The law, however, continues to require intermediaries to remove content upon being aware or receiving actual knowledge, that too within three hours," Rana said, adding that the labelling requirements would apply across formats to curb the spread of child sexual abuse material and deceptive content.

New Delhi-based digital advocacy group Internet Freedom Foundation said the rules risk accelerating censorship by drastically compressing takedown timelines, leaving little scope for human review and pushing platforms toward automated over-removal. In a statement posted on X, the group also raised concerns about the expansion of prohibited content categories and provisions that allow platforms to disclose the identities of users to private complainants without judicial oversight.
"These impossibly short timelines eliminate any meaningful human review," the group said, warning that the changes could undermine free-speech protections and due process.

Two industry sources told TechCrunch that the amendments followed a limited consultation process, with only a narrow set of suggestions reflected in the final rules. While the Indian government appears to have taken on board proposals to narrow the scope of information covered -- focusing on AI-generated audio-visual content rather than all online material -- other recommendations were not adopted. The scale of changes between the draft and final rules warranted another round of consultation to give companies clearer guidance on compliance expectations, the sources said.

Government takedown powers have already been a point of contention in India. Social media platforms and civil-society groups have long criticized the breadth and opacity of content removal orders, and even Elon Musk's X challenged New Delhi in court over directives to block or remove posts, arguing that they amounted to overreach and lacked adequate safeguards. Meta, Google, Snap, X, and the Indian IT ministry did not respond to requests for comment.

The latest changes come just months after the Indian government, in October 2025, reduced the number of officials authorized to order content removals from the internet in response to a legal challenge by X over the scope and transparency of takedown powers. The amended rules will come into effect on February 20, giving platforms little time to adjust compliance systems. The rollout coincides with India's hosting of the AI Impact Summit in New Delhi from February 16 to 20, which is expected to draw senior global technology executives and policymakers to the country.
[2]
Instagram and X have an impossible deepfake detection deadline
The best methods we currently have for detecting and labeling deepfakes online are about to get a stress test. India announced mandates on Tuesday that require social media platforms to remove illegal AI-generated material much faster and ensure that all synthetic content is clearly labeled. Tech companies have said for years that they wanted to achieve this on their own, and now they have mere days before they're legally obligated to implement it. The rules take effect on February 20th.

India has 1 billion internet users who skew young, making it one of the most critical growth markets for social platforms. So any obligations there could impact deepfake moderation efforts across the world -- either by advancing detection to the point where it actually works, or by forcing tech companies to acknowledge that new solutions are needed.

Under India's amended Information Technology Rules, digital platforms will be required to deploy "reasonable and appropriate technical measures" to prevent their users from making or sharing illegal synthetically generated audio and visual content, aka deepfakes. Any such generative AI content that isn't blocked must be embedded with "permanent metadata or other appropriate technical provenance mechanisms." Specific obligations are also called out for social media platforms, such as requiring users to disclose AI-generated or edited materials, deploying tools that verify those disclosures, and prominently labeling AI content in a way that allows people to immediately identify it as synthetic, such as adding verbal disclosures to AI audio.

That's easier said than done, given how woefully underdeveloped AI detection and labeling systems currently are. C2PA (also known as Content Credentials) is one of the best systems we currently have for both. It works by attaching detailed metadata to images, videos, and audio at the point of creation or editing, invisibly describing how the content was made or altered.
But here's the thing: Meta, Google, Microsoft, and many other tech giants are already using C2PA, and it clearly isn't working. Some platforms, like Facebook, Instagram, YouTube, and LinkedIn, add labels to content flagged by the C2PA system, but those labels are difficult to spot, and some synthetic content that should carry that metadata is slipping through the cracks. Social media platforms can't label anything that doesn't include provenance metadata to begin with, such as materials produced by open-source AI models or so-called "nudify apps" that refuse to embrace the voluntary C2PA standard.

India has over 500 million social media users, according to DataReportal research shared by Reuters. Broken down, that's 500 million YouTube users, 481 million Instagram users, 403 million Facebook users, and 213 million Snapchat users. It's also estimated to be X's third-largest market.

Interoperability is one of C2PA's biggest issues, and while India's new rules may encourage adoption, C2PA metadata is far from permanent. It's so easy to remove that some online platforms unintentionally strip it during file uploads. The new rules order platforms not to allow metadata or labels to be modified, hidden, or removed, but there isn't much time to figure out how to comply. Social media platforms like X that haven't implemented any AI labeling systems at all now have just nine days to do so. Meta, Google, and X did not respond to our request for comment. Adobe, the driving force behind the C2PA standard, also did not respond.

Adding to the pressure in India is a mandate that social media companies remove unlawful materials within three hours of their being discovered or reported, replacing the existing 36-hour deadline. That also applies to deepfakes and other harmful AI content. The Internet Freedom Foundation (IFF) warns that these changes risk forcing platforms into becoming "rapid fire censors."
"These impossibly short timelines eliminate any meaningful human review, forcing platforms toward automated over-removal," the IFF said in a statement. Given that the amendments specify provenance mechanisms to be implemented "to the extent technically feasible," the officials behind India's order are probably aware that current AI detection and labeling technology isn't ready yet. The organizations backing C2PA have long sworn that the system will work if enough people are using it, so this is their chance to prove it.
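The fragility the article describes can be illustrated with a toy model. This is not the real C2PA format (real C2PA embeds cryptographically signed manifests inside the file itself); it is a minimal sketch showing why provenance carried as metadata only survives if every handler in the pipeline explicitly preserves it. All names here are illustrative assumptions:

```python
import json

def create_asset(media: bytes, tool: str) -> dict:
    """Toy asset: media bytes plus a C2PA-style provenance manifest.
    Illustrative only -- not the actual C2PA data model."""
    return {
        "media": media,
        "manifest": json.dumps({"generator": tool, "synthetic": True}),
    }

def naive_reupload(asset: dict) -> dict:
    """Simulates a pipeline step that re-encodes the media bytes but
    forgets to carry the metadata forward -- the stripping failure
    mode the article describes."""
    return {"media": asset["media"], "manifest": None}

def is_labelable(asset: dict) -> bool:
    """A platform can only auto-label content whose manifest survived."""
    return asset.get("manifest") is not None

original = create_asset(b"...pixels...", tool="some-image-model")
print(is_labelable(original))                  # True: provenance intact
print(is_labelable(naive_reupload(original)))  # False: provenance stripped
```

The asymmetry is the point: detection can only confirm provenance that is present, so a single careless re-encode (or a generator that never attaches a manifest at all) leaves the platform with nothing to verify.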
[3]
India reduces takedown window to three hours for YouTube, Meta, X and others
India has introduced new rules that make it mandatory for social media companies to remove unlawful material within three hours of being notified, a sharp tightening of the existing 36-hour deadline. The amended guidelines will take effect from 20 February and apply to major platforms including Meta, YouTube and X. They will also apply to AI-generated content.

The government did not provide a reason for reducing the takedown window. But critics worry the move is part of a broader tightening of oversight of online content and could lead to censorship in the world's largest democracy, which has more than a billion internet users.

In recent years, Indian authorities have used the existing Information Technology rules to order social media platforms to remove content deemed illegal under laws dealing with national security and public order. Experts say the rules give authorities wide-ranging power over social media content. According to transparency reports, more than 28,000 URLs or web links were blocked in 2024 following government requests.

The BBC has contacted the ministry of electronics and information technology for comment on the latest changes. Meta declined to comment on the amendments. The BBC has also approached X and Google, which owns YouTube, for a response.

The amendments also introduce new rules for AI-generated content. For the first time, the law defines AI-generated material, including audio and video that has been created or altered to look real, such as deepfakes. Ordinary editing, accessibility features and genuine educational or design work are excluded. The rules mandate that platforms that allow users to create or share such material must clearly label it. Where possible, they must also add permanent markers to help trace where it came from. Companies will not be allowed to remove these labels once they are added.
They must also use automated tools to detect and prevent illegal AI content, including deceptive or non-consensual material, false documents, child sexual abuse material, explosives-related content and impersonation.

Digital rights groups and technology experts have raised concerns about the feasibility and implications of the new rules. The Internet Freedom Foundation said the compressed timeline would transform platforms into "rapid fire censors". "These impossibly short timelines eliminate any meaningful human review, forcing platforms toward automated over-removal," the group said in a statement.

Anushka Jain, a research associate at the Digital Futures Lab, welcomed the labelling requirement, saying it could improve transparency. However, she warned that the three-hour deadline could push companies towards full automation. "Companies are already struggling with the 36-hour deadline because the process involves human oversight. If it gets completely automated, there is a high risk that it will lead to censoring of content," she told the BBC.

Delhi-based technology analyst Prasanto K Roy described the new regime as "perhaps the most extreme takedown regime in any democracy". He said compliance would be "nearly impossible" without extensive automation and minimal human oversight, adding that the tight timeframe left little room for platforms to assess whether a request was legally appropriate. On AI labelling, Roy said the intention was positive but cautioned that reliable and tamper-proof labelling technologies were still developing. The BBC has reached out to the Indian government for a response to these concerns.
[4]
India Orders Social Media Platforms to Remove Deepfakes Within Three Hours
India has introduced new rules requiring social media companies to remove deepfakes and other illegal AI-generated content within three hours of receiving a takedown order -- a major shift in how platforms must operate in one of the world's largest online markets. On Tuesday, India announced mandates that require social media platforms to remove illegal AI-generated content much faster and ensure that all synthetic content is clearly labeled. According to a report by TechCrunch, these requirements become legally binding on February 20. The legislation could significantly affect how tech companies moderate content in India -- which has nearly 1.02 billion internet users and about 500 million unique social media users. Social media platforms will be expected to deploy technical tools to detect and label deepfakes, verify user disclosures, and prevent the creation or distribution of banned synthetic content. TechCrunch reports that the new mandate is part of several changes to India's 2021 Information Technology rules. The amendments bring deepfakes under a formal regulatory framework and require labeling and traceability for synthetic audio and visual content. They also sharply reduce the time platforms have to comply with takedown orders. Under the updated rules, social media companies must comply with official takedown orders within three hours. Certain urgent user complaints must be addressed within two hours. This replaces the previous 36-hour deadline for removing unlawful material, according to The Verge. The shorter timeline applies to deepfakes and other harmful AI-generated content. India's amended Information Technology Rules require digital platforms to deploy "reasonable and appropriate technical measures" to prevent users from creating or sharing illegal synthetically generated audio and visual content, commonly known as deepfakes. If such content is not blocked, it must include "permanent metadata or other appropriate technical provenance mechanisms." 
The rules also set out specific obligations for social media companies. Users must disclose when content has been generated or edited using AI. Platforms are required to use tools to verify those disclosures and clearly label AI-generated material so that users can immediately recognize it as synthetic; for example, AI-generated images may need overlaid text identifying them as fake.

Certain types of synthetic content are prohibited outright, including deceptive impersonations, non-consensual intimate imagery, and material linked to serious crimes. Companies that fail to comply -- particularly when content has been flagged by authorities or users -- risk losing safe-harbor protections under Indian law, which could increase their legal liability.
[5]
India's tougher AI social media rules spark censorship fears
New Delhi (AFP) - India has tightened rules governing the use of artificial intelligence on social media to combat a flood of disinformation, prompting warnings of censorship and an erosion of digital freedoms. The new regulations are set to take effect on February 20 -- the final day of an international AI summit in New Delhi featuring leading global tech figures -- and will sharply reduce the time platforms have to remove content deemed problematic.

With more than a billion internet users, India is grappling with AI-generated disinformation swamping social media. Companies such as Instagram, Facebook and X will have three hours, down from 36, to comply with government takedown orders, in a bid to stop damaging posts from spreading rapidly. Stricter regulation in the world's most populous country ups the pressure on social media giants facing growing public anxiety and regulatory scrutiny globally over the misuse of AI, including the spread of misinformation and sexualised imagery of children.

But rights groups say tougher oversight of AI, if applied too broadly, risks eroding freedom of speech. India under Prime Minister Narendra Modi has already faced accusations from rights groups of curbs on freedom of expression targeting activists and opponents, which his government denies. The country has also slipped in global press freedom rankings during his tenure. The Internet Freedom Foundation (IFF), a digital-rights group, said the compressed timeframe of the takedown notices would force platforms to become "rapid-fire censors".

'Automated censorship'

Last year, India's government launched an online portal called Sahyog -- meaning "cooperate" in Hindi -- to automate the process of sending takedown notices to platforms including X and Facebook. Platforms must now clearly and permanently label synthetic or AI-manipulated media with markings that cannot be removed or suppressed.
Under the new rules, problematic content could disappear almost immediately after a government notification. The timelines are "so tight that meaningful human review becomes structurally impossible at scale", said IFF chief Apar Gupta. The system, he added, shifts control "decisively away from users", while "grievance processes and appeals operate on slower clocks". Most internet users were not informed of authorities' orders to delete their content. "It is automated censorship," digital rights activist Nikhil Pahwa told AFP.

The rules also require platforms to deploy automated tools to prevent the spread of illegal content, including forged documents and sexually abusive material. "Unique identifiers are un-enforceable," Pahwa added. "It's impossible to do for infinite synthetic content being generated." Gupta likewise questioned the effectiveness of labels. "Metadata is routinely stripped when content is edited, compressed, screen-recorded, or cross-posted," he said. "Detection is error-prone."

'Online hate'

The US-based Center for the Study of Organized Hate (CSOH), in a report with the IFF, warned the rules "may encourage proactive monitoring of content which may lead to collateral censorship", with platforms likely to err on the side of caution. The regulations define synthetic content as information that "appears to be real" or is "likely to be perceived as indistinguishable from a natural person or real-world event."

Gupta said the changes shift responsibility "upstream" from users to the platforms themselves. "Users must declare if content is synthetic, and platforms must verify and label before publication," he said. But he warned that the parameters for takedown are broad and open to interpretation. "Satire, parody, and political commentary using realistic synthetic media can get swept in, especially under risk-averse enforcement," Gupta said.
At the same time, widespread access to AI tools has "enabled a new wave of online hate" facilitated by "photorealistic images, videos, and caricatures that reinforce and reproduce harmful stereotypes", the CSOH report added. In the most recent headline-grabbing case, Elon Musk's AI chatbot Grok sparked outrage in January when it was used to make millions of sexualised images of women and children by allowing users to alter online images of real people. "The government had to act because platforms are not behaving responsibly," Pahwa said. "But the rules are without thought."
[6]
MeitY Amends IT Rules to Regulate AI Content and Deepfakes
MeitY has also removed the 10 percent size requirement for AI labels.

The Ministry of Electronics and Information Technology (MeitY) notified the amendments to the IT Rules, 2021, on Tuesday. The fresh rules focus heavily on artificial intelligence (AI)-generated content and deepfakes, bringing more flexible labelling guidelines for AI content and stricter takedown timelines for inappropriate media and deepfakes. The new rules also more clearly define deepfakes and outline consequences for users who violate them. The framework will come into effect starting February 20, which coincides with the last day of the inaugural AI Impact Summit, set to be hosted by India.

MeitY Introduces New Rules for AI Content Regulation

In a new notification (via Live Law), the ministry announced amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. As mentioned above, the rules will come into effect starting February 20, after a 10-day compliance window. The rules are aimed mainly at social media platforms and their designated intermediaries, and mostly deal with how these platforms handle AI-generated content.

The Government has now provided a specific definition of deepfakes: "audio, visual or audio-visual information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information appears to be real, authentic or true and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as indistinguishable from a natural person or real-world event."

The notified amendments also change several rules that were proposed by the Government in October 2025. The most notable is the takedown time for deepfakes and inappropriate content, previously set at 36 hours.
With the notification, it has now been reduced to three hours after such content is first reported or detected. On the flip side, MeitY has removed the space requirement for visible AI labels: in October 2025, the Government had said that a label applied to AI-generated content should cover at least 10 percent of the space, but the new rule only requires that the label be "prominently" visible.

MeitY has also told social media intermediaries to inform users at least once every three months that the platforms will remove posts, ban accounts, and can also take legal action against users violating the rules or participating in illegal practices. In the same vein, the new rules introduce consequences for violations: MeitY mandates that contraventions will lead to suspension or termination of user accounts, and platforms will help in the "identification of such user and disclosure of the identity of the violating user to the complainant."

Social media intermediaries will also need to ask users to declare whenever a post contains synthetically generated information (SGI). They have also been mandated to deploy appropriate tools to verify the accuracy of the declaration and to ensure that an AI label is prominently added to the post.
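The declare-verify-label loop MeitY describes can be sketched as a simple gate. This is a hypothetical model (the rules prescribe obligations, not a data format), so the field names and the policy choice of trusting the detector over the user's declaration are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    content_id: str
    user_declared_sgi: bool   # user's declaration at upload (assumed field name)
    detector_flags_sgi: bool  # output of the platform's verification tool

def apply_sgi_policy(post: Post) -> dict:
    """Toy gate for the declare/verify/label obligations.
    Treats content as SGI if either the user declared it or the
    detector flagged it -- an illustrative policy, not statutory text."""
    is_sgi = post.user_declared_sgi or post.detector_flags_sgi
    label: Optional[str] = "synthetically generated information" if is_sgi else None
    return {
        "label": label,
        # A declaration that contradicts the detector is itself worth review,
        # since false declarations can trigger the rules' user consequences.
        "review_mismatch": post.detector_flags_sgi and not post.user_declared_sgi,
    }

honest = apply_sgi_policy(Post("p1", user_declared_sgi=True, detector_flags_sgi=True))
evasive = apply_sgi_policy(Post("p2", user_declared_sgi=False, detector_flags_sgi=True))
print(honest["review_mismatch"])   # False: declaration matches detection
print(evasive["review_mismatch"])  # True: undeclared SGI flagged for review
```

The sketch makes one practical tension visible: the label obligation is mechanical once `is_sgi` is known, but deciding `detector_flags_sgi` reliably is exactly the detection problem critics say is unsolved.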
[7]
India's tougher AI social media rules spark censorship fears
India has tightened rules governing the use of artificial intelligence on social media to combat a flood of disinformation, prompting warnings of censorship and an erosion of digital freedoms. The new regulations are set to take effect on February 20 -- the final day of an international AI summit in New Delhi featuring leading global tech figures -- and will sharply reduce the time platforms have to remove content deemed problematic.

With more than a billion internet users, India is grappling with AI-generated disinformation swamping social media. Companies such as Instagram, Facebook and X will have three hours, down from 36, to comply with government takedown orders, in a bid to stop damaging posts from spreading rapidly. Stricter regulation in the world's most populous country ups the pressure on social media giants facing growing public anxiety and regulatory scrutiny globally over the misuse of AI, including the spread of misinformation and sexualised imagery of children.

But rights groups say tougher oversight of AI, if applied too broadly, risks eroding freedom of speech. India under Prime Minister Narendra Modi has already faced accusations from rights groups of curbs on freedom of expression targeting activists and opponents, which his government denies. The country has also slipped in global press freedom rankings during his tenure. The Internet Freedom Foundation (IFF), a digital-rights group, said the compressed timeframe of the takedown notices would force platforms to become "rapid-fire censors".
'Automated censorship'

Last year, India's government launched an online portal called Sahyog -- meaning "cooperate" in Hindi -- to automate the process of sending takedown notices to platforms including X and Facebook. The latest rules have been expanded to apply to content "created, generated, modified or altered through any computer resource", except material changed during routine or good-faith editing. Platforms must now clearly and permanently label synthetic or AI-manipulated media with markings that cannot be removed or suppressed.

Under the new rules, problematic content could disappear almost immediately after a government notification. The timelines are "so tight that meaningful human review becomes structurally impossible at scale", said IFF chief Apar Gupta. The system, he added, shifts control "decisively away from users", while "grievance processes and appeals operate on slower clocks". Most internet users were not informed of authorities' orders to delete their content. "It is automated censorship," digital rights activist Nikhil Pahwa told AFP.

The rules also require platforms to deploy automated tools to prevent the spread of illegal content, including forged documents and sexually abusive material. "Unique identifiers are un-enforceable," Pahwa added. "It's impossible to do for infinite synthetic content being generated." Gupta likewise questioned the effectiveness of labels. "Metadata is routinely stripped when content is edited, compressed, screen-recorded, or cross-posted," he said. "Detection is error-prone."

'Online hate'

The US-based Center for the Study of Organized Hate (CSOH), in a report with the IFF, warned the rules "may encourage proactive monitoring of content which may lead to collateral censorship", with platforms likely to err on the side of caution. The regulations define synthetic content as information that "appears to be real" or is "likely to be perceived as indistinguishable from a natural person or real-world event."
Gupta said the changes shift responsibility "upstream" from users to the platforms themselves. "Users must declare if content is synthetic, and platforms must verify and label before publication," he said. But he warned that the parameters for takedown are broad and open to interpretation. "Satire, parody, and political commentary using realistic synthetic media can get swept in, especially under risk-averse enforcement," Gupta said.

At the same time, widespread access to AI tools has "enabled a new wave of online hate" facilitated by "photorealistic images, videos, and caricatures that reinforce and reproduce harmful stereotypes", the CSOH report added. In the most recent headline-grabbing case, Elon Musk's AI chatbot Grok sparked outrage in January when it was used to make millions of sexualised images of women and children by allowing users to alter online images of real people. "The government had to act because platforms are not behaving responsibly," Pahwa said. "But the rules are without thought."
[8]
Decoding 3-hour deadline for companies to axe flagged AI-made posts
The latest IT rules in India bring artificial intelligence under legal oversight for the first time. Intermediaries must act faster to remove flagged unlawful content, including deepfakes targeting women and children. The amendments also introduce a labelling mandate for AI-generated audio, visual, or audio-visual content.

The latest amendments to the Information Technology rules have brought artificial intelligence within the legal ambit for the first time, while also mandating drastically shorter timelines for technological intermediaries to take down flagged unlawful content. Focussed on bringing in a labelling mandate for AI-generated content in India, the rules have gone through a series of changes since the government floated draft amendments in October last year. Meanwhile, the Ministry of Electronics and Information Technology argued for much quicker compliance in a series of cases, necessitated by unlawful content and deepfakes targeting women and children going viral within hours of being posted. Subhayan Chakraborty decodes the shifting legal landscape.

Key terms:

Intermediaries: Entities that receive, store, or transmit electronic records on behalf of another person, or provide services concerning such records.

Synthetically Generated Information (AI content): Audio, visual or audio-visual information which is artificially or algorithmically created, generated, modified or altered using a computer resource, making it appear real, authentic or true. Also, information depicting any individual or event in a manner which is likely to be perceived as indistinguishable from a natural person or real-world event.

Good Faith Edits: Edits which use AI only to format or enhance content quality, such as technical correction, colour adjustment, or noise reduction, without materially altering the underlying information.
Unlawful content: Any information which is prohibited under any law, including relating to national sovereignty, integrity, state security, friendly relations with foreign countries, public order, decency or morality, contempt of court, defamation and incitement to an offence.

Changes from the draft rules released in October 2025:

Good faith and routine edits exempted from mandatory labelling. Reason: Officials say monitoring all instances of AI-generated or AI-modified content is unnecessary and would divert intermediaries' resources from combating deepfakes.

Condition of a minimum 10% of the surface area of images and audio being devoted to labelling dropped in favour of 'prominent labelling'. Reason: According to officials, industry argued that the 10% rule would take up too much space, making content difficult to view, especially on small screens.

Child sexual exploitative and abuse material, non-consensual intimate imagery, and obscene, pornographic or paedophilic content clearly spelt out. Reason: Rising instances of deepfakes targeting vulnerable groups seen across platforms. In January, the Centre sent notices to social media platform X over its AI chatbot Grok churning out controversial images.
Major changes in compliance timelines:
- Mandatory takedown of flagged content, whether AI-generated or not, used to commit an unlawful act prohibited under any law in force: 3 hours (earlier 36 hours)
- Resolution of all user complaints received by the grievance officer: 7 days (earlier 15 days)
- Resolution of grievances specifically related to content which is pornographic, invades another person's privacy, harms a child, impersonates another person, contains a virus, is a misleading communication, or advertises banned online games: 36 hours (earlier 72 hours)
- Removal of content showing nudity, sexual acts, or non-consensual intimate imagery of an individual, after receiving a complaint from that individual: 2 hours (earlier 24 hours)
- Informing users that their access rights may be terminated for non-compliance with the intermediary's rules and regulations, privacy policy or user agreement: once every 3 months (earlier at least once every year)
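The compressed timelines described above boil down to a category-to-deadline lookup of the kind a platform's complaint-triage system might implement. The sketch below is purely illustrative: the category names and the `due_by` helper are invented for this example, not drawn from the rules' text.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical mapping of complaint categories to the amended deadlines.
# Category names are illustrative, not statutory terms.
TAKEDOWN_DEADLINES = {
    "official_order": timedelta(hours=3),        # court/government orders (was 36h)
    "ncii_complaint": timedelta(hours=2),        # intimate imagery, on the victim's complaint (was 24h)
    "sensitive_grievance": timedelta(hours=36),  # impersonation, child harm, etc. (was 72h)
    "general_grievance": timedelta(days=7),      # all other user complaints (was 15 days)
}

def due_by(category: str, received_at: datetime) -> datetime:
    """Return the latest time by which a complaint must be resolved."""
    return received_at + TAKEDOWN_DEADLINES[category]

received = datetime(2026, 2, 20, 9, 0, tzinfo=timezone.utc)
print(due_by("official_order", received))  # 2026-02-20 12:00:00+00:00
```

A real triage queue would also need escalation paths and audit logging, but the core obligation is this simple: the clock starts at receipt, and the window is now measured in hours, not days.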
[9]
Tighter takedown rules apply to all social media content
The amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules that drastically shorten the time frames for intermediaries to remove content apply to all content, not only content generated with artificial intelligence (AI), according to legal experts.

The amended rules, notified by the electronics and information technology ministry on Tuesday, brought synthetically generated information (SGI) within the regulatory framework and significantly tightened content removal time frames for intermediaries. Intermediaries must now act on court orders or directives issued by designated law enforcement authorities within three hours in specified cases, including those related to the security and sovereignty of the state, public order, defamation, foreign relations and violations of applicable law. Requests from users seeking removal of impersonation or intimate imagery must be addressed within two hours. Previously, the time frames were 36 hours and 24 hours, respectively.

Legal experts say the amendments apply broadly to all intermediaries and all types of information made available on their platforms, not just AI-generated content. Clauses 3(1)(d) and 3(2), governing removal and disabling of access, apply to all information, including SGI. Failure to comply could jeopardise safe harbour protections under the IT Act.

While officials said the industry had been consulted since October last year on the need to tighten content removal time frames, they did not clarify whether industry participants had confirmed their ability to meet the new deadlines. The government is, however, not expected to walk back the takedown mandate. "What we (government agencies) ask to be taken down is barely 0.1-0.2% of the total takedowns intermediaries perform," said a senior official, who did not wish to be identified.
Detection of AI content a challenge

The final rules are set to take effect from February 20, giving intermediaries perhaps the shortest compliance window they have ever been allowed, said Arya Tripathy, partner, Cyril Amarchand Mangaldas. Ensuring compliance in under 10 days would be an impossible feat for many intermediaries, she said, adding that in many ways the government's approach was devoid of business realities. It could also drive hasty adoption of blanket compliance frameworks that may have a chilling effect on users' freedom of speech and expression, Tripathy added.

Naqeeb Ahmed Kazia, partner at CMS IndusLaw, said the inclusion of synthetically generated information under the IT Intermediary Rules marked a significant development. "While the rules clarify what SGI is, technical detection will be a challenge as there is currently no universal detection standard for AI-generated content. For AI providers to develop and implement such standards within a strict timeline appears difficult," he said. Kazia added that the reduced takedown time frames may be manageable for larger intermediaries with round-the-clock compliance teams, but could place severe operational strain on smaller platforms. "This effectively requires automated or immediate takedowns to meet due diligence requirements," he said, adding that the stringent compliance time frames were not part of the consultation draft, raising concerns about regulatory predictability and stakeholder engagement.

Kalindi Bhatia, partner (Technology, Media and Communications) at BTG Advaya, said the amendments were not limited to SGI alone. "The amendment applies as a whole to content takedown requests and is not targeted solely at synthetic generated information," she said. According to Bhatia, while regulating deepfakes and impersonation has been at the forefront of policy discussions, the short compliance windows present practical and technological hurdles.
"At first glance, this poses a significant challenge for intermediaries given the tight timelines," she said, adding that law enforcement orders will need to be specific and clear to enable swift action. She also said that allowing multiple police officers to issue takedown orders could decentralise powers and drive an uptick in requests, requiring platforms to put in place dedicated standard operating procedures and revised turnaround processes.

Suril Desai, leader of the disruptive technologies practice at Nishith Desai Associates, said intermediaries, particularly significant social media intermediaries with heightened obligations, would need to undertake technical adjustments, update policies, train moderation teams and deploy detection tools appropriately. "The shorter timelines leave little room for verification before takedown, risking potential for over-moderation, particularly in high-volume scenarios," he stressed.

Safe harbour impact

Akash Karmakar, partner at Panag & Babu, said adherence to the three-hour time frame might necessitate AI tools capable of identifying and automatically taking down content. "If there is a human in the decision-making loop, the timeline presupposes compliance without any opportunity to challenge even arbitrary decisions," he said. "Due process, including approaching a court to challenge a takedown order, may not be feasible within three hours." He added that compliance might require significant technological upgrades, including end-to-end watermarking and provenance mechanisms for AI-generated outputs that are robust against user editing. "This assumes 24x7 legal, content monitoring and operational readiness, effectively raising the cost of doing business in India," he said.

Shreya Suri, partner at CMS IndusLaw, said the amendments might shift intermediaries from neutral hosts to active content regulators.
"This could potentially undermine safe harbour protections under the IT Act," she said, pointing to heavy compliance burdens on smaller platforms that lack tiered regulatory support. While the government has positioned the move as a necessary step to curb deepfakes, impersonation and other unlawful online content, the industry now faces the challenge of reconciling accelerated compliance with technological constraints and constitutional free speech concerns. The amendment marks a significant shift in India's intermediary liability regime, according to experts, one that could reshape content moderation practices across the digital ecosystem.
[10]
MeitY Notifies New Amendments to IT Rules on Synthetic Media
The Ministry of Electronics and Information Technology (MeitY) on February 10 notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, explicitly bringing synthetically generated information, including deepfakes, within the scope of the IT Rules' due diligence framework. The amendments regulating synthetic media will come into force on February 20, 2026, giving platforms a 10-day compliance window. The changes expand due diligence obligations for intermediaries, introduce new definitions, shorten takedown and grievance timelines, and lay down a framework for how platforms must handle synthetic audio-visual content online.

The amendments introduce a statutory definition of "synthetically generated information" (SGI) and insert Rule 2(1A). This provision clarifies that references to "information" used to commit an unlawful act also include SGI. The clarification applies across key due-diligence provisions, including Rule 3(1)(b), Rule 3(1)(d), and Rules 4(2) and 4(4). As a result, the rules no longer treat synthetic media as a separate category requiring special handling. Instead, they explicitly bring SGI within the same unlawful-content compliance framework that already governs other forms of illegal online information.

The notification first defines "audio, visual or audio-visual information" broadly. It covers any audio, image, photograph, graphic, video, moving visual recording, sound recording, or similar content, with or without accompanying audio, whether created, generated, modified, or altered through any computer resource. It then defines "synthetically generated information" as such audio-visual content that a computer resource artificially or algorithmically creates or alters in a manner that appears real, authentic, or true, and depicts or portrays an individual or event in a way that is, or is likely to be perceived as, indistinguishable from a natural person or a real-world event.
Notably, the definition focuses on how real the content appears to an ordinary viewer, rather than merely on whether artificial intelligence tools were involved in its creation.

The SGI definition expressly excludes certain categories of content. The rules do not treat audio-visual content as SGI where it arises from good-faith or routine edits, such as formatting, technical correction, colour adjustment or noise reduction that does not materially alter the underlying information. These carve-outs ensure that everyday digital editing and assistive uses of AI do not automatically trigger compliance obligations meant for deceptive synthetic media.

Rule 3(3)(a) divides synthetically generated information into two categories. Under Rule 3(3)(a)(i), intermediaries offering computer resources that enable or facilitate SGI must not allow users to create or share SGI that violates any law in force. For all other SGI, Rule 3(3)(a)(ii) allows such intermediaries to host the content, provided they clearly label it as synthetically generated and attach provenance information, where technically feasible. This distinction determines whether an intermediary must block synthetic content outright or allow it to remain online with disclosures.

Rule 3(3)(a)(i) requires intermediaries that offer computer resources enabling or facilitating SGI to deploy reasonable and appropriate technical measures, including automated tools, to not allow unlawful SGI. This includes SGI that deceptively impersonates another person, constitutes non-consensual intimate imagery or child sexual abuse material, or is linked to other serious offences. By using the phrase "not allow," the rule creates an expectation of proactive technical controls rather than purely reactive takedowns.

For SGI that does not fall within the prohibited category, Rule 3(3)(a)(ii) requires intermediaries to label the content clearly and prominently as synthetically generated and to attach provenance information, where technically feasible. Rule 3(3)(b) further requires intermediaries not to enable the modification, suppression, or removal of these labels or provenance markers. These provisions aim to enable traceability of synthetic media without mandating a single technical standard.

Rule 3(1)(cb) sets out the steps an intermediary must take once it becomes aware of a violation involving SGI that falls within the labelling-and-disclosure category.
The rules treat an intermediary as becoming aware either on its own accord, upon receipt of actual knowledge through lawful notices, or on the basis of any grievance, complaint, or information received under the rules. Once awareness arises, the intermediary must take expeditious and appropriate action. This may include disabling access to the content, suspending or terminating the relevant user account without vitiating evidence, disclosing the user's identity to a victim-complainant in accordance with applicable law, and reporting the matter to authorities where mandatory reporting obligations apply. Even where a platform initially permitted SGI to be published with labels, failure to act after awareness is established can amount to a due diligence failure.

Rule 4(1A) mandates that a significant social media intermediary, prior to display, upload, or publication, require users to declare whether information is synthetically generated, deploy reasonable technical measures to verify that declaration, and ensure that verified SGI carries a clear label. The proviso retains the "knowingly permitted, promoted, or failed to act upon" standard for due diligence failure. Collectively, this creates a pre-publication obligation that goes beyond notice-and-takedown for large platforms.

Rule 3(1)(c) requires intermediaries to inform users at least once every three months that platforms may remove content, suspend or terminate accounts, impose legal liability for unlawful content, and report offences that require mandatory reporting. Rule 3(1)(ca) adds an SGI-specific notice for intermediaries offering computer resources under Rule 3(3). This notice must warn users that violations may lead to content removal, account suspension or termination without vitiating evidence, disclosure of identity to a victim-complainant in accordance with law, and reporting to authorities where required. The rules therefore convert user-facing deterrence into a formal compliance obligation.

Rule 3(1)(ca)(ii) lists the consequences platforms may impose for violations.
These include immediate disabling or removal of content, suspension or termination of user accounts without vitiating evidence, disclosure of the violator's identity to a victim or their representative in accordance with law, and reporting to authorities where mandatory reporting applies. The emphasis on preserving evidence aligns platform enforcement with criminal investigation requirements.

The notification also replaces the existing timelines: takedowns under court orders or authorised government directions drop from 36 hours to three hours, removal of impersonation or intimate-imagery content on a user's complaint from 24 hours to two hours, resolution of specified sensitive grievances from 72 hours to 36 hours, and resolution of general grievances from 15 days to seven days. The rules also require that police-authority intimations come from officers not below the rank of Deputy Inspector General of Police, specifically authorised by the appropriate government through a written order. These changes significantly narrow discretion and accelerate response obligations for platforms.

Finally, Rule 2(1B) clarifies that intermediaries do not violate Section 79(2)(a) or (b) of the Information Technology Act when they remove or disable access to content, including SGI, in compliance with the rules. This protection extends to actions taken through reasonable and appropriate technical measures, including automated tools. The provision seeks to reassure platforms that proactive moderation carried out in accordance with the rules will not, by itself, jeopardise intermediary safe harbour.
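Taken together, the Rule 3(3)(a) scheme amounts to a two-track decision: block prohibited SGI outright, and host everything else only with a label and provenance attached. A minimal sketch of that routing logic follows; the category names, field names, and `route_sgi` function are invented for illustration and are not drawn from the statutory text.

```python
# Hypothetical two-track routing for synthetically generated information (SGI):
# prohibited categories are blocked outright; all other SGI may be hosted
# only with a "synthetically generated" label and provenance metadata attached.
PROHIBITED = {"csam", "ncii", "deceptive_impersonation", "serious_crime"}

def route_sgi(item: dict) -> dict:
    """Decide whether an SGI item is blocked or hosted-with-disclosure."""
    if item["category"] in PROHIBITED:
        return {"action": "block", "reason": item["category"]}
    # Permitted SGI stays up, but only labelled and with provenance attached.
    return {
        "action": "host",
        "label": "synthetically generated",
        "provenance": {
            "generator": item.get("generator", "unknown"),
            "declared_by_user": item.get("declared", False),
        },
    }

print(route_sgi({"category": "ncii"})["action"])                      # block
print(route_sgi({"category": "parody", "declared": True})["action"])  # host
```

A production system would of course need far richer classification than a category string, which is exactly where the experts quoted above see the difficulty: the routing is trivial, the detection is not.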
[11]
Explained: As govt tightens AI content rules, what must social media platforms & others do
India's government has mandated social media platforms to clearly label all AI-generated or modified content. Amendments to the IT rules require intermediaries to use visible disclosures or embedded metadata for identification, with a strict three-hour window for takedown orders. These measures aim to regulate synthetically generated information, including deepfakes, following recent controversies.

The central government issued guidelines today mandating social media platforms, among others, to clearly label all artificial intelligence-generated or modified content.

Which rules have changed?
The Ministry of Electronics and Information Technology (MeitY) made amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, on Tuesday. India's intermediary framework was first set out in 2011 and later replaced by the 2021 rules, which expanded due diligence obligations for major social media intermediaries and introduced regulation for digital news and curated audio-visual content. The latest directions build on these rules, bringing synthetically generated information (SGI), including deepfakes, into a stricter regulatory framework.

What has changed?
Per the new rules, intermediaries must ensure that AI-generated or modified content is labelled or identifiable, either through visible disclosures or embedded metadata. The rules permit technical measures such as embedded metadata as identifiers, enabling flexible compliance while ensuring traceability, and make such identifiers irreversible once they have been applied. Platforms must also warn users about the consequences of AI misuse at least once every three months. Further, the government has mandated the deployment of automated tools to detect and prevent the spread of illegal, sexually exploitative, or deceptive AI-generated content. Previously, intermediaries had a 36-hour window to comply with takedown orders.
However, under stricter enforcement measures, platforms must now remove or disable access to AI-generated content within three hours of receiving an order from a court or the government.

What are intermediaries?
Entities that store or transmit data on behalf of end users are intermediaries. These include telecom service providers (such as Jio), online marketplaces (Amazon), search engines (Google), and social media platforms (Meta).

How will the rules be enforced?
The initial phase of enforcement focuses on large social media intermediaries with five million or more registered users in India. This means the rules will largely impact foreign players such as Meta and X (formerly Twitter).

Why now?
These measures come amid the recent Grok controversy, in which the AI chatbot generated non-consensual explicit deepfakes. The changes also reportedly follow the Centre's recent consultations with industry bodies such as IAMAI and Nasscom. The rules will ensure that platforms inform users about SGI and even identify those involved in producing such content.
[12]
Social media platforms must detect, label AI-generated content under new rules
India has issued a new order for social media platforms. All AI-generated content must now be clearly labeled. These labels and their embedded identifiers cannot be removed. Companies must use tools to detect and stop illegal or deceptive AI content. Users will also receive warnings about AI misuse. These warnings will be sent every three months. India has directed social media platforms to clearly label all AI-generated content and ensure that such synthetic material carries embedded identifiers, according to an official order. Platforms have also been barred from allowing the removal or suppression of AI labels or associated metadata once they have been applied, the order said. To curb misuse, companies will be required to deploy automated tools to detect and prevent the circulation of illegal, sexually exploitative or deceptive AI-generated content. Platforms have also been asked to regularly warn users about the consequences of violating rules related to AI misuse. Such warnings must be issued at least once every three months, the government said. In a stricter enforcement measure, the government has set a three-hour deadline for social media companies to take down AI-generated or deepfake content once it is flagged by the government or ordered by a court.
India has introduced sweeping amendments to its IT Rules requiring social media platforms to remove deepfakes and AI-generated content within three hours, down from 36. The new regulations, effective February 20, mandate labeling of all synthetic content and the deployment of automated detection tools. With over 1 billion internet users, India's move could reshape global content moderation practices, though digital rights groups warn the compressed timelines may trigger automated censorship and eliminate meaningful human review.
India has ordered social media platforms to accelerate their policing of deepfakes and other AI-generated impersonations, implementing a three-hour takedown window that replaces the previous 36-hour deadline [1][3]. The changes, published as amendments to India's 2021 IT Rules, take effect on February 20 and bring deepfakes under a formal regulatory framework while mandating the labeling and traceability of synthetic audio and visual content [1]. Certain urgent user complaints must be addressed within just two hours, creating what digital rights activist Nikhil Pahwa calls "automated censorship" [5].
Source: MediaNama
With over 1 billion internet users and a predominantly young population, India represents a critical market for platforms like Meta, Google, and X (formerly Twitter) [1]. The country has approximately 500 million YouTube users, 481 million Instagram users, 403 million Facebook users, and 213 million Snapchat users [2]. India's importance as a digital market amplifies the impact of these rules, making it likely that compliance measures adopted there will influence global product and moderation practices.
Source: ET
Under the amended India IT Rules, social media platforms that allow users to upload or share audio-visual content must require users to disclose whether material is synthetically generated [1]. Platforms must deploy automated tools to verify those claims and ensure that deepfakes are clearly labeled, with traceable provenance data embedded in the content [1][4]. Any AI-generated content that isn't blocked must include "permanent metadata or other appropriate technical provenance mechanisms," and platforms are ordered not to allow these markers to be modified, hidden, or removed [2].

The requirement to label synthetic content aims to help users immediately identify AI-generated materials, such as by adding verbal disclosures to AI audio or overlaying text on images identifying them as synthetic [2][4]. Certain categories of synthetic content, including deceptive impersonations, non-consensual intimate imagery, and material linked to serious crimes, are barred outright [1][4].
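One way to picture the "embedded identifiers" requirement: image formats such as PNG can carry a label in a metadata chunk that travels alongside the pixels. The sketch below builds a minimal 1x1 PNG from scratch, embeds a hypothetical AI-disclosure tEXt chunk, and then shows how easily a naive re-encoder that keeps only critical chunks discards it. Everything here (the label text, the helper names) is an illustration of the general mechanism, not a format mandated by the rules.

```python
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize one PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def parse_chunks(png: bytes):
    """Yield (type, data) pairs, skipping the 8-byte PNG signature."""
    pos = 8
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        yield png[pos + 4:pos + 8], png[pos + 8:pos + 8 + length]
        pos += 12 + length

SIG = b"\x89PNG\r\n\x1a\n"
# A minimal valid 1x1 grayscale PNG built from scratch.
png = (SIG
       + chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
       + chunk(b"IDAT", zlib.compress(b"\x00\x00"))  # filter byte + one pixel
       + chunk(b"IEND", b""))

def add_label(png: bytes, text: str) -> bytes:
    """Insert a tEXt chunk after IHDR carrying an AI-disclosure label."""
    out = SIG
    for ctype, data in parse_chunks(png):
        out += chunk(ctype, data)
        if ctype == b"IHDR":
            out += chunk(b"tEXt", b"Label\x00" + text.encode("latin-1"))
    return out

def strip_ancillary(png: bytes) -> bytes:
    """Keep only critical chunks -- what a naive re-encoder effectively does."""
    out = SIG
    for ctype, data in parse_chunks(png):
        if ctype in (b"IHDR", b"IDAT", b"IEND"):
            out += chunk(ctype, data)
    return out

labelled = add_label(png, "AI-generated")
print(b"AI-generated" in labelled)                   # True
print(b"AI-generated" in strip_ancillary(labelled))  # False
```

This is the fragility critics point to: a label stored as metadata survives only as long as every intermediary in the pipeline chooses to preserve it.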
Non-compliance with the new regulations, particularly in cases flagged by authorities or users, can expose companies to greater liability by jeopardizing their safe-harbour protections under Indian law [1][4]. Rohit Kumar, founding partner at New Delhi-based policy consulting firm The Quantum Hub, noted that "the significantly compressed grievance timelines—such as the two- to three-hour takedown windows—will materially raise compliance burdens and merit close scrutiny, particularly given that non-compliance is linked to the loss of safe harbour protections" [1].

The rules lean heavily on automated systems to meet these obligations, expecting platforms to deploy technical tools to verify user disclosures, identify and label deepfakes, and prevent the creation or sharing of prohibited synthetic content [1]. According to transparency reports, more than 28,000 URLs or web links were blocked in 2024 following government requests [3].

The Internet Freedom Foundation warned that the changes risk accelerating automated censorship by drastically compressing takedown timelines, leaving little scope for human review and pushing platforms toward over-removal [1][2]. "These impossibly short timelines eliminate any meaningful human review, forcing platforms toward automated over-removal," the group stated, warning that the changes could undermine free speech protections and due process [1][3].

Anushka Jain, a research associate at the Digital Futures Lab, acknowledged that the labeling requirement could improve transparency but cautioned that the three-hour deadline could push companies toward full automation [3]. "Companies are already struggling with the 36-hour deadline because the process involves human oversight. If it gets completely automated, there is a high risk that it will lead to censoring of content," she told the BBC [3]. Delhi-based technology analyst Prasanto K Roy described the new regime as "perhaps the most extreme content takedown regime in any democracy" [3].
Source: The Verge
The best methods currently available for detecting and labeling deepfakes online are about to face a critical stress test [2]. C2PA, also known as Content Credentials, is one of the leading systems for both detection and labeling; it works by attaching detailed metadata to images, videos, and audio at the point of creation or editing [2]. Yet Meta, Google, Microsoft, and many other tech giants already use C2PA, and it clearly isn't working as intended [2].

Interoperability is one of C2PA's biggest issues, and while India's new rules may encourage adoption, C2PA metadata is far from permanent: it is so easy to remove that some online platforms unintentionally strip it during file uploads [2]. Social media platforms can't label anything that doesn't include provenance data to begin with, such as material produced by open-source AI models or so-called "nudify apps" that refuse to embrace the voluntary C2PA standard [2]. Platforms like X that haven't implemented any AI labeling systems at all now have just nine days to comply [2].
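The core idea behind C2PA-style provenance is that a manifest certifies the exact bytes it was attached to: it carries a hash of the content and a signature over the manifest itself. The toy sketch below shows that binding and why re-encoding breaks it. It is a deliberate simplification: the key, field names, and tool name are invented, and real Content Credentials use X.509 certificate chains and CBOR manifests, not a shared HMAC key and JSON.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # illustrative shared key, not how C2PA signs

def make_manifest(content: bytes, generator: str) -> dict:
    """Build a toy provenance manifest bound to the content's hash."""
    claim = {"claim_generator": generator,
             "content_sha256": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify(content: bytes, manifest: dict) -> bool:
    """Check both the signature and the content binding."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        manifest["signature"],
        hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest())
    ok_hash = claim["content_sha256"] == hashlib.sha256(content).hexdigest()
    return ok_sig and ok_hash

video = b"\x00fake-video-bytes"
manifest = make_manifest(video, "example-ai-tool/1.0")
print(verify(video, manifest))                    # True
print(verify(video + b"recompressed", manifest))  # False: re-encoding breaks the binding
```

The second check failing is the point: once a platform transcodes or crops a file, the original manifest no longer matches the bytes, so the label must either be re-issued along the editing chain or it silently disappears, which is exactly the interoperability gap described above.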
Two industry sources told TechCrunch that the amendments followed a limited consultation process, with only a narrow set of suggestions reflected in the final rules [1]. While the Indian government appears to have taken on board proposals to narrow the scope of information covered, focusing on AI-generated content rather than all online material, other recommendations were not adopted [1]. The scale of changes between the draft and final rules warranted another round of consultation to give companies clearer guidance on compliance expectations, the sources indicated [1].

Meta, Google, Snap, X, and the Indian IT ministry did not respond to requests for comment [1][2]. The new regulations take effect on February 20, the final day of an international AI summit in New Delhi featuring leading global tech figures [5]. With widespread access to AI tools enabling a new wave of online hate through photorealistic images and videos, the US-based Center for the Study of Organized Hate warned that the laws "may encourage proactive monitoring of content which may lead to collateral censorship" [5].
Summarized by Navi