5 Sources
[1]
Tech industry groups urge MeitY to refine AI content rules to boost innovation, for global alignment
The draft rules proposed by MeitY require platforms and users to label AI-generated visuals with a visible marker covering at least 10% of the display area, or, for audio, add a disclaimer for the first 10% of the content duration. The technology industry has called on the Ministry of Electronics and Information Technology (MeitY) to adopt a flexible and globally harmonised approach to the proposed amendments to the Information Technology Intermediary Rules and Digital Media Ethics Code, which would require labelling of artificial intelligence-generated content in the public domain. The submissions to MeitY reflect broad industry concern that, while addressing the threats of AI-generated content is crucial, overly rigid regulation could stifle technological progress and complicate compliance for global-facing businesses. Nasscom, which represents India's tech sector, urged the ministry to clarify the definitions of "synthetically generated information" and "deepfake synthetic content", arguing that the rules should focus on harmful and malicious content rather than sweeping in all algorithmically altered media. The association raised concerns about the technical feasibility of some labelling proposals and called for distinct obligations based on whether technology is consumed by businesses or individuals. It warned that uniform rules for platforms with vastly different business models and capabilities could impose unworkable burdens -- potentially hampering startups and small firms disproportionately. BSA, representing major global software firms, echoed several of these points in its own submission. It said tackling challenges posed by synthetically generated information, including deepfakes and malign disinformation campaigns, is urgent. 
However, the group cautioned MeitY against imposing inflexible standards that it said might undermine innovation. It recommended that India avoid requiring visible watermarks or labels on AI-generated content, warning that such marks are easily removed and could make Indian digital outputs less attractive globally. Instead, it suggested machine-readable markers and advocated alignment with international protocols such as the Coalition for Content Provenance and Authenticity (C2PA), making compliance simpler for multinational platforms without sacrificing transparency or user safety. Both industry groups pushed for policy frameworks that encourage responsible innovation and allow rapid adaptation to technical advances. They highlighted the risk of India falling out of step with global digital standards if it moves too fast without international alignment. Their submissions said India can be both a standard-bearer for ethical technology and a global digital powerhouse, provided its laws remain pragmatic, clear, and future-ready. MeitY's proposed changes come as governments around the world scramble to address the ethical, social, and security issues created by generative AI. The rule changes will affect all significant social media intermediaries (SSMIs), i.e. those with 5 million or more registered users in India. Google-owned YouTube; Meta's Facebook, Instagram, Threads and WhatsApp; X (formerly Twitter); Snap; LinkedIn and ShareChat will have to obtain user declarations on whether uploaded content is synthetic, deploy automated tools to verify these declarations, and ensure synthetic content is clearly marked with appropriate labels, or will be considered non-compliant. Anybody enabling the creation or modification of synthetic content must prominently label such material. 
All firms would have to embed the disclaimer in their content, whether they are a social media intermediary or are merely providing software. This opens up a long list of popular AI-based software, apps and services, including OpenAI's ChatGPT, Google's Gemini, Microsoft's Copilot and Meta's AI assistant, to scrutiny. One industry source said a major point of contention is how these amendments may affect the "safe harbour" provisions, long seen as a crucial shield for online intermediaries. Under current law, platforms enjoy conditional immunity from liability for third-party content, provided they meet due diligence requirements and act promptly upon receiving takedown notices. The new draft does not abolish this framework but explicitly clarifies that due diligence obligations now include verification and labelling of AI-generated material. Failure to comply -- with respect to either flagging unlabelled content or authenticating user declarations about AI use -- could strip platforms of their conditional immunity, triggering what an industry executive described as "secondary liability" for harmful content. While the safe harbour itself remains intact, "these changes further clarify the due diligence obligations for platforms, making the burden heavier", the executive added. For providers of AI or content hosting platforms, the mandate presents significant technical challenges. "From a technology provider perspective, implementing a reliable 10% watermark is an incredibly heavy lift. Image generation models are by nature probabilistic and non-deterministic; prompt instructions like 'make the watermark cover 10%' often fail," the executive explained. Such requirements add latency and cost to AI outputs, making the burden of compliance disproportionately high for platforms compared with the ease with which users may circumvent the rules by cropping, editing, or screenshotting content. 
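The 10% thresholds themselves are simple arithmetic; as the executive notes, the hard part is enforcing them inside a probabilistic generation pipeline rather than as deterministic post-processing. A minimal sketch of the sizing maths (the function names and the full-width "band" label style are illustrative assumptions, not anything prescribed by the draft):

```python
def visual_label_box(width: int, height: int, coverage_percent: int = 10) -> tuple[int, int]:
    """Dimensions of a full-width label band whose area is at least
    coverage_percent of the frame (integer ceiling division, so the
    rounded band never falls below the threshold)."""
    band_height = -(-height * coverage_percent // 100)
    return width, band_height

def audio_disclaimer_seconds(duration_s: float, fraction: float = 0.10) -> float:
    """Length of the leading disclaimer segment for an audio clip."""
    return duration_s * fraction

# A 1920x1080 frame needs a full-width band 108 px tall (exactly 10% of the
# area); a 30-second clip needs a 3-second leading disclaimer.
```

Applying the label after generation, as above, sidesteps the unreliability of prompt instructions, but it does nothing about the other problem the executive raises: a user can still crop or screenshot the band away.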
Three strategies for meeting the draft's verification demands were outlined by the executive: detecting visible labels and metadata, using hidden watermarking and embedded metadata, and deploying classifiers to infer whether media is AI-generated. Each of these is "imperfect and prone to error," the executive said, and can potentially result in "overreaching or under-enforcement", wrongly flagging edited photographs or missing more sophisticated forgeries. Creative professionals and advertising agencies are also expressing concerns, pointing out the practical limitations of dedicating 10% of an audio ad to disclaimers or reliably watermarking images at scale. The technology sector notes that many global AI tools do not embed detectable signals, and cross-platform sharing often strips metadata, further complicating enforcement. The public consultation on the draft rules closes on November 13, with no clarity yet on timelines for implementation or compliance grace periods. Industry participants anticipate a protracted period of negotiation over technical feasibility. The rules have prompted questions over the procedural safeguards available to users whose legitimate content could be wrongly taken down. As per current practice, affected users can pursue in-app appeals, escalate to a grievance officer, and eventually approach statutory grievance appellate committees or the courts. The industry executive warned that the draft rules fundamentally expand "due diligence obligations" and could result in more frequent and uncertain content moderation risks, especially as "the reasonable person test" for authenticity is difficult to operationalise against the evolving landscape of digital manipulation and user behaviour.
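The three verification strategies the executive outlines can be pictured as a fall-through pipeline. The sketch below is illustrative only, not any platform's actual system; the field names, the threshold value, and the classifier stub are all assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class MediaItem:
    user_declared_synthetic: bool
    metadata: dict = field(default_factory=dict)
    classifier_score: float = 0.0  # stub for P(synthetic) from a detection model

def verify(item: MediaItem, threshold: float = 0.9) -> str:
    # Strategy 1: trust a user declaration or a detectable visible label.
    if item.user_declared_synthetic or item.metadata.get("visible_label"):
        return "label"
    # Strategy 2: hidden watermark or embedded provenance metadata (C2PA-style).
    if item.metadata.get("provenance"):
        return "label"
    # Strategy 3: probabilistic classifier -- the error-prone fallback.
    # A low threshold over-flags authentic media; a high one misses forgeries.
    return "label" if item.classifier_score >= threshold else "pass"
```

The final branch is where the "overreaching or under-enforcement" risk lives: tuning the threshold trades wrongly flagged edited photographs against missed sophisticated forgeries, and cross-platform sharing that strips metadata pushes ever more content into that branch.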
[2]
IAMAI, Nasscom Call Synthetic Information Rules 'Premature'
Labeling requirements under the draft synthetic information rules are premature and impose significant burdens without commensurate benefits, the digital services industry body Internet and Mobile Association of India (IAMAI) says. This comes as part of its submission to the Ministry of Electronics and Information Technology (MeitY) on the draft amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules, 2021). The amendment adds regulations for synthetically generated information (SGI), which the rules define as "information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information reasonably appears to be authentic or true." One of the requirements under the draft synthetic information rules is for SGI service providers to prominently label such content, or embed metadata or a unique identifier within it. "We respectfully submit that the inclusion of proposed Rule 3(3), which would mandate prominent, persistent labelling and the embedding of permanent unique identifiers on synthetic content by intermediaries that enable its creation or modification, is premature at this stage. The proposed rule risks mandating technologies that are not yet mature, reliable, interoperable, or privacy-preserving," IAMAI explained. Similarly, IT and business process outsourcing (BPO) trade body Nasscom has also pushed back against the rules, arguing that, based on the definition of synthetically generated information in the rules, a bulk of the data on the internet would need a label. The definition of intermediary under the IT Act applies to any person who receives, stores, or transmits content on behalf of another person. IAMAI says that this definition applies to platforms that host third-party content, not to the direct delivery of AI services. 
"Rule 3(3) of the Draft Amendments applies to any intermediary offering 'a computer resource which may enable, permit, or facilitate the creation, generation, modification, or alteration' of SGI - this inadvertently encompasses both intermediary platforms (including content disseminators) as well as AI service providers (including content generators)," IAMAI explains. By imposing intermediary obligations on AI service providers, the draft synthetic information rules expand liability beyond the existing regime without a clear legal basis for doing so. As such, it urges the government to remove this rule from the amendment. Both IAMAI and Nasscom expressed concern about the broad nature of the SGI definition. To address this, Nasscom suggested that the government adopt a narrower conception of 'deceptive synthetic content', defined by specific criteria set out in its submission. The industry body adds that the government should exempt text from the scope of this definition. IAMAI emphasised the ambiguities in the phrase 'reasonably appears to be authentic or true' within the SGI definition, arguing that reasonable authenticity depends on context. "For example, a colour-corrected photograph or denoised audio track will typically appear authentic to lay viewers, yet it is not misleading in any material sense. Conversely, satire, parody, or clearly stylised content can be 'inauthentic' by design without any risk of deception," it explains. As such, it urges the government to remove the definition from the draft synthetic information rules. One of the amendments suggests that significant social media intermediaries (SSMIs) must obtain user declarations about whether a piece of content is SGI and deploy reasonable measures to verify them. Further, they must ensure SGI is clearly labeled. IAMAI argues that this amendment "effectively mandates intermediaries to adjudicate the legality or lawfulness of user content ex ante". 
It says that this requirement to verify user declarations may require SSMIs to review content before publication, thereby conflicting with the actual knowledge standard and safe harbour. The industry body notes that SSMIs becoming potential arbiters of content legality goes against Section 79(3)(b) of the IT Act and the law laid down by the Supreme Court in the Shreya Singhal case. "The Supreme Court noted that it would be unreasonable and impracticable to expect intermediaries that receive millions of items of content daily -- and numerous complaints -- to judge which content or requests are legitimate, and which are not. Given the volume of content, it is impracticable (if not impossible) to expect industry members to locate and identify all instances of SGI," IAMAI explains. Intermediaries could end up becoming 'over-compliant' and err on the side of content removal to avoid liability, the industry body suggests. This is the exact dynamic that the Shreya Singhal judgment had cautioned against. IAMAI also expressed concern about the infeasibility of obtaining user declarations for content originating outside India. "This would lead to an exponential increase in the due diligence obligations that intermediaries are currently subjected to," it stated. Automated content detection is probabilistic in nature and prone to mistakes, especially given the large scale of uploads on an SSMI, Nasscom explains. It suggests that the government can explore requiring SSMIs to inform users that detection may not be fully accurate. "This could set realistic expectations, encourage user discretion, and help ensure that verification is viewed as a safeguard rather than a certification of authenticity," it adds. Largely, the industry body is pushing back against pre-publication content verification, stating that SSMI-level checks and user declarations before content publication could create queues during high activity periods. 
"This risk becomes particularly visible during real-world events such as elections, natural disasters or public-health emergencies, where timely posting of verified information may be critical," Nasscom highlights. It suggests that the government should consider rapid post-publication labeling to maintain the timeliness of content publication. Quoting the AI Governance guidelines that the IT Ministry released on November 5, IAMAI argued that the Information Technology Act, 2000 (IT Act), the existing Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules 2021) and the Bharatiya Nyaya Sanhita can address many of the emerging risks from AI. The industry body highlighted various provisions of the IT Act, such as Section 66D, which addresses impersonation, and Section 79, which allows the government to seek content takedowns. Similar provisions directing platforms to make reasonable efforts not to publish prohibited content categories, as well as removal timelines for such content, also exist within the IT Rules, 2021. "Therefore, introducing Rule 2(1A) to 'clarify' that references to 'information' include SGI in contexts of unlawful acts risks redundancy and uncertainty," IAMAI explains. It argues that adding a separate clause for SGI could "complicate interpretation of due diligence standards and increase risk of inconsistent enforcement across intermediaries". IAMAI urges the government to retain the technology-neutral language of the IT Rules to avoid overreach and address new technology for content manipulation as it emerges. "Persistent identifiers and metadata may expose sensitive information and create novel vectors for tracking or misuse, including for vulnerable users," IAMAI mentions. Besides this, it suggests that the government's demand for visible watermarks does not account for real-time use cases such as voice assistants or live videos. 
"Adding watermarks or processing identifiers causes delays, often between 50 and 200 milliseconds, which makes voice assistants or live video feel laggy and unnatural. This creates computational overhead incompatible with low-latency applications, leading to delays that disrupt natural conversation flow," the industry body explains. IAMAI points out that even the IT Ministry has highlighted the inherent limitations of content authentication methods. The guidelines recommend setting up a committee of experts with representatives from the government, industry, academia, and standards-setting bodies to develop content authentication standards. Given the content labeling mandate under the draft amendment, the industry body suggests that the draft rules are "at odds" with the AI governance guidelines. Nasscom suggests that the government should support and monitor ongoing industry efforts to improve labeling and preservation to ensure information authenticity. "MeitY may consider exploring a voluntary, cohort-based sandbox, inviting willing participants to run focused trials and publishing brief notes on outcomes and learnings," Nasscom suggests. The ministry can convene dialogues with domestic stakeholders and engage with peers from other countries to ensure interoperability of labeling solutions.
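Nasscom's earlier warning that automated detection is probabilistic and error-prone at SSMI scale can be made concrete with back-of-the-envelope arithmetic. Every figure below is hypothetical, chosen only to show how even a small error rate compounds at platform volumes:

```python
def expected_daily_errors(uploads: int, synthetic_share: float,
                          false_positive_rate: float,
                          false_negative_rate: float) -> tuple[float, float]:
    """(authentic items wrongly flagged, synthetic items missed) per day."""
    synthetic = uploads * synthetic_share
    authentic = uploads - synthetic
    return authentic * false_positive_rate, synthetic * false_negative_rate

# Hypothetical: 100M uploads/day, 5% synthetic, 1% FPR, 10% FNR
# -> roughly 950,000 wrong flags and 500,000 misses every day.
false_flags, misses = expected_daily_errors(100_000_000, 0.05, 0.01, 0.10)
```

This is the arithmetic behind Nasscom's suggestion that verification be framed as a safeguard rather than a certification of authenticity: at these volumes, no realistic error rate yields a trustworthy per-item guarantee.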
[3]
Labelling Deepfakes May Fail at Scale, Say Experts at #NAMA
"The problem statement of these particular rules, which seems to be deepfakes and harmful deepfakes, perhaps that is better resolved through a value chain approach where you're assigning appropriate responsibilities across various layers of the AI value chain," a participant pointed out at MediaNama's discussion on Regulating For Deepfakes in India on November 5. The discussion focused on a draft amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. This amendment aims to legally define synthetically generated information; mandate labelling, watermarking, and metadata embedding for such content; require user declarations and automated verification by Significant Social Media Intermediaries (SSMIs); and extend due diligence obligations to intermediaries that facilitate or host synthetic content. The challenge, however, comes with implementing this harm-based approach to regulation. A service provider can use multiple AI models to generate a final output, and in such cases, it becomes difficult to attribute liability, a participant stated. "I mean, for example, pretty much every hosting service provider today has built into itself AI tools that its clients, which are companies that are hosting with them, can actually use and leverage. And it's a B2B2C [business to business to consumer] model, so to speak. So at times, part of what they're using is open source as well. So the complexity of getting to an output that you then make available in the world is fairly, it's pretty crazy," they argued. [Note: The bulk of the discussion was held under the Chatham House Rule. Under the Rule, participants' remarks may be reported, but their names and affiliations cannot be disclosed. Where speakers specifically waived the Rule, we have added attribution to their comments.] 
Allocating responsibility across the value chain: The government can map out the AI value chain, identify the various risks, and then place risk-based obligations on developers, a participant suggested. "To me, that is a much more rational approach than using the IT Rules as the primary instrument for regulating AI. I want to caveat everything by saying that this is a super new debate. I don't know that anyone in the world has arrived at the right answers just yet," they added. When asked whether there is a need for a separate AI law, the participant said that their broader point is that the government's inclusion of AI companies within the intermediary liability regime is only creating confusion. Comparing India's regulatory approach to the EU AI Act, the participant stated that the EU took a risk-based, broader accountability approach. Another participant agreed, saying that instead of focusing on social media platforms, which are content distribution mechanisms, the government's regulatory approach should prioritize harm assessment on an issue-by-issue basis. They gave the example of child sexual abuse material (CSAM) online, noting that hash matching has helped reduce its prevalence. "That is a very specific problem which has a specific solution which has been found out. Now, we need to find those specific problems to solve instead of doing this definition, which covers everything," they added. Another participant pointed out that hash matching would not work for deepfakes because, instead of selecting one particular type of content to do the matching, you basically create hashes for everything under the sun, making the regulation unenforceable. The sheer scale of labeling may make the rules unenforceable: "There are billions of videos that are being uploaded on the internet every month. There are even larger numbers of images being shared every month. 
I'm just wondering if, for anyone who's asking for labeling, whether you comprehend this is almost unenforceable at the inception level," a participant said, emphasising the challenge of implementing the regulation in its current form. The government should invert the authentication model: "I think we need to have authentication systems for content generated by humans and humans alone," a participant argued. They noted that in cases like CCTV footage, one may want to ensure the footage has not been AI-altered. Human verification, they added, should be voluntary because there is an incentive for humans to authenticate their content. Human and AI co-created content is another emerging category, one the participant anticipates seeing more of in the future. They suggested that for such content, one could use provenance to differentiate between human and AI work. Another speaker observed that even when content is clearly identified as human- or AI-generated, people's perception of it does not change, only its persuasiveness does. "I think Stanford did a study very recently on labeling, and how basically it says around the perception, people's perception didn't change if the author was AI or not, right? It only changed when the persuasiveness of the content was called into question," they explained. Other key areas for improvement: Creating a blockchain registry for watermarked content: "My fundamental belief is that, from the output standpoint, we definitely need to start implementing some kind of watermarking and putting better data in. And we need to figure out a blockchain registry where we tag everything," Siddharth Puri, Co-Founder & CEO of Tyroo, suggested. He added that the government could support the infrastructure costs for such a registry. Compliance flexibility on label sizes/duration: The government should provide AI service providers with a certain degree of compliance flexibility, argued Poushali Dutta, Co-Founder of OttrCall. 
"For example, regarding the 10% rule, me being a voice ecosystem provider, if I can do it better and more efficiently over text rather than voice, that flexibility should be there," she said. Dutta was referring to the mandate that AI service providers must add labels covering 10% of the visual display or audio duration on synthetic content. She emphasized that the government should treat productivity tools and tools that facilitate mass manipulation differently. "I'm just saying, tools which are used for productivity and efficiency of small businesses versus tools which are used for mass scale deception and manipulation, like the deepfakes, which we are actually trying to regulate, they have to be regulated differently. We cannot do a one-size-fits-all approach," Dutta explained. She added that the government could distinguish between the two based on end use. Giving the example of OttrCall, she pointed out that they do not provide services directly to consumers, and regulations should treat B2B services like theirs differently from consumer-facing products. Regulations protecting the average person's likeness need better implementation: One of the stated goals of the draft rules is to prevent reputational damage, raising the issue of personality rights and likeness protection. A participant commented that a patchwork of regulations already exists to address likeness protections. "Any future regulatory approach should begin by identifying where the gaps lie and how proposed measures will meaningfully serve the victim. Simply introducing a new law may risk straining existing capacity," the participant said, citing research by the Rati Foundation and Tattle. They added that in cases of non-consensual intimate imagery (NCII), the report found police apathy: officers often did not register FIRs. Even where they wanted to help victims, they lacked the necessary training and manpower. 
"So, the question which we should be asking is that, if my likeness is being used, and it is being used in a way in which I cannot approach a court efficiently, a regulatory body efficiently, or the police efficiently, and I am taking the worst instance here of abuse towards women, which is potentially what has been cited again and again by two organisations which are also led by women, this is what has been stated, the problem and the fix and the solution is actually ensuring your existing laws can actually be realised through enforcement," they stated. Definitional issues with what qualifies as 'synthetically generated information': Participants unanimously agreed that there was a problem with how the draft rules defined synthetic information. They suggested that the definition could include anything from photoshopped content to a phone camera improving image quality, and even AR dog filters. "I think, what the definition should be in the rules, is synthetic content, which should be content generated by AI or, predominantly, or some legal terms, substantially, predominantly generated or modified by AI. It excludes minor alterations, Photoshop, touch-ups, face swaps, and all of that," a participant suggested. They added that the government could also include an element specifying "synthetic content with intent to mislead," given that the focus of the rules is to prevent precisely that type of content. Fixing the IT Act: One of the participants recommended fixing the IT Act by including a proper classification for AI service providers, along with proportionate obligations. "On the synthetic piece, have a separate regulation for synthetic media, don't push it under the IT Act. Make it risk-based, distribute obligations appropriately across the stack, and then make them outcome-based, don't be prescriptive," they added. 
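The hash-matching approach raised earlier in the discussion (in the CSAM context) works by comparing a fingerprint of each upload against a database of known harmful items. A minimal sketch using exact cryptographic hashes; real deployments use perceptual hashes such as PhotoDNA that survive re-encoding, and, as a participant noted, neither works for novel deepfakes, since there is no pre-existing corpus to match against:

```python
import hashlib

# Illustrative stand-in for a database of fingerprints of known harmful media.
KNOWN_BAD_HASHES = {hashlib.sha256(b"known-harmful-sample").hexdigest()}

def matches_known_content(payload: bytes) -> bool:
    """Exact-match lookup: a single changed byte defeats it, which is why
    deployed systems use perceptual hashes that tolerate re-encoding."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES
```

This illustrates the participant's objection: the technique is effective precisely because the target set is fixed and specific, whereas labelling all synthetic content would mean fingerprinting "everything under the sun".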
This discussion was supported by Google, Meta, and Amazon, with community partners Internet Freedom Foundation (IFF), Centre for Communication Governance (CCG), NLU Delhi, and Broadband India Forum (BIF).
[4]
Are AI Developers Intermediaries Under Synthetic Media Rules?
"Whether AI developers, particularly in the context of genAI platforms, whether they are intermediaries or not... the intent of the government, it appears, is that they should be, and that's how probably the rules have been written. But in my own opinion, they are not. Many of them will not qualify the definition of intermediaries. And hence, the primary purpose of this rule may get jeopardised," said Rakesh Maheshwari, Former Senior Director and Group Coordinator (Cyber Laws and Data Governance) at the Ministry of Electronics and Information Technology (MeitY), during MediaNama's discussion 'Regulating for Deepfakes in India' held last week. Maheshwari was responding to a wider debate on how the draft amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, classifies platforms that generate or modify content using AI. For context, the amendment seeks to legally define synthetically generated information, mandate visible labelling and embedded metadata, require user declarations verified by Significant Social Media Intermediaries (SSMIs), and extend due diligence obligations to intermediaries that host or distribute synthetic content. [Note: While the legal panel took place under the Chatham House Rule, attribution is permitted for Rakesh Maheshwari. Other comments from this panel below are reported without attribution. The business panel cited later in this article chose to waive the rule entirely and is fully attributable.] A participant explained that although the definition of "intermediary" under Section 2(w) of the IT Act is broad, the safe harbour under Section 79 applies only when the intermediary acts as a communication system or neutral conduit. If a platform is generating or modifying the content rather than merely transmitting it, it may fall outside the scope of safe harbour. One participant noted, "If you are only doing a very minimal assistive function, maybe you can enjoy safe harbour. 
If you're generating, it becomes difficult to fit the platform under 79 because it may not meet the functional test." MediaNama Founder and Editor Nikhil Pahwa pointed out that generative AI workflows often involve the same user on both ends: "So a user is on both ends of that. So it's not an intermediary," Pahwa said. A participant agreed: "For lack of any other better framework, this is what is being used. But strictly under 79, it is difficult to fit," the participant noted. Alongside the definitional debate, participants discussed who would be responsible for applying labelling and traceability, whether it would be the AI developer, the platform hosting the content, or the ad network. A participant argued that AI developers do not meet the legal definition of an intermediary under Section 2(w): "In no reading possible can an AI developer be a part of 2W. There is no pre-existing record for an AI tool to transmit. That is the core element in the definition. The rule is creating an artificial distinction between platforms that already qualify as intermediaries and model developers that do not. That may not stand legal scrutiny," the participant added. This raises the possibility that the draft rule may go beyond what the IT Act allows through delegated rulemaking. The uncertainty extends to who is actually responsible for applying the label and traceability metadata. One participant noted: "It is not clear. Unless the rule explicitly classifies the AI developer as an intermediary, it is difficult to derive a clear intent that they must apply the label." Another participant referenced China's approach, where AI developers embed metadata linking generated content back to the user: "Every AI developer requires a sign-in, and the data the user inputs is embedded in the metadata. Through that metadata, you can trace back the person who created the content." 
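The China-style traceability the participant describes amounts to attaching an identity-linked record to every generated asset. A hypothetical sketch; all field names are invented, and hashing the user identifier is one possible privacy mitigation, though the record still functions as a tracking vector:

```python
import hashlib
import json
import time
import uuid

def provenance_record(user_id: str, model_name: str) -> dict:
    """Hypothetical traceability metadata for one generated asset. The user
    identifier is hashed rather than stored in the clear, but anyone holding
    the identifier can still link assets back to the same user."""
    return {
        "asset_id": str(uuid.uuid4()),
        "generator": model_name,
        "user_ref": hashlib.sha256(user_id.encode()).hexdigest()[:16],
        "created_at": int(time.time()),
    }

# Serialised form that could travel alongside the media file.
record = provenance_record("user@example.com", "demo-model-v1")
payload = json.dumps(record)
```

Whether the AI developer, the hosting platform, or the distributing network is obliged to attach and preserve such a record is exactly the open question the participants identify.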
During the business-focused panel, speakers discussed how companies that provide AI-powered services classify themselves in practice. Responding to a question on liability when multiple models are involved, Poushali Dutta, Co-Founder of OttrCall, said that responsibility should lie with the user generating the prompt rather than the AI platform: "Text is the new content. The user, the person who is creating the prompt, they should be the one liable because it is their intentions that matter. Just think about how we previously regulated content. It is the user who is generating it; it should be their liability." When asked whether OttrCall considers itself an intermediary, she said: "Legally, no, because we also generate the same thing. We are not an intermediary, but the prompt is still done by our customers. We do not interfere with the prompt. The ownership should be on them because it is their intention that matters." Abhishek Nevatia, Founder of Zoop, described his platform as occupying a hybrid role: "I think we are an intermediary of sorts because we are connecting two sides of a network. We perform both roles, generation and distribution. The question is where liability should lie when things go wrong," Nevatia said. These statements illustrate that current business models do not map neatly onto the intermediary framework, especially where AI both assists and generates content. This discussion was supported by Google, Meta, and Amazon, with community partners Internet Freedom Foundation (IFF), Centre for Communication Governance (CCG), NLU Delhi, and Broadband India Forum (BIF). Disclosure: MediaNama's Editor Nikhil Pahwa has invested in Zoop, whose co-founder Abhishek Nevatia was a panellist during the "Impact of the Regulation on Businesses and Technical Challenges for Implementation" session.
[5]
What should social media do as per IT Rules draft amendment?
"The fundamental problem with the rules that have come out is that they are, in a sense, collapsing two entirely separate regulatory logics [together]. One that is meant for, I think, the IT rules [is] about platform accountability and liability. And the stuff about labelling and content provenance is really to do with governing AI technology, right? And I think that by conflating AI oversight with the liability regime, we are essentially creating outcomes that are not needed and a lot of confusion," a participant pointed out during MediaNama's discussion 'Regulating for Deepfakes in India' on November 5.

The discussion focused on a draft amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The amendment would legally define synthetically generated information and mandate labelling, watermarking, and metadata embedding for such content. It would also require user declarations, automated verification by Significant Social Media Intermediaries (SSMIs), and due diligence obligations for intermediaries that facilitate or host synthetic content.

Discussing the responsibilities of social media platforms (classified as intermediaries under the IT Act and Rules), the participant argued that platforms should only be responsible for user behaviour and content moderation within the bounds of safe harbour. [Note: The bulk of the discussion was held under the Chatham House Rule, under which participants' names and affiliations cannot be disclosed. Where speakers specifically waived this rule, we have added attribution to their comments.]

One of the biggest points of contention is whether social media platforms (as opposed to the platforms generating synthetic content) have labelling responsibilities as well.
Addressing this, former Senior Director at the Ministry of Electronics and Information Technology (MeitY), Rakesh Maheshwari, suggested that, on his interpretation, users are responsible for declaring synthetic content when they upload it to a social media platform. "If it has been synthetically generated and that's where maybe the platform puts a mark and gives metadata to be able to prove the provenance, and if it is passing through multiple kinds of platforms, naturally, the size of the file will keep on increasing. Hopefully, if nothing is removed, the traceability gets established, and maybe any platform that is used for uploading also gets noted," he explained.

Outside of labelling, participants explained that when users upload content to a social media platform, the platform typically transcodes it, a process through which a file can lose its metadata. They suggested that, given the metadata embedding requirements, social media platforms could end up liable for that metadata loss.

Elaborating on his point about user disclosures, Maheshwari explained that users may not always disclose content correctly. "Assuming that the user has flagged that this is synthetically generated media information, and the platform looks into [it] using the tools and confirms that it is. If it is so, then it labels it (the content) as synthetically generated. If not, and for the rest of the things where the user has not declared it to be synthetically generated, what really happens in those scenarios?" he questioned.

Maheshwari suggested that in cases where a user does not declare a piece of content as synthetically generated, the regulation should not require platforms to do anything beyond adding their own mark and their own metadata. He gave the example of TikTok, whose videos carry labels indicating that they came from TikTok when a user exports them.
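The transcoding concern can be made concrete with a toy model. This is not a real media pipeline (real transcoders re-encode video and audio streams, and metadata lives in container-specific structures like EXIF or MP4 atoms); the dictionary layout and function names here are invented for illustration. The point it demonstrates is structural: a transcode writes a fresh container, so provenance metadata survives only if the pipeline copies it across explicitly.

```python
import zlib

def transcode(file: dict, copy_metadata: bool = False) -> dict:
    """Re-encode a file's payload into a fresh container.

    Like a real transcoder, the output starts with *empty* metadata:
    any provenance information survives only if the pipeline copies
    it over explicitly.
    """
    new_payload = zlib.compress(file["payload"], 9)  # stand-in for re-encoding
    new_file = {"payload": new_payload, "metadata": {}}
    if copy_metadata:
        new_file["metadata"].update(file["metadata"])
    return new_file

upload = {"payload": b"frame data",
          "metadata": {"synthetic": True, "origin": "gen-tool"}}
served = transcode(upload)                         # provenance silently dropped
preserved = transcode(upload, copy_metadata=True)  # provenance carried over
```

Under the draft's metadata-embedding requirements, the difference between `served` and `preserved` is the difference between a compliant pipeline and one that strips provenance by default.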
"Now you [the social media platform can] leave it to the reporting community who reports [whether a piece of content is synthetic] and depending upon the kind of problem which has been reported, you can go back to what the user had committed [and] communicated, and that's where you start. Maybe if it is NCII [non-consensual intimate imagery], synthetically generated NCII, the policies can always convey that the user account shall be disabled," Maheshwari argued. Besides suggesting platform-level labels, he also recommended that the regulation exempt text-based synthetic outputs.

Another participant, however, countered that the rules appear to require social media platforms to scan content and check whether it is synthetically generated. "To my mind, the main thing that [this] does is that it blurs the distinction between hosting and editorialising, right? And I think that obviously, it undermines this whole safe harbour principle. To me, the requirements represent an active moderation requirement," they pointed out. Another participant agreed, suggesting that even a requirement to add platform logos to content is active moderation, to which Maheshwari responded that active moderation would involve looking into the content itself.

Beyond content moderation, participants also questioned the effectiveness of content labels in practice. A participant argued that if platforms have labelling obligations, such as via C2PA (Coalition for Content Provenance and Authenticity) or SynthID, the solution will not be effective because of a lack of interoperability. "So, the technical standards of detection and embedding those markers, first, we should think it through how that will happen. We will figure out adoption, and then we can define when we are going to embed them, and at what threshold we are going to embed them," they explained.
One of the participants brought up that platforms would struggle to establish whether a piece of live content is AI-generated. "And after that, let's say if that becomes downloadable and I put my logo on it and if something untoward has happened, my brand image is going to go for a toss," they argued. This participant also suggested that, under the rules, any failure to reasonably verify user declarations about synthetic content could lead to a loss of safe harbour, adding that courts would have to clarify what constitutes 'reasonable' verification.

Meanwhile, another person argued that platforms could disagree with each other on 'synthetically generated information' classifications. "If you have to actually implement these rules, you will have to proactively monitor each and every piece of content that a user uploads. Because, notwithstanding Mr. Maheshwari's amendment, which he's suggesting, as of now, whatever a user declares doesn't matter. Whether the user says this is SGI [synthetically generated information] or not, the intermediary, as per the rules, has to verify," they remarked.

Discussing the implications of the rules for free speech, one participant quoted the explanatory note for the draft amendment, which mentions the use of deepfakes for spreading misinformation, damaging reputations, manipulating elections and committing financial fraud. The participant highlighted how reputational damage and misinformation may not fit entirely within the scope of the reasonable restrictions on freedom of speech and expression under Article 19(2) of the Indian Constitution. "This [push to go beyond reasonable restrictions] has been a trend with the Union Government, where [the] industry has not been recognising it for some time, and it was much more alive to it in 2013 when the Shreya Singhal case was filed.
If you read the 2015 Shreya Singhal judgement, the Additional Solicitor General at that time, Mr Tushar Mehta, made the argument that the internet is a new medium and that, due to virality, there needs to be additional flexibility in how laws and regulations are applied," the speaker argued. They added that the government appears to have succeeded in this push in the X Corp case against the Indian government this year. The participant argued that, unlike other jurisdictions such as the European Union (EU), which created a comprehensive regulation, India's rules give the government powers for opaque censorship.

Turning to the MeitY AI Governance Guidelines, the participant noted that, under the section covering deepfakes, the guidelines suggest that traceability for deepfakes could be established using unique immutable identities. He warned that one such identity could be Aadhaar, and that users could end up being asked to sign in to platforms using their Aadhaar IDs.
Major tech industry groups including Nasscom, IAMAI, and BSA are urging India's MeitY to revise draft rules requiring visible labeling of AI-generated content, citing concerns about innovation impact and global alignment.
India's technology industry is mounting significant resistance to the Ministry of Electronics and Information Technology's (MeitY) proposed amendments to the Information Technology Intermediary Rules, which would mandate comprehensive labeling of AI-generated content. The draft rules require platforms and users to label AI-generated visuals with visible markers covering at least 10% of the display area, while audio content must include disclaimers for the first 10% of duration.
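The two 10% thresholds translate into concrete numbers for any given asset. The sketch below is an illustrative reading of the draft's arithmetic only (the function names and the banner interpretation are our own, and the draft does not prescribe how the 10% area must be shaped):

```python
def min_marker_area_px(width_px: int, height_px: int, ratio: float = 0.10) -> int:
    """Pixels a visible marker must cover under the draft's 10%-of-display-area rule."""
    return int(width_px * height_px * ratio)

def disclaimer_seconds(total_seconds: float, ratio: float = 0.10) -> float:
    """Seconds an audio disclaimer must run: the first 10% of the content's duration."""
    return total_seconds * ratio

# For a 1920x1080 visual, the marker must cover at least 207,360 px --
# for example, a full-width banner 1920 px wide and 108 px tall.
area = min_marker_area_px(1920, 1080)
banner_height_px = area // 1920
# A 3-minute audio clip would need a disclaimer over its first 18 seconds.
audio_disclaimer = disclaimer_seconds(180.0)
```

Worked through like this, the scale of the requirement is easier to see: a tenth of the frame is roughly the size of a broadcast news ticker, permanently present on every labelled visual.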

Source: ET
Nasscom, representing India's tech sector, has urged MeitY to clarify definitions of "synthetically generated information" and "deepfake synthetic content," arguing that regulations should target harmful and malicious content rather than encompassing all algorithmically altered media. The association expressed concerns about technical feasibility and called for distinct obligations based on whether technology serves businesses or individual consumers.
The Internet and Mobile Association of India (IAMAI) has characterized the labeling requirements as "premature," arguing they impose significant burdens without commensurate benefits. IAMAI contends that the proposed rules risk mandating technologies that are "not yet mature, reliable, interoperable, or privacy-preserving".
BSA, representing major global software firms, has warned against imposing inflexible standards that could undermine innovation. The organization recommended that India avoid requiring visible watermarks or labels on AI-generated content, cautioning that such marks are easily removed and could make Indian digital outputs less attractive in global markets.
Instead, BSA advocates for machine-readable markers and alignment with international protocols like the Coalition for Content Provenance and Authenticity (C2PA), which would simplify compliance for multinational platforms while maintaining transparency and user safety. Both industry groups emphasized the risk of India falling out of step with global digital standards if it moves too quickly without international coordination.
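To show what "machine-readable marker" means in contrast to a visible on-screen label, here is a simplified stand-in for a provenance manifest. Real C2PA manifests are cryptographically signed binary structures embedded in the file itself; this JSON sketch (with invented function names, and only the assertion label borrowed from the C2PA vocabulary) merely illustrates the kind of information they carry:

```python
import hashlib
import json

def make_manifest(content: bytes, generator: str) -> str:
    """Produce a machine-readable provenance record for an asset.

    The SHA-256 hash binds the claim to the exact bytes, so any edit
    to the asset breaks the link, which software can detect even though
    nothing is visible to a human viewer.
    """
    manifest = {
        "claim_generator": generator,
        "assertions": [{
            "label": "c2pa.hash.data",  # assertion name borrowed from the C2PA spec
            "alg": "sha256",
            "hash": hashlib.sha256(content).hexdigest(),
        }],
    }
    return json.dumps(manifest, sort_keys=True)

def manifest_matches(manifest_json: str, content: bytes) -> bool:
    """Check that an asset still matches the hash recorded in its manifest."""
    claim = json.loads(manifest_json)["assertions"][0]
    return claim["hash"] == hashlib.sha256(content).hexdigest()

manifest = make_manifest(b"rendered pixels", "demo-generator/1.0")
```

This is the trade-off BSA's submission points at: a marker like this is invisible to viewers and robust to casual copying of the claim, whereas a visible 10% overlay is trivially cropped out yet permanently degrades the asset.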
Experts have raised fundamental questions about whether AI developers qualify as intermediaries under existing IT Act definitions. Former MeitY Senior Director Rakesh Maheshwari noted that while the government's intent appears to classify AI platforms as intermediaries, "many of them will not qualify the definition of intermediaries". This classification uncertainty could jeopardize the primary purpose of the proposed rules.
The scale of implementation presents another significant challenge. Industry participants have questioned the enforceability of labeling requirements given that "billions of videos are being uploaded on the internet every month" along with even larger numbers of images. The sheer volume of content makes comprehensive labeling potentially unenforceable at inception.

Source: MediaNama
A major concern centers on how the amendments might affect "safe harbor" provisions that provide conditional immunity for online intermediaries. Under current law, platforms enjoy protection from liability for third-party content provided they meet due diligence requirements. The new draft clarifies that due diligence obligations now include verification and labeling of AI-generated material, with non-compliance potentially stripping platforms of their conditional immunity.
IAMAI argues that requiring platforms to verify user declarations about synthetic content "effectively mandates intermediaries to adjudicate the legality or lawfulness of user content ex ante," potentially conflicting with the actual knowledge standard and safe harbor protections. This requirement could force platforms to review content before publication, contradicting established legal precedents.
Both IAMAI and Nasscom have expressed concerns about the broad nature of the synthetically generated information definition. Based on the current definition, "a bulk of the data on the internet would need a label". IAMAI emphasized ambiguities in the phrase "reasonably appears to be authentic or true," noting that reasonable authenticity depends heavily on context.
The rules would impact all significant social media intermediaries with 5 million or more registered users in India, including YouTube, Facebook, Instagram, WhatsApp, X, LinkedIn, and others. Additionally, the amendments would extend to AI-based software and services including ChatGPT, Google's Gemini, Microsoft's Copilot, and Meta's AI assistant.
