Indian Tech Industry Pushes Back Against Proposed AI Content Labeling Rules


Major tech industry groups including Nasscom, IAMAI, and BSA are urging India's MeitY to revise draft rules requiring visible labeling of AI-generated content, citing concerns about innovation impact and global alignment.

Industry Opposition to Proposed Labeling Requirements

India's technology industry is mounting significant resistance to the Ministry of Electronics and Information Technology's (MeitY) proposed amendments to the Information Technology Intermediary Rules, which would mandate comprehensive labeling of AI-generated content. The draft rules require platforms and users to label AI-generated visuals with visible markers covering at least 10% of the display area, while audio content must include disclaimers for the first 10% of its duration.
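To make the draft thresholds concrete, the sketch below works through the arithmetic, assuming the visual threshold is measured as a fraction of total pixel area and the audio threshold as a fraction of total playback time (the draft's exact measurement method may differ):

```python
def min_label_area(width_px: int, height_px: int, fraction: float = 0.10) -> int:
    """Minimum visible-label area in pixels, assuming the 10% threshold
    applies to total pixel area of the visual."""
    return int(width_px * height_px * fraction)

def disclaimer_seconds(duration_s: float, fraction: float = 0.10) -> float:
    """Length of the opening audio disclaimer, assuming the 10% threshold
    applies to total clip duration."""
    return duration_s * fraction

print(min_label_area(1920, 1080))   # a 1080p frame -> 207360 px^2 of label
print(disclaimer_seconds(60.0))     # a 60-second clip -> a 6.0 s disclaimer
```

Under these assumptions, a label on a full-HD image would need to cover over 200,000 square pixels, which gives a sense of why industry groups describe the visible-marker requirement as intrusive.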

Source: ET

Nasscom, representing India's tech sector, has urged MeitY to clarify the definitions of "synthetically generated information" and "deepfake synthetic content," arguing that regulations should target harmful and malicious content rather than encompassing all algorithmically altered media. The association expressed concerns about technical feasibility and called for distinct obligations based on whether a technology serves businesses or individual consumers.

The Internet and Mobile Association of India (IAMAI) has characterized the labeling requirements as "premature," arguing they impose significant burdens without commensurate benefits. IAMAI contends that the proposed rules risk mandating technologies that are "not yet mature, reliable, interoperable, or privacy-preserving."

Global Alignment and Competitive Concerns

BSA, representing major global software firms, has warned against imposing inflexible standards that could undermine innovation. The organization recommended that India avoid requiring visible watermarks or labels on AI-generated content, cautioning that such marks are easily removed and could make Indian digital outputs less attractive in global markets.

Instead, BSA advocates for machine-readable markers and alignment with international protocols like the Coalition for Content Provenance and Authenticity (C2PA), which would simplify compliance for multinational platforms while maintaining transparency and user safety. Both industry groups emphasized the risk of India falling out of step with global digital standards if it moves too quickly without international coordination.
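The idea behind a machine-readable marker is that provenance travels with the file as signed metadata rather than as a visible overlay. The sketch below illustrates the concept with a simplified, hypothetical JSON record; the actual C2PA specification defines a binary manifest format with cryptographic signatures and a fixed schema, none of which is reproduced here:

```python
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_record(content_bytes: bytes, generator: str) -> dict:
    """Build a simplified, machine-readable provenance record for a piece
    of AI-generated content. Illustrative only: field names are invented
    and do not match the real C2PA manifest schema."""
    return {
        "claim": "ai_generated",          # hypothetical claim type
        "generator": generator,           # the model or tool that produced it
        "content_sha256": hashlib.sha256(content_bytes).hexdigest(),
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }

record = build_provenance_record(b"example image bytes", "example-image-model-v1")
print(json.dumps(record, indent=2))
```

Because the record binds a hash of the content to a claim about its origin, downstream platforms can verify it programmatically, which is the interoperability property BSA argues visible watermarks lack.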

Legal and Technical Implementation Challenges

Experts have raised fundamental questions about whether AI developers qualify as intermediaries under existing IT Act definitions. Former MeitY Senior Director Rakesh Maheshwari noted that while the government's intent appears to be to classify AI platforms as intermediaries, "many of them will not qualify the definition of intermediaries." This classification uncertainty could jeopardize the primary purpose of the proposed rules.

The scale of implementation presents another significant challenge. Industry participants have questioned the enforceability of labeling requirements given that "billions of videos are being uploaded on the internet every month," along with even larger numbers of images. The sheer volume of content makes comprehensive labeling potentially unenforceable from the outset.

Source: MediaNama

Safe Harbor and Liability Implications

A major concern centers on how the amendments might affect "safe harbor" provisions that provide conditional immunity for online intermediaries. Under current law, platforms enjoy protection from liability for third-party content provided they meet due diligence requirements. The new draft clarifies that due diligence obligations now include verification and labeling of AI-generated material, with non-compliance potentially stripping platforms of their conditional immunity.

IAMAI argues that requiring platforms to verify user declarations about synthetic content "effectively mandates intermediaries to adjudicate the legality or lawfulness of user content ex ante," potentially conflicting with the actual-knowledge standard and safe harbor protections. This requirement could force platforms to review content before publication, contradicting established legal precedents.

Scope and Definition Concerns

Both IAMAI and Nasscom have expressed concerns about the breadth of the "synthetically generated information" definition. Under the current wording, "a bulk of the data on the internet would need a label." IAMAI emphasized ambiguities in the phrase "reasonably appears to be authentic or true," noting that reasonable authenticity depends heavily on context.

The rules would impact all significant social media intermediaries with 5 million or more registered users in India, including YouTube, Facebook, Instagram, WhatsApp, X, LinkedIn, and others. Additionally, the amendments would extend to AI-based software and services including ChatGPT, Google's Gemini, Microsoft's Copilot, and Meta's AI assistant.

Source: MediaNama

TheOutpost.ai

© 2025 Triveous Technologies Private Limited