Experts warn no AI can detect deepfakes as India grapples with national security threat

At the India AI Impact Summit, cybersecurity experts revealed that no AI model can reliably detect sophisticated deepfakes, warning that synthetic media could become a national security risk by the 2029 elections. Meanwhile, a MeitY scientist proposes curbing the virality of synthetic content and exploring content provenance standards such as C2PA to build accountability frameworks.

No AI Can Reliably Detect Deepfakes, Experts Warn

A stark warning emerged from the India AI Impact Summit: current technology cannot reliably identify sophisticated deepfakes. "As of now, there is not a single AI model that can detect a really good deepfake-generated video and tell you this is fake," said Tarun Wig, co-founder of Innefu Labs, during a session on AI-enabled cybercrime [1]. This revelation highlights a widening gap between the scale of synthetic media threats and the tools available to detect them, leaving India's legal, enforcement, and technical systems struggling to keep pace as AI reshapes fraud, impersonation, and misinformation.

Source: Digit

Wig emphasized that deepfakes pose an outsized risk that extends well beyond individual fraud cases. "Deepfake is going to become a national security issue very soon, maybe as soon as the 2029 election," he warned, pointing to how easily synthetic audio and video can be weaponized and amplified at scale [1]. The session brought together former government officials, law enforcement experts, technologists, and senior lawyers who converged on one core concern: while AI has accelerated the scale and speed of cybercrime, mechanisms for detection, attribution, and accountability remain fragmented and largely reactive.

Source: MediaNama

MeitY Proposes Curbing Virality of AI-Generated Content

Addressing the challenge from a regulatory angle, Deepak Goel, a scientist in Cyber Law and Data Governance under the Ministry of Electronics & Information Technology (MeitY), proposed a novel approach. "The issue is not just content creation; the issue is virality. If content doesn't get disseminated, it stays between two people, and nobody objects. But can we create a mechanism that restricts amplification when content goes viral?" Goel asked during a policy discussion at the summit [2].
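
Goel described the mechanism only in outline, so the sketch below is purely hypothetical: a toy platform-side rule, in Python, under which unlabeled synthetic content can still be created and shared privately, but loses further amplification once it crosses a virality threshold. The Post class, the SHARE_THRESHOLD value, and the review queue are all invented for illustration; MeitY has published no such specification.

```python
# Hypothetical illustration of Goel's "curb virality, not creation" idea:
# unlabeled synthetic content may circulate, but once its share count
# crosses a threshold, further amplification pauses pending review.
# All names and thresholds are invented; MeitY has specified no such rule.

from dataclasses import dataclass

SHARE_THRESHOLD = 1000  # hypothetical virality cutoff


@dataclass
class Post:
    post_id: str
    is_synthetic: bool           # e.g. self-declared or detector-flagged
    is_labeled: bool             # carries a visible synthetic-media label
    share_count: int = 0
    amplification_paused: bool = False


def record_share(post: Post, review_queue: list) -> bool:
    """Apply one share; return False if amplification is currently paused."""
    if post.amplification_paused:
        return False
    post.share_count += 1
    # The mechanism targets dissemination, not creation: only synthetic
    # content that is both unlabeled and going viral gets throttled.
    if (post.is_synthetic and not post.is_labeled
            and post.share_count >= SHARE_THRESHOLD):
        post.amplification_paused = True
        review_queue.append(post.post_id)
    return not post.amplification_paused


if __name__ == "__main__":
    queue: list = []
    p = Post("clip-42", is_synthetic=True, is_labeled=False)
    for _ in range(SHARE_THRESHOLD + 5):
        record_share(p, queue)
    print(p.share_count, p.amplification_paused, queue)
    # -> 1000 True ['clip-42']  (shares past the cutoff are rejected)
```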

Goel's emphasis on curbing virality came just three days before February 20, the compliance deadline for amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The amendment places obligations on intermediary platforms to label synthetic content and requires them to remove flagged content within three hours of receiving a government takedown notice [2]. However, Goel stressed that the government's approach extends beyond content moderation: "This is not about content moderation. It is about verifiability, accountability, and keeping the citizen at the centre of the trust model" [2].

Coalition for Content Provenance and Authenticity (C2PA) Emerges as Potential Solution

As policymakers search for technical solutions, content provenance tools that embed cryptographic metadata into AI-generated images, videos, and audio are emerging as a key part of India's techno-legal framework for AI accountability. The Coalition for Content Provenance and Authenticity (C2PA), an open technical standard for identifying the origin and edits of digital content, is being discussed as a possible building block [2].
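
C2PA itself specifies a detailed manifest format (JUMBF containers, X.509 certificate chains, COSE signatures), which the sketch below does not reproduce. It is a deliberately simplified Python illustration of the underlying idea, binding provenance metadata to a hash of the content bytes and signing the bundle so that later alterations become detectable. It assumes the third-party cryptography package, and the manifest fields are invented for illustration.

```python
# Simplified illustration of content provenance: bind metadata (who made
# this, with what tool) to a hash of the bytes, then sign the bundle.
# This is NOT the C2PA wire format (JUMBF containers, X.509 chains, COSE
# signatures); it only demonstrates the underlying idea.
# Requires the third-party `cryptography` package.

import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def make_manifest(content: bytes, creator: str, tool: str,
                  key: ed25519.Ed25519PrivateKey) -> dict:
    claim = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,            # illustrative fields, not C2PA's
        "generator_tool": tool,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": key.sign(payload).hex()}


def verify_manifest(content: bytes, manifest: dict,
                    pub: ed25519.Ed25519PublicKey) -> bool:
    claim = manifest["claim"]
    if claim["content_sha256"] != hashlib.sha256(content).hexdigest():
        return False                   # content was altered after signing
    payload = json.dumps(claim, sort_keys=True).encode()
    try:
        pub.verify(bytes.fromhex(manifest["signature"]), payload)
        return True
    except InvalidSignature:
        return False                   # the claim itself was tampered with


if __name__ == "__main__":
    key = ed25519.Ed25519PrivateKey.generate()
    video = b"...synthetic video bytes..."
    m = make_manifest(video, "studio@example.com", "gen-model-v1", key)
    print(verify_manifest(video, m, key.public_key()))          # True
    print(verify_manifest(video + b"x", m, key.public_key()))   # False
```

A real C2PA manifest goes further: it records assertions about each edit action and chains the signing certificate back to a trust list, which is what gives the standard its attribution value.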

Source: MediaNama

At the summit, speakers from Adobe and Google joined MeitY officials to discuss how building trust in synthetic media requires verifiable context rather than censorship. John Miller, Legal and Government Affairs Executive at the Information Technology Industry Council, positioned content provenance as "foundational infrastructure for transparency, attribution, and accountability" [2]. Andy Parsons, Global Head of Content Authenticity at Adobe, described C2PA as a multi-company effort nearly five years in the making, intended to create a global standard that is "ready to adopt" [3].

Gail Kent, Government Affairs and Public Policy Director at Google, explained that the technology is transparent, understandable, interoperable, and immutable [2]. However, Sameer Boray, Senior Policy Manager at the Information Technology Industry Council, cautioned that "C2PA is not a silver bullet. It is not a perfect solution, but it is a solution and a good start" [2].

Attribution Crisis Paralyzes Law Enforcement

Beyond technical detection challenges, legal experts identified attribution as the most serious obstacle in prosecuting AI-enabled cybercrime cases. Senior Supreme Court advocate Vivek Sood stated bluntly: "The law is not weak. The entire law of criminal conspiracy has existed since the Indian Penal Code of 1860. The problem is attribution" [1]. As AI systems operate with increasing autonomy, courts and investigators struggle to reliably link digital actions to specific individuals.

Former Indian Police Service officer Prof. Triveni Singh framed the problem in stark terms, warning that traditional policing structures remain poorly equipped to respond to crimes that unfold simultaneously across jurisdictions and digital infrastructure layers [1]. Singh criticized existing training models as inadequate: "So what we think by giving two or three days of training, we can make them cyber-smart? No, it's complete science," he said [1].

Wig explained that when agencies conduct raids, they deal with data volumes reaching 80 terabytes. "That is where AI plays a huge role in forensic analysis by identifying patterns that humans cannot," he noted [1]. Yet speakers cautioned that detection alone does not automatically translate into admissible evidence or successful prosecution.
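
Wig did not describe Innefu's pipeline, and genuine "pattern identification" at this scale would involve machine learning models. Still, triage of a multi-terabyte seizure typically starts with a much simpler step: grouping byte-identical files by content hash so examiners review each unique artifact once. A minimal, standard-library-only Python sketch of that step follows; the evidence mount path is a placeholder.

```python
# Illustrative triage step for large seizures: group byte-identical files
# by content hash so examiners review each unique artifact once. This is
# a generic, stdlib-only sketch, not any firm's actual forensic pipeline.

import hashlib
from collections import defaultdict
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks so huge corpora never fill RAM."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()


def group_duplicates(root: Path) -> dict[str, list[Path]]:
    """Map each content hash to every file path that carries those bytes."""
    groups: dict[str, list[Path]] = defaultdict(list)
    for path in root.rglob("*"):
        if path.is_file():
            groups[sha256_of(path)].append(path)
    return groups


if __name__ == "__main__":
    # Hypothetical mount point for imaged evidence; adjust as needed.
    for digest, paths in group_duplicates(Path("/mnt/evidence")).items():
        if len(paths) > 1:
            print(f"{digest[:12]}... appears {len(paths)} times")
```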

Balancing Security with Privacy and Data Governance

As India develops its response framework, officials emphasized the need to balance security concerns with constitutional protections. Sood cautioned against allowing cybercrime enforcement to erode fundamental rights: "Right to privacy is a fundamental right recognised by the Supreme Court and flows from Article 21," he said, adding that crime prevention and investigation operate only as exceptions [1].

Former MeitY official Rakesh Maheshwari placed cybercrime risks within India's data governance framework, particularly the Digital Personal Data Protection Act. "The whole aim is to collect only what is really required. Don't over-collect. Be transparent. Store it only for the minimum period required," Maheshwari said [1]. He highlighted that in the event of a breach, organizations are obligated to report incidents not only to the Data Protection Board but also to affected individuals.

Goel from MeitY raised critical questions about risk allocation: "Who bears the risk? Is it the content? Is it the citizen? Is it the platform? Our very strong opinion is that the individual is bearing the risk. My likeness is getting cloned. My voice is getting synthesised. My credibility is getting undermined" [2]. He emphasized users' right to know, protection against impersonation, and right to remedy.

Dr. Sapna Bansal, professor at Shri Ram College of Commerce, argued that technology alone cannot counter AI-enabled crime. "We are teaching cybercrime in colleges and schools, but ethical AI is still lacking," she said, warning that routine behaviors such as sharing OTPs, using public Wi-Fi, and trusting urgent calls continue to expose users to fraud [1].
