3 Sources
[1]
Experts Warn No AI Can Reliably Detect Deepfakes
With inputs from Chaitanya Kohli and Prabhanu Kumar Das

"As of now, there is not a single AI model that can detect a really good deepfake-generated video and tell you this is fake," said Tarun Wig, co-founder of Innefu Labs, during a session on AI-enabled cybercrime at the India AI Impact Summit, highlighting a widening gap between the scale of synthetic media threats and the tools available to detect them.

Wig warned that this gap extends well beyond individual fraud cases. "Deepfake is going to become a national security issue very soon, maybe as soon as the 2029 election," he said, pointing to how easily synthetic audio and video can be weaponised and amplified at scale. Experts at the summit cautioned that India's legal, enforcement, and technical systems are struggling to keep pace as artificial intelligence reshapes fraud, impersonation, and misinformation.

The session, titled AI for Secure India: Combating AI-Enabled Cybercrime, Deepfakes, Dark Web Threats and Data Breaches, brought together former government officials, law enforcement experts, technologists, and senior lawyers. Speakers converged on one core concern: while AI has accelerated the scale and speed of cybercrime, mechanisms for detection, attribution, and accountability remain fragmented and largely reactive.

Former Indian Police Service (IPS) officer and cybercrime expert Prof. Triveni Singh framed the problem in stark terms, outlining how AI has reshaped cybercrime in India. He warned that traditional policing structures remain poorly equipped to respond to crimes that unfold simultaneously across jurisdictions and digital infrastructure layers.

Several speakers flagged AI-generated audio and video as a qualitatively different threat, and Wig explained why deepfakes pose an outsized risk. Speakers acknowledged that AI increasingly assists cybercrime investigation and forensic analysis, but stressed that it improves capacity without resolving deeper structural constraints. "Every crime, if it is not a lone wolf attack, it's a network," Wig said, explaining that fraud, money laundering, and identity theft typically involve interconnected actors.

AI systems allow agencies to process vast volumes of seized digital data. "When agencies go on a raid, they deal with data to the tune of 80 terabytes," Wig said. "That is where AI plays a huge role in forensic analysis by identifying patterns that humans cannot." That said, speakers cautioned that detection alone does not automatically translate into admissible evidence or successful prosecution.

From a legal standpoint, senior Supreme Court advocate Vivek Sood said attribution has emerged as the most serious challenge in AI-enabled cybercrime cases. "The law is not weak," Sood said. "The entire law of criminal conspiracy has existed since the Indian Penal Code of 1860. The problem is attribution." Sood explained why attribution increasingly fails: as AI systems operate with greater autonomy, courts and investigators struggle to reliably link digital actions to specific individuals.

Several speakers pointed to serious capacity gaps within law enforcement. Singh criticised existing training models as inadequate. "So what we think by giving two or three days of training, we can make them cyber-smart? No, it's complete science," Singh said.
He identified key weaknesses in how investigators are trained and equipped.

Former Ministry of Electronics and Information Technology official Rakesh Maheshwari placed cybercrime risks within India's data governance framework, particularly the Digital Personal Data Protection Act. "The whole aim is to collect only what is really required. Don't over-collect. Be transparent. Store it only for the minimum period required," Maheshwari said. He also highlighted user rights under the law, including the ability to know what data has been collected, who it has been shared with, and to seek erasure. "In case of a breach, it will be the liability of the organisation to report it not only to the data protection board but also to the individuals whose data has been impacted," he said.

Sood cautioned against allowing cybercrime enforcement to erode constitutional protections. "Right to privacy is a fundamental right recognised by the Supreme Court and flows from Article 21," he said, adding that crime prevention and investigation operate only as exceptions. While AI can assist in prevention and investigation, Sood stressed the need to protect the rights of the accused and avoid wrongful implication.

Dr. Sapna Bansal, professor at Shri Ram College of Commerce, argued that technology alone cannot counter AI-enabled crime. "We are teaching cybercrime in colleges and schools, but ethical AI is still lacking," she said. She warned that routine behaviours such as sharing OTPs, using public Wi-Fi, and trusting urgent calls continue to expose users to fraud. "Stay alert, stay aware, stay secure," Bansal said, calling awareness and education the first line of defence.

Taken together, the session at the India AI Impact Summit exposed a widening gap between the growing sophistication of AI-enabled cybercrime and the readiness of India's enforcement and governance systems. On paper, India has begun responding: policymakers have introduced new legal frameworks, parliamentary committees have flagged deepfakes as a distinct risk, and agencies have started testing detection tools. The Parliamentary Standing Committee on Home Affairs, for instance, has called for express legal provisions to address AI-generated content, while welcoming indigenous deepfake detection tools developed by the Centre for Development of Advanced Computing (C-DAC), which are currently under evaluation by law enforcement.

However, detection tools still struggle in real-world conditions, attribution across borders remains difficult, and enforcement agencies face persistent capacity gaps. As multiple speakers noted, AI will inevitably be used by both criminals and the state. Unless institutions adapt faster, AI risks becoming an asymmetric weapon deployed against citizens rather than a tool capable of protecting them.
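Wig's point about 80-terabyte seizures is, at bottom, a triage problem: before any pattern-finding can happen, investigators have to separate files already known to be irrelevant from material worth human or model review. A minimal sketch of hash-based triage in Python, assuming hypothetical known-hash sets; the hashes, paths, and bucket names are illustrative, not any agency's actual tooling:

```python
import hashlib
from pathlib import Path

# Hypothetical reference sets; real investigations use curated hash databases
# (e.g. of standard OS files and previously flagged material).
KNOWN_BENIGN = {"3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"}
KNOWN_FLAGGED = {"9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"}

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so very large seizures don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def triage(root: Path) -> dict[str, list[Path]]:
    """Bucket every file under `root` into benign / flagged / needs-review."""
    buckets: dict[str, list[Path]] = {"benign": [], "flagged": [], "review": []}
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        digest = sha256_of(path)
        if digest in KNOWN_FLAGGED:
            buckets["flagged"].append(path)
        elif digest in KNOWN_BENIGN:
            buckets["benign"].append(path)
        else:
            buckets["review"].append(path)
    return buckets

if __name__ == "__main__":
    # Hypothetical mount point of a seized disk image.
    for bucket, files in triage(Path("./seized_disk_image")).items():
        print(bucket, len(files))
```

A filter like this only shrinks the haystack; the role Wig describes for AI, finding patterns across what remains, sits on top of such pre-processing rather than replacing it, and, as speakers noted, whatever it surfaces must still survive the journey to admissible evidence.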
[2]
MeitY Scientist Calls for Limits on Deepfake Virality
With additional inputs from Chaitanya Kohli and Prabhanu Kumar Das

"The issue is not just content creation; the issue is virality. If content doesn't get disseminated, it stays between two people, and nobody objects. But can we create a mechanism that restricts amplification when content goes viral?" asked Deepak Goel, a scientist in Cyber Law and Data Governance under the Ministry of Electronics & Information Technology (MeitY), addressing ways to regulate AI-generated deepfake content.

Goel's emphasis on curbing virality came just three days before February 20, the government's compliance deadline for the recent amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The amendment not only places obligations on intermediary platforms to label synthetic content but also requires them to remove flagged content within three hours of receiving a government takedown notice.

Goel was speaking at a policy discussion with officials from Adobe and Google held at the India AI Impact Summit 2026 on February 17, 2026. Speakers said provenance tools that embed cryptographic metadata into AI-generated images, videos, and audio may emerge as a key part of India's techno-legal framework for AI accountability. As the government looks beyond takedowns and toward accountability, technical standards such as the Coalition for Content Provenance and Authenticity (C2PA) specification, an open standard for identifying the origin and edits of digital content, are being discussed as possible building blocks of this framework.

"Government is normally known for taking content down. But this is not about content moderation. It is about verifiability, accountability, and keeping the citizen at the centre of the trust model," Goel said. He added that overly prescriptive legislation would not help achieve global cooperation or avoid jurisdictional conflicts. "If legislation is too prescriptive, it would be really tough to achieve convergence. But if we create laws that are principle-based, simply setting out principle-based obligations and leaving the industry to implement them on its own, then we can achieve good convergence."

"In India, we are very strongly of the opinion that we should create laws that are not prescriptive. The laws would be principle-based, keeping the citizen at the centre. If technical implementation teams work on those principles, the solutions would be acceptable everywhere. That should be the way to go ahead," he said.

Goel later asked: "Who bears the risk? Is it the content? Is it the citizen? Is it the platform? Our very strong opinion is that the individual is bearing the risk. My likeness is getting cloned. My voice is getting synthesised. My credibility is getting undermined. My decisions are getting manipulated. How do we empower the individual so that what belongs to them doesn't get undermined in any way?" He said he does not yet have complete answers to these questions but is awaiting techno-legal solutions from tech giants like Google and Adobe. "I hope they keep coming [with tech solutions] and keep getting adopted at mass scale," he said.

Goel also emphasised the importance of users' right to know. "They have the right to protection against impersonation. And if something bad happens, they have the right to remedy."
Addressing currently available solutions, he said, "Whatever C2PA or any standard does, embedding metadata or identifiers into content generation tools, it should be transparent, understandable, interoperable, and immutable."

All the speakers in the discussion agreed that the C2PA protocol is one possible solution for tackling deepfakes. However, "C2PA is not a silver bullet. It is not a perfect solution, but it is a solution and a good start. The standards that C2PA is creating are a step in the right direction," said Sameer Boray, Senior Policy Manager at the Information Technology Industry Council.

"In the age of rapidly scaling synthetic media, content provenance can itself be characterised as foundational infrastructure for transparency, attribution, and accountability: not as moderation or censorship, but as verifiable context," said John Miller, Legal and Government Affairs Executive at the Information Technology Industry Council.

Gail Kent, Government Affairs and Public Policy Director at Google, explained why C2PA is a "compelling digital tool". She also cautioned that "just because something is created by AI doesn't mean it's not trustworthy", and, while addressing the challenge of deepfake content, laid out three ways to build trust in the current AI age.

While discussing policymaking around tools like C2PA, Boray said regulators and policy professionals need to think about synthetic media holistically "rather than just looking at it as a content moderation issue", as privacy laws, cybersecurity rules, and AI governance guidelines intersect with the main IT Act. Referring to the 50 states in the US, he said each state regulates a small aspect of essentially the same issue, resulting in bills that conflict with one another because they are siloed rather than holistic.
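Goel's requirement that embedded identifiers be "transparent, understandable, interoperable, and immutable" maps onto a simple cryptographic pattern: hash the media bytes, record how they were produced, and sign that record so tampering is detectable. A minimal sketch of the pattern, assuming the `cryptography` Python library for Ed25519 signatures; the manifest fields are illustrative and this is not the actual C2PA manifest or Content Credentials format:

```python
import json
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

def make_manifest(media: bytes, tool: str, edits: list[str], key: Ed25519PrivateKey) -> dict:
    """Bind a provenance record to the exact bytes of the media via a signed hash."""
    record = {
        "content_sha256": hashlib.sha256(media).hexdigest(),
        "generator": tool,        # e.g. which AI tool or camera produced it
        "edit_actions": edits,    # e.g. ["crop", "colour_correct"]
    }
    payload = json.dumps(record, sort_keys=True).encode()
    return {"record": record, "signature": key.sign(payload).hex()}

def verify_manifest(media: bytes, manifest: dict, public_key: Ed25519PublicKey) -> bool:
    """Check the record is untampered AND still describes these exact bytes."""
    payload = json.dumps(manifest["record"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]), payload)
    except InvalidSignature:
        return False
    return manifest["record"]["content_sha256"] == hashlib.sha256(media).hexdigest()

if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    media = b"...image bytes..."
    manifest = make_manifest(media, tool="hypothetical-genai-tool", edits=["resize"], key=key)
    print(verify_manifest(media, manifest, key.public_key()))                 # True
    print(verify_manifest(b"re-encoded bytes", manifest, key.public_key()))   # False
```

The second verification call fails because the signature vouches only for the exact bytes it was computed over, which is the limitation Boray's "not a silver bullet" caveat points at: a screenshot or re-encode of the same image carries no verifiable history at all.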
[3]
India AI Impact Summit 2026: Trust in the age of synthetic media is turning into infrastructure
At the India AI Impact Summit 2026, a session titled "Building trust in the age of synthetic media" tried to do something unusually concrete for a topic that often collapses into vibes and fear. Rather than arguing about whether AI-generated content is inherently "good" or "bad", the panellists repeatedly came back to what it would take to make transparency about digital media legible, scalable, and compatible across platforms and jurisdictions.

Convened by the Coalition for Content Provenance and Authenticity and the Information Technology Industry Council, the panel featured Andy Parsons (Global Head of Content Authenticity at Adobe), John Miller (General Counsel and Senior Vice President of Policy at the Information Technology Industry Council), Gail Kent (Global Public Policy Director at Google), Sameer Boray (Senior Policy Manager at the Information Technology Industry Council), and Deepak Goel from the Ministry of Electronics and Information Technology. Their shared premise: synthetic media is scaling fast, and trust is now a foundational requirement for everything from democratic discourse to consumer safety.

Early in the session, John Miller positioned trust as the unifying thread running through modern digital policy, from privacy to cybersecurity to AI governance. His framing was careful: content provenance standards such as the Coalition for Content Provenance and Authenticity are not meant to be a moderation tool, nor a mechanism for censorship. Instead, the pitch is closer to "verifiable context": a way to attach tamper-resistant metadata to content so people and platforms can understand how something was made, edited, and shared.

That distinction matters because it tries to defuse a predictable backlash: that any system which labels, traces, or verifies media will become a lever for controlling speech. The panel kept reiterating the inverse idea: that provenance, at least in its ideal form, shifts decision-making to the viewer by providing information, not by making the decision for them. This is where the "nutrition label" metaphor surfaced, a way to describe provenance as a standardised information panel. The ambition is not to decide truth, but to make questions like "is this a photograph?", "was this edited?", "was AI involved?", and "what tool chain touched it?" answerable through objective signals.

As moderator, Andy Parsons outlined the coalition's origin story: a multi-company effort, nearly five years in the making, intended to create a global standard that is "ready to adopt". The key selling point is not that it solves every problem, but that it creates a shared, interoperable foundation. In other words, even if provenance does not answer every hard question about deception, it can at least standardise the basics of "where did this come from, and what happened to it".

Gail Kent echoed that idea from the perspective of a company that sits on both distribution and creation surfaces. She pointed to long-standing product features that revolve around understanding images: reverse search, "about this image"-style context, and newer multimodal workflows. The core argument was that AI increases creative capability but also makes manipulation easier, which raises the value of embedding reliable signals into content at the point of creation. She described two broad approaches. One is a marker that identifies AI-generated content. The other is richer provenance via content credentials that carry information about how a piece of media was created and edited.
The subtext is important: labelling "AI-made" is not enough on its own, because the interesting questions are often about what changed, by whom, and whether the context is being misrepresented, not merely whether a model was used at some step. Kent also stressed that AI-created does not automatically mean untrustworthy, a point that subtly pushes back against a future where the "AI label" becomes a scarlet letter. In a world where AI tools are baked into mainstream apps and devices, the label has to communicate context without implying that the content is necessarily false.

The panel's most repeated phrase, in different ways, was that none of these systems are perfect. Sameer Boray was direct: the Coalition for Content Provenance and Authenticity standard is "a solution", not "the solution". He placed it in a wider toolbox that includes watermarking, human review, and other provenance methods. This is not just a rhetorical hedge. It is a practical admission that any one mechanism will have failure modes, especially once content starts getting screen-recorded, re-encoded, shared across messaging apps, or remixed in ways that strip metadata. Even Andy Parsons leaned into this realism: provenance is foundational, not magical. The value is in improving the baseline, making it easier for platforms, governments, and users to make better decisions with more information than they have today.

A key undercurrent through the discussion was that India is in a particularly intense moment for digital regulation, with proposals that touch AI governance, privacy, and platform rules. The panel did not frame this as India acting in isolation, but as part of a global wave, with similar conversations happening in Europe, the US, and elsewhere. Still, India's scale, linguistic diversity, and mobile-first internet make the implementation question sharper. If policy is written as if every surface can instantly show provenance labels and verify credentials, it risks becoming performative, or worse, unworkable.

Boray raised the obvious concern in the context of a tight implementation window: provenance is not uniformly supported across major platforms, and it is particularly hard to enforce in private, closed, high-velocity sharing environments. He argued for a phased approach that gathers stakeholders and maps what is technically realistic, rather than assuming that a standard can be switched on everywhere at once. This is where the panel's tone became more pragmatic than ideological. Nobody was denying responsibility. The debate was about sequencing, feasibility, and the difference between a regulation that looks good on paper and one that can actually be complied with in a heterogeneous ecosystem of devices, apps, and user behaviours.

Deepak Goel gave the clearest articulation of what is at stake from a governance standpoint. In his framing, the "risk bearer" is not primarily the platform: it is the individual whose likeness is cloned, whose voice is synthesised, whose decisions are manipulated, and whose credibility is undermined. That view reorients the synthetic media debate away from abstract arguments about misinformation and towards more immediate harms: impersonation, fraud, coercion, and reputational damage. It also leads naturally to a rights-based framing: the right to know, the right to protection against impersonation, and the right to remedy when harm occurs.
He also emphasised a familiar regulatory instinct: being technology-agnostic, and ideally purpose-agnostic, while still aiming for citizen empowerment and ease of doing business. The suggestion was that if regulation sets principle-based outcomes rather than prescribing a specific technical approach, it is more likely to scale and more likely to align globally. There was a revealing aside too: Goel said he would like to test provenance tooling himself if given access. It sounds small, but it hints at a recurring problem in tech regulation, namely that the people tasked with making rules often do not get hands-on exposure to how the tooling behaves in real workflows.

One of the cleanest distinctions raised was between creation and dissemination. A person can create synthetic content and share it privately without triggering public harm. The societal risk appears when the content becomes viral, amplified, and detached from the context of its creation. That is exactly where provenance struggles today. Metadata can be stripped. Content can be re-shared as screenshots. Audio can be re-recorded. Video can be re-encoded. Messaging apps can be the fastest path, and also the hardest place to attach, preserve, and display context. The panel did not claim to have a perfect fix for this. Instead, it leaned into a more layered approach: provenance signals where possible, other markers where needed, and user-facing interfaces that make context understandable in the moment people are deciding whether to trust and share.

Kent laid out a simple but consequential framework: trust is a shared ecosystem. Companies need to build and ship the tooling. Government needs to set principled goals and create workable rules. Users need media literacy, because no cryptographic system can replace human judgement in every situation. This is also where a subtle warning surfaced: a trust ecosystem cannot be built solely through compliance. If the labels are confusing, if they stigmatise legitimate content, or if they are inconsistently applied, users will ignore them. In that sense, UX and literacy become as important as cryptography. Kent's personal example of a parent forwarding questionable information is familiar, but the point was less about family dynamics and more about scale: a world where every user requires personalised verification help is not a world where trust has been solved.

The panel ended on a question that hangs over nearly every major tech policy conversation today: how to avoid a fragmented ecosystem where each jurisdiction sets slightly different requirements, forcing global products into inconsistent behaviours and reducing the chance of universal adoption. All three voices who answered converged on principle-based regulation as the most realistic path to compatibility. If laws specify goals such as transparency, security, and privacy preservation, and avoid mandating a single implementation, the industry has room to innovate and standardise. If laws become prescriptive, convergence becomes harder, and fragmentation becomes more likely. Boray added a policy practitioner's caution: synthetic media intersects with privacy, cybersecurity, and AI governance, not only "content moderation". Treating it as a narrow issue risks contradictory rules, even within the same country.

Despite the surface-level disagreement over whether provenance is merely "one solution" or "a foundational solution", the session converged on a few practical truths.
First, trust is becoming a prerequisite infrastructure layer for digital life, not a nice-to-have. Second, provenance standards are attractive because they are interoperable and, in theory, verifiable without relying on any single platform's claims. Third, no single mechanism will solve virality, impersonation, and manipulation, so the future will be layered: provenance plus other markers, plus interfaces, plus education, plus remedies when harm occurs.

Most importantly, the panel treated implementation as the real battleground. Not because policy is irrelevant, but because every promise about trust is only as good as what survives the messy journey from creation tools to phones to feeds to forwards. If synthetic media is the new normal, then "trust in the age of synthetic media" stops being a slogan and starts looking like systems engineering: standards, incentives, and user comprehension, all moving together, or not moving at all.
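Goel's question about restricting amplification, and the panel's creation-versus-dissemination distinction, both point at a mechanism platforms already experiment with: counting how far an item has travelled and adding friction or context past a threshold. A minimal sketch, assuming a hypothetical forward counter keyed on a content fingerprint; the thresholds, names, and policy responses are illustrative, not any platform's actual behaviour:

```python
import hashlib
from dataclasses import dataclass, field

FORWARD_LIMIT = 5        # per-user forwards of one item before friction kicks in
VIRAL_THRESHOLD = 1000   # total shares after which extra context is required

@dataclass
class AmplificationTracker:
    """Track how widely a piece of content has spread, keyed by its fingerprint."""
    total_shares: dict[str, int] = field(default_factory=dict)
    per_user: dict[tuple[str, str], int] = field(default_factory=dict)

    @staticmethod
    def fingerprint(media: bytes) -> str:
        # A real system would need a perceptual hash so screenshots and
        # re-encodes still match; SHA-256 keeps this sketch self-contained.
        return hashlib.sha256(media).hexdigest()

    def on_forward(self, user_id: str, media: bytes) -> str:
        fp = self.fingerprint(media)
        self.total_shares[fp] = self.total_shares.get(fp, 0) + 1
        self.per_user[(user_id, fp)] = self.per_user.get((user_id, fp), 0) + 1

        if self.per_user[(user_id, fp)] > FORWARD_LIMIT:
            return "block: per-user forward limit reached"
        if self.total_shares[fp] > VIRAL_THRESHOLD:
            return "allow_with_label: show provenance / 'forwarded many times' context"
        return "allow"

if __name__ == "__main__":
    tracker = AmplificationTracker()
    clip = b"...synthetic video bytes..."
    for _ in range(7):
        print(tracker.on_forward("user-42", clip))
```

The fingerprinting caveat in the comment is where this gets hard in practice: screenshots and re-encodes defeat exact hashing, end-to-end encrypted messaging keeps any counter from being centralised, and the panel's point was precisely that no single layer like this resolves the problem on its own.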
At the India AI Impact Summit, cybersecurity experts said that no AI model can reliably detect sophisticated deepfakes, warning this could become a national security risk by the 2029 elections. Meanwhile, a MeitY scientist proposed curbing the virality of synthetic content and exploring content provenance standards like C2PA to build accountability frameworks.
A stark warning emerged from the India AI Impact Summit: current technology cannot reliably identify sophisticated deepfakes. "As of now, there is not a single AI model that can detect a really good deepfake-generated video and tell you this is fake," said Tarun Wig, co-founder of Innefu Labs, during a session on AI-enabled cybercrime [1]. This revelation highlights a widening gap between the scale of synthetic media threats and the tools available to detect them, leaving India's legal, enforcement, and technical systems struggling to keep pace as AI reshapes fraud, impersonation, and misinformation.
Source: Digit
Wig emphasized that deepfakes pose an outsized risk that extends well beyond individual fraud cases. "Deepfake is going to become a national security issue very soon, maybe as soon as the 2029 election," he warned, pointing to how easily synthetic audio and video can be weaponized and amplified at scale [1]. The session brought together former government officials, law enforcement experts, technologists, and senior lawyers who converged on one core concern: while AI has accelerated the scale and speed of cybercrime, mechanisms for detection, attribution, and accountability remain fragmented and largely reactive.
Source: MediaNama
Addressing the challenge from a regulatory angle, Deepak Goel, a scientist in Cyber Law and Data Governance under the Ministry of Electronics & Information Technology (MeitY), proposed a novel approach. "The issue is not just content creation; the issue is virality. If content doesn't get disseminated, it stays between two people, and nobody objects. But can we create a mechanism that restricts amplification when content goes viral?" Goel asked during a policy discussion at the summit [2].

Goel's emphasis on curbing virality came just three days before February 20, the compliance deadline for amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The amendment places obligations on intermediary platforms to label synthetic content and requires them to remove flagged content within three hours of receiving a government takedown notice [2]. However, Goel stressed that the government's approach extends beyond content moderation: "This is not about content moderation. It is about verifiability, accountability, and keeping the citizen at the centre of the trust model." [2]
As policymakers search for technical solutions, content provenance tools that embed cryptographic metadata into AI-generated images, videos, and audio are emerging as a key part of India's techno-legal framework for AI accountability. The Coalition for Content Provenance and Authenticity (C2PA), an open technical standard for identifying the origin and edits of digital content, is being discussed as a possible building block [2].
Source: MediaNama
At the summit, speakers from Adobe and Google joined MeitY officials to discuss how building trust in synthetic media requires verifiable context rather than censorship. John Miller, Legal and Government Affairs Executive at the Information Technology Industry Council, positioned content provenance as "foundational infrastructure for transparency, attribution, and accountability" [2]. Andy Parsons, Global Head of Content Authenticity at Adobe, described C2PA as a multi-company effort nearly five years in the making, intended to create a global standard that is "ready to adopt" [3]. Gail Kent, Government Affairs and Public Policy Director at Google, explained that the technology is transparent, understandable, interoperable, and immutable [2]. However, Sameer Boray, Senior Policy Manager at the Information Technology Industry Council, cautioned that "C2PA is not a silver bullet. It is not a perfect solution, but it is a solution and a good start" [2].
Beyond technical detection challenges, legal experts identified attribution as the most serious obstacle in prosecuting AI-enabled cybercrime cases. Senior Supreme Court advocate Vivek Sood stated bluntly: "The law is not weak. The entire law of criminal conspiracy has existed since the Indian Penal Code of 1860. The problem is attribution" [1]. As AI systems operate with increasing autonomy, courts and investigators struggle to reliably link digital actions to specific individuals.

Former Indian Police Service officer Prof. Triveni Singh framed the problem in stark terms, warning that traditional policing structures remain poorly equipped to respond to crimes that unfold simultaneously across jurisdictions and digital infrastructure layers [1]. Singh criticized existing training models as inadequate: "So what we think by giving two or three days of training, we can make them cyber-smart? No, it's complete science," he said [1].
Wig explained that when agencies conduct raids, they deal with data volumes reaching 80 terabytes. "That is where AI plays a huge role in forensic analysis by identifying patterns that humans cannot," he noted [1]. Yet speakers cautioned that detection alone does not automatically translate into admissible evidence or successful prosecution.

As India develops its response framework, officials emphasized the need to balance security concerns with constitutional protections. Sood cautioned against allowing cybercrime enforcement to erode fundamental rights: "Right to privacy is a fundamental right recognised by the Supreme Court and flows from Article 21," he said, adding that crime prevention and investigation operate only as exceptions [1].

Former MeitY official Rakesh Maheshwari placed cybercrime risks within India's data governance framework, particularly the Digital Personal Data Protection Act. "The whole aim is to collect only what is really required. Don't over-collect. Be transparent. Store it only for the minimum period required," Maheshwari said [1]. He highlighted that in case of a breach, organizations bear liability to report incidents not only to the data protection board but also to affected individuals.

Goel from MeitY raised critical questions about risk allocation: "Who bears the risk? Is it the content? Is it the citizen? Is it the platform? Our very strong opinion is that the individual is bearing the risk. My likeness is getting cloned. My voice is getting synthesised. My credibility is getting undermined" [2]. He emphasized users' right to know, protection against impersonation, and right to remedy.

Dr. Sapna Bansal, professor at Shri Ram College of Commerce, argued that technology alone cannot counter AI-enabled crime. "We are teaching cybercrime in colleges and schools, but ethical AI is still lacking," she said, warning that routine behaviors such as sharing OTPs, using public Wi-Fi, and trusting urgent calls continue to expose users to fraud [1].

Summarized by Navi