2 Sources
[1]
UK's deepfake detection plan unlikely to work, says expert
Home Office enlists Microsoft to set industry standards as AI-generated forgeries surge from 500K to 8M in two years

The UK government claims it will develop a "world-first" framework to evaluate deepfake detection technologies as AI-generated content proliferates. The Home Office is working with Microsoft, other tech corporations and academics to assess methods for identifying harmful forgeries. It estimates eight million deepfakes were shared in 2025, up from half a million in 2023.

Nik Adams, Deputy Commissioner for City of London Police, called the framework "a strong and timely addition to the UK's response to the rapidly evolving threat posed by AI and deepfake technologies."

"By rigorously testing deepfake technologies against real-world threats and setting clear expectations for industry, this framework will significantly bolster law enforcement's ability to stay ahead of offenders, protect victims and strengthen public confidence as these technologies continue to evolve."

However, Dr Ilia Kolochenko, CEO at ImmuniWeb, a Swiss cybersecurity biz, said the plan "will quite unlikely make any systemic improvements in the near future." Kolochenko pointed to the numerous open source tools and groups of experts that already exist to track and expose AI-generated content.

"Even if an AI fake is detected, the biggest question is what to do next," he told The Register. "Reputable media and websites will likely take it down rapidly even without scientific proof that it is an AI fake." Clandestine or anonymous media are unlikely to be as cooperative.

"We need a systemic and global amendment of legislation - not just legally unenforceable code of conduct or best practices - to stop the surging harm of AI-created content," Kolochenko added. "In sum, while this commendable action is a solid start, we are still very far from a final solution."

The Register asked the Home Office for a time frame for the framework and the technology being used, but did not receive a response. Microsoft directed us to the Home Office's statement. ®
[2]
Britain to work with Microsoft to build deepfake detection system
LONDON, Feb 5 (Reuters) - Britain will work with Microsoft, academics and experts to develop a system to spot deepfake material online, the government said on Thursday, as it moves to set standards for tackling harmful and deceptive AI-generated content.

While manipulated material has circulated online for decades, the rapid adoption of generative AI chatbots - made possible through the launch of ChatGPT and others - has amplified concerns about the scale and realism of deepfakes.

Britain, which recently criminalised the creation of non-consensual intimate images, said it was working on a deepfake detection evaluation framework to set consistent standards for assessing detection tools and technologies.

"Deepfakes are being weaponised by criminals to defraud the public, exploit women and girls, and undermine trust in what we see and hear," technology minister Liz Kendall said in a statement.

GOVERNMENTS SPURRED INTO ACTION BY NON-CONSENSUAL IMAGES

The framework will evaluate how technology can be used to assess, understand and detect harmful deepfake material, regardless of its source, the government said, by testing deepfake detection technologies against real-world threats like sexual abuse, fraud and impersonation.

That would help the government and law enforcement obtain better knowledge of where gaps in detection remain, it said, adding that the framework would be used to set clear expectations for industry on deepfake detection standards.

An estimated 8 million deepfakes were shared in 2025, up from 500,000 in 2023, according to government figures.

Governments and regulators worldwide, who are struggling to keep pace with the rapid evolution of AI technology, were spurred into action this year after Elon Musk's Grok chatbot was found to generate non-consensual sexualised images of people, including children. The British communications watchdog and privacy regulator are carrying out parallel investigations into Grok.

(Reporting by Muvija M; Editing by Joe Bavier)
The UK government announced a framework to evaluate deepfake detection technologies, partnering with Microsoft as AI-generated content surged from 500,000 to 8 million instances between 2023 and 2025. However, cybersecurity experts question whether the initiative will deliver systemic improvements without global legislative changes to address the rapidly evolving threat.
The UK government has unveiled plans to develop what it calls a "world-first" deepfake detection evaluation framework, partnering with Microsoft, academics, and industry experts to establish consistent industry standards for identifying harmful AI-generated content [1]. The Home Office initiative comes as AI-generated forgeries have exploded, with an estimated 8 million deepfakes shared in 2025, up dramatically from 500,000 in 2023 [2]. Technology minister Liz Kendall emphasized the urgency, stating that "deepfakes are being weaponised by criminals to defraud the public, exploit women and girls, and undermine trust in what we see and hear" [2].
The framework will evaluate how technology can assess, understand, and detect harmful deepfake material by testing detection tools against real-world threats including sexual abuse, fraud, and impersonation [2]. Nik Adams, Deputy Commissioner for City of London Police, described the framework as "a strong and timely addition to the UK's response to the rapidly evolving threat posed by AI and deepfake technologies" [1]. The initiative aims to help government and law enforcement identify gaps in detection capabilities while setting clear expectations for industry on deepfake detection standards.
Despite official enthusiasm, cybersecurity experts have raised concerns about the framework's potential impact. Dr. Ilia Kolochenko, CEO of Swiss cybersecurity firm ImmuniWeb, argued that the plan "will quite unlikely make any systemic improvements in the near future" [1]. He pointed to the numerous open source tools and expert groups already working to track and expose AI-generated content, suggesting that the infrastructure for detection already exists. Kolochenko emphasized that even when deepfakes are identified, the critical question remains what happens next: while reputable media will likely remove flagged content rapidly, clandestine or anonymous platforms are unlikely to cooperate.
The initiative reflects growing global concern about the misuse of AI, particularly following incidents in which generative AI tools created non-consensual images. Governments worldwide were spurred into action after Elon Musk's Grok chatbot was found to generate non-consensual sexualised images of people, including children [2]. Britain recently criminalized the creation of non-consensual intimate images, and British communications and privacy regulators are conducting parallel investigations into Grok. However, Kolochenko stressed that "we need a systemic and global amendment of legislation - not just legally unenforceable code of conduct or best practices - to stop the surging harm of AI-created content" [1]. While the framework represents a solid start, experts suggest that without binding legislative changes at a global level, the battle against deepfakes remains far from won. The rapid evolution of generative AI continues to outpace regulatory efforts, leaving governments struggling to establish effective controls.