The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved
On Wed, 26 Feb, 12:03 AM UTC
2 Sources
[1]
I hope this AI deepfake detection feature comes to more phones soon - but it needs one key upgrade to be truly useful
(Image credit: Honor / Future / Shutterstock (MapensStudio) / Shutterstock (KM1994))

Along with nuclear weapons and selfie sticks, deepfakes are among those rare technological developments that most of us wish had never been developed at all. A portmanteau of 'deep learning' and 'fake', a 'deepfake' is an image, video, or audio snippet that blends real material with AI-generated content. You may have seen the TikTok of Tom Cruise dancing in his backyard or the photo of Pope Francis wearing a Balenciaga coat (neither of which was real, of course), but for every harmless meme, there are 100 nefarious deepfake scams designed to trick people out of their money.

Indeed, the number of deepfake-based fraud attempts has reportedly risen by 2,137% over the last three years - we ourselves reported on a case involving a "Deepfake CFO" who tricked employees into handing over $25 million - and given the inflammatory nature of today's political landscape, it's becoming increasingly difficult to separate fact from fiction.

Wouldn't it be handy, then, to have a deepfake detection tool baked into your smartphone? A warning system designed to expose bad actors before they convince you of their false identity? Enter Honor, the Chinese mobile maker behind some of the best phones in the UK and Europe, which last year announced Deepfake Detection, a smartphone feature that can identify and warn users about digitally manipulated content.

The on-device tool examines frame-by-frame information such as eye contact, lighting, image clarity, and video playback to detect flaws that are imperceptible to the human eye. If inconsistencies are identified, Deepfake Detection triggers a popup that reads: "Honor scam alert. It looks like the other person could be using AI to swap their face." Pretty cool, right?
Honor has confirmed that its unique new tool will be available globally from April, meaning owners of top-end Honor handsets like the Honor Magic 7 Pro will soon be better defended against deepfake-using scammers.

There's just one problem: Deepfake Detection only works during video calls, meaning you'll need to be targeted by a real-life, video-calling scammer for it to have any use. Sure, the feature is still valuable for that particular use case - I'm sure my grandma will be grateful for it the next time someone comes for her credit card details - but in my experience, the bigger (or at least more common) risk posed by deepfakes lies in how they're used to propagate misinformation on third-party platforms.

Social media platforms like X and Facebook are awash with content that's been manipulated by AI - whether that's deepfakes or wholly AI-generated images and videos. We all like to think we can spot a fake, but this technology has come so far in so little time. Indeed, a recent study found that AI literacy among the general public is depressingly low, with only 0.1% of participants able to correctly distinguish between real and deepfake stimuli. Perhaps unsurprisingly, the study found that older adults are particularly susceptible to AI-generated deception, and while younger participants were more confident in their ability to detect deepfakes, their actual performance was equally poor.

The point being: deepfakes are hard to spot, even for those of us who are chronically online. A built-in smartphone tool that identifies video-calling deepfake scammers is great, and Honor deserves praise for developing one. But the real deepfake battleground is not in our front-facing cameras; it's on our For You Pages. If I could engage Deepfake Detection while scrolling through Instagram, it would make phones like the Honor Magic 7 Pro even more useful.
There are, of course, reasons why Honor can't (or won't) expand Deepfake Detection to third-party platforms, and while the company hasn't yet shared what they are (I've asked), I suspect it has something to do with those platforms being ring-fenced by watertight terms-of-use policies.

Perhaps, then, the onus is on the platforms themselves to implement built-in reality check buttons. The likes of X and Facebook aren't exactly basking in public trust right now, and while Meta does claim to "remove misleading manipulated media [that] has been edited or synthesized," a quick browse of your parents' Facebook feeds will confirm that more needs to be done. Smartphone manufacturers like Honor can lend a hand, but they need to be allowed to do so.

Scammers are a scourge, but misinformation is the real enemy. Pope Francis wearing a Balenciaga coat is funny, but what about when Pope Francis goes viral for denouncing Catholicism? It's all fun and games until the retweets turn to riots. Our smartphones should be as good at spotting deepfakes as they are at serving them up to us, and I hope more manufacturers (and indeed social media platforms) are working on ways to tackle this issue.
[2]
Honor AI deepfake detection tech to roll out globally this April - Phandroid
With the rise of AI and the rapid improvements we're seeing on a daily basis, it's not surprising that there are concerns about issues like deepfakes. So, who does the onus fall on to protect us from them? If you own an Honor smartphone, you might be interested to learn that Honor will begin rolling out its AI deepfake detection technology this April.

If this feature sounds familiar, it's because Honor unveiled it back at IFA 2024. Now, with the feature rolling out globally this April, Honor is hoping to protect more users from falling prey to scams involving deepfake technology. The company cites a study by the Entrust Cybersecurity Institute, which found that in 2024 a deepfake attack occurred every five minutes. Deloitte's 2024 Connected Consumer study likewise found that 59% of respondents struggled to differentiate between human and AI-generated content. If you've encountered similar struggles yourself, don't be embarrassed: AI-generated content has improved by leaps and bounds over the years, making fakes harder than ever to spot.

According to Honor, its AI deepfake detection technology will detect pixel-level synthetic imperfections, border compositing artifacts, inter-frame continuity issues, and anomalies in face-to-ear ratio, hairstyle, and facial features.
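Honor hasn't published how its detector actually works, but the "border compositing artifacts" it mentions can be illustrated with a toy example: an imperfectly blended (pasted) region often leaves a sharp seam whose gradient stands out against the rest of the image. The sketch below is a minimal illustration in pure NumPy; the function name `seam_columns` and the `factor=5.0` threshold are my own illustrative choices, not anything from Honor's implementation.

```python
import numpy as np

def seam_columns(image, factor=5.0):
    """Flag columns with unusually strong horizontal gradients.

    An imperfect compositing blend (e.g. a pasted face region) can leave
    a near-vertical seam whose gradient dwarfs the image-wide baseline.
    """
    img = np.asarray(image, dtype=np.float64)
    grad = np.abs(np.diff(img, axis=1))   # horizontal gradient magnitude
    col_strength = grad.mean(axis=0)      # average gradient per column
    baseline = np.median(col_strength) + 1e-9
    return [i for i, g in enumerate(col_strength) if g > factor * baseline]

# Synthetic demo: a flat grey image with a hard "pasted" edge at column 12.
rng = np.random.default_rng(0)
image = np.full((32, 32), 50.0) + rng.normal(0, 0.5, (32, 32))
image[:, 12:] += 70.0   # simulate a sharp compositing border
print(seam_columns(image))  # the seam sits between columns 11 and 12
```

A production detector would of course work on learned features rather than raw gradients, but the principle is the same: compositing leaves statistical discontinuities that stand out from the image's own baseline.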
Honor is set to globally roll out its AI deepfake detection feature in April, aiming to protect users from scams during video calls. While innovative, the technology's limited scope highlights the need for broader solutions in combating digital misinformation.
Chinese mobile manufacturer Honor is set to globally launch its AI-powered deepfake detection feature this April, marking a significant step in the fight against digital fraud and misinformation. The technology, first unveiled at IFA 2024, will be available on high-end Honor devices such as the Honor Magic 7 Pro [1].
Honor's deepfake detection tool operates on-device, analyzing video calls frame-by-frame to identify inconsistencies that are often imperceptible to the human eye. The system examines various elements, including:

- eye contact, lighting, image clarity, and video playback
- pixel-level synthetic imperfections and border compositing artifacts
- inter-frame continuity issues
- anomalies in face-to-ear ratio, hairstyle, and facial features
When the tool detects potential manipulation, it triggers a popup warning: "Honor scam alert. It looks like the other person could be using AI to swap their face" [1].
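One of the signals above, inter-frame continuity, lends itself to a simple toy demonstration: real video changes smoothly from frame to frame, while a face-swap that momentarily glitches produces an abrupt jump. The sketch below is a minimal NumPy illustration under my own assumptions (the `factor=4.0` threshold and function names are hypothetical, not Honor's method), flagging transitions whose pixel change dwarfs the clip's typical frame-to-frame change.

```python
import numpy as np

def interframe_scores(frames):
    """Mean absolute per-pixel change between consecutive frames."""
    f = np.asarray(frames, dtype=np.float64)
    return [float(np.mean(np.abs(b - a))) for a, b in zip(f, f[1:])]

def flag_discontinuities(scores, factor=4.0):
    """Flag transitions whose change dwarfs the median frame-to-frame change."""
    baseline = np.median(scores) + 1e-9
    return [i for i, s in enumerate(scores) if s > factor * baseline]

# Synthetic demo: an 8-frame clip that is smooth except for one glitch
# at frame 5, where the "face" abruptly jumps in brightness.
rng = np.random.default_rng(0)
frames = [np.full((16, 16), 100.0) + rng.normal(0, 1, (16, 16)) for _ in range(8)]
frames[5] += 80.0   # simulate a momentary face-swap glitch
print(flag_discontinuities(interframe_scores(frames)))
```

The detector flags both the transition into and out of the glitched frame, which is exactly the kind of continuity anomaly a real system would look for, albeit over learned facial features rather than raw pixel intensities.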
The introduction of this technology comes at a crucial time, as deepfake-related fraud attempts have reportedly increased by 2,137% over the past three years. A study by the Entrust Cybersecurity Institute found that in 2024, a deepfake attack occurred every five minutes [2].

While Honor's initiative is commendable, the current iteration of the technology has a significant limitation: it only functions during video calls. This narrow scope fails to address the broader issue of deepfakes and AI-generated content proliferating on social media platforms [1].

A recent study highlighted the difficulty in distinguishing between real and AI-generated content, with only 0.1% of participants able to correctly identify deepfakes. Deloitte's 2024 Connected Consumer study further emphasized this challenge, revealing that 59% of respondents struggled to differentiate between human and AI-generated content [2].

Experts argue that the real battleground for deepfake detection lies on social media platforms rather than in video calls. The ability to apply deepfake detection while browsing platforms like Instagram or Facebook could significantly enhance the tool's utility [1].

However, expanding the functionality to third-party platforms presents challenges, likely due to strict terms of use policies. This situation underscores the need for collaboration between smartphone manufacturers, social media companies, and policymakers to develop comprehensive solutions for combating digital misinformation [1].
Reference
[1] "I hope this AI deepfake detection feature comes to more phones soon - but it needs one key upgrade to be truly useful"
[2] Phandroid (Android News and Reviews) - "Honor AI deepfake detection tech to roll out globally this April"

As deepfake technology becomes more sophisticated, tech companies are developing advanced detection tools to combat the growing threat of AI-generated scams and disinformation.
3 Sources
McAfee introduces Project Mockingbird, a deepfake detection tool, for Lenovo's new AI PCs. The technology aims to combat the rising threat of AI-generated audio and video content.
4 Sources
Deepfake technology is increasingly being used to target businesses and threaten democratic processes. This story explores the growing prevalence of deepfake scams in the corporate world and their potential impact on upcoming elections.
2 Sources
Hiya, a call screening and fraud detection company, has released a free Chrome extension called Hiya Deepfake Voice Detector to identify AI-generated voices in audio and video content, aiming to combat misinformation ahead of the 2024 US elections.
4 Sources
A recent study by iProov reveals that only 2 out of 2,000 participants could accurately distinguish between real and AI-generated deepfake content, highlighting the growing threat of misinformation and identity fraud in the digital age.
3 Sources