Curated by THEOUTPOST
On Thu, 18 Jul, 4:02 PM UTC
3 Sources
[1]
Michael Mosley 'deepfaked' on social media in health scam
AI-generated fake videos use the likeness of popular doctors to sell hemp gummies and hoax cures for high blood pressure and diabetes.

TV doctors including Hilary Jones and the late Michael Mosley are being "deepfaked" on social media to promote health scams, it has emerged. Scammers are using artificial intelligence to take the likeness of popular doctors such as Mosley, Jones and Rangan Chatterjee and make fake videos advertising products. The clips show the well-known public faces promoting products such as apparent cures for high blood pressure and diabetes, as well as hemp gummies.

An investigation by the British Medical Journal (BMJ) found the videos on social media. The videos are not endorsed by the celebrities whose images are manipulated to appear in them. "The bottom line is, it's much cheaper to spend your cash on making videos than it is on doing research and coming up with new products and getting them to market in the conventional way," said John Cormack, a retired doctor based in Essex.

'Increase in activity'

Henry Ajder, a deepfake expert, added that AI tools had rapidly improved in the past few years, leading to "a significant increase in this kind of activity". Deepfake videos have been a problem for years; one prominent example was a 2019 video of Nancy Pelosi, the former Speaker of the US House of Representatives, manipulated to make her appear drunk. Facebook refused at the time to take the clip offline.

In a statement following the latest investigation, Facebook said it would investigate the TV doctor videos. "We don't permit content that intentionally deceives or seeks to defraud others, and we're constantly working to improve detection and enforcement," a spokesman for Meta, which owns Facebook and Instagram, said. "We encourage anyone who sees content that might violate our policies to report it so we can investigate and take action."
'Beyond our remit'

The General Medical Council (GMC), which is responsible for upholding the professional standards of doctors in the UK, is unable to take action on the videos because they are not genuine. If doctors were really promoting hoax cures, the council might be able to intervene, but it is powerless against deepfakes. "Computer-generated videos by people not on our register would sit beyond our remit," a GMC spokesman said.

Identifying a deepfake video is increasingly difficult because voices and footage are now replicated better than ever before, and there is often no easy way to verify a clip. Small errors, such as too many fingers or misshapen ear lobes, used to be a giveaway, but these have largely been fixed by the algorithms.

'Potential breakthrough'

However, astronomers think they have found a new way of telling deepfakes and genuine clips apart, with the secret being in the eyes. Researchers at the University of Hull have found that deepfake generators fail to make the light reflections in both eyeballs match. In a real image, the reflections in a person's two eyeballs are consistent, but in deepfakes they tend to differ, the study found. "The reflections in the eyeballs are consistent for the real person, but incorrect [from a physics point of view] for the fake person," said Kevin Pimbblet, a professor of astrophysics and director of the Centre of Excellence for Data Science, Artificial Intelligence and Modelling at the University of Hull.

'Arms race to detect deepfakes'

The researchers repurposed techniques used to study stars and galaxies, applying them to the patterns of light in eyeballs, and found that the patterns match in real images but not in fakes. "It's important to note that this is not a silver bullet for detecting fake images," Prof Pimbblet added. "There are false positives and false negatives; it's not going to get everything. But this method provides us with a basis, a plan of attack, in the arms race to detect deepfakes." The work was presented at the Royal Astronomical Society's National Astronomy Meeting in Hull.
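The comparison described above can be sketched in a few lines of code. This is an illustrative toy, not the Hull team's actual pipeline: it assumes the two eye regions have already been cropped out of a frame, and it compares how concentrated the light is in each patch using the Gini coefficient, a statistic borrowed from galaxy-morphology studies. The tolerance threshold and the choice of statistic are assumptions for the sketch.

```python
import numpy as np

def gini(patch):
    """Gini coefficient of pixel intensities in a patch.

    0 means light is spread perfectly evenly; values near 1 mean it is
    concentrated in a few bright pixels. Astronomers use the same statistic
    to characterise how light is distributed across a galaxy image.
    """
    x = np.sort(np.asarray(patch, dtype=float).ravel())
    n = x.size
    total = x.sum()
    if total == 0:
        return 0.0  # a completely dark patch is treated as uniform
    idx = np.arange(1, n + 1)
    return float(((2 * idx - n - 1) * x).sum() / (n * total))

def reflections_consistent(left_eye, right_eye, tol=0.15):
    """Return True when the light-distribution statistics of the two
    eye patches agree within `tol`; a large disagreement is the kind of
    physical inconsistency the study associates with deepfakes."""
    return abs(gini(left_eye) - gini(right_eye)) <= tol
```

In practice a detector would combine several such statistics and calibrate the threshold on labelled data; as Prof Pimbblet notes, a single check like this produces both false positives and false negatives.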
[2]
'Deepfakes' of Michael Mosley and Hilary Jones being used to promote scams on social media
Deepfake videos of TV doctors are being used on social media to sell scam products, including "cures" for high blood pressure and diabetes.

Michael Mosley is among a number of TV doctors whose "deepfakes" are circulating on social media to sell scam products, an investigation has revealed. The likenesses of trusted names including Hilary Jones, Michael Mosley and Rangan Chatterjee are being used to promote products claiming to fix high blood pressure and diabetes, and to sell hemp gummies, according to the British Medical Journal.

Deepfakes are created by using AI to map a digital likeness of a real person onto a video of a body that isn't theirs. It is hard to say exactly how convincing these fabricated videos are, but one recent study suggests up to half of all people shown deepfakes discussing scientific subjects cannot distinguish them from authentic videos.

A video posted on Facebook appeared to show Dr Hilary Jones promoting a "cure" for high blood pressure on the Lorraine programme - but it wasn't him. Dr Jones told the BMJ that wasn't the only product being promoted using his name, with his likeness also attached to so-called diabetes treatments and a slew of hemp gummies. In one fake video, Dr Michael Mosley, who died last month, appears to talk about a diabetes "cure" that does away with the need for insulin injections. The BMJ did not specify how many videos it found in its investigation.

For Dr Jones, the problem is so bad that he now employs a social media specialist to trawl the internet for deepfake videos that misrepresent his views and to take them down. But it is hard to keep on top of: "They just pop up the next day under a different name," he said.
Henry Ajder, an expert on deepfake technology, said: "The rapid democratisation of accessible AI tools for voice cloning and avatar generation has transformed the fraud and impersonation landscape." Spotting deepfakes can be tricky as the technology has improved, he added. "It's difficult to quantify how effective this new form of deepfake fraud is, but the growing volume of videos now circulating would suggest bad actors are having some success."

Many of the videos were found on Facebook and Instagram, both owned by Meta. A spokesperson told the BMJ it would investigate the videos highlighted in the report. "We don't permit content that intentionally deceives or seeks to defraud others, and we're constantly working to improve detection and enforcement," the spokesperson said. "We encourage anyone who sees content that might violate our policies to report it so we can investigate and take action."
[3]
Deepfakes of trusted and popular doctors are being used to illegally sell marijuana products online | Business Insider India
In a twist that sounds straight out of a sci-fi thriller, some of the UK's most beloved TV doctors are finding their faces hijacked by scammers to hawk dubious products online. According to a startling report by The BMJ, these digital doppelgangers are being used to peddle everything from miracle cures for high blood pressure and diabetes to hemp gummies.

The phenomenon, known as deepfaking, uses artificial intelligence to map a real person's likeness onto another video. The results can be uncannily realistic: a recent study found up to half of viewers couldn't distinguish deepfakes from authentic videos. Some of the targeted doctors, such as Hilary Jones, Michael Mosley and Rangan Chatterjee, have amassed millions of followers and large spheres of influence. While the report does not mention her, podiatrist and Instagram influencer Dana Brems recently expressed concern after deepfakes surfaced of her recommending an "ear-cleaning" product.

John Cormack, a retired doctor from Essex, partnered with The BMJ to uncover the extent of this digital deception. "The bottom line is, it's much cheaper to spend your cash on making videos than it is on doing research and coming up with new products and getting them to market in the conventional way," Cormack explains.

The proliferation of fake content featuring familiar faces is an inevitable side effect of the current AI revolution, says Henry Ajder, a deepfake technology expert. "The rapid democratisation of accessible AI tools for voice cloning and avatar generation has transformed the fraud and impersonation landscape."

The issue has reached such proportions that the targeted doctors are fighting back. Hilary Jones, for instance, employs a social media specialist to search for and take down videos misrepresenting his views. "Even if you do, they just pop up the next day under a different name," Jones laments. Meta, the company behind Facebook and Instagram, where many of these videos have been found, has promised to investigate.
"We don't permit content that intentionally deceives or seeks to defraud others, and we're constantly working to improve detection and enforcement," a Meta spokesperson told The BMJ.

Deepfakes prey on people's emotions, notes journalist Chris Stokel-Walker. When a trusted figure endorses a product, viewers are more likely to believe in its efficacy, and this emotional manipulation is precisely what makes deepfakes so insidious. Spotting them has become increasingly challenging as the technology improves, and the growing volume of fraudulent videos in circulation suggests the scammers are having some commercial success, despite their activity being illegal.

For those who find their likenesses used without consent, there is little recourse. Stokel-Walker offers some advice, however: scrutinise the content for telltale signs of fakery, leave a comment questioning its authenticity, use the platform's reporting tools, and report the account responsible for sharing the post. As AI continues to blur the line between reality and digital deception, users must remain vigilant: the faces we trust most could be the very ones leading us astray, at least digitally speaking. The findings were published in The BMJ.
Trusted health experts Michael Mosley and Dr. Hilary Jones have become the latest victims of deepfake technology, as scammers use their likenesses to promote fraudulent health products on social media platforms.
In a disturbing trend, deepfake technology is being weaponized to exploit the trust and credibility of well-known health experts. Michael Mosley, a prominent British journalist and doctor, and Dr. Hilary Jones, a familiar face on UK television, have found themselves at the center of a sophisticated online scam [1].
Fraudsters have created convincing deepfake videos featuring Mosley and Jones, manipulating their images and voices to endorse various health products. These fabricated endorsements are circulating widely on social media platforms, particularly Facebook [2]. The scam involves products including hemp gummies and purported cures for high blood pressure and diabetes, falsely marketed as treatments for serious ailments [3].
Dr Jones has expressed his frustration at being impersonated, while the likeness of Mosley, who died last month, continues to be exploited [1]. The scam not only damages the reputations of these trusted health professionals but also poses significant risks to public health by promoting unverified and potentially harmful products.
This incident highlights the growing threat of deepfake technology in spreading misinformation and conducting fraud. As the technology becomes more sophisticated and accessible, there are concerns about its potential to undermine public trust in media and expert opinions. The ease with which deepfakes can be created and disseminated poses a significant challenge for social media platforms and law enforcement agencies.
Meta, the parent company of Facebook and Instagram, has acknowledged the issue and says it will investigate the fraudulent posts. "We don't permit content that intentionally deceives or seeks to defraud others, and we're constantly working to improve detection and enforcement," a spokesperson said [2]. However, the rapid proliferation of such content makes it difficult to eradicate the problem completely.
The use of deepfakes for fraudulent purposes raises important legal and ethical questions. While laws regarding deepfake technology are still evolving, this case underscores the urgent need for robust legislation to protect individuals from digital impersonation and to hold perpetrators accountable.
Experts stress the importance of public awareness in combating deepfake scams. Users are advised to be skeptical of sensational health claims, especially those featuring celebrity endorsements on social media. Verifying information through official channels and trusted sources is crucial in the age of digital manipulation.