Curated by THEOUTPOST
On Mon, 12 May, 8:01 AM UTC
2 Sources
[1]
'Tool for grifters': AI deepfakes push bogus sexual cures
Washington (AFP) - Holding an oversized carrot, a brawny, shirtless man promotes a supplement he claims can enlarge male genitalia -- one of countless AI-generated videos on TikTok peddling unproven sexual treatments.

The rise of generative AI has made it easy -- and financially lucrative -- to mass-produce such videos with minimal human oversight, often featuring fake celebrity endorsements of bogus and potentially harmful products.

In some TikTok videos, carrots are used as a euphemism for male genitalia, apparently to evade content moderation policing sexually explicit language.

"You would notice that your carrot has grown up," the muscled man says in a robotic voice in one video, directing users to an online purchase link.

"This product will change your life," the man adds, claiming without evidence that the herbs used as ingredients boost testosterone and send energy levels "through the roof."

The video appears to be AI-generated, according to a deepfake detection service recently launched by the Bay Area-headquartered firm Resemble AI, which shared its results with AFP.

"As seen in this example, misleading AI-generated content is being used to market supplements with exaggerated or unverified claims, potentially putting consumers' health at risk," Zohaib Ahmed, Resemble AI's chief executive and co-founder, told AFP.

"We're seeing AI-generated content weaponized to spread false information."

'Cheap way'

The trend underscores how rapid advances in artificial intelligence have fueled what researchers call an AI dystopia, a deception-filled online universe designed to manipulate unsuspecting users into buying dubious products.

They include everything from unverified -- and in some cases, potentially harmful -- dietary supplements to weight loss products and sexual remedies.

"AI is a useful tool for grifters looking to create large volumes of content slop for a low cost," misinformation researcher Abbie Richards told AFP.
"It's a cheap way to produce advertisements," she added.

Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell Tech, has observed a surge of "AI doctor" avatars and audio tracks on TikTok that promote questionable sexual remedies.

Some of these videos, many with millions of views, peddle testosterone-boosting concoctions made from ingredients such as lemon, ginger and garlic.

More troublingly, rapidly evolving AI tools have enabled the creation of deepfakes impersonating celebrities such as actress Amanda Seyfried and actor Robert De Niro.

"Your husband can't get it up?" Anthony Fauci, former director of the National Institute of Allergy and Infectious Diseases, appears to ask in a TikTok video promoting a prostate supplement. But the clip is a deepfake, using Fauci's likeness.

'Pernicious'

Many manipulated videos are created from existing ones, modified with AI-generated voices and lip-synced to match what the altered voice says.

"The impersonation videos are particularly pernicious as they further degrade our ability to discern authentic accounts online," Mantzarlis said.

Last year, Mantzarlis discovered hundreds of ads on YouTube featuring deepfakes of celebrities -- including Arnold Schwarzenegger, Sylvester Stallone, and Mike Tyson -- promoting supplements branded as erectile dysfunction cures.

The rapid pace of generating short-form AI videos means that even when tech platforms remove questionable content, near-identical versions quickly reappear -- turning moderation into a game of whack-a-mole.

Researchers say this creates unique challenges for policing AI-generated content, requiring novel solutions and more sophisticated detection tools.

AFP's fact checkers have repeatedly debunked scam ads on Facebook promoting treatments -- including erectile dysfunction cures -- that use fake endorsements by Ben Carson, a neurosurgeon and former US cabinet member.

Yet many users still consider the endorsements legitimate, illustrating the appeal of deepfakes.
"Scammy affiliate marketing schemes and questionable sex supplements have existed for as long as the internet and before," Mantzarlis said. "As with every other bad thing online, generative AI has made this abuse vector cheaper and quicker to deploy at scale."
[2]
'Tool for grifters': AI deepfakes push bogus sexual cures
Holding an oversized carrot, a brawny, shirtless man promotes a supplement he claims can enlarge male genitalia -- one of countless AI-generated videos on TikTok peddling unproven sexual treatments.

The rise of generative AI has made it easy -- and financially lucrative -- to mass-produce such videos with minimal human oversight, often featuring fake celebrity endorsements of bogus and potentially harmful products.

In some TikTok videos, carrots are used as a euphemism for male genitalia, apparently to evade content moderation policing sexually explicit language.

"You would notice that your carrot has grown up," the muscled man says in a robotic voice in one video, directing users to an online purchase link.

"This product will change your life," the man adds, claiming without evidence that the herbs used as ingredients boost testosterone and send energy levels "through the roof."

The video appears to be AI-generated, according to a deepfake detection service recently launched by the Bay Area-headquartered firm Resemble AI, which shared its results with AFP.

"As seen in this example, misleading AI-generated content is being used to market supplements with exaggerated or unverified claims, potentially putting consumers' health at risk," Zohaib Ahmed, Resemble AI's chief executive and co-founder, told AFP.

"We're seeing AI-generated content weaponized to spread false information."

'Cheap way'

The trend underscores how rapid advances in artificial intelligence have fueled what researchers call an AI dystopia, a deception-filled online universe designed to manipulate unsuspecting users into buying dubious products.
They include everything from unverified -- and in some cases, potentially harmful -- dietary supplements to weight loss products and sexual remedies.

"AI is a useful tool for grifters looking to create large volumes of content slop for a low cost," misinformation researcher Abbie Richards told AFP.

"It's a cheap way to produce advertisements," she added.

Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell Tech, has observed a surge of "AI doctor" avatars and audio tracks on TikTok that promote questionable sexual remedies.

Some of these videos, many with millions of views, peddle testosterone-boosting concoctions made from ingredients such as lemon, ginger and garlic.

More troublingly, rapidly evolving AI tools have enabled the creation of deepfakes impersonating celebrities such as actress Amanda Seyfried and actor Robert De Niro.

"Your husband can't get it up?" Anthony Fauci, former director of the National Institute of Allergy and Infectious Diseases, appears to ask in a TikTok video promoting a prostate supplement. But the clip is a deepfake, using Fauci's likeness.

'Pernicious'

Many manipulated videos are created from existing ones, modified with AI-generated voices and lip-synced to match what the altered voice says.

"The impersonation videos are particularly pernicious as they further degrade our ability to discern authentic accounts online," Mantzarlis said.

Last year, Mantzarlis discovered hundreds of ads on YouTube featuring deepfakes of celebrities -- including Arnold Schwarzenegger, Sylvester Stallone, and Mike Tyson -- promoting supplements branded as erectile dysfunction cures.

The rapid pace of generating short-form AI videos means that even when tech platforms remove questionable content, near-identical versions quickly reappear -- turning moderation into a game of whack-a-mole.
Researchers say this creates unique challenges for policing AI-generated content, requiring novel solutions and more sophisticated detection tools.

AFP's fact checkers have repeatedly debunked scam ads on Facebook promoting treatments -- including erectile dysfunction cures -- that use fake endorsements by Ben Carson, a neurosurgeon and former US cabinet member.

Yet many users still consider the endorsements legitimate, illustrating the appeal of deepfakes.

"Scammy affiliate marketing schemes and questionable sex supplements have existed for as long as the internet and before," Mantzarlis said. "As with every other bad thing online, generative AI has made this abuse vector cheaper and quicker to deploy at scale."
The rise of AI-generated content has led to a proliferation of misleading advertisements for unproven sexual health products, often featuring deepfake celebrity endorsements.
The rapid advancement of artificial intelligence (AI) technology has given rise to a concerning trend in online advertising. AI-generated deepfakes are being increasingly used to create misleading videos promoting unproven sexual health treatments, raising alarm among researchers and consumers alike [1][2].

Generative AI has made it remarkably easy and cost-effective to mass-produce videos with minimal human oversight. These videos often feature AI-generated avatars or even deepfake versions of celebrities endorsing dubious products. For instance, TikTok has seen a surge of videos featuring muscular men holding oversized carrots as euphemisms for male genitalia, promoting supplements claiming to enhance sexual performance [1].

More troublingly, AI tools have enabled the creation of deepfakes impersonating well-known personalities. Researchers have identified videos featuring fake versions of celebrities such as Amanda Seyfried, Robert De Niro, and even former public health official Anthony Fauci promoting questionable sexual health products [1][2].

The rapid pace at which these AI-generated videos can be produced poses significant challenges for content moderation. Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell Tech, notes that even when platforms remove such content, near-identical versions quickly reappear, turning moderation into a game of "whack-a-mole" [1].

Zohaib Ahmed, CEO of Resemble AI, warns that this trend poses potential health risks to consumers. The AI-generated content is being used to market supplements with exaggerated or unverified claims, potentially endangering users' well-being [1]. Misinformation researcher Abbie Richards describes AI as "a useful tool for grifters looking to create large volumes of content slop for a low cost" [2].

The proliferation of these AI-generated ads contributes to what researchers term an "AI dystopia" -- a deception-filled online environment designed to manipulate unsuspecting users. This trend not only promotes potentially harmful products but also further erodes the ability of internet users to discern authentic content from fabricated material [1][2].

As AI technology continues to evolve, researchers emphasize the need for more sophisticated detection tools and novel solutions to police AI-generated content. The ease with which these misleading advertisements can be created and disseminated at scale underscores the urgency of addressing this growing challenge in the digital landscape [1][2].
© 2025 TheOutpost.AI All rights reserved