2 Sources
[1]
'Fueling sexism': AI 'bikini interview' videos flood internet
The videos are strikingly lifelike, featuring bikini-clad women conducting street interviews and eliciting lewd comments -- but they are entirely fake, generated by AI tools increasingly used to flood social media with sexist content.

Such AI slop -- mass-produced content created by cheap artificial intelligence tools that turn simple text prompts into hyper-realistic visuals -- is frequently drowning out authentic posts and blurring the line between fiction and reality.

The trend has spawned a cottage industry of AI influencers churning out large volumes of sexualized clips with minimal effort, often driven by platform incentive programs that financially reward viral content.

Hordes of AI clips, laden with locker-room humor, purport to show scantily clad female interviewers on the streets of India or the United Kingdom -- sparking concern about the harm such synthetic content may pose to women.

AFP's fact-checkers traced hundreds of such videos on Instagram, many in Hindi, that purportedly show male interviewees casually delivering misogynistic punchlines and sexualized remarks -- sometimes even grabbing the women -- while crowds of men gawk or laugh in the background.

Many videos racked up tens of millions of views -- and some further monetized that traction by promoting an adult chat app to "make new female friends." The fabricated clips were so lifelike that some users in the comments questioned whether the featured women were real.

A sample of these videos analyzed by the US cybersecurity firm GetReal Security showed they were created using Google's Veo 3 AI generator, known for hyper-realistic visuals.

'Gendered harm'

"Misogyny that usually stayed hidden in locker room chats and groups is now being dressed up as AI visuals," Nirali Bhatia, an India-based cyber psychologist, told AFP. "This is part of AI-mediated gendered harm," she said, adding that the trend was "fueling sexism."

The trend offers a window into an internet landscape now increasingly swamped with AI-generated memes, videos and images that are competing for attention with -- and increasingly eclipsing -- authentic content.

"AI slop and any type of unlabeled AI-generated content slowly chips away at the little trust that remains in visual content," GetReal Security's Emmanuelle Saliba told AFP.

The most viral misogynistic content often relies on shock value -- including Instagram and TikTok clips that Wired magazine said were generated using Veo 3 and portray Black women as big-footed primates. Videos on one popular TikTok account mockingly list what so-called gold-digging "girls gone wild" would do for money.

Women are also fodder for distressing AI-driven clickbait, with AFP's fact-checkers tracking viral videos of a fake marine trainer named "Jessica Radcliffe" being fatally attacked by an orca during a live show at a water park. The fabricated footage rapidly spread across platforms including TikTok, Facebook and X, sparking global outrage from users who believed the woman was real.

'Unreal'

Last year, Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell Tech, found 900 Instagram accounts of likely AI-generated "models" -- predominantly female and typically scantily clothed. These thirst traps cumulatively amassed 13 million followers and posted more than 200,000 images, typically monetizing their reach by redirecting their audiences to commercial content-sharing platforms.

With AI fakery proliferating online, "the numbers now are undoubtedly much larger," Mantzarlis told AFP. "Expect more nonsense content leveraging body standards that are not just unrealistic but literally unreal," he added.

Financially incentivized slop is becoming increasingly challenging to police as content creators -- including students and stay-at-home parents around the world -- turn to AI video production as gig work. Many creators on YouTube and TikTok offer paid courses on how to monetize viral AI-generated material on platforms, many of which have reduced their reliance on human fact-checkers and scaled back content moderation.

Some platforms have sought to crack down on accounts promoting slop, with YouTube recently saying that creators of "inauthentic" and "mass produced" content would be ineligible for monetization.

"AI doesn't invent misogyny -- it just reflects and amplifies what's already there," AI consultant Divyendra Jadoun told AFP. "If audiences reward this kind of content with millions of likes, the algorithms and AI creators will keep producing it. The bigger fight isn't just technological -- it's social and cultural."
[2]
'Fueling sexism': AI 'bikini interview' videos flood internet
AI tools are being used to create hyper-realistic, sexist content featuring bikini-clad women, flooding social media platforms and blurring the line between fiction and reality.
In a disturbing trend, artificial intelligence (AI) tools are being used to create and disseminate hyper-realistic videos featuring bikini-clad women conducting street interviews. These entirely fabricated clips, a form of "AI slop," are flooding social media platforms, raising concerns about the proliferation of sexist content and the blurring of the line between fiction and reality [1][2].
AI slop refers to mass-produced content created by cheap AI tools that transform simple text prompts into highly realistic visuals. Fact-checkers from AFP have traced hundreds of such videos on Instagram, many in Hindi, depicting scantily clad female interviewers subjected to misogynistic comments and inappropriate behavior from male interviewees [1][2].
These AI-generated clips have garnered tens of millions of views, with some even monetizing their popularity by promoting adult chat apps. The videos are so convincing that many users questioned whether the featured women were real [1][2].
Analysis by US cybersecurity firm GetReal Security revealed that many of these videos were created using Google's Veo 3 AI generator, known for its hyper-realistic visuals [1][2]. This technology has given rise to a cottage industry of AI influencers who can produce large volumes of sexualized content with minimal effort.
Nirali Bhatia, an India-based cyber psychologist, warns that this trend is "fueling sexism" and represents "AI-mediated gendered harm" [1][2]. The proliferation of such content is not only normalizing misogyny but also eroding trust in visual content online.
Emmanuelle Saliba from GetReal Security notes, "AI slop and any type of unlabeled AI-generated content slowly chips away at the little trust that remains in visual content" [1][2].
The issue extends beyond sexist content. AI-generated memes, videos, and images are increasingly competing for attention with authentic content across various platforms. Some of the most viral misogynistic content relies on shock value, including clips portraying Black women as primates [1][2].
Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell Tech, last year identified 900 Instagram accounts of likely AI-generated "models," which cumulatively amassed 13 million followers and posted over 200,000 images [1][2].
The financial incentives driving the creation of AI slop make it increasingly challenging to police. Content creators worldwide, including students and stay-at-home parents, are turning to AI video production as gig work [1][2].
While some platforms like YouTube have announced measures to crack down on "inauthentic" and "mass-produced" content, the scale of the problem remains daunting [1][2].
AI consultant Divyendra Jadoun emphasizes that "AI doesn't invent misogyny -- it just reflects and amplifies what's already there" [1][2]. The popularity of such content among audiences perpetuates its creation, highlighting that the issue is not just technological but also social and cultural.
As AI technology continues to advance, addressing the spread of harmful AI-generated content will require a multifaceted approach involving technology companies, policymakers, and society at large.