Curated by THEOUTPOST
On Thu, 12 Dec, 8:02 AM UTC
6 Sources
[1]
1 in 6 Congresswomen Targeted by AI-Generated Sexually Explicit Deepfakes
More than two dozen members of Congress have been the victims of sexually explicit deepfakes -- and an overwhelming majority of those impacted are women, according to a new study that spotlights the stark gender disparity in this technology and the evolving risks for women's participation in politics and other forms of civic engagement.

The American Sunlight Project (ASP), a think tank that researches disinformation and advocates for policies that promote democracy, released findings on Wednesday that identified more than 35,000 mentions of nonconsensual intimate imagery (NCII) depicting 26 members of Congress -- 25 women and one man -- that were found recently on deepfake websites. Most of the imagery was quickly removed as researchers shared their findings with impacted members of Congress.

"We need to kind of reckon with this new environment and the fact that the internet has opened up so many of these harms that are disproportionately targeting women and marginalized communities," said Nina Jankowicz, an online disinformation and harassment expert who founded The American Sunlight Project and is an author on the study.

Nonconsensual intimate imagery, also known colloquially as deepfake porn, though advocates prefer the former, can be created through generative AI or by overlaying headshots onto media of adult performers. There is currently limited policy to restrict its creation and spread.

ASP shared the first-of-its-kind findings exclusively with The 19th. The group collected data in part by developing a custom search engine to find members of the 118th Congress by first and last name, and abbreviations or nicknames, on 11 well-known deepfake sites (a hypothetical sketch of this querying approach appears at the end of this article).

Neither party affiliation nor geographic location had an impact on the likelihood of being targeted for abuse, though younger members were more likely to be victimized. The largest factor was gender, with women members of Congress being 70 times more likely than men to be targeted.

ASP did not release the names of the lawmakers who were depicted in the imagery, in order to avoid encouraging searches. They did contact the offices of everyone impacted to alert them and offer resources on online harms and mental health support.

Authors of the study note that in the immediate aftermath, imagery targeting most of the members was entirely or almost entirely removed from the sites -- a fact they're unable to explain. Researchers have noted that such removals do not prevent material from being shared or uploaded again. In some cases involving lawmakers, search result pages remained indexed on Google despite the content being largely or entirely removed.

"The removal may be coincidental. Regardless of what exactly led to removal of this content -- whether 'cease and desist' letters, claims of copyright infringement, or other contact with the sites hosting deepfake abuse -- it highlights a large disparity of privilege," according to the study. "People, particularly women, who lack the resources afforded to Members of Congress, would be highly unlikely to achieve this rapid response from the creators and distributors of AI-generated NCII if they initiated a takedown request themselves."

According to the study's initial findings, nearly 16 percent of all the women who currently serve in Congress -- or about 1 in 6 congresswomen -- are the victims of AI-generated nonconsensual intimate imagery.

Jankowicz has been the target of online harassment and threats for her domestic and international work dismantling disinformation. She has also spoken publicly about being the victim of deepfake abuse -- a fact she found out through a Google Alert in 2023.

"You can be made to appear in these compromised, intimate situations without your consent, and those videos, even if you were to, say, pursue a copyright claim against the original poster -- as in my case -- they proliferate around the internet without your control and without some sort of consequence for the people who are amplifying or creating deepfake porn," she said. "That continues to be a risk for anybody who is in the public eye, who is participating in public discourse, but in particular for women and for women of color."

Image-based sexual abuse can have devastating mental health effects on victims, who include everyday people who are not involved in politics -- including children. In the past year, there have been reports of high school girls being targeted for image-based sexual abuse in states like California, New Jersey and Pennsylvania. School officials have had varying degrees of response, though the FBI has also issued a new warning that sharing such imagery of minors is illegal.

The full impact of deepfakes on society is still coming into focus, but research already shows that 41 percent of women between the ages of 18 and 29 self-censor to avoid online harassment. "That is a hugely powerful threat to democracy and free speech, if we have almost half of the population silencing themselves because they're scared of the harassment they could experience," said Sophie Maddocks, research director at the Center for Media at Risk at the University of Pennsylvania.

There is no federal law that establishes criminal or civil penalties for someone who generates and distributes AI-generated nonconsensual intimate imagery. About a dozen states have enacted laws in recent years, though most include civil penalties, not criminal ones.

AI-generated nonconsensual intimate imagery also opens up threats to national security by creating conditions for blackmail and geopolitical concessions. That could have ripple effects on policymakers irrespective of whether they're directly the target of the imagery.

"My hope here is that the members are pushed into action when they recognize not only that it's affecting American women, but it's affecting them," Jankowicz said. "It's affecting their own colleagues. And this is happening simply because they are in the public eye."

Image-based sexual abuse is a unique risk for women running for office. Susanna Gibson narrowly lost her competitive legislative race after a Republican operative shared nonconsensual recordings of sexually explicit livestreams featuring the Virginia Democrat and her husband with The Washington Post. In the months after her loss, Gibson told The 19th she heard from young women discouraged from running for office out of fear of intimate images being used to harass them. Gibson has since started a nonprofit dedicated to fighting image-based sexual abuse and an accompanying political action committee to support women candidates against violations of intimate privacy.

Maddocks has studied how women who speak out in public are more likely to experience digital sexual violence. "We have this much longer, 'women should be seen and not heard' pattern that makes me think about Mary Beard's writing and research on this idea that womanhood is antithetical to public speech. So when women speak publicly, it's almost like, 'OK. Time to shame them. Time to strip them. Time to get them back in the house. Time to shame them into silence.' And that silencing and that shaming motivation ... we have to understand that in order to understand how this harm is manifesting as it relates to congresswomen."

ASP is encouraging Congress to pass federal legislation. The Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024, also known as the DEFIANCE Act, would allow people to sue anyone who creates, shares or receives such imagery. The Take It Down Act would include criminal liability for such activity and require tech companies to take down deepfakes. Both bills have passed the Senate with bipartisan support but must still clear the House, where they face concerns around free speech and harm definitions, typical hurdles to tech policy.

"It would be a dereliction of duty for Congress to let this session lapse without passing at least one of these bills," Jankowicz said. "It is one of the ways that the harm of artificial intelligence is actually being felt by real Americans right now. It's not a future harm. It's not something that we have to imagine."

In the absence of congressional action, the White House has collaborated with the private sector to conceive creative solutions to curb image-based sexual abuse. But critics aren't optimistic about Big Tech's ability to regulate itself, given the history of harm caused by its platforms.

"It is so easy for perpetrators to create this content, and the signal is not just to the individual woman being targeted," Jankowicz said. "It's to women everywhere, saying, 'If you take this step, if you raise your voice, this is a consequence that you might have to deal with.'"
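The data collection described above -- a custom search engine crossing each lawmaker's name variants with 11 deepfake sites -- can be illustrated with a short Python sketch. Everything below is hypothetical: ASP has not published its tooling, so the function names, member data, and site URL are placeholders, and a real system would match names far more carefully.

    # Illustrative sketch only; ASP's actual search tooling is not public.
    def name_variants(first, last, nicknames=()):
        """Build query strings for one lawmaker: full name, nicknames, initial + surname."""
        variants = set()
        for given in (first, *nicknames):
            variants.add(f"{given} {last}")      # e.g. "Jane Doe"
            variants.add(f"{given[0]}. {last}")  # abbreviated form, e.g. "J. Doe"
        return variants

    def build_queries(members, sites):
        """Cross every name variant with every site, yielding (site, query) pairs to run."""
        return [
            (site, variant)
            for member in members
            for variant in name_variants(
                member["first"], member["last"], member.get("nicknames", ())
            )
            for site in sites
        ]

    # Fictional example: one made-up member against a placeholder site URL.
    members = [{"first": "Jane", "last": "Doe", "nicknames": ("Janie",)}]
    sites = ["https://deepfake-site.invalid/search"]
    for site, query in build_queries(members, sites):
        print(site, "->", query)

Each (site, query) pair would then be fetched and its results tallied, which is presumably how the study arrived at aggregate "mentions" rather than counts of unique images.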
[2]
AI Deepfake Harassment Affects 25 Congresswomen, Validating AOC's DEFIANCE Act
Introduced by House Representative Alexandria Ocasio-Cortez earlier this year, the bill passed unanimously in the Senate in July.

Earlier this year, House Representative Alexandria Ocasio-Cortez introduced the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act to enable victims of "digital forgery" to sue deepfake creators. Now, research by The American Sunlight Project has demonstrated just how close to home the issue really is for at least 25 Congresswomen who were the subject of nonconsensual intimate imagery.

Lawmakers Targeted

The ASP research identified more than 35,000 mentions of 26 members of Congress on deepfake pornography sites. Reflecting a wider pattern of nonconsensual deepfake porn overwhelmingly targeting women, only one of the victims identified was male. The study also found that younger lawmakers were more likely to be victims.

Commenting on the recent surge in deepfake pornography, ASP said the "weaponization" of generative AI affects women and girls from all walks of life. "For every headline-grabbing incident, thousands more -- including underage girls -- suffer in silence, enduring profound and often invisible trauma," the organization stated.

DEFIANCE Act Passage

ASP's findings underscore the importance of legislation like the DEFIANCE Act that can be used to hold the perpetrators of deepfake abuse accountable. Under the act, victims will be able to pursue civil remedies against those who produce or possess non-consensual deepfake porn with the intent to distribute it. Anyone found liable could be ordered to pay up to $150,000 in damages, or up to $250,000 if the incident was connected to further sexual assault, stalking, or harassment.

The bipartisan bill passed the Senate unanimously in July. It now awaits passage through the House before the president can sign it into law.

A Global Crackdown

Outside of the U.S., other governments are also taking steps to crack down on deepfake abuse and prosecute those who are responsible for it. In the U.K., the previous government initially moved to create a new offense with potentially unlimited fines and the prospect of jail time for the worst offenders. However, this summer's surprise general election interrupted legislative progress, and the cause has yet to be picked up by the new government. Meanwhile, public outrage over deepfake pornography in South Korea has prompted the government to investigate Telegram over its role in distribution.

While other areas of AI regulation have been more contentious, there is broad support for legislation targeting deepfake sexual abuse from across the political spectrum.
[3]
Anti-deepfake porn bill included in funding deal
A bill that seeks to fight the rise of deepfake pornography was included in the year-end government funding deal unveiled Tuesday, raising the prospect the legislation could cross the finish line in the coming days.

The TAKE IT DOWN Act would criminalize nonconsensual intimate imagery, including content generated by artificial intelligence (AI), and would require platforms to take down such material after being notified of its existence. The bill passed the Senate earlier this month, but had yet to be taken up by the House. Its inclusion in the year-end continuing resolution, which needs to pass by Friday to avert a government shutdown, boosts its chances.

"Over the past several months, courageous victims of AI-deepfake 'revenge porn' have shared their stories to raise awareness and inform lawmakers' efforts to stop this despicable behavior," Sen. Ted Cruz (R-Texas), who introduced the legislation, said in a statement.

"Passage of our bipartisan TAKE IT DOWN Act will give innocent victims -- many of whom are teenage girls -- the opportunity to seek justice against deviants who publish these abusive images," he continued. "It will also hold Big Tech accountable by making sure websites remove these disgusting fake videos and pictures immediately."

Americans for Responsible Innovation (ARI), an AI policy advocacy group, touted the inclusion of the legislation in the stopgap bill as a "huge win for victims and for everyone online."

"It's also proof positive that Congress has the willpower to work across the aisle on AI policy," Satya Thallam, ARI's senior vice president of government affairs, said in a statement. "Good governance on AI is going to happen step-by-step, and issue-by-issue," he added. "The broad coalition and grassroots support we saw for the TAKE IT DOWN Act is going to be a template for making change in the 119th Congress."

The rise of publicly accessible AI models in recent years has also spurred the rise of deepfake pornography. The issue gained prominence earlier this year when sexually explicit AI-generated images of pop star Taylor Swift circulated online. The situation prompted the White House to respond, saying it was "alarmed" by the circulation of the images.

"While social media companies make their own independent decisions about content management, we believe they have an important role to play in enforcing their own rules to prevent the spread of misinformation and non-consensual, intimate imagery of real people," White House press secretary Karine Jean-Pierre said at the time.

Lawmakers, particularly female lawmakers, have also been targets of deepfake pornography. More than two dozen lawmakers have been victims of sexually explicit AI-generated images, according to a recent report from the American Sunlight Project. The report found more than 35,000 mentions of 26 lawmakers on prominent deepfake websites. The impacted lawmakers included 25 women and one man.
[4]
Women in Congress are 70 times more likely than male counterparts to be victims of AI-generated deepfakes
A new report gives scale to the problem of AI deepfakes among female politicians. Credit: traffic_analyzer / DigitalVision Vectors via Getty Images

The field of politics has become even more complicated for women leaders, with the rise of generative AI posing a greater risk to female politicians than to their male counterparts, according to a report released this week.

The analysis, published by disinformation think tank the American Sunlight Project (ASP) via The 19th on Dec. 11, uncovered more than 35,000 mentions of digitally altered nonconsensual intimate imagery (NCII) depicting 26 members of Congress. ASP's numbers, when broken down, outline a stark reality for victims of NCII: of those thousands of images, the organization found 25 women politicians depicted by AI, while only one man was the subject of such content. Across the study, women members of Congress were 70 times more likely than men to be targeted by non-consensual synthetic images, and 1 in 6 congresswomen (about 16 percent) have been victims of nonconsensual intimate imagery.

Leaders across the political aisle have attempted to address the rise of both NCII and synthetic AI-generated images, but have been slow to reach consensus. In January, a bipartisan group of senators introduced the Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024 (DEFIANCE Act), intended to let victims sue over the spread of nonconsensual, sexualized "digital forgeries" made with AI. It passed the Senate in July but is still being considered by the House. Last week the Senate passed the Take It Down Act, introduced by Senator Ted Cruz in June. That bill criminalizes the publishing of digitally manipulated deepfakes online and also metes out penalties for companies that fail to remove such content within 48 hours of it being reported.

But the gendered reality of AI-boosted images can't be overstated, especially as women leaders navigate a precarious online environment that puts them at greater risk of sexual abuse. In August, the Center for Countering Digital Hate published a study on the rise of online hate and negative engagement on the social profiles of women politicians. According to an analysis of the Instagram profiles of 10 female incumbents, one in 25 comments was "highly likely" to be toxic, and Instagram failed to act on 93 percent of reported abusive comments targeting female politicians.

"We need to kind of reckon with this new environment and the fact that the internet has opened up so many of these harms that are disproportionately targeting women and marginalized communities," said American Sunlight Project founder and study author Nina Jankowicz. "My hope here is that the members are pushed into action when they recognize not only that it's affecting American women, but it's affecting them. It's affecting their own colleagues. And this is happening simply because they are in the public eye."

ASP informed the offices of affected politicians, alerting them to the AI-generated NCII. The images were almost entirely removed following the notice, although the organization did not receive any comment in response.
[5]
Dozens of lawmakers victims of sexually explicit deepfakes: Report
More than two dozen lawmakers have been the victims of deepfake pornography, with female lawmakers significantly more likely to be targeted, according to a new report released Wednesday.

The report from the American Sunlight Project, an advocacy group focused on combating disinformation, found more than 35,000 mentions of 26 lawmakers on prominent deepfake websites. The impacted lawmakers included 25 women and one man.

"This report reveals a stark and disturbing reality," Nina Jankowicz, CEO of the American Sunlight Project, said in a statement. "Female lawmakers are being targeted by AI-generated deepfake pornography at an alarming rate," she continued, adding, "This isn't just a tech problem; it's a direct assault on women in leadership and democracy itself."

After the group notified the affected members of Congress, the content depicting 14 of these lawmakers was removed in less than 48 hours, according to the report. The content depicting another nine lawmakers was entirely or mostly removed but remained on landing or search result pages.

"The vast majority of targets of deepfake sexual abuse are private citizens, and even minors, who frequently lack the resources to rectify the harm done to them," Jankowicz said. "I myself have been targeted with this vile content," she added. "As both a survivor and a researcher, I strongly feel that all women like me deserve to be protected by their government and have a path to justice for the sexual abuse they have endured. It is long past time for Congress to act."

The Senate passed legislation last week that would criminalize non-consensual, sexually explicit imagery, including deepfakes generated using artificial intelligence (AI), and require platforms to remove the content once notified. The bill, known as the TAKE IT DOWN Act, is one of several pieces of legislation moving through Congress that have sought to address the growing problem of sexually explicit deepfakes.

The Disrupt Explicit Forged Images and Non-Consensual Edits Act, also known as the DEFIANCE Act, similarly passed the Senate in July. The bill would create a federal civil remedy for victims of deepfake pornography. In the House, the DEFIANCE Act was introduced by Rep. Alexandria Ocasio-Cortez (D-N.Y.), who has talked about her own personal experience as the target of sexually explicit AI-generated content.
[6]
Teen victim of AI-generated "deepfake pornography" urges Congress to pass "Take It Down Act"
Anna McAdams has always kept a close eye on her 15-year-old daughter Elliston Berry's life online. So it was hard to come to terms with what happened 15 months ago on the Monday morning after Homecoming in Aledo, Texas. A classmate took a picture from Elliston's Instagram, ran it through an artificial intelligence program that appeared to remove her dress and then sent around the digitally altered image on Snapchat.

"She came into our bedroom crying, just going, 'Mom, you won't believe what just happened,'" McAdams said.

Last year, there were more than 21,000 deepfake pornographic videos online -- up more than 460% over the year prior. The manipulated content is proliferating on the internet as websites make disturbing pitches -- like one service that asks, "Have someone to undress?"

"I had PSAT testing and I had volleyball games," Elliston said. "And the last thing I need to focus and worry about is fake nudes of mine going around the school. Those images were up and floating around Snapchat for nine months."

In San Francisco, Chief Deputy City Attorney Yvonne Mere was starting to hear stories similar to Elliston's -- which hit home. "It could have easily been my daughter," Mere said.

The San Francisco City Attorney's office is now suing the owners of 16 websites that create "deepfake nudes," where artificial intelligence is used to turn non-explicit photos of adults and children into pornography. "This case is not about tech. It's not about AI. It's sexual abuse," Mere said.

These 16 sites had 200 million visits in just the first six months of the year, according to the lawsuit. City Attorney David Chiu says the 16 sites in the lawsuit are just the start. "We're aware of at least 90 of these websites. So this is a large universe and it needs to be stopped," Chiu said.

Republican Texas Sen. Ted Cruz is co-sponsoring another angle of attack with Democratic Minnesota Sen. Amy Klobuchar. The Take It Down Act would require social media companies and websites to remove non-consensual, pornographic images created with AI. "It puts a legal obligation on any tech platform -- you must take it down and take it down immediately," Cruz said.

The bill passed the Senate this month and is now attached to a larger government funding bill awaiting a House vote.

In a statement, a spokesperson for Snap told CBS News: "We care deeply about the safety and well-being of our community. Sharing nude images, including of minors, whether real or AI-generated, is a clear violation of our Community Guidelines. We have efficient mechanisms for reporting this kind of content, which is why we're so disheartened to hear stories from families who felt that their concerns went unattended. We have a zero tolerance policy for such content and, as indicated in our latest transparency report, we act quickly to address it once reported."

Elliston says she's now focused on the present and is urging Congress to pass the bill. "I can't go back and redo what he did, but instead, I can prevent this from happening to other people," Elliston said.
A new study reveals that 1 in 6 congresswomen have been victims of AI-generated sexually explicit deepfakes, highlighting the urgent need for legislative action to combat this growing threat.
A recent study by the American Sunlight Project (ASP) has uncovered a disturbing trend in the world of artificial intelligence and politics. The research reveals that 1 in 6 congresswomen have been victims of AI-generated sexually explicit deepfakes, highlighting a significant gender disparity in the targeting of politicians with this technology 1.
The study identified more than 35,000 mentions of nonconsensual intimate imagery (NCII) depicting 26 members of Congress across 11 well-known deepfake websites. Of these, 25 were women and only one was a man; the study found that female lawmakers are 70 times more likely to be targeted than their male counterparts 2.
The proliferation of AI-generated deepfakes poses significant risks to women's participation in politics and other forms of civic engagement. Nina Jankowicz, founder of the ASP, emphasized the need to address this new environment where the internet has opened up harms disproportionately targeting women and marginalized communities 1.
Research shows that 41% of women between the ages of 18 and 29 self-censor to avoid online harassment, which experts warn could have a chilling effect on democracy and free speech 1.
In response to this growing threat, several pieces of legislation have been introduced:
The DEFIANCE Act, introduced by Rep. Alexandria Ocasio-Cortez, aims to enable victims of "digital forgery" to sue deepfake creators 2.
The TAKE IT DOWN Act, introduced by Senator Ted Cruz, would criminalize nonconsensual intimate imagery and require platforms to remove such material after being notified 3.
Both bills have passed the Senate and are awaiting further action in the House 3 5.
The issue of deepfake pornography is not limited to the United States, and governments worldwide are taking steps to address it. In the U.K., the previous government moved to create a new offense with potentially unlimited fines and jail time for the worst offenders before a surprise general election interrupted legislative progress, while public outrage in South Korea has prompted the government to investigate Telegram over its role in distribution 2.
After being notified by the ASP, many of the deepfake websites removed the content depicting the affected lawmakers. However, researchers note that such removals do not prevent material from being shared or uploaded again 1.
As the technology behind AI-generated deepfakes continues to advance, the need for comprehensive legislation and industry cooperation becomes increasingly urgent to protect individuals, particularly women and marginalized communities, from this form of digital abuse.