Curated by THEOUTPOST
On Fri, 13 Sept, 12:05 AM UTC
8 Sources
[1]
AI firms agree to fight deepfake nudes in White House pledge
Oh look, another voluntary, non-binding agreement to do better.

Some of the largest AI firms in America have given the White House a solemn pledge to prevent their AI products from being used to generate non-consensual deepfake pornography and child sexual abuse material.

Adobe, Anthropic, Cohere, Microsoft, OpenAI, and open source web data repository Common Crawl each made non-binding commitments to safeguard their products from being misused to generate abusive sexual imagery, the Biden administration said Thursday.

"Image-based sexual abuse ... including AI-generated images - has skyrocketed," the White House said, "emerging as one of the fastest growing harmful uses of AI to date."

According to the White House, the six aforementioned AI orgs all "commit to responsibly sourcing their datasets and safeguarding them from image-based sexual abuse." Two other commitments lack Common Crawl's endorsement.

Common Crawl, which harvests web content and makes it available to anyone who wants it, has previously been fingered for vacuuming up undesirable data that's found its way into AI training data sets. However, Common Crawl shouldn't be listed alongside Adobe, Anthropic, Cohere, Microsoft, and OpenAI regarding their commitments to incorporate "feedback loops and iterative stress-testing strategies ... to guard against AI models outputting image-based sexual abuse," as Common Crawl doesn't develop AI models.

The other commitment, to remove nude images from AI training datasets "when appropriate and depending on the purpose of the model," seems like one Common Crawl should have agreed to, but it doesn't collect images. According to the nonprofit, "the [Common Crawl] corpus contains raw web page data, metadata extracts, and text extracts," so it's not clear what it would have to remove under that provision.
When asked why it didn't sign those two provisions, Common Crawl Foundation executive director Rich Skrenta told The Register his organization supports the broader goals of the initiative, but was only ever asked to sign on to the one provision.

"We weren't presented with those three options when we signed on," Skrenta told us. "I assume we were omitted from the second two because we do not do any model training or produce end-user products ourselves."

This is the second time in a little over a year that big-name players in the AI space have made voluntary concessions to the Biden administration, and the trend isn't restricted to the US. In July 2023, Anthropic, Microsoft, OpenAI, Amazon, Google, Inflection, and Meta all met at the White House and promised to test models, share research, and watermark AI-generated content to prevent it being misused for things like non-consensual deepfake pornography. There's no word on why some of those other companies didn't sign yesterday's pledge, which, like the one from 2023, is voluntary and non-binding.

It's similar to an AI safety pact signed by several countries in the UK last November, which was followed by a deal in South Korea in May under which 16 companies agreed to pull the plug on any machine-learning system showing signs of being too dangerous. Both agreements are lofty and, like those out of the White House, entirely non-binding.

Deepfakes continue to proliferate, targeting average citizens and international superstars alike. Experts, meanwhile, are more worried than ever about AI deepfakes and misinformation ahead of one of the largest global election years in modern history. The EU has approved far more robust AI policies than the US, where AI companies seem more likely to lobby against formal regulation while receiving support for a light-touch approach from some elected officials. The Register has asked the White House about any plans for enforceable AI policy.
In the meantime, we'll just have to wait and see how more voluntary commitments play out. ®
[2]
White House secures commitment from AI firms to curb deepfake porn - SiliconANGLE
The White House today announced voluntary commitments from several leading artificial intelligence companies to rein in the creation and distribution of image-based sexual abuse, or IBSA, including "deepfake" content generated by AI.

The announcement was made by President Biden and Vice President Harris on the eve of the 30th anniversary of the Violence Against Women Act, which was signed into federal law by former President Bill Clinton on Sept. 13, 1994.

"Image-based sexual abuse, both non-consensual intimate images, or NCII, of adults and child sexual abuse material, or CSAM, including AI-generated images, has skyrocketed, disproportionately targeting women, children, and LGBTQI+ people, and emerging as one of the fastest growing harmful uses of AI to date," said the White House in a press release.

As AI products have improved dramatically and become more widely available, there has been a worrying rise in the number of sexualized deepfake videos and images. Such images can be created easily and shared on social media millions of times before they are taken down. Governments around the world, as well as private companies, are trying to get a handle on the problem.

Today, Adobe Inc., Anthropic PBC, Cohere Inc., Common Crawl Foundation, Microsoft Corp., and OpenAI agreed to responsibly source their datasets and safeguard them from image-based sexual abuse. Notably, Apple Inc., Amazon.com Inc., Google LLC, and Meta Platforms Inc. did not sign the agreement, although a number of firms, including Meta and TikTok, have joined the StopNCII initiative to help victims of IBSA more easily report images and videos and have them removed.
The companies that signed today's agreement, minus Common Crawl, also said they would incorporate "feedback loops and iterative stress-testing strategies in their development processes, to guard against AI models outputting image-based sexual abuse." Moreover, they agreed to remove nude images from their AI training datasets.

This comes as the nonprofits the Center for Democracy and Technology and the Cyber Civil Rights Initiative, along with the National Network to End Domestic Violence, an anti-domestic violence organization, have introduced the "Principles for Combating Image-Based Sexual Abuse," a set of principles designed to curb the creation and promulgation of IBSA.
[3]
White House gets voluntary commitments from AI companies to curb deepfake porn
Adobe, Anthropic, Cohere, Common Crawl, Microsoft and OpenAI are involved.

The White House released a statement today outlining commitments that several AI companies are making to curb the creation and distribution of image-based sexual abuse. The participating businesses have laid out the steps they are taking to prevent their platforms from being used to generate non-consensual intimate images (NCII) of adults and child sexual abuse material (CSAM).

Specifically, Adobe, Anthropic, Cohere, Common Crawl, Microsoft and OpenAI said they'll be responsibly sourcing their datasets and safeguarding them from image-based sexual abuse. All of the aforementioned except Common Crawl also agreed they'd be incorporating feedback loops and iterative stress-testing into their development processes and removing nude images from AI training datasets when appropriate.

It's a voluntary commitment, so today's announcement doesn't create any new actionable steps or consequences for failing to follow through on those promises. But it's still worth applauding a good faith effort to tackle this serious problem. The notable absences from today's White House release are Apple, Amazon, Google and Meta.

Many big tech and AI companies have been making strides, separately from this federal effort, to make it easier for victims of NCII to stop the spread of deepfake images and videos. StopNCII has partnered with a number of companies on a comprehensive approach to scrubbing this content, while other businesses are rolling out proprietary tools for reporting AI-generated image-based sexual abuse on their platforms.
[4]
White House extracts voluntary commitments from AI vendors to combat deepfake nudes | TechCrunch
The White House has announced that several major AI vendors, including OpenAI and Microsoft, have committed to taking steps to combat nonconsensual deepfakes and child sexual abuse material.

Adobe, Cohere, Microsoft, OpenAI and data provider Common Crawl said that they'll "responsibly" source and safeguard the datasets they create and use to train AI from image-based sexual abuse. These organizations -- minus Common Crawl -- also said that they'll incorporate "feedback loops" and strategies in their dev processes to guard against AI generating sexual abuse images. And Adobe, Microsoft and OpenAI (but not Cohere) said they'll commit to removing nude images from AI training datasets "when appropriate and depending on the purpose of the model."

The commitments are voluntary, it's worth noting. Many AI vendors opted not to participate (e.g. Midjourney). And OpenAI's pledges in particular are suspect, given that CEO Sam Altman said in May that the company would explore how to "responsibly" generate AI porn. Still, the White House touted them as a win in its broader effort to identify and reduce the harm of deepfake nudes.
[5]
Tech companies commit to fighting harmful AI sexual imagery by curbing nudity from datasets
WASHINGTON (AP) -- Several leading artificial intelligence companies pledged Thursday to remove nude images from the data sources they use to train their AI products, and committed to other safeguards to curb the spread of harmful sexual deepfake imagery.

In a deal brokered by the Biden administration, tech companies Adobe, Anthropic, Cohere, Microsoft and OpenAI said they would voluntarily commit to removing nude images from AI training datasets "when appropriate and depending on the purpose of the model."

The White House announcement was part of a broader campaign against image-based sexual abuse of children as well as the creation of intimate AI deepfake images of adults without their consent. Such images have "skyrocketed, disproportionately targeting women, children, and LGBTQI+ people, and emerging as one of the fastest growing harmful uses of AI to date," said a statement from the White House's Office of Science and Technology Policy.

Joining the tech companies for part of the pledge was Common Crawl, a repository of data constantly trawled from the open internet that's a key source used to train AI chatbots and image-generators. It committed more broadly to responsibly sourcing its datasets and safeguarding them from image-based sexual abuse.

In a separate pledge Thursday, another group of companies -- among them Bumble, Discord, Match Group, Meta, Microsoft and TikTok -- announced a set of voluntary principles to prevent image-based sexual abuse. The announcements were tied to the 30th anniversary of the Violence Against Women Act.
[8]
White House announces Big Tech commitments to reduce image-based sexual abuse
[Photo: President Joe Biden discusses artificial intelligence in San Francisco on June 20, 2023. Andrew Caballero-Reynolds / AFP via Getty Images]

Ahead of the 30th anniversary of the Violence Against Women Act, the White House announced voluntary commitments from major tech companies to combat the creation, distribution and monetization of image-based sexual abuse, including artificial intelligence-generated "deepfakes."

Tech companies including Aylo, which operates several of the largest pornography websites, Meta, Microsoft, TikTok, Bumble, Discord, Hugging Face and Match Group signed a list of principles for combating image-based sexual abuse, which includes nonconsensual sharing of nude and intimate images, sexual image extortion, the creation and distribution of child sexual abuse material and the rise of AI deepfakes. Examples of deepfakes, misleading media that are often sexually explicit, include "swapping" victims' faces into sexually explicit videos or creating fake, AI-generated nude images.

The White House wrote that image-based sexual abuse "has skyrocketed," disproportionately affecting women, children and LGBTQ people. The 2023-24 school year was marked by global incidents of children, overwhelmingly teenage girls, being targeted in deepfakes made and shared by their classmates. More nonconsensual sexually explicit deepfake videos were uploaded online in 2023 than in all previous years combined.

"This abuse has profound consequences for individual safety and well-being, as well as societal impacts from the chilling effects on survivors' participation in their schools, workplaces, communities, and more," the White House's announcement said.

Two digital rights nonprofit groups, the Center for Democracy and Technology and the Cyber Civil Rights Initiative, along with the National Network to End Domestic Violence, a nonprofit anti-domestic violence organization, led the effort to create and sign principles to combat image-based sexual abuse.
The principles include giving people control over whether and how their likenesses and bodies are depicted and disseminated in intimate imagery, clearly prohibiting nonconsensual intimate imagery in policy, and implementing "effective, prominent, and easy-to-use tools to prevent, identify, mitigate, and respond to" image-based sexual abuse. Other principles include accessibility and inclusion, trauma-informed approaches, prevention and harm mitigation, transparency and accountability, and commitment and collaboration.

"If companies were doing their jobs, if they were being responsible, if they were being accountable, we wouldn't have these epidemics," said Mary Anne Franks, the president of the Cyber Civil Rights Initiative. "It's certainly clear to say that this is progress because of where things were before. But if we had a responsible industry that was forced to be accountable for the kinds of abuses that their products and services are creating, then we wouldn't be in the crisis we are in."

The Cyber Civil Rights Initiative advocates for targeted federal and state legislation and for reform of Section 230 of the Communications Decency Act, the legal shield that protects tech companies from lawsuits over content users create and post on their platforms. Franks said that while the voluntary efforts the White House highlighted are welcome, she does not see them as a substitute for legislation.

Still, she said, the Biden administration's and Vice President Kamala Harris' focus on gender violence, online harassment and image-based sexual abuse has been "transformative" compared with the work of previous administrations, given its impact on victims and their advocates. Franks said other tech companies can sign onto the principles if they haven't already; the principles are intended to be a guide for the industry.
In its announcement, the White House also mentioned recent efforts by tech companies like Google, which announced in July that it would derank and delist search results containing nonconsensual sexually explicit deepfakes. Other efforts the White House noted included Meta's removal of around 63,000 Instagram accounts found to be engaging in sextortion, the practice of soliciting sexual images and then coercing financial payments from victims under threat of distributing the material.

These steps came only after deepfakes had been created, disseminated and monetized via the platforms, and after sextortion had taken hold there. NBC News has reported on a massive uptick in sextortion cases since 2022, largely affecting teen boys who used Meta's Instagram. NBC News also found that apps advertising their ability to create nude images of teen celebrities ran on Facebook and Instagram for months in late 2022 and early 2023. On Google's search engine and Microsoft's Bing, NBC News previously found nonconsensual sexually explicit deepfakes at the top of some search results.

"These issues are not new, but there are certainly new threats brought by the emergence of generative AI and how easy it is to create fake versions of these images," said Alexandra Reeve Givens, the CEO of the Center for Democracy and Technology. "We all agree that principles aren't enough. It really is actually about changes in practice."
Major AI companies have committed to safeguards, including curbing nudity in training datasets, to prevent their products from being used to create non-consensual deepfake pornography. This initiative, led by the White House, aims to address the growing concern over AI-generated explicit content.
In a significant move to combat the misuse of artificial intelligence, the White House has secured voluntary commitments from leading AI companies to curb the creation and distribution of non-consensual deepfake pornography 1. This initiative comes in response to growing concern over the abuse of AI-generated explicit content.
The pledge's signatories are Adobe, Anthropic, Cohere, Common Crawl, Microsoft, and OpenAI 2. Notably absent are Apple, Amazon, Google, and Meta, although Meta and TikTok have signed onto a separate set of voluntary principles for combating image-based sexual abuse.
The participating companies have committed to several concrete steps. These include responsibly sourcing their training datasets and safeguarding them from image-based sexual abuse; incorporating feedback loops and iterative stress-testing into their development processes to guard against models outputting such material; and removing nude images from AI training datasets when appropriate and depending on the purpose of the model 3.
While this initiative primarily focuses on combating deepfake pornography, it also highlights the broader challenges posed by AI-generated content. The ease with which realistic fake images and videos can be created raises concerns about privacy, consent, and the potential for harassment and abuse 4.
The voluntary nature of these commitments underscores the complex legal landscape surrounding AI-generated content. While some states have laws against deepfake pornography, there is no federal legislation specifically addressing this issue. This initiative may pave the way for more comprehensive regulatory frameworks in the future 5.
Experts and advocates emphasize the severe emotional and psychological impact that deepfake pornography can have on victims. The widespread availability of AI tools capable of creating such content has amplified these concerns. This White House-led initiative aims to provide better protection for potential victims and maintain trust in digital media.
As AI technology continues to advance, the challenge of combating deepfake pornography and other forms of AI-generated misinformation is likely to evolve. The commitment from major tech companies represents a crucial first step, but ongoing collaboration between the government, industry, and advocacy groups will be essential to address this complex and dynamic issue effectively.
© 2025 TheOutpost.AI All rights reserved