Curated by THEOUTPOST
On Fri, 6 Sept, 12:07 AM UTC
2 Sources
[1]
Microsoft gives deepfake porn victims a tool to scrub images from Bing search | TechCrunch
The advancement of generative AI tools has created a new problem for the internet: the proliferation of synthetic nude images resembling real people. On Thursday, Microsoft took a major step to give revenge porn victims a tool to stop its Bing search engine from returning these images.

Microsoft announced a partnership with StopNCII, an organization that lets victims of revenge porn create a digital fingerprint of explicit images, real or synthetic, on their own device. StopNCII's partners then use that digital fingerprint, or "hash" as it's technically known, to scrub the image from their platforms. Microsoft's Bing joins Facebook, Instagram, Threads, TikTok, Snapchat, Reddit, Pornhub, and OnlyFans in partnering with StopNCII and using its digital fingerprints to stop the spread of revenge porn.

In a blog post, Microsoft says that in a pilot with StopNCII's database through the end of August, it already took action on 268,000 explicit images being returned through Bing's image search. Previously, Microsoft offered a direct reporting tool, but the company says that has proven insufficient. "We have heard concerns from victims, experts, and other stakeholders that user reporting alone may not scale effectively for impact or adequately address the risk that imagery can be accessed via search," said Microsoft in its blog post on Thursday.

You can imagine how much worse that problem would be on a significantly more popular search engine: Google. Google Search offers its own tools to report and remove explicit images from its search results, but it has faced criticism from former employees and victims for not partnering with StopNCII, according to a Wired investigation. Since 2020, Google users in South Korea have reported 170,000 Search and YouTube links for unwanted sexual content, Wired reported. The AI deepfake nude problem is already widespread.
StopNCII's tools only work for people over 18, but "undressing" sites are already creating problems for high schoolers around the country. Unfortunately, the United States doesn't have a federal AI deepfake porn law to hold anyone accountable, so the country is relying on a patchwork of state and local laws to address the issue. San Francisco prosecutors announced a lawsuit in August to take down 16 of the most popular "undressing" sites. According to a tracker of deepfake porn laws created by Wired, 23 American states have passed laws to address nonconsensual deepfakes, while nine have struck proposals down.
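The hash-and-scrub workflow described above can be sketched roughly as follows. Everything here is hypothetical (the `HashRegistry` class and `fingerprint` helper are illustrative names, not StopNCII's actual API), and a real deployment uses a perceptual hash such as PDQ or PhotoDNA rather than SHA-256, so that altered copies of an image still match. The key property the sketch preserves is that only the fingerprint leaves the victim's device, never the image itself:

```python
import hashlib


def fingerprint(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash like PDQ or PhotoDNA; a real
    # perceptual hash would also match re-encoded or resized copies.
    return hashlib.sha256(image_bytes).hexdigest()


class HashRegistry:
    """Toy model of a StopNCII-style hash database shared with platforms."""

    def __init__(self):
        self._hashes = set()

    def submit(self, image_bytes: bytes) -> str:
        h = fingerprint(image_bytes)  # computed on the victim's own device
        self._hashes.add(h)           # only the hash is stored and shared
        return h

    def should_remove(self, image_bytes: bytes) -> bool:
        # A partner platform checks content it hosts against the database.
        return fingerprint(image_bytes) in self._hashes


registry = HashRegistry()
registry.submit(b"victim's private image bytes")
print(registry.should_remove(b"victim's private image bytes"))  # True
print(registry.should_remove(b"unrelated image"))               # False
```

The design point this illustrates is why the system can scale across Bing, Facebook, Reddit, and the other partners: every platform can run the same membership check without ever receiving the sensitive media.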
[2]
Microsoft removes revenge porn from Bing search using new tool
Microsoft announced today that it has partnered with StopNCII to proactively remove harmful intimate images and videos from Bing using digital hashes people create from their sensitive media. StopNCII is a project operated by the Revenge Porn Helpline that allows people to create digital hashes of their intimate pictures and videos without uploading the media from their phone. StopNCII then adds these hashes to a database used to find the same or similar images online, which are then removed by its partners, including Facebook, TikTok, Reddit, Pornhub, Instagram, OnlyFans, and Snapchat.

In March, Microsoft shared its PhotoDNA technology with StopNCII, allowing enhanced creation of digital hashes without a person's images or videos leaving their device. "Much like the hashing technology that StopNCII.org already uses (PDQ), PhotoDNA is an additional process that enables identified harmful images to be hashed into a digital fingerprint, which then can be shared with industry platforms to identify and remove any non-consensual intimate image abuse material," explains the March announcement.

Microsoft announced today that it has been piloting the use of StopNCII's database of hashes to remove intimate images from the Bing search index. Using this database, Microsoft says it has taken action on 268,899 images through the end of August.

The rise of artificial intelligence has also led to increased generation of deepfake nude images from non-intimate photos shared online. While these images are fake, they can be just as distressing to those being exploited. A 2019 report by DeepTrace, now Sensity, showed that 96% of deepfake videos on the internet are pornographic in nature, and almost all featured non-consensual use of a woman's likeness. Many of these images are uploaded as "revenge porn," for extortion, or to generate revenue for unscrupulous sites. Unfortunately, AI-generated images are harder to match against PhotoDNA hashes.
In these cases, those impacted should manually report the images to Microsoft, Google, and other online media companies. Microsoft says that impacted people can use its Report a Concern page to request that real or synthetic images be taken out of the Bing search index.
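As the second article notes, hash matching has to tolerate images that have been re-encoded or lightly edited, which is why systems like PDQ and PhotoDNA are perceptual rather than cryptographic. A minimal sketch of the idea, using a toy difference hash (dHash) with Hamming-distance matching (far simpler than PDQ or PhotoDNA, and with hypothetical names throughout):

```python
def dhash(pixels):
    """Difference hash: compare each pixel to its right-hand neighbor.

    `pixels` is a row-major grid of grayscale values (a real dHash first
    downscales the image to a small grid, e.g. 8 rows x 9 columns).
    Returns an integer fingerprint.
    """
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")


def is_match(candidate, database, threshold=10):
    """Platform-side check: flag a candidate whose hash is within
    `threshold` bits of any hash a victim has submitted."""
    return any(hamming(candidate, h) <= threshold for h in database)


# Toy 2x3 grayscale grids standing in for downscaled images.
original = [[10, 50, 40], [90, 20, 70]]
recompressed = [[12, 48, 41], [88, 22, 69]]  # same image, slightly altered
database = {dhash(original)}
print(is_match(dhash(recompressed), database))  # True: brightness gradients survive
```

Because the fingerprint encodes brightness gradients rather than exact pixel values, the recompressed copy still matches. An AI-generated deepfake, however, is a different image entirely, so its gradients (and hence its hash) need not resemble anything in the database, which is the limitation the article describes.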
Microsoft introduces a new tool to help victims remove non-consensual intimate images, including AI-generated deepfakes, from Bing search results. This initiative aims to protect individuals from online exploitation and harassment.
In a significant move to address the growing concern of non-consensual intimate imagery online, Microsoft has introduced a new tool designed to help victims remove such content from Bing search results. This initiative, announced on September 5, 2024, specifically targets the removal of both real and AI-generated deepfake pornographic images [1].
The newly launched tool allows individuals to report and request the removal of intimate images of themselves that have been shared without their consent. This includes not only traditional revenge porn but also AI-generated deepfakes, which have become increasingly prevalent and problematic in recent years [2].
To use this service, victims can visit Microsoft's dedicated reporting page and submit a request. The company promises to review these requests promptly and take action to remove the reported content from Bing search results, typically within 24 hours [1].
While the tool primarily focuses on removing content from Bing search results, Microsoft has taken an additional step to assist victims. The company will also share copies of takedown notices with other search engines and online platforms, potentially expediting the removal process across the internet [2].
This initiative is part of Microsoft's larger commitment to online safety and privacy. The company acknowledges the severe emotional and psychological impact that non-consensual intimate imagery can have on victims. By providing this tool, Microsoft aims to give individuals more control over their online presence and protect them from exploitation and harassment [1].
Despite the positive intent, experts note that completely eradicating such content from the internet remains a significant challenge. While removing images from search results is a crucial step, it does not delete the content from its original source. Additionally, the rapid spread of content online often outpaces removal efforts [2].
Microsoft's tool is part of a growing trend among tech companies to address the issue of non-consensual intimate imagery. Other platforms, such as Google and Meta, have implemented similar measures in recent years. These collective efforts highlight the tech industry's increasing recognition of its role in combating online abuse and protecting user privacy [1][2].
Reference
[1] Microsoft gives deepfake porn victims a tool to scrub images from Bing search | TechCrunch
[2] Microsoft removes revenge porn from Bing search using new tool
Google is implementing new measures to combat the spread of nonconsensual explicit deepfakes. The tech giant is updating its policies and tools to make it easier for victims to remove such content from search results.
12 Sources
Microsoft has identified four key members of a global cybercrime network who allegedly bypassed AI safety measures to create and distribute harmful content, including celebrity deepfakes.
7 Sources
Major AI companies have committed to developing technology to detect and prevent the creation of non-consensual deepfake pornography. This initiative, led by the White House, aims to address the growing concern of AI-generated explicit content.
8 Sources
A new study reveals that 1 in 6 congresswomen have been victims of AI-generated sexually explicit deepfakes, highlighting the urgent need for legislative action to combat this growing threat.
6 Sources
San Francisco's city attorney has filed a lawsuit against websites creating AI-generated nude images of women and girls without consent. The case highlights growing concerns over AI technology misuse and its impact on privacy and consent.
12 Sources
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved