Curated by THEOUTPOST
On Wed, 31 Jul, 4:05 PM UTC
12 Sources
[1]
Google is trying to delete and bury explicit deepfakes from search results
Google is dramatically upping its efforts to combat the appearance of explicit images and videos created with AI in search results. The company wants to make it clear that AI-produced non-consensual deepfakes are not welcome in its search engine. The actual images may be prurient or offensive in some other way, but regardless of the details, Google has a new approach to removing this type of material and burying it far from page-one results if erasure isn't possible. Notably, Google has experimented with using its own AI to generate images for search results, but those pictures don't include real people, and certainly nothing racy. Google partnered with experts on the issue and those who have been targets of non-consensual deepfakes to make its response system more robust. Google has allowed individuals to request the removal of explicit deepfakes for a while, but the proliferation and improvement of generative AI image creators means there's a need to do more. The removal request system has been streamlined to make it easier to submit requests and speed up the response. When a request is received and confirmed as valid, Google's algorithms will also work to filter out any similar explicit results related to the individual. The victim won't have to manually comb through every variation of a search request that might pull up the content, either. Google's systems will automatically scan for and remove any duplicates of that image. And it won't be limited to one specific image file. Google will proactively put a lid on related content. This is particularly important given the nature of the internet, where content can be duplicated and spread across multiple platforms and websites. This is something Google already does when it comes to real but non-consensual imagery, but the system will now cover deepfakes, too. The method also shares some similarities with recent efforts by Google to combat unauthorized deepfakes, explicit or otherwise, on YouTube.
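Google hasn't published how its duplicate scanning works, but near-duplicate image matching is commonly done with perceptual hashes. Here is a minimal, self-contained sketch of that general technique using an average-hash over a grayscale pixel grid; the grid size, Hamming-distance threshold, and sample data are illustrative assumptions, not Google's actual method:

```python
def average_hash(pixels):
    """Perceptual hash: one bit per pixel, set when the pixel is
    brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def is_duplicate(pixels_a, pixels_b, max_distance=5):
    """Treat two images as near-duplicates when their hashes differ
    in at most max_distance bits (threshold is an assumption)."""
    return hamming(average_hash(pixels_a), average_hash(pixels_b)) <= max_distance

# A flagged image and a lightly re-encoded copy (one pixel nudged),
# standing in for a real grayscale bitmap.
original = [[10, 200, 30, 220], [15, 210, 25, 230],
            [12, 205, 28, 225], [11, 198, 27, 221]]
recompressed = [row[:] for row in original]
recompressed[0][0] = 14  # small change from re-encoding

print(is_duplicate(original, recompressed))  # True
```

Because the hash depends only on relative brightness, small re-encoding artifacts leave it unchanged, which is why this family of techniques catches copies that byte-for-byte comparison would miss.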
Previously, YouTube would just label such content as created by AI or potentially misleading, but now, the person depicted or their lawyer can submit a privacy complaint, and YouTube will give the video's owner a couple of days to remove it themselves before YouTube reviews the complaint for merit. Content removal isn't 100% perfect, as Google well knows. That's why the hunt for explicit deepfakes in search results also includes an updated ranking system. The new ranking pushes back against search terms with a chance of pulling up explicit deepfakes. Google Search will now try to lower the visibility of explicit fake content and websites associated with spreading it in search results, especially when a search includes someone's name. For instance, say you were searching for a news article about how a specific celebrity's deepfakes went viral, and they are testifying to lawmakers about the need for regulation. Google Search will attempt to make sure you see those news stories and related articles about the issue and not the deepfakes under discussion. Given the complex and evolving nature of generative AI technology and its potential for abuse, addressing the spread of harmful content requires a multifaceted approach. And Google is hardly unique in facing the issue or working on solutions. Such images have appeared on Facebook, Instagram, and other Meta platforms, and the company has updated its policies as a result, with its Oversight Board recently recommending changing its guidelines to directly cover AI-generated explicit content and improve its own appeals process. Lawmakers are responding to the issue as well, with New York State's legislature passing a bill targeting AI-generated non-consensual pornography as part of its "revenge porn" laws. At the national level this week, the Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2024 (NO FAKES Act) was introduced in the U.S.
Senate to deal with both explicit content and non-consensual use of deepfake visuals and voices. Similarly, Australia's legislature is working on a bill to criminalize the creation and distribution of non-consensual explicit deepfakes. Still, Google can already point to some success in combatting explicit deepfakes. The company claims its early tests with these changes are succeeding in reducing the appearance of deepfake explicit images by more than 70%. Google hasn't declared victory over explicit deepfakes quite yet, however. "These changes are major updates to our protections on Search, but there's more work to do to address this issue, and we'll keep developing new solutions to help people affected by this content," Google product manager Emma Higham explained in a blog post. "And given that this challenge goes beyond search engines, we'll continue investing in industry-wide partnerships and expert engagement to tackle it as a society."
[2]
Google announces new tactics to curb explicit deepfakes
The explosion of nonconsensual deepfake imagery online in the past year, particularly of female celebrities, has presented a difficult challenge for search engines. Even if someone isn't looking for that material, searching for certain names can yield a shocking number of links to fake explicit photos and videos of that individual. Google is trying to tackle that problem with an update to its ranking systems, the company announced in a blog post. Google product manager Emma Higham wrote in the post that the ranking updates are designed to lower explicit fake content for many searches. When someone uses terms to seek out nonconsensual deepfakes of specific individuals, the ranking system will attempt to instead provide "high-quality, non-explicit content," such as news articles, when it's available. "With these changes, people can read about the impact deepfakes are having on society, rather than see pages with actual nonconsensual fake images," Higham wrote. The ranking updates have already decreased exposure to explicit image results on deepfake searches by 70 percent, according to Higham. Google is also aiming to downrank explicit deepfake content, though Higham noted that it can be difficult to distinguish between content that is real and consensual, such as an actor's nude scenes, and material generated by artificial intelligence without the actor's consent. To help spot deepfake content, Google is now factoring into its ranking whether a site's pages have been removed from Search under the company's policies. Sites with a high volume of removals for fake explicit imagery will now be demoted in Search. Additionally, Google is updating systems that handle requests for removing nonconsensual deepfakes from Search. The changes should make the request process easier.
When a victim is able to remove deepfakes of themselves from Google Search, the company's systems will aim to filter all related results on similar searches about them, and scan and remove duplicates of that imagery. Higham acknowledged that there's "more work to do," and that Google would continue developing "new solutions" to help people affected by nonconsensual deepfakes. Google's announcement comes two months after the White House called on tech companies to stop the spread of explicit deepfake imagery.
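The "filter all related results on similar searches" step described above can be pictured as a simple post-processing pass over search results: once a removal request for a person is granted, any query mentioning that person has its explicit results dropped. This is a hypothetical sketch of that idea; the name set, result schema, and substring matching are all illustrative assumptions:

```python
# Names with a granted removal request (hypothetical data)
PROTECTED_NAMES = {"jane doe"}

def normalize(query):
    """Lowercase and collapse whitespace so query variants match."""
    return " ".join(query.lower().split())

def filter_results(query, results):
    """Drop explicit results when the query mentions a person who has a
    granted removal request; non-explicit results still come through."""
    q = normalize(query)
    if any(name in q for name in PROTECTED_NAMES):
        return [r for r in results if not r["explicit"]]
    return results

results = [
    {"url": "news-site.example/deepfake-report", "explicit": False},
    {"url": "bad-site.example/fake-images", "explicit": True},
]
print(filter_results("Jane  Doe deepfake", results))
# only the non-explicit news result survives
```

The point of this shape is that the victim files one request rather than one per query variant: every later search that mentions the protected name hits the same filter.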
[3]
Google Search is cracking down on sexually explicit deepfakes
Google is joining the growing number of companies standing up to sexually explicit deepfakes. The Alphabet division has made it easier for users to report non-consensual imagery found in search results, including images made by artificial intelligence tools. While users could already request the removal of these images before the update, under the new policy whenever such a request is granted, the company will scan for duplicates of the non-consensual image and remove those as well. Google will also attempt to filter all explicit results on similar searches. "With every new technology advancement, there are new opportunities to help people -- but also new forms of abuse that we need to combat," product manager Emma Higham wrote in a blog post. "As generative imagery technology has continued to improve in recent years, there has been a concerning increase in generated images and videos that portray people in sexually explicit contexts, distributed on the web without their consent." Google has also changed its ranking system, lowering explicit deepfake content in general. Even direct searches for explicit deepfakes will, without requiring a user request, instead return "high-quality, non-explicit content -- like relevant news articles -- when it's available," the company wrote.
[4]
Google Cracks Down on Explicit Deepfakes
Newly announced measures by the search giant aim to make AI-generated or otherwise spoofed explicit content more difficult to discover. A few weeks ago, a Google search for "deepfake nudes jennifer aniston" brought up at least seven high-up results that purported to have explicit, AI-generated images of the actress. Now they have vanished. Google product manager Emma Higham says that new adjustments to how the company ranks results, which have been rolled out this year, have already cut exposure to fake explicit images by over 70 percent on searches seeking that content about a specific person. Where problematic results once may have appeared, Google's algorithms are aiming to promote news articles and other non-explicit content. The Aniston search now returns articles such as "How Taylor Swift's Deepfake AI Porn Represents a Threat" and other links like an Ohio attorney general warning about "deepfake celebrity-endorsement scams" that target consumers. "With these changes, people can read about the impact deepfakes are having on society, rather than see pages with actual non-consensual fake images," Higham wrote in a company blog post on Wednesday. The ranking change follows a WIRED investigation this month that revealed that in recent years Google management rejected numerous ideas proposed by staff and outside experts to combat the growing problem of intimate portrayals of people spreading online without their permission. While Google made it easier to request removal of unwanted explicit content, victims and their advocates have urged more proactive steps. But the company has tried to avoid becoming too much of a regulator of the internet or harming access to legitimate porn. At the time, a Google spokesperson said in response that multiple teams were working diligently to bolster safeguards against what it calls nonconsensual explicit imagery (NCEI).
The widening availability of AI image generators, including some with few restrictions on their use, has led to an uptick in NCEI, according to victims' advocates. The tools have made it easy for just about anyone to create spoofed explicit images of any individual, whether that's a middle school classmate or a mega-celebrity. In March, a WIRED analysis found Google had received over 13,000 demands to remove links to a dozen of the most popular websites hosting explicit deepfakes. Google removed results in around 82 percent of the cases. As part of Google's new crackdown, Higham says that the company will begin applying three of the measures it uses to reduce the discoverability of real but unwanted explicit images to content that is synthetic and unwanted. After Google honors a takedown request for a sexualized deepfake, it will then try to keep duplicates out of results. It will also filter explicit images from results in queries similar to those cited in the takedown request. And finally, websites subject to "a high volume" of successful takedown requests will face demotion in search results. "These efforts are designed to give people added peace of mind, especially if they're concerned about similar content about them popping up in the future," Higham wrote. Google has acknowledged that the measures don't work perfectly, and former employees and victims' advocates have said they could go much further. The search engine prominently warns people in the US looking for naked images of children that such content is unlawful. The warning's effectiveness is unclear, but it's a potential deterrent supported by advocates. Yet, despite laws against sharing NCEI, similar warnings don't appear for searches seeking sexual deepfakes of adults. The Google spokesperson has confirmed that this will not change.
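The third measure above, demoting sites that attract "a high volume" of successful takedown requests, amounts to folding a removal count into the ranking score. The sketch below shows one plausible shape for that; the threshold, penalty multiplier, score scale, and site data are all made-up illustrations, since Google does not publish its ranking formula:

```python
def demotion_factor(removals, threshold=100, penalty=0.5):
    """Sites at or past the removal threshold get their ranking score
    scaled down; both numbers are assumptions for illustration."""
    return penalty if removals >= threshold else 1.0

def rank(sites):
    """Order sites by relevance score after applying any demotion."""
    return sorted(
        sites,
        key=lambda s: s["score"] * demotion_factor(s["removals"]),
        reverse=True,
    )

sites = [
    {"url": "deepfake-host.example", "score": 0.9, "removals": 250},
    {"url": "news-site.example", "score": 0.6, "removals": 0},
]
print([s["url"] for s in rank(sites)])  # news site now ranks first
```

Even though the offending site starts with the higher raw relevance score, the penalty pushes it below the news result, which matches the stated goal of surfacing articles about deepfakes rather than the deepfakes themselves.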
[5]
Google Declares War on Deepfake Porn
Google is updating how it deals with explicit fake images popping up in search. Sexually explicit deepfakes are on the rise, disproportionately affecting women. It's gotten so bad that a bipartisan bill to combat the dissemination of this highly intrusive content passed unanimously through the Senate last week. In an attempt to combat the harmful content, Google is instituting a crackdown. Google lays out its two-pronged attack on the company blog to keep deepfakes from appearing in searches. I imagine if you've ever been a victim of an explicit deepfake or even revenge porn, the first thing on your mind is removing the offending content. However, it's been notoriously difficult to get something taken down. Google has updated the system for requesting that the content be removed. According to Google, when the content is successfully removed, the search engine will attempt to filter out all explicit content related to the term. And Google will take down any duplicate posts. To get an explicit post removed from Google Search, it must meet certain criteria. Google's ranking system has also undergone an update. The first step is creating a ranking that won't surface much explicit content. The search engine will attempt to present non-explicit results, such as a news article, in the case of search queries with a high rate of inappropriate results when a name is queried. Google claims its current updates have reduced deepfake porn entries by over 70% (I'd love to see the real-life numbers on that statistic). Google acknowledges that determining whether or not the content is consensual (like an actor's nude scene as opposed to a deepfake) is an ongoing challenge. It's a problem the company says it is addressing: "We're making ongoing improvements to better surface legitimate content and downrank explicit fake content." Finally, Google made a list and checked it twice.
If you get too many flags from Google on your search results, particularly regarding removals, your site can get demoted. That means if your page has too many explicit content removals, Google will determine when and if your site will pop up in search. These are relatively small steps in the grand scheme of things, but at least Google is giving deepfake and revenge porn victims a way to fight back. And if Google and other AI purveyors can get on board with rules and regulations for dealing with the issues, we won't have to wade through government gridlock to get things done.
[6]
Google's newest update cracks down on deepfake nudes in Search
Summary: Google is taking proactive measures to curb access to explicit deepfakes on Search. A new reporting system allows affected users to remove offending images faster, and Google is automatically filtering explicit deepfakes out of search results to prevent their spread on the internet. Over the past couple of years, we've seen some incredible advancements thanks to the use of artificial intelligence. Of course, big names like OpenAI, Microsoft, and Google have continued to push things forward, debuting new and useful tools for a variety of different products. At the same time, these same companies have exercised caution in an attempt to navigate this new road that has never been traveled before. Despite all the interesting things that have come from the adoption of AI, there have also been some nefarious and downright scandalous things that have come from this technology as well. Of course, Google understands this, and has, for some time, created tools and rules in order to protect its users. With that said, Google is now sharing how it will further curb access to explicit deepfakes on Search, introducing new measures that will make it easier to remove these types of media from its platform. Furthermore, the brand will now also implement some ranking changes that should prevent these types of images from even surfacing in search queries. New changes that could have huge impacts. As far as what changes are being made, there are two. Google is bolstering its existing reporting system for explicit deepfakes, which will let affected users get offending images taken down at a much faster rate. It's able to make this possible through a simpler process, giving those who are affected a new way to remove offending content on a larger scale.
When a new report is filed against such media, Google will automatically look to filter out this content from search results. Furthermore, Google will also try to locate duplicates of these images and automatically remove them too. As you can imagine, when deepfakes hit the internet, they can spread like wildfire, and this new method makes it less tedious to keep these images off of Search. Of course, it needs to be said that this doesn't remove these images from the internet; it simply removes them from Google's search results, making it harder for the public to access them. In addition to the above, Google will also monitor this kind of content to ensure that it doesn't make its way up the rankings in Search. It will automatically lower the rankings for websites hosting this type of content, and will instead produce standard search results with non-explicit content in the form of images and even news articles. Google has apparently already been implementing this and claims it has seen a 70% reduction in explicit results for these types of queries. Now, Google understands that legitimate explicit content is out there, but its main goal here is to remove and even suppress stuff that isn't genuine. Of course, this is a huge step for Google and there will no doubt be more changes made in the future. But it'll be a constant cat-and-mouse game, so it will be interesting to see how things turn out.
[7]
Google Updates Its Search Algorithm to Tackle AI Deepfakes
Google is updating its search engine's algorithm and its removal request process to better combat unwanted sexually explicit AI deepfakes of people, the tech giant announced Wednesday. While Google already offers victims the ability to request removal of such content, it's promising to make the process easier overall. When reported AI deepfakes are identified, Google Search will automatically filter out related search results that might pop up in the future so users won't have to repeatedly report similar images or duplicates of an image to Google. Google Search's algorithm is also getting an update to better tackle this issue, which has become a growing problem across the internet and social media. Generative AI tools are widely accessible and often free or available at a low cost to users, making it easy to quickly make sexually explicit deepfakes of virtually anyone. Google will now also demote sites in its search engine's rankings that repeatedly harbor non-consensual AI deepfakes. "This approach has worked well for other types of harmful content, and our testing shows that it will be a valuable way to reduce fake explicit content in search results," Google said in a statement. Google says its Search algorithm update will lower the chances of explicit deepfakes appearing in Search. The search engine will also attempt to differentiate between real sexually explicit content made consensually (such as adult film stars' work, for example) and AI-generated media made without the person's consent. But Google says doing this is a "technical challenge," so these efforts may not be entirely accurate or effective. Regardless, Google claims that the changes it's already made to Search have reduced the resurfacing of such deepfakes by more than 70%. "With these changes, people can read about the impact deepfakes are having on society, rather than see pages with actual non-consensual fake images."
Non-consensual AI deepfakes, especially sexually explicit ones, pose a problem for the very tech firms developing and promoting the use of image-generating AI tools. This year, Meta faced an investigation from its Oversight Board over its handling of two different sexually explicit deepfakes of real women. The board found that the social media giant should have removed both images, and argued that Meta should stop relying so heavily on news reports of deepfakes spreading online and take more proactive actions instead. US officials are also pushing for laws to protect non-consensual deepfake victims and hold platforms accountable for hosting such media. Last month, Senator Ted Cruz (R-Texas) proposed the TAKE IT DOWN Act, which would make publishing non-consensual sexual deepfakes a federal crime and force social media platforms to remove them. Last week, the Senate passed the Defiance Act, which would allow victims to sue those who have created or shared unwanted sexual deepfakes. Of course, sexually explicit deepfakes aren't the only AI deepfakes of concern. Earlier this year, an AI deepfake of President Joe Biden discouraged voters from voting in the primaries. More recently, a fake Kamala Harris video went viral on TikTok, and Elon Musk shared a different one of the VP on Twitter/X that potentially violates his platform's own rules.
[8]
Google announces steps to combat nonconsensual sexually explicit deepfakes
The announcement comes after mounting scrutiny of Google for its role in perpetuating the spread of deepfakes. After a staggering increase in the number of fake pornographic videos and images uploaded online in the last several years, Google on Wednesday announced new measures to assist victims and reduce the prominence of deepfakes in top search results. The search engine also committed to taking steps to derank websites that frequently host the nonconsensual sexually explicit fake videos -- also known as deepfakes -- meaning that they may appear lower in search results. Deepfakes refer to misleading fake media, which has increasingly been created using artificial-intelligence tools. Nonconsensual sexually explicit deepfakes often "swap" a victim's face onto the body of a person in a pre-existing pornographic video. Generative AI tools have also been used to create fake but realistic sexually explicit images that depict real people, or "undress" real photos to make victims appear nude. The practice overwhelmingly affects women and girls, both public figures and, increasingly, girls in middle and high schools around the world. In 2023, more nonconsensual sexually explicit deepfakes were posted online than in all previous years combined. Google and other search engines have directed traffic to websites that allow deepfake creators to profit, as well as included links to deepfake videos and shown deepfake images in top search results. The platform has also included links to tools used to create nonconsensual sexually explicit deepfakes in top results. In its announcement Wednesday, Google said it will aim to filter explicit content from similar searches after victims successfully request the removal of explicit nonconsensual fake imagery through an online form. Currently, victims have to flag each URL containing the imagery.
Google also said that it will scan for and remove duplicates of nonconsensual sexually explicit deepfake images from search results after images are successfully flagged and taken down. "These efforts are designed to give people added peace of mind, especially if they're concerned about similar content about them popping up in the future," Google's announcement said. Google is not proactively scanning for new deepfakes to remove, and will only remove deepfakes if a victim successfully flags them. NBC News previously reported that when safe-search tools are turned off, results for queries like "deepfakes" and "fake nudes" would surface the material in top results, above relevant news articles about the growing trend. Now, Google said it aims to rank relevant news articles above deepfakes, including when someone is searching for a person's name and the word "deepfakes." "With these changes, people can read about the impact deepfakes are having on society, rather than see pages with actual non-consensual fake images," the announcement said. Google also said it will demote websites that have been associated with a high number of deepfake removal requests in search results. One of the most prominent websites for nonconsensual sexually explicit deepfakes, which ranks highly in some Google search results, has used a variety of tactics to monetize the material. "This approach has worked well for other types of harmful content, and our testing shows that it will be a valuable way to reduce fake explicit content in search results," Google said. The announcement follows pressure from lawmakers to address the issue. In June, Senate Judiciary Chair Dick Durbin, D-Ill., sent a letter to Google's CEO asking for details on how it plans to combat deepfakes. Federal legislation introduced by Durbin would allow nonconsensual sexually explicit deepfake victims to sue perpetrators; it passed the Senate last week and is awaiting a vote in the House.
[9]
Google upgrades search in drive to tackle deepfake porn
Google is making changes to its search engine in an effort to tackle deepfake pornography, as the tech industry grapples with the far-reaching social impacts of generative artificial intelligence. Advances in generative AI mean that fake images have become more realistic and easier to create, prompting experts to warn that the use of people, without their knowledge or consent, in pornographic imagery has become more widespread. Measures introduced by the tech giant on Wednesday include changes that would make it easier for victims of deepfake porn to have videos and images of themselves taken down from the internet. Individuals currently need to make removal requests for each website address or URL. The latest changes would omit explicit results on related search terms that include a person's name. The search giant will also downgrade the ranking of sites that have received a high volume of removal notices. "In addition, when someone successfully removes an image from search under our policies, our systems will scan for -- and remove -- any duplicates of that image that we find," Google said in a blog post on Wednesday. Companies such as Google, Meta and X have been scrambling to tackle deepfakes on their platforms -- images, video and audio that can be generated using AI in the likeness of both private and public individuals. Last week Meta's independent Oversight Board called on the company to strengthen its rules on the removal of deepfake porn. "We are in the middle of a technology shift," said Emma Higham, a Google product manager who has been involved in the company's fight against deepfakes. "As we monitor our own systems, we've seen that there is a rise in removal requests for this kind of content." The issue is coming under scrutiny from regulators: the UK's Online Safety Act, passed in October and considered one of the strictest, makes it illegal to disseminate non-consensual pornographic deepfakes. 
In the US, legislation in several states targets those who create and share explicit deepfaked content. Clare McGlynn, a law professor at Durham University who studies regulations around pornography, said that the search engine has been slow to prevent such content from proliferating on the internet. "Google's delay in taking these obvious and necessary steps to reduce deepfake sexual abuse is inexcusable," she said. "Google remains responsible for facilitating the exponential rise in deepfake sexual abuse by its high-ranking of deepfake apps, websites and tutorials for many years." The company said it has cut the amount of explicit deepfake content that has appeared in its search results by 70 per cent since the start of the year through initial policy changes and limited updates to its search engine. However, the company said there are limitations to the policy updates. Higham said that third-party media providers did not always share video data with Google, making it impossible to detect a potential duplicate. There may be "trade-offs" for adult performers who want to share consensual adult content on the search engine while having non-consensual material filtered out, Google added. Google's tweaks to its search results ranking system would demote websites that link to non-consensual AI-generated adult content while promoting non-explicit "high-quality" sites, including news articles. "These are unsolved technical challenges for search engines," Higham said. "So we're getting to a point where we feel we can get more traction." The company has extensive policies in place to remove child sexual abuse materials and this year banned advertising for deepfake pornography. Its latest policies stop short of de-indexing popular deepfake sites from search results altogether, a move that advocacy groups such as #MyImageMyChoice are pushing for.
Google said delisting sites entirely could block access to important information, such as how content can be removed from a host website.
[10]
Google is working on removing AI deepfakes from its search results
Google is upgrading its safety features to make it easier to remove deepfakes from Search while also preventing them from showing up higher in search results. While users can already successfully request the removal of explicit deepfakes, Google now wants to make the process easier by automatically filtering out related search results as well as similar or duplicate images. Google will also demote websites that repeatedly contain AI deepfakes in their search ranking. Google is showing fewer AI-generated search results. In a blog post announcing the changes, Google product manager Emma Higham said, "This approach has worked well for other types of harmful content, and our testing shows that it will be a valuable way to reduce fake explicit content in search results." The search giant shared that past updates reduced exposure to explicit image results for queries around deepfakes by over 70% this year. "With these changes, people can read about the impact deepfakes are having on society, rather than see pages with actual non-consensual fake images," she said. Earlier, in May, Google began removing advertisers who were promoting deepfake porn services. In 2022, it also expanded the types of "doxxing" content that can be removed, and in August 2023 it started blurring sexually explicit imagery by default. Non-consensual AI deepfakes have become an increasing cause of concern for tech firms. Meta was recently investigated by its Oversight Board for failing to adequately handle sexually explicit deepfakes of real women.
[11]
Google Takes Action Against Deepfake Porn In Search Results As Others Like Mark Zuckerberg's Meta And Elon Musk's X Also Tackle The Issue - Alphabet (NASDAQ:GOOG), Alphabet (NASDAQ:GOOGL)
Alphabet Inc.'s GOOG GOOGL Google has introduced new measures to combat the spread of explicit deepfake content in its search results. What Happened: In a blog post on Thursday, Google product manager Emma Higham said that the tech giant is introducing new online safety features designed to simplify the removal of explicit deepfakes from Search and prevent them from ranking highly in results. "These protections have already proven to be successful in addressing other types of non-consensual imagery, and we've now built the same capabilities for fake explicit images as well," she stated. Google is also modifying its Search rankings to better manage queries that carry a higher risk of surfacing explicit fake content. Sites that receive a significant number of removals for fake explicit imagery will be demoted in Google Search rankings. The company disclosed that previous updates have reduced exposure to explicit image results on queries specifically looking for deepfake content by over 70% this year. Google is also working on distinguishing between real and fake explicit content so that legitimate images can still be surfaced while demoting deepfakes. "While differentiating between this content is a technical challenge for search engines, we're making ongoing improvements to better surface legitimate content and downrank explicit fake content," Higham noted. Why It Matters: In January this year, AI-generated pornographic images of Taylor Swift, which were circulated widely on Elon Musk's X, formerly Twitter, triggered a major uproar. While the platform eventually removed those images, the incident provoked a wider concern among users, tech behemoths, and lawmakers alike. Last month, U.S.
lawmakers introduced a bill, the Take It Down Act, mandating social media companies like Meta Platforms Inc. META and X to remove such images from their platforms. Earlier this month, Meta's internal Oversight Board called for clearer regulations against AI-generated pornographic content. This followed the identification of two pornographic deepfakes of prominent women on Meta's platforms. Musk's social media network X also witnessed a disturbing surge in pornographic content, causing discomfort among its users. The platform subsequently revised its content policy to include an opt-in mechanism for adult content. Photo Courtesy: Shutterstock.com Check out more of Benzinga's Consumer Tech coverage by following this link. Read Next: First iPhone 16 Models May Not Have Apple Intelligence Features As Cupertino Delays AI Integration For iOS 18 Overhaul: Report Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors. Market News and Data brought to you by Benzinga APIs
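The demotion signal described above, where sites accumulating many validated removal requests rank lower, can be illustrated with a toy scoring function. This is purely a hypothetical sketch: Google has not published how its demotion signal works, and every name, parameter, and threshold below is invented for illustration.

```python
# Illustrative sketch only -- not Google's algorithm. Assumes a
# hypothetical per-site count of validated removal requests and applies
# a logarithmic penalty to a base relevance score, so sites with many
# removals sink in the rankings while one-off cases are unaffected.
import math

def demoted_score(base_score: float, removal_count: int,
                  threshold: int = 10, penalty: float = 0.15) -> float:
    """Reduce a site's ranking score once validated removals pass a threshold."""
    if removal_count <= threshold:
        return base_score
    # Penalty grows with the log of excess removals, capped at a 90% cut.
    factor = 1.0 - min(0.9, penalty * math.log1p(removal_count - threshold))
    return base_score * factor
```

The logarithm keeps the penalty proportionate: going from 20 to 40 removals hurts more than going from 1,020 to 1,040, and the cap prevents a score from being driven to zero by the signal alone.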
[12]
Google upgrades Search to combat deepfakes and demote sites posting them
The company is also expanding privacy protections for users. Here's how.

Generative AI has made identifying synthetic content and protecting user privacy much more challenging. In an effort to improve information literacy and increase data protections, Google has made changes to Search that combat deepfakes and make taking control of your information a little easier.

The company detailed its improvements to how it handles explicit fake content, or non-consensual deepfakes, in Search. While you have been able to request that Google remove this content from search results for years, Google will now filter out all duplicates of a removed image, as well as explicit results that arise from similar searches about you, not just the search result named in the original removal request. In theory, this should cover more ground, removing harmful content from the corners it may be hiding in even after someone has successfully requested a removal. The process applies to both real non-consensual images and fake explicit imagery.

Google also updated its ranking systems "for queries where there's a higher risk of explicit fake content appearing in Search." These updates will prioritize surfacing high-quality, non-explicit results for queries that include people's names whenever such results are available. The company says its updates have already reduced explicit content exposure by more than 70%. The changes aim to surface content that educates users about deepfakes rather than the deepfakes themselves. Google will also demote sites that it has received many removal requests about.

As part of its Search improvements, Google is also adding its "About this image" contextualizing feature to both Circle to Search and Google Lens.

Say, for example, that a friend texts you an outrageous-looking image. You can simply circle it on your Android device and open the "About this image" tab in Google Search, which will contain information about the photo's origins based on what the search engine can find. If you're using Google Lens, you can screenshot or download the image in question, open it in the Google app, and tap the Lens icon. This capability is available to both iOS and Android users.

"About this image" surfaces information from other sites, including news and fact-checking platforms, that describe the image. This context can help debunk a photo that is being used outside its original context, for example, or that has been altered to misrepresent information. Google also refers to the image's metadata for clues about the image's history and how or when it was created, though metadata can be added or removed when an image is posted online. Some of this data can indicate whether an image is synthetic, i.e. generated or edited using AI.

At a briefing and demo attended by ZDNET, Google didn't specify how it verifies the AI origins of an image, but did note that the technology is still in its rudimentary stages. The "About this image" feature can detect whether an image was generated with AI if it contains Google DeepMind's SynthID watermark, which is embedded in the pixels of any image created using Google's AI tools.

These moves aim to make finding context for what you see online easier, as part of Google's information literacy initiative. If embraced, media tools like this can help voters navigate an election cycle rife with synthetic political content and misinformation. Available in 40 languages, "About this image" is accessible now in Circle to Search on the latest Samsung and Pixel phones, foldables, and tablets, and in Google Lens via the Google app for both Android and iOS.
In a significant move to address the growing concern of nonconsensual explicit deepfakes, Google has announced new measures to combat their spread through its search engine. The tech giant is updating its policies and tools to make it easier for victims to remove such content from search results, marking a crucial step in the fight against this form of digital exploitation [1].
Google's updated policy now explicitly prohibits "explicit synthetic content" that depicts an identifiable individual in a pornographic or sexually explicit situation without their consent [2]. This includes both AI-generated deepfakes and more traditional forms of manipulated media. The company has streamlined the removal request process, allowing individuals to submit requests for multiple URLs in a single form, significantly reducing the burden on victims [3].
In addition to reactive measures, Google is taking proactive steps to address the issue. The company is working on improving its ability to detect and automatically remove explicit synthetic content from search results [4]. However, the task is challenging due to the rapid advancement of the AI technology used to create these deepfakes, which makes them increasingly difficult to distinguish from genuine content.
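Part of removing this content at scale is recognizing re-uploads and near-copies of an image that has already been taken down. Google has not disclosed how its duplicate detection works; as a purely illustrative sketch, the classic "average hash" technique fingerprints an image so that near-identical copies produce nearly identical 64-bit hashes. The example below assumes the image has already been reduced to an 8x8 grayscale thumbnail, given as a flat list of 64 pixel values.

```python
# Illustrative average-hash (aHash) sketch of near-duplicate image
# matching. Real search-scale pipelines use far more robust
# fingerprints; this only shows the underlying idea.

def average_hash(pixels: list[int]) -> int:
    """Build a 64-bit fingerprint: bit is 1 where a pixel exceeds the mean."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two fingerprints differ."""
    return bin(a ^ b).count("1")

def is_near_duplicate(a: int, b: int, max_distance: int = 5) -> bool:
    """Treat images as duplicates when their hashes differ in only a few bits."""
    return hamming_distance(a, b) <= max_distance
```

Because small edits (recompression, mild brightness shifts, minor crops) barely move pixels relative to the image mean, copies of a removed image land within a few bits of the original fingerprint and can be filtered without anyone re-filing a request for each copy.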
Google's initiative is part of a larger trend in the tech industry to combat the misuse of AI-generated content. Other platforms, such as Pornhub and Reddit, have also implemented policies against nonconsensual deepfakes [5]. These efforts reflect growing awareness of the potential harm caused by such content, including emotional distress, reputational damage, and privacy violations.
While Google's move has been largely welcomed, it also raises questions about the balance between protecting individuals and maintaining free speech online. The company acknowledges the complexity of the issue, stating that it aims to remove harmful content while preserving access to educational, documentary, and artistic content that may include nudity or discussions of sex [1].
As AI technology continues to evolve, the challenge of combating explicit deepfakes is likely to grow more complex. Google's actions represent an important step in addressing the issue, but experts suggest that a comprehensive solution will require ongoing collaboration between tech companies, policymakers, and advocacy groups to stay ahead of emerging threats and protect individuals from digital exploitation [4].
Google is set to implement a new feature in its search engine that will label AI-generated images. This move aims to enhance transparency and combat the spread of misinformation through deepfakes.
14 Sources
Microsoft introduces a new tool to help victims remove non-consensual intimate images, including AI-generated deepfakes, from Bing search results. This initiative aims to protect individuals from online exploitation and harassment.
2 Sources
Major AI companies have committed to developing technology to detect and prevent the creation of non-consensual deepfake pornography. This initiative, led by the White House, aims to address the growing concern of AI-generated explicit content.
8 Sources
A new study reveals that 1 in 6 congresswomen have been victims of AI-generated sexually explicit deepfakes, highlighting the urgent need for legislative action to combat this growing threat.
6 Sources
Google announces plans to label AI-generated images in search results, aiming to enhance transparency and help users distinguish between human-created and AI-generated content.
2 Sources
© 2025 TheOutpost.AI All rights reserved