The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2024 TheOutpost.AI All rights reserved
Curated by THEOUTPOST
On August 29, 2024
2 Sources
[1]
Deepfake website developers blocked from Sign in with Apple
Apple has blocked some developers from using Sign in with Apple after a report found that popular sign-in tools were being used by websites offering harmful AI image "undressing" services. While Apple Intelligence and other generative AI efforts often offer legitimate and ethical ways for users to alter an image, some go the opposite way. The rise of deepfakes has led to a cottage industry of sites that let users submit photographs and have AI digitally remove the subject's clothes. The sites, referred to as undress or "nudify" sites, can enable abuse and are a problem for tech companies.

Wired reports that sign-on infrastructure from tech giants is being used on the sites, including Sign in with Apple. Of the 16 sites examined in the report, Sign in with Apple was used on six. Google's sign-in API was used on all 16, Discord's on 13, and Patreon's and Line's on two each. Since being alerted to the issue, both Apple and Discord said they had removed API access for the developers responsible. Google said it would do the same in cases where its terms were violated, while Patreon said it prohibits accounts that allow explicit imagery to be produced.

The use of the authentication systems provides a visible label of supposed credibility for the sites, even though their ownership and operation are extremely opaque; few details about the operators are known. While action against the sites' log-in systems won't shut them down, the sites will most likely be subject to more action in the future. Many of the sites touted the use of Mastercard and Visa for payments, displaying their logos. A Mastercard spokesperson said that purchases of "nonconsensual deepfake content are not allowed on our network," and that the company is prepared to take further action.
[2]
Google, Apple, and Discord Let Harmful AI 'Undress' Websites Use Their Sign-On Systems
Single sign-on systems from several Big Tech companies are being incorporated into deepfake generators, WIRED found. Discord and Apple have started to terminate some developers' accounts.

Major technology companies, including Google, Apple, and Discord, have been enabling people to quickly sign up to harmful "undress" websites, which use AI to remove clothes from real photos to make victims appear to be "nude" without their consent. More than a dozen of these deepfake websites have been using login buttons from the tech companies for months. A WIRED analysis found 16 of the biggest so-called undress and "nudify" websites using the sign-in infrastructure of Google, Apple, Discord, Twitter, Patreon, and Line. This approach allows people to easily create accounts on the deepfake websites -- offering them a veneer of credibility -- before they pay for credits and generate images.

While bots and websites that create nonconsensual intimate images of women and girls have existed for years, their number has increased with the introduction of generative AI. This kind of "undress" abuse is alarmingly widespread, with teenage boys allegedly creating images of their classmates. Tech companies have been slow to deal with the scale of the issue, critics say, with the websites ranking high in search results, paid advertisements promoting them on social media, and apps showing up in app stores.

"This is a continuation of a trend that normalizes sexual violence against women and girls by Big Tech," says Adam Dodge, a lawyer and founder of EndTAB (Ending Technology-Enabled Abuse). "Sign-in APIs are tools of convenience. We should never be making sexual violence an act of convenience," he says. "We should be putting up walls around the access to these apps, and instead we're giving people a drawbridge." The sign-in tools analyzed by WIRED, which are deployed through APIs and common authentication methods, allow people to use existing accounts to join the deepfake websites.
Google's login system appeared on 16 websites, Discord's appeared on 13, and Apple's on six. X's button was on three websites, with Patreon and messaging service Line's both appearing on the same two websites. WIRED is not naming the websites, since they enable abuse. Several are part of wider networks and owned by the same individuals or companies. The login systems have been used despite the tech companies broadly having rules that state developers cannot use their services in ways that would enable harm, harassment, or invade people's privacy. After being contacted by WIRED, spokespeople for Discord and Apple said they have removed the developer accounts connected to their websites. Google said it will take action against developers when it finds its terms have been violated. Patreon said it prohibits accounts that allow explicit imagery to be created, and Line confirmed it is investigating but said it could not comment on specific websites. X did not reply to a request for comment about the way its systems are being used. In the hours after Jud Hoffman, Discord vice president of trust and safety, told WIRED it had terminated the websites' access to its APIs for violating its developer policy, one of the undress websites posted in a Telegram channel that authorization via Discord was "temporarily unavailable" and claimed it was trying to restore access. That undress service did not respond to WIRED's request for comment about its operations. Since deepfake technology emerged toward the end of 2017, the number of nonconsensual intimate videos and images being created has grown exponentially. While videos are harder to produce, the creation of images using "undress" or "nudify" websites and apps has become commonplace. "We must be clear that this is not innovation, this is sexual abuse," says David Chiu, San Francisco's city attorney, who recently opened a lawsuit against undress and nudify websites and their creators. 
Chiu says the 16 websites his office's lawsuit focuses on have had around 200 million visits in the first six months of this year alone. "These websites are engaged in horrific exploitation of women and girls around the globe. These images are used to bully, humiliate, and threaten women and girls," Chiu alleges.
Tech giants Apple and Discord have taken action against AI-powered deepfake websites by revoking developer access to their sign-in services, with Google pledging to do the same where its terms are violated. This move aims to combat the misuse of technology for creating non-consensual explicit content.
In a significant move to combat the misuse of artificial intelligence, Apple and Discord have blocked sign-in access for websites that use AI to generate deepfake content, and Google has said it will do the same where its terms are violated [1]. This action comes in response to growing concerns about the potential harm caused by AI-powered tools that can create non-consensual explicit images.
Deepfake technology, which uses AI to create or manipulate visual and audio content, has become increasingly sophisticated and accessible. While it has legitimate applications in entertainment and education, it has also been misused to create fake pornographic content without the subject's consent. This has raised serious ethical concerns and potential legal issues [2].
Apple has taken a proactive stance by revoking access to its "Sign in with Apple" feature for websites that offer AI-powered clothing removal services [1]. This decision aligns with Apple's commitment to user privacy and security, as stated in its guidelines for using the sign-in service.
Google has signaled it will follow suit, saying it will take action against developers whose websites use its sign-in service to generate explicit deepfake content in violation of its terms [2]. This coordinated response from the largest tech companies sends a strong message about the industry's stance on the ethical use of AI technology.
The blocking of these sign-in services has significant implications for deepfake websites. Many users rely on these convenient login options, and the loss of access could potentially reduce traffic and user engagement on these platforms. This move may force some websites to reconsider their services or implement stricter content policies [1][2].
This action by Apple and Google highlights the growing need for regulation and ethical guidelines in the rapidly evolving field of AI. As deepfake technology becomes more advanced and accessible, there is an increasing call for tech companies and lawmakers to address the potential for misuse and protect individuals from non-consensual content creation [2].
While these measures are a step in the right direction, they also raise questions about the balance between technological innovation and ethical considerations. The AI industry continues to grapple with how to harness the potential of deepfake technology for positive applications while preventing its misuse [1][2].
As the debate around AI-generated content continues, it's likely that we'll see more tech companies and platforms implementing similar protective measures. The actions taken by Apple and Google may set a precedent for how the industry addresses the ethical challenges posed by AI technology in the future [1][2].
San Francisco's city attorney has filed a lawsuit against websites creating AI-generated nude images of women and girls without consent. The case highlights growing concerns over AI technology misuse and its impact on privacy and consent.
12 Sources
Major AI companies have committed to developing technology to detect and prevent the creation of non-consensual deepfake pornography. This initiative, led by the White House, aims to address the growing concern of AI-generated explicit content.
8 Sources
Google is implementing new measures to combat the spread of nonconsensual explicit deepfakes. The tech giant is updating its policies and tools to make it easier for victims to remove such content from search results.
12 Sources
As AI-generated images become more prevalent, concerns about their impact on society grow. This story explores methods to identify AI-created images and examines how major tech companies are addressing the issue of explicit deepfakes.
2 Sources
The rapid proliferation of AI-generated child sexual abuse material (CSAM) is overwhelming tech companies and law enforcement. This emerging crisis highlights the urgent need for improved regulation and detection methods in the digital age.
9 Sources