6 Sources
[1]
YouTube's likeness detection has arrived to help stop AI doppelgängers
AI content has proliferated across the Internet over the past few years, but those early confabulations with mutated hands have evolved into synthetic images and videos that can be hard to differentiate from reality. Having helped to create this problem, Google has some responsibility to keep AI video in check on YouTube. To that end, the company has started rolling out its promised likeness detection system for creators.

Google's powerful and freely available AI models have helped fuel the rise of AI content, some of which is aimed at spreading misinformation and harassing individuals. Creators and influencers fear their brands could be tainted by a flood of AI videos that show them saying and doing things that never happened -- even lawmakers are fretting about this. Google has placed a large bet on the value of AI content, so banning AI from YouTube, as many want, simply isn't happening.

Earlier this year, YouTube promised tools that would flag face-stealing AI content on the platform. The likeness detection tool, which is similar to the site's copyright detection system, has now expanded beyond the initial small group of testers. YouTube says the first batch of eligible creators have been notified they can use likeness detection, but interested parties will need to hand Google even more personal information to get protection from AI fakes.

Currently, likeness detection is a beta feature in limited testing, so not all creators will see it as an option in YouTube Studio. When it does appear, it will be tucked into the existing "Content detection" menu. In YouTube's demo video, the setup flow appears to assume the channel has only a single host whose likeness needs protection. That person must verify their identity, which requires a photo of a government ID and a video of their face. It's unclear why YouTube needs this data in addition to the videos people have already posted with their oh-so stealable faces, but rules are rules.
No guarantees

After signing up, YouTube will flag videos from other channels that appear to have the user's face. YouTube's algorithm can't know for sure what is and is not an AI video, so some of the face match results may be false positives from channels that have used a short clip under fair use guidelines. If creators do spot an AI fake, they can add some details and submit a report in a few minutes. If the video includes content copied from the creator's channel that does not adhere to fair use guidelines, YouTube suggests also submitting a copyright removal request.

However, just because a person's likeness appears in an AI video does not necessarily mean YouTube will remove it. YouTube has published a rundown of the factors its reviewers will take into account when deciding whether or not to approve a removal request. For example, parody content labeled as AI or videos with an unrealistic style may not meet the threshold for removal. On the flip side, you can safely assume that a realistic AI video showing someone endorsing a product or engaging in illegal activity will run afoul of the rules and be removed from YouTube.

While this may be an emerging issue for creators right now, AI content on YouTube is likely to kick into overdrive soon. Google recently unveiled its new Veo 3.1 video model, which includes support for both portrait and landscape AI videos. The company has previously promised to integrate Veo with YouTube, making it even easier for people to churn out AI slop that may include depictions of real people. Google rival OpenAI has seen success (at least in terms of popularity) with its Sora AI video app and the new Sora 2 model powering it. This could push Google to accelerate its AI plans for YouTube, but as we've seen with Sora, people love making public figures do weird things. Popular creators may have to begin filing AI likeness complaints as regularly as they do DMCA takedowns.
[2]
YouTube's likeness detection technology has officially launched | TechCrunch
YouTube revealed on Tuesday that its likeness detection technology has officially rolled out to eligible creators in the YouTube Partner Program, following a pilot phase. The technology allows creators to request the removal of AI-generated content that uses their likeness. This is the first wave of the rollout, a YouTube spokesperson informed TechCrunch, adding that eligible creators received emails this morning.

YouTube's detection technology identifies and manages AI-generated content featuring the likeness of creators, such as their face and voice. The technology is designed to prevent people from having their likeness misused, whether for endorsing products and services they have not agreed to support or for spreading misinformation. There have been plenty of examples of AI likeness misuse in recent years, such as the company Elecrow using an AI clone of YouTuber Jeff Geerling's voice to promote its products.

On its Creator Insider channel, the company provided instructions on how creators can use the technology. To begin the onboarding process, creators need to go to the "Likeness" tab, consent to data processing, and use their smartphone to scan a QR code displayed on the screen, which will direct them to a web page for identity verification. This process requires a photo ID and a brief selfie video. Once YouTube grants access to use the tool, creators can view all detected videos and submit a removal request according to YouTube's privacy guidelines, or they can make a copyright request. There is also an option to archive the video. Creators can opt out of using the technology at any time, and YouTube will stop scanning for videos 24 hours after they do so.

Likeness detection technology has been in pilot mode since earlier this year. YouTube first announced last year that it had partnered with Creative Artists Agency (CAA) to help celebrities, athletes, and creators identify content on the platform that uses their AI-generated likeness.
In April, YouTube expressed its backing for the legislation known as the NO FAKES Act, which seeks to address the issue of AI-generated replicas that imitate a person's image or voice to deceive others and generate harmful content.
[3]
YouTube's AI 'likeness detection' tool is searching for deepfakes of popular creators
Starting today, creators in YouTube's Partner Program are getting access to a new AI detection feature that will allow them to find and report unauthorized uploads using their likeness. As shown in this video from YouTube, after verifying their identity, creators can review flagged videos in the Content Detection tab on YouTube Studio. If a video looks like unauthorized, AI-generated content, creators can submit a request for it to be removed. The first wave of eligible creators was notified via email this morning, and the feature will roll out to more creators over the next few months. YouTube warned early users in a guide on the feature that, in its current in-development state, it "may display videos featuring your actual face, not altered or synthetic versions," such as clips of a creator's own content. It works similarly to Content ID, which YouTube uses to detect copyrighted audio and video content. YouTube originally announced this feature last year and began testing it in December through a pilot program with talent represented by Creative Artists Agency (CAA). YouTube's blog post at the time said, "Through this collaboration, several of the world's most influential figures will have access to early-stage technology designed to identify and manage AI-generated content that features their likeness, including their face, on YouTube at scale." YouTube and Google are among many tech firms pushing AI video generation and editing tools, and the likeness detection tool isn't their only feature in development to deal with AI-generated content on the platform. Last March, YouTube also began requiring creators to label uploads that include content generated or altered using AI and announced a strict policy around AI-generated music "that mimics an artist's unique singing or rapping voice."
[4]
YouTube is rolling out likeness detection tool to combat deepfakes
When AI tools first began proliferating around the web, worries about deepfakes quickly rose alongside them. And now that tech such as OpenAI's recently released Sora 2 is getting more capable and more widely available (and being used exactly as irresponsibly as you might have guessed), both famous and ordinary people may want more control over protecting their likenesses. After teasing the feature last year, YouTube is starting to launch a likeness detection tool to combat unwanted deepfakes and have them removed from the video platform. Likeness detection is currently being rolled out to members of the YouTube Partner Program. It's also only able to cover instances where an individual's face has been modified with AI; cases where a person's voice has been changed by AI without their consent may not be caught by this feature. To participate, people will need to submit a government ID and a brief video selfie to YouTube to ensure they are who they say they are and give the feature source material to draw from in its review. From there, it works similarly to YouTube's Content ID feature for finding copyrighted audio, scanning uploaded videos for possible matches that the person can then review and flag infringing videos for removal.
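Several of the articles above note that the tool works like Content ID: uploads are scanned for possible matches against the verified reference material and surfaced to the creator for human review. YouTube has not published how its matcher actually works, so the sketch below is purely hypothetical: it illustrates the general embedding-comparison idea (score each candidate against a reference vector, flag anything above a threshold for review) using toy vectors and cosine similarity. All names, the threshold value, and the three-dimensional "embeddings" are invented for illustration.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def flag_possible_matches(reference, frame_embeddings, threshold=0.8):
    """Return indices of frames whose embedding is close to the reference.

    Hits above the threshold are only *candidates*: as the articles note,
    a high score can come from a fair-use clip of the creator's real face,
    not a deepfake, so every match still goes to human review.
    """
    return [
        i for i, emb in enumerate(frame_embeddings)
        if cosine_similarity(reference, emb) >= threshold
    ]

# Toy 3-dimensional "embeddings" for illustration only.
reference = [1.0, 0.0, 0.0]
frames = [
    [0.9, 0.1, 0.0],    # close to the reference -> candidate match
    [0.0, 1.0, 0.0],    # unrelated face
    [0.95, 0.05, 0.1],  # close to the reference -> candidate match
]
print(flag_possible_matches(reference, frames))  # [0, 2]
```

This also makes the false-positive problem concrete: a clip of the creator's own face scores just as high as a synthetic copy, which is why YouTube warns that the tool "may display videos featuring your actual face."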
[5]
YouTube Rolls Out AI Likeness Detection Tool to Prevent Deepfakes
YouTube today began rolling out a new AI likeness detection feature, which lets creators detect, manage, and request the removal of unauthorized videos that use AI to generate or alter the creator's facial likeness. According to YouTube, the feature is meant to safeguard identities and prevent audiences from being misled by deepfakes. The likeness detection tool is available in YouTube Studio under the "Content detection" tab. After completing an identity verification process that requires a photo ID and a selfie video, creators will be alerted if there are any AI-generated videos that use their likeness. YouTube Studio will show a list of videos with titles, channel, views, and dialogue, along with an option to request a removal. The tool supports likeness removal requests for AI videos, and copyright removal requests in case someone has used copyright-protected content without permission. YouTube creators who are members of the YouTube Partner Program will get access to the likeness detection tool over the next few months. In a statement to TheWrap, YouTube said that the first creators selected to use the feature are those that "may have the most immediate use for the tool." All monetized creators will have access by January 2026.
[6]
YouTube's new AI tool hopes to stop the deepfake menace
What's happened? In the ever-evolving realm of content creation, AI-generated deepfakes pose a serious threat, both for creators and viewers. To help curb the issue, YouTube has launched a new tool called likeness detection. Following a pilot test earlier this year, YouTube has officially launched the likeness detection feature for eligible creators in the YouTube Partner Program. This AI-powered tool detects and identifies unauthorized AI-generated videos that use a creator's facial likeness or voice. To use the feature, creators must verify their identity using a government-issued photo ID and a selfie video.

Why is this important? The new likeness detection feature addresses the growing misuse of a creator's face or voice in misleading content, such as fake endorsements or misinformation. Using the feature, creators can detect fake videos, request removals (based on YouTube's privacy guidelines), submit copyright requests, or archive videos. It also supports broader legislative efforts, like the NO FAKES Act of 2024, aimed at regulating AI-generated content.

Why should I care? The likeness detection feature contributes to a safer online environment where viewers can consume content without the anxiety of encountering deceptive videos. To enable the feature, YouTube Partner Program members should tap Content detection on the YouTube Studio dashboard, select Likeness, give the platform permission to process their data, and use their smartphone to complete the identity verification process. The feature can be disabled at any point, and YouTube will stop analyzing videos within 24 hours. For viewers, the feature ensures greater trust in the content they view on a regular basis, reducing the risk of being misled by deepfake videos.

OK, what's next? The likeness detection feature will continue to roll out to more YouTube creators, likely with improvements to detection accuracy and the review process.
Further, other content creation and sharing platforms could adopt a similar version of the AI likeness detection feature.
YouTube has officially rolled out its AI likeness detection technology to eligible creators in the YouTube Partner Program. This tool aims to identify and manage AI-generated content featuring creators' likenesses, helping to combat deepfakes and unauthorized use of their image.
In a significant move to address the growing concern of AI-generated deepfakes, YouTube has officially launched its likeness detection technology for eligible creators in the YouTube Partner Program [1][2]. This tool, which has been in development and testing for some time, aims to identify and manage AI-generated content that uses creators' likenesses without authorization.

The new feature, accessible through the "Content detection" menu in YouTube Studio, allows creators to detect, review, and request the removal of videos that use AI to generate or alter their facial likeness [3]. To use the tool, creators must complete an identity verification process, which includes submitting a government ID and a brief selfie video [4].

Once verified, the system scans uploaded videos for potential matches, similar to YouTube's existing Content ID system for copyright detection. Creators can then review flagged videos and submit removal requests if they find unauthorized use of their likeness [5].

The introduction of this tool comes at a crucial time when AI-generated content is proliferating across the internet. Creators and influencers have expressed concerns about the potential misuse of their image and brand through AI-generated videos that could show them saying or doing things they never actually did [1].

YouTube's approach aims to strike a balance between allowing AI-generated content on the platform and protecting creators' rights. The company has stated that not all AI-generated content featuring a creator's likeness will automatically be removed. Factors such as parody, fair use, and the realistic nature of the content will be considered in the review process [1].

The first wave of eligible creators received notifications about the tool's availability on October 21, 2025. YouTube plans to expand access to more creators over the coming months, with all monetized creators expected to have access by January 2026 [5].

While the current version of the tool focuses on facial likeness, YouTube acknowledges that it may not catch instances where only a person's voice has been altered by AI [4]. This suggests that further developments may be necessary to address the full spectrum of AI-generated content challenges.

As AI technology continues to advance, with tools like Google's Veo 3.1 and OpenAI's Sora 2 making AI video creation more accessible, YouTube's likeness detection tool represents an important step in the ongoing effort to protect creators' identities and maintain trust in online content [1][2].