19 Sources
[1]
YouTube's likeness detection has arrived to help stop AI doppelgängers
AI content has proliferated across the Internet over the past few years, but those early confabulations with mutated hands have evolved into synthetic images and videos that can be hard to differentiate from reality. Having helped to create this problem, Google has some responsibility to keep AI video in check on YouTube. To that end, the company has started rolling out its promised likeness detection system for creators. Google's powerful and freely available AI models have helped fuel the rise of AI content, some of which is aimed at spreading misinformation and harassing individuals. Creators and influencers fear their brands could be tainted by a flood of AI videos that show them saying and doing things that never happened -- even lawmakers are fretting about this. Google has placed a large bet on the value of AI content, so banning AI from YouTube, as many want, simply isn't happening. Earlier this year, YouTube promised tools that would flag face-stealing AI content on the platform. The likeness detection tool, which is similar to the site's copyright detection system, has now expanded beyond the initial small group of testers. YouTube says the first batch of eligible creators have been notified they can use likeness detection, but interested parties will need to hand Google even more personal information to get protection from AI fakes. Currently, likeness detection is a beta feature in limited testing, so not all creators will see it as an option in YouTube Studio. When it does appear, it will be tucked into the existing "Content detection" menu. In YouTube's demo video, the setup flow appears to assume the channel has only a single host whose likeness needs protection. That person must verify their identity, which requires a photo of a government ID and a video of their face. It's unclear why YouTube needs this data in addition to the videos people have already posted with their oh-so stealable faces, but rules are rules. 
No guarantees

After signing up, YouTube will flag videos from other channels that appear to have the user's face. YouTube's algorithm can't know for sure what is and is not an AI video. So some of the face match results may be false positives from channels that have used a short clip under fair use guidelines. If creators do spot an AI fake, they can add some details and submit a report in a few minutes. If the video includes content copied from the creator's channel that does not adhere to fair use guidelines, YouTube suggests also submitting a copyright removal request. However, just because a person's likeness appears in an AI video does not necessarily mean YouTube will remove it. YouTube has published a rundown of the factors its reviewers will take into account when deciding whether or not to approve a removal request. For example, parody content labeled as AI or videos with an unrealistic style may not meet the threshold for removal. On the flip side, you can safely assume that a realistic AI video showing someone endorsing a product or engaging in illegal activity will run afoul of the rules and be removed from YouTube. While this may be an emerging issue for creators right now, AI content on YouTube is likely to kick into overdrive soon. Google recently unveiled its new Veo 3.1 video model, which includes support for both portrait and landscape AI videos. The company has previously promised to integrate Veo with YouTube, making it even easier for people to churn out AI slop that may include depictions of real people. Google rival OpenAI has seen success (at least in terms of popularity) with its Sora AI video app and the new Sora 2 model powering it. This could push Google to accelerate its AI plans for YouTube, but as we've seen with Sora, people love making public figures do weird things. Popular creators may have to begin filing AI likeness complaints as regularly as they do DMCA takedowns.
[2]
YouTube's likeness detection technology has officially launched | TechCrunch
YouTube revealed on Tuesday that its likeness detection technology has officially rolled out to eligible creators in the YouTube Partner Program, following a pilot phase. The technology allows creators to request the removal of AI-generated content that uses their likeness. This is the first wave of the rollout, a YouTube spokesperson informed TechCrunch, adding that eligible creators received emails this morning. YouTube's detection technology identifies and manages AI-generated content featuring the likeness of creators, such as their face and voice. The technology is designed to prevent people from having their likeness misused, whether for endorsing products and services they have not agreed to support or for spreading misinformation. There have been plenty of examples of AI likeness misuse in recent years, such as the company Elecrow using an AI clone of YouTuber Jeff Geerling's voice to promote its products. On its Creator Insider channel, the company provided instructions on how creators can use the technology. To begin the onboarding process, creators need to go to the "Likeness" tab, consent to data processing, and use their smartphone to scan a QR code displayed on the screen, which will direct them to a web page for identity verification. This process requires a photo ID and a brief selfie video. Once YouTube grants access to use the tool, creators can view all detected videos and submit a removal request according to YouTube's privacy guidelines, or they can make a copyright request. There is also an option to archive the video. Creators can opt out of using the technology at any time, and YouTube will stop scanning for videos 24 hours after they do so. Likeness detection technology has been in pilot mode since earlier this year. YouTube first announced last year that it had partnered with Creative Artists Agency (CAA) to help celebrities, athletes, and creators identify content on the platform that uses their AI-generated likeness. 
In April, YouTube voiced its support for legislation known as the NO FAKES Act, which seeks to address the issue of AI-generated replicas that imitate a person's image or voice to deceive others and generate harmful content.
[3]
YouTube's AI 'likeness detection' tool is searching for deepfakes of popular creators
Starting today, creators in YouTube's Partner Program are getting access to a new AI detection feature that will allow them to find and report unauthorized uploads using their likeness. As shown in this video from YouTube, after verifying their identity, creators can review flagged videos in the Content Detection tab on YouTube Studio. If a video looks like unauthorized, AI-generated content, creators can submit a request for it to be removed. The first wave of eligible creators was notified via email this morning, and the feature will roll out to more creators over the next few months. YouTube warned early users in a guide on the feature that, in its current in-development state, it "may display videos featuring your actual face, not altered or synthetic versions," such as clips of a creator's own content. It works similarly to Content ID, which YouTube uses to detect copyrighted audio and video content. YouTube originally announced this feature last year and began testing it in December through a pilot program with talent represented by Creative Artists Agency (CAA). YouTube's blog post at the time said, "Through this collaboration, several of the world's most influential figures will have access to early-stage technology designed to identify and manage AI-generated content that features their likeness, including their face, on YouTube at scale." YouTube and Google are among many tech firms pushing AI video generation and editing tools, and the likeness detection tool isn't their only feature in development to deal with AI-generated content on the platform. Last March, YouTube also began requiring creators to label uploads that include content generated or altered using AI and announced a strict policy around AI-generated music "that mimics an artist's unique singing or rapping voice."
[4]
YouTube Rolls Out AI Likeness Detection Tool to Help Creators Fight Deepfakes
YouTube has begun rolling out an AI-powered likeness detection tool to help creators spot videos where their face may be altered or generated using AI. The tool is currently available to a limited set of creators in YouTube Studio's Content detection tab and will expand to all creators in the YouTube Partner Program over the next few months. To sign up for the tool, creators will have to submit proof of their government ID and video scans of their face. If a new video with their face gets uploaded, the tool will list them in the Content detection tab, where creators can view the flagged videos and request actions, such as copyright removal or likeness removal. They can also archive the listing if it seems harmless. Likeness detection works like Content ID, except that it looks for facial likeness instead of copyrighted audio or video, YouTube says. It is aimed at safeguarding a creator's identity and ensuring their audience isn't misled about what they endorse and what they don't, the company adds. Setting up likeness detection can take up to five days and can only be done by the Channel Owner or users listed as Managers. Editors can take action on flagged videos, but they won't be able to set up the tool themselves. In case a video with facial likeness fails to show up on the dashboard, creators can manually request a privacy review. YouTube says the tool is "still being tuned and refined" so there's a chance it won't pick up on all instances of your likeness being used. The tool comes as video generation tools, including Google's own Veo 3 and OpenAI's Sora 2, have made it difficult for the audience to distinguish between actual people and their deepfakes.
This tool was announced last year in partnership with Creative Artists Agency (CAA) and piloted in December to help actors and athletes take down their deepfakes.
[5]
YouTube is rolling out likeness detection tool to combat deepfakes
When AI tools first began proliferating around the web, worries about deepfakes quickly rose alongside them. And now that tech such as OpenAI's recently released Sora 2 is getting more capable and more widely available (and being used exactly as irresponsibly as you might have guessed), both famous and ordinary people may want more control over protecting their likenesses. After teasing the feature last year, YouTube is starting to launch a likeness detection tool to combat unwanted deepfakes and have them removed from the video platform. Likeness detection is currently being rolled out to members of the YouTube Partner Program. It's also only able to cover instances where an individual's face has been modified with AI; cases where a person's voice has been changed by AI without their consent may not be caught by this feature. To participate, people will need to submit a government ID and a brief video selfie to YouTube to ensure they are who they say they are and give the feature source material to draw from in its review. From there, it works similarly to YouTube's Content ID feature for finding copyrighted audio, scanning uploaded videos for possible matches that the person can then review and flag infringing videos for removal.
[6]
YouTube launches AI detection tool to spot deepfakes using creators' faces and voices
The takeaway: The new system positions YouTube among the first major online platforms to embed large-scale identity-protection capabilities directly into its content moderation tools. The feature represents one of YouTube's strongest responses yet to the challenges of deepfake media and the growing accessibility of AI video generation. YouTube has begun the broad rollout of a new artificial intelligence detection system that identifies and manages AI-generated content replicating a creator's face or voice. The feature, now available to verified members of the YouTube Partner Program, allows creators to review and request the removal of deepfake videos that misuse their likeness for commercial or misleading purposes. The likeness-detection technology operates through a combination of facial and voice recognition algorithms trained to detect synthetic media across YouTube's massive upload base. Once activated, it continuously scans new videos against reference data provided by participating creators, similar to how Content ID searches for copyrighted material. The company stated that the system aims to prevent impersonations that could mislead audiences or falsely attribute endorsements. This issue has become more prevalent with the use of generative AI tools to fabricate photorealistic video and audio. Creators who choose to use the feature must first verify their identity through a process that involves consent to data processing, scanning a QR code, uploading a government-issued photo ID and recording a short selfie video to train the matching model. YouTube's systems validate the selfie and ID data on Google's servers before enabling full access within YouTube Studio. The verification process typically takes a few days to complete. After onboarding, creators can view a dashboard listing videos that match their likeness.
The display includes video titles, upload channels, view counts, and subscriber information, along with YouTube's confidence assessment of whether AI generated the content. When a match appears, creators can choose among three responses: file a privacy-based removal request under YouTube's policies, submit a copyright claim if their content or voice is used without permission, or archive the video for documentation. YouTube cautioned that early results may not distinguish between legitimate clips from a creator's channel and synthetic versions. The company said the detection algorithm is still refining its accuracy. This week's rollout marks the first full phase of a system YouTube began testing late last year in collaboration with the Creative Artists Agency. That pilot included about 5,000 creators, including well-known personalities whose images are more likely to be targeted by impersonation attempts. YouTube policy communications manager Jack Malon said the initial release is targeted at users who will "benefit most immediately" from the tool while the company refines its practical performance before expanding global access by January 2026.
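The pipeline described here, new uploads continuously scanned against a creator's reference data with a confidence assessment attached to each match, has the same shape as an embedding-similarity search. The sketch below is purely illustrative: the function names, toy vectors, and 0.85 threshold are assumptions for demonstration, not YouTube's actual (undisclosed) implementation.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def scan_uploads(reference, uploads, threshold=0.85):
    """Flag uploads whose face embedding is close to the creator's
    reference template. Matches above the threshold would surface on
    the review dashboard; the creator still decides what action to take."""
    flagged = []
    for video_id, embedding in uploads:
        score = cosine_similarity(reference, embedding)
        if score >= threshold:
            flagged.append((video_id, round(score, 3)))
    return flagged

# Toy 3-dimensional embeddings standing in for real face templates.
reference = [0.9, 0.1, 0.3]
uploads = [
    ("vid_real_clip", [0.88, 0.12, 0.31]),   # near-identical: flagged
    ("vid_unrelated", [0.10, 0.90, -0.20]),  # different face: ignored
]
print(scan_uploads(reference, uploads))
```

This also makes the false-positive problem the articles mention concrete: a genuine clip of the creator's own face scores just as high as a synthetic one, so similarity alone cannot distinguish fair-use reuse from a deepfake.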
[7]
YouTube launches Likeness Detection to do away with deepfakes
One of the most terrifying aspects of the potential AI onslaught coming our way is the idea that AI-generated content could mean a lack of trust in any videos or images. Because when anything can be generated and looks lifelike, then nothing can be trusted. And while this is worrying for all of us, it's particularly awful for those who live in the public eye. Thankfully, where AI is causing problems, AI also exists to help combat those problems. YouTube has added a "likeness detection" tool in its YouTube Studio, and the first wave of eligible creators is getting the chance to try it out.

Digital cops cracking down on digital slop

YouTube had announced these tools last year, but they're finally being rolled out to a select group of content creators. Those lucky creators will be informed by email, and more will be let in over the next few months. YouTube has warned that this early implementation is likely to throw up some false positives -- videos including their actual face, for instance, rather than AI-generated ones. Hopefully, Google will include some filtering tools to remove collaboration or reaction channels, because otherwise it's likely the tool will be buried under false hits before it's really gotten started. If you're one of the few being allowed to try this tool out before its larger rollout, then you can find it under the Content Detection tab in your YouTube Studio. If nothing else, browsing that tab is likely to make for some amusing and engaging content by itself. As long as you're famous enough to warrant AI fakes being made of you, of course. There's no word on whether this tool will become more widespread and be available to anyone. We don't know how it works, but it's likely it uses AI itself, and would need a significant database of your likeness in order to detect when a video is using it, so it would involve giving up a big portion of your facial features to the beast.
As such, it's probably only really useful for those who do live in the public eye. But at this stage, who knows what the future may hold. Though Google is one of the biggest culprits for allowing the creation of AI slop content through its AI tools, it's good to see it taking responsibility for that fact, and giving creators the tools they need to defend themselves.
[8]
YouTube Rolls Out AI Likeness Detection Tool to Prevent Deepfakes
YouTube today began rolling out a new AI likeness detection feature, which lets creators detect, manage, and request the removal of unauthorized videos that use AI to generate or alter the creator's facial likeness. According to YouTube, the feature is meant to safeguard identities and prevent audiences from being misled by deepfakes. The likeness detection tool is available in YouTube Studio under a Content detection tab. After completing an identity verification process that requires a photo ID and a selfie video, creators will be alerted if there are any AI-generated videos that use their likeness. YouTube Studio will show a list of videos with titles, channel, views, and dialogue, along with an option to request a removal. The tool supports likeness removal requests for AI videos, and copyright removal requests in case someone has used copyright-protected content without permission. YouTube creators who are members of the YouTube Partner Program will get access to the likeness detection tool over the next few months. In a statement to TheWrap, YouTube said that the first creators selected to use the feature are those that "may have the most immediate use for the tool." All monetized creators will have access by January 2026.
[9]
YouTube declares war on deepfakes with new tool that lets creators flag AI-generated video clones
Initially limited to YouTube Partner Program members, the feature may expand more broadly in the future

YouTube is starting to take illicit deepfakes more seriously, rolling out a new deepfake detection tool designed to help creators identify and erase videos with AI-generated versions of their likeness made without their permission. YouTube has begun emailing details to select creators, offering them the chance to scan uploaded videos for potential matches to their face or voice. Once a match is flagged, the creator can review it via a new Content Detection tab in YouTube Studio and decide whether to take action. They can simply report it, submit a takedown request under privacy rules, or file a full copyright claim. For now, the tool is only available to a limited group of users in YouTube's Partner Program, though the service will likely be expanded to become available to any monetized creator on the platform eventually. This staged approach is similar to how YouTube worked with Creative Artists Agency (CAA) in 2023 to give high-profile celebrity clients early access to prototype AI detection tools while gathering feedback from some of the people most likely to be impersonated by AI. Creators must opt in by submitting a government-issued photo ID and a short video clip of themselves. This biometric proof helps train the detection system to recognize when it's really them. Once enrolled, they'll begin receiving alerts when potential matches are spotted. YouTube warns that not all deepfakes will be caught, though, particularly if they're heavily manipulated or uploaded in low resolution. The new system is much like the current Content ID tool. But while Content ID scans for reused audio and video clips to protect copyright holders, this new tool focuses on biometric mimicry. YouTube understandably believes creators will value having control over their digital selves in a world where AI can stitch your face and voice onto someone else's words in seconds.
Still, for creators worried about their reputations, it's a start. And for YouTube, it marks a significant turn in its approach to AI-generated content. Last year, the platform revised its privacy policies to allow ordinary users to request takedowns of content that mimics their voice or face. It also introduced specific mechanisms for musicians and vocal performers to protect their unique voices from being cloned or repurposed by AI. This new tool brings those protections directly into the hands of creators with verified channels - and hints at a larger ecosystem shift to come. For viewers, the change might be less visible, but no less meaningful. The rise of AI tools means that impersonation, misinformation, and deceptive edits are now easier than ever to produce. While detection tools won't eliminate all synthetic content, they do increase accountability: if a creator sees a fake version of themselves circulating, they now have the power to respond, which hopefully means viewers won't fall for a fraud. That matters in an environment where trust is already frayed. From AI-generated Joe Rogan podcast clips to fraudulent celebrity endorsements hawking crypto, deepfakes have been growing steadily more convincing and harder to trace. For the average person, it can be almost impossible to tell whether a clip is real. YouTube isn't alone in trying to address the problem. Meta has said it will label synthetic images across Facebook and Instagram, and TikTok has introduced a tool that allows creators to voluntarily tag synthetic content. But YouTube's approach is more direct about maliciously misused likenesses. The detection system is not without limitations. It relies heavily on pattern matching, which means highly altered or stylized content might not be flagged. It also requires creators to place a certain level of trust in YouTube, both to process their biometric data responsibly, and to act quickly when takedown requests are made. 
Nonetheless, it's better than doing nothing. And by modeling the feature after the respected Content ID approach to rights protection, YouTube is giving some weight to protecting people's likenesses just like any form of intellectual property, recognizing that a face and voice are assets in a digital world that have to be authentic to maintain their value.
[10]
YouTube is using AI to fight AI deepfakes
Creators in YouTube's Partner Program -- those with 1,000 subscribers and 4,000 valid public watch hours in the last year, or 1,000 subscribers and 10 million valid public Shorts views in the last three months -- are gaining access to an AI feature that's intended to stop or slow the spread of deepfakes. The likeness detection tool was originally announced at Made on YouTube in September and is meant to help identify and manage AI-generated content that features someone's likeness. As YouTube said in a video posted Tuesday to its Creator Insider channel, it "lets you easily detect, manage, and request the removal of unauthorized videos where your facial likeness may be altered or made with AI -- a critical way to safeguard your identity and ensure your audience isn't misled." Creators first have to confirm their identity by uploading a photo ID and short selfie video. Then, they can review videos that have been flagged in the Content Detection tab on YouTube Studio. If they deem a video as AI-generated content, they can request its removal. "Creators can already request the removal of AI fakes, including face and voice, through our existing privacy process. What this new technology does is scale that protection," Amjad Hanif, YouTube's vice president of creator products, told Axios in September. Today, the tool became available to some creators in the YouTube Partner Program, and it will continue to be rolled out in the coming weeks. "At YouTube, our goal is to build AI technology that empowers human creativity responsibly, and that includes protecting creators and their businesses," YouTube said in its video. "We built this tool to help you monitor how your likeness shows up -- understanding if other people are generating videos using your facial likeness -- to safeguard your identity."
[11]
YouTube's new AI tool hopes to stop the deepfake menace
What's happened?

In the ever-evolving realm of content creation, AI-generated deepfakes pose a serious threat, both for creators and viewers. To help curb the issue, YouTube has launched a new tool called likeness detection. Following a pilot test earlier this year, YouTube has officially launched the likeness detection feature for eligible creators in the YouTube Partner Program. This AI-powered tool detects and identifies unauthorized AI-generated videos that use a creator's facial likeness or voice. To use the feature, creators must verify their identity using a government-issued photo ID and a selfie video.

Why is this important?

The new likeness detection feature addresses the growing misuse of a creator's face or voice in misleading content, such as fake endorsements or misinformation. Using the feature, creators can detect fake videos, request removals (based on YouTube's privacy guidelines), submit copyright requests, or archive videos. It also supports broader legislative efforts, like the NO FAKES Act of 2024, aimed at regulating AI-generated content.

Why should I care?

The likeness detection feature contributes to a safer online environment where viewers can consume content without the anxiety of encountering deceptive videos. To enable the feature, YouTube Partner Program members should tap on Content detection on the YouTube Studio dashboard, select Likeness, give the platform permission to process your data, and use your smartphone to complete the identity verification process. You can disable the feature at any point, and YouTube will stop analyzing your videos after 24 hours. For viewers, the feature ensures greater trust in the content they view on a regular basis, reducing the risk of being misled by deepfake videos.

OK, what's news?

The likeness detection feature will continue to roll out to more YouTube creators, likely with improvements to detection accuracy and the review process.
Further, other content creation and sharing platforms could adopt a similar version of the AI likeness detection feature.
[12]
YouTube Is Trying to Eradicate Deepfakes With This New Program
This feature is only one part of YouTube's broader effort to control AI-generated content. YouTube is about to get less scammy. The video-sharing platform today rolled out likeness-detection technology designed to identify AI-generated content featuring fake faces and voices of YouTube creators. This program is only open to eligible creators in the YouTube Partner Program right now. Creators interested in the program upload a picture and a voice recording of themselves with proof of their identity, then they can view any detected videos, and request their removal, either through YouTube's privacy guidelines or a copyright request. There's also an option to archive the video, to prevent sneaky deletions. In the short term, this is unlikely to nuke the growing scourge of videos featuring influencers endorsing products or ideas they've never heard of. But with new AI tools making realistic video fakes possible in minutes, this kind of protection may soon be something everyone uses. The likeness identification program is part of YouTube's broader effort to deal with the glut of AI-generated content on its site. Earlier this year, the company began requiring creators to label "realistic" AI videos and updated its monetization policies to cut the revenues earned from the kind of low-effort, inauthentic content that is often generated by AI. Of course, proving you're you isn't risk-free. Proving your identity to any company involves uploading a driver's license, passport, or other official ID, or handing over biometric data, and tech companies often fail to keep that private information out of the hands of bad actors. YouTube's new system might fight deepfakes and make the platform less spammy, but it also adds to the growing library of personal data people are trusting tech companies to guard.
[13]
YouTube's New Tool Detects AI Deepfakes Using Your Face and Voice - Phandroid
YouTube has launched the first wave of its YouTube AI likeness detection tools for creators, starting with around 5,000 eligible creators in its Partner Program. This new feature is designed to help creators detect, manage, and request removal of unauthorized videos that use AI to generate or alter their facial likeness or voice. It protects them from deepfakes and misuse of their identity on the platform. The tool works similarly to YouTube's Content ID system but focuses on facial likenesses. Creators can access it in YouTube Studio under a "content detection" section. They must consent to data processing and verify their identity by submitting a photo ID and a selfie video in which they perform random actions for facial verification. Once verified, the tool alerts creators about videos flagged as potentially using their likeness. The tool then provides detailed information about such videos, including the channel, title, views, and dialogue. Additionally, it allows creators to request video removals. This rollout follows a pilot program conducted in partnership with Creative Artists Agency (CAA). The partnership helped refine the technology using high-profile clients before expanding access. YouTube plans to make the tool available to all monetized creators worldwide by January 2026. The goal is to empower creators to safeguard their identities against AI-generated misinformation and impersonation. This is an increasingly pressing issue with the rise of synthetic media. For content creators on YouTube, this protection becomes essential as deepfake technology becomes more accessible. Speaking to TheWrap, Jack Malon, YouTube's policy communications manager, said, "Essentially, those that are selected are creators we think may have the most immediate use for the tool. That's going to help us keep developing the tech as we continue to roll out because the more we can actually, practically use a tool, the more we can test it, the more we can improve it."
While the YouTube AI likeness tool is a significant step in combating misuse of likenesses, some concerns remain around privacy. The required facial scans and biometric data are stored by Google for ongoing detection. Additionally, the tool currently alerts creators to actual face usages as well as altered or synthetic versions, which may require further refinement. However, Google has been expanding its AI capabilities across multiple platforms. The company's experience with AI detection and processing could help address these concerns as the technology matures.
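Several of these reports mention that the selfie video must show the creator performing random actions, a standard liveness-check pattern meant to defeat static photos and prerecorded clips. A toy sketch of how such a challenge could be generated follows; the action list, function name, and prompt count are invented for illustration, since YouTube's actual verification flow is not public.

```python
import random

# Hypothetical prompts; YouTube's real verification actions are not documented.
ACTIONS = ["turn head left", "turn head right", "look up", "blink twice", "smile"]

def liveness_challenge(n=3, seed=None):
    """Pick an unpredictable sequence of distinct actions for the selfie
    video, so a static photo or a prerecorded clip cannot pass."""
    rng = random.Random(seed)
    return rng.sample(ACTIONS, n)

print(liveness_challenge())
```

Because the prompt sequence is chosen at verification time, an attacker cannot prerecord a matching video, which is the whole point of randomizing the actions.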
[14]
YouTube's New Likeness Detection Tool Explained: All You Need to Know
The likeness data is stored for up to three years and can be deleted

YouTube on Wednesday launched a new tool to help creators identify and flag content that features their likeness generated using artificial intelligence (AI). First announced at the Made on YouTube event last year, this new technology can detect when a video is imitating the face and voice of a real person and enable creators on the platform to maintain control over their likenesses. Once verified, they can submit a request for the unauthorised content to be removed. From how to sign up to how YouTube manages the creator's data, here's all you need to know about the new likeness detection feature.

What is Likeness Detection on YouTube?

As per YouTube, its likeness detection tool has been in the pilot phase until now. The company is now expanding its access to all creators who are part of the YouTube Partner Program over the next few months, it announced on its Creator Insider YouTube channel. With this tool, creators can detect, manage and request the removal of unauthorised videos where their likeness has been altered or generated using AI. It is said to be a measure for safeguarding their audience against misleading content. The tool does this by referencing the appearances made by the creator on their channel, as well as a video of their face provided by them. The company says if a creator spots their likeness being used by another party on YouTube, they can submit a removal request for review, as part of the platform's privacy guidelines. The new likeness detection tool is available via the Content detection tab in YouTube Studio, alongside the existing Copyright option. Creators can find a new Likeness option in beta.

How to Sign Up for Likeness Detection on YouTube?

There are several steps required before creators can begin using the new tool.
Step 1: First, they need to agree to YouTube using biometric technology to search for their likeness on its platform.
Step 2: Next, they will be required to submit a photo ID and a video of their face. As per the company, these will be used as a reference for their likeness and to verify their identity.
Step 3: Once the above requirements are met, the platform will take up to five days to review the application. Creators will receive an email from YouTube confirming that their setup is complete.

How Does YouTube Manage the Data?
The company has also explained how it will manage the data. It will assign a unique identifier to the brief video of the creator's face, their full legal name, and the face and voice templates. This data will be stored in YouTube's internal database for up to three years from the date of the creator's last sign-in and will be treated in accordance with Google's privacy policies. Likeness detection is an opt-in feature: creators can opt out at any time by navigating to the Manage likeness detection tab in YouTube Studio, after which their data will be deleted from the system, the company added.
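The retention rules described above can be sketched as a simple data model. This is purely illustrative: the record fields, names, and deletion logic below are assumptions based on YouTube's public description, not its actual schema.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# "Up to three years from the date of their last sign-in."
RETENTION = timedelta(days=3 * 365)

@dataclass
class LikenessRecord:
    """Illustrative record of the data described: a unique identifier tied
    to the reference video, the creator's legal name, and the derived
    face and voice templates."""
    identifier: str
    legal_name: str
    face_template: bytes
    voice_template: bytes
    last_sign_in: datetime
    opted_out: bool = False

def should_delete(record: LikenessRecord, now: datetime) -> bool:
    """Delete on opt-out, or once the retention window has lapsed."""
    return record.opted_out or (now - record.last_sign_in) > RETENTION
```

Under this model, opting out and letting the retention window lapse are equivalent triggers for deletion, matching the two removal paths the company describes.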
[15]
YouTube's New Tool Will Detect Deepfakes of Content Creators
* The tool was launched as a pilot in December 2024
* YouTube is currently offering the tool to select creators
* All creators will have to go through an onboarding process

YouTube released its artificial intelligence (AI) likeness detection tool for creators on Tuesday. The deepfake-catching tool is aimed at protecting content creators on the platform from third parties using their likeness or voice via synthetic means. To get access to the tool, eligible creators will have to go through an onboarding process that requires them to submit an ID card and a video selfie. Once the process is complete, these creators will be able to see, in a dashboard, all the videos that YouTube deems to be AI-generated deepfakes.

YouTube to Protect Creators from Deepfakes With Its Likeness Detection Tool
In a video on its Creator Insider channel, YouTube announced the release of the likeness detection tool and explained how it works. Currently, it is available to creators in the YouTube Partner Programme, and access will be expanded over time. Once a creator has access to the tool, they will see it in the Content ID menu, where users can monitor copyrighted content as well.

The onboarding process is long and thorough, likely to avoid instances where a scammer registers under a creator's name. It requires the user to consent to data processing, submit a government-approved ID card, and upload a video selfie. All of this will be stored on Google's servers, and once the creator's identity is verified, they can access the tool. YouTube will then begin showing the videos where it suspects AI was used to create a deepfake, and will also categorise the videos by priority to bring the most pressing matches to the creator's attention. YouTube highlights that since the tool is in its early days, it can show users their own videos alongside AI-generated ones.
The company first piloted the feature in December 2024. Creators can request the removal of a flagged video or ask for it to be archived. Once a complaint has been raised, YouTube will review the video and take appropriate action. Notably, users can stop using the tool whenever they like via the manage tool option on the dashboard. If a creator chooses to disable the tool, YouTube will stop processing their data to scan for deepfakes within 24 hours.
[16]
YouTube Expands AI Safety Features With New Likeness Detection System | PYMNTS.com
Creators can verify their identity in the "Likeness" tab of YouTube Studio using a selfie video and government-issued ID. Once verified, they can review flagged content that mimics their likeness and submit removal requests directly. YouTube said participation is voluntary, and users who opt out will no longer be scanned within 24 hours. The system builds on the company's Content ID infrastructure, which historically has been used to manage copyright claims, extending that protection to likeness and voice replication. The update adds a security layer to YouTube's growing suite of artificial intelligence-driven features. Earlier this year, the platform introduced AI-powered creative tools to help users streamline production, editing and discovery. The new detection tool complements those initiatives by focusing on identity protection as deepfakes and synthetic media become more widespread. A CBS News investigation recently found that complaints about deepfake-driven misuse of celebrity and creator likenesses have more than doubled this year. YouTube said its system is designed to detect AI-generated visuals and audio that replicate real individuals without authorization, allowing creators to act before the content spreads. YouTube CEO Neal Mohan said the company's goal is to give creators "choice and control" over how AI interacts with their content. The company described the system as a "consent-first" technology intended to reinforce privacy and transparency within its creator ecosystem. Analysts say the rollout signals a shift among platforms to address AI risks proactively rather than reactively. YouTube's move comes as platforms across the media industry race to balance innovation and identity protection.
The company's approach aligns with its broader AI roadmap, which, as PYMNTS reported, integrates monetization, automation and safety within creator workflows. The likeness detection system will initially be available to a limited group of verified creators before expanding more widely. YouTube said additional privacy controls and transparency updates are planned as the feature scales, positioning the tool as part of a broader shift toward responsible AI governance in digital media.
[17]
YouTube tests 'Likeness Detection' to combat AI identity theft
With the rise of AI-generated video content, identity theft has been increasing. Google is responding by experimenting with a feature called "Likeness detection" on its YouTube platform. Likeness detection helps creators find content where their face may have been altered or generated by AI, and lets them review any detected instances to decide on the appropriate action. Likeness detection works similarly to Content ID, except that it searches for a person's likeness rather than copyrighted audio and video content, says Google. For beta testers, a new tab called "Likeness" appears under the Content Detection menu of YouTube Studio, listing all possible instances of likeness misuse so the creator can take appropriate action. The feature is optional, meaning the owner has the right to turn detection on or off as they choose. To use it, the person needs to go through Google's identity verification process, which requires a government ID and a brief video of their face. With this information, the system scans newly uploaded videos to identify those that potentially contain the user's registered face. Likeness detection is an experimental feature (limited beta), currently available only in selected countries, but YouTube will be expanding it to more countries soon.
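Conceptually, a Content ID-style likeness scan compares a representation of each new upload against a creator's registered template. The sketch below illustrates that idea with face embeddings and cosine similarity; the function names, threshold, and embedding approach are assumptions for illustration only, since YouTube has not published how its matcher actually works.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_uploads(reference: np.ndarray,
                 uploads: dict[str, np.ndarray],
                 threshold: float = 0.8) -> list[str]:
    """Return IDs of uploads whose face embedding is close enough to the
    registered reference template to warrant review."""
    return [video_id for video_id, embedding in uploads.items()
            if cosine_similarity(reference, embedding) >= threshold]
```

In a real system, a face-detection stage would produce learned embeddings and the threshold would be carefully tuned; crucially, a match would only surface the video in the creator's review dashboard rather than trigger automatic removal.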
[18]
YouTube's AI Likeness Tool Raises Privacy Concerns
YouTube is expanding the scope of its content moderation and taking on the role of an arbiter of identity rights with the rollout of its likeness detection technology for YouTube Creators. This service allows creators to monitor where their facial likeness appears and to request the removal of unauthorized likenesses and AI-altered videos. What's important here is that YouTube is treating the usage of "likeness" as a privacy violation, covering both voice and video. At one level, this is much needed. Personalities are increasingly concerned about their likeness being used for deepfakes. However, the selectiveness of AI companies in protecting personality rights is a cause for concern. In an ideal world, it should be the norm, and Bryan Cranston shouldn't have to thank OpenAI for preventing Sora from creating his deepfakes. People shouldn't be able to generate deepfakes of Robin Williams to send them to his daughter. Celebrities shouldn't need to go to court to enforce their personality rights to prevent AI-based usage. While the generation side of the AI business remains largely irresponsible in preventing misuse of their services (protecting only the powerful), YouTube falls in the distribution side of this battle to prevent misuse. 1. It's a start, but it's not enough: YouTube's "likeness detection technology" is for YouTube creators (channel owners only) to prevent misuse of likeness on its platform, not for a wide variety of personalities. This creates an unhealthy tension where people must become YouTube creators to protect themselves from misuse of their likeness. We've seen this play out before: YouTube's Content ID system requires that copyright owners upload their content to YouTube to protect themselves. Content owners don't have protection unless they join YouTube's content program, which essentially pressures people to upload their content on the platform. 2. 
You need to give up your facial data to protect it: It's ironic that you're being forced to hand over your personal facial data in order to protect it. According to the terms, to complete likeness detection setup, you have to provide Google with your data: Provide a government ID and a brief video of your face for verification. This helps us prevent fraudulent and abusive uses of likeness detection. We also use the brief video of your face, and images of your face from content on YouTube, to create face templates used to detect videos where your likeness may be altered or made with AI. Please use a clear picture of your official government ID when verifying. As if Google Photos and its use for fine-tuning facial recognition weren't enough, likeness detection now gives Google the opportunity to collect even more facial data and your government ID. If you don't need to upload your government ID for copyright protection, why should you need it for privacy protection? 3. What if AI likeness is used in parody? For example, if I make a parody review of a Maruti 800 (an Indian car no longer manufactured by Maruti Suzuki) using an AI likeness of MKBHD, he can't take it down under copyright law because it's a parody, but he can get it taken down on likeness grounds. YouTube, especially in India, has largely failed to protect fair usage. According to YouTube's policies, if the video is real, a copyright claim must be made (taking into account fair use and other copyright exceptions), but if it is AI-generated, one can make a privacy claim. Does this mean that human mimicry is kosher, but AI-generated mimicry is not? Why the discrimination? While this arms creators and copyright owners with an additional tool, it harms fair usage and adds identity-based gatekeeping. 4. Do dead people have privacy protection? One of my favorite questions in policy discussions is: who owns a deceased person's voice?
YouTube's likeness detection protects voices, but this might not apply to deceased creators. There is certainly no case for this under India's largely useless Digital Personal Data Protection Act. Unless personality rights have been bequeathed to an estate or heirs by a personality (who was also a YouTube creator), copyright may be claimed, but privacy rights cannot. This means there is no recourse for Zelda Williams on YouTube, because Robin Williams didn't submit his video to Google before he passed away a decade ago. 5. Where's the recourse? The approach again positions YouTube as an arbiter of what is right and accurate, without any transparency. As we've seen in India over the past few years, YouTube has effectively acted as an arbiter of copyright, taking down creator content even when used for derivative work, and going beyond India's Copyright Act, without accountability to its community of creators or adequate transparency regarding recourse. Its implementation of recourse is arbitrary. This expansion merely broadens YouTube's enforcement scope without accountability. What will it do about false takedowns, weaponized claims, or impersonation disputes? My guess, judging by its past record: nothing. (Note: If you're from YouTube and there is a recourse and transparency mechanism, let me know, and I'll update this post. There's nothing here: https://support.google.com/youtube/answer/16440338?hl=en) Lastly, now that YouTube has launched this feature, what will Instagram do? My guess is that X will probably do nothing. I'd be surprised if it does unless it's sued.
[19]
YouTube rolls out likeness detection for creators: Will it reduce deepfakes menace?
The internet has always blurred the line between what's real and what's not, but in the age of generative AI, that blur has become something more unsettling. In a world where anyone's face or voice can be replicated within minutes, the threat of losing control over one's own identity is no longer hypothetical; it's personal. To address this growing menace, YouTube has officially launched its new likeness detection tool, an AI-driven system designed to identify and flag videos that use a creator's likeness without their consent. The feature, now available to members of the YouTube Partner Program, gives creators a degree of protection that was previously out of reach in the open, unpredictable ecosystem of online video. At its core, the system scans for appearances or audio resembling a creator who has enrolled in the program. If the AI detects that a creator's face or voice has been used elsewhere on the platform, the video appears in a dashboard where the creator can decide whether to take action. They can request its removal, archive it for reference, or dismiss the alert entirely. The process is designed to be both transparent and flexible - a crucial consideration in an environment where false positives or creative reinterpretations are inevitable. To begin, creators verify their identity by scanning a QR code, submitting an ID, and recording a brief selfie video. Once the setup is complete, YouTube's AI begins its silent watch, continuously scanning for content that matches the creator's verified likeness. If a creator opts out, scanning stops within 24 hours, giving users full control over participation. YouTube's move comes at a time when creators have become increasingly vocal about the misuse of their likeness. The emotional and professional consequences of deepfakes are difficult to measure.
When a face or voice becomes detached from the person it belongs to, it erodes trust - not just in the content, but in the creator's relationship with their audience. For many, that trust is the foundation of their livelihood. The new tool doesn't erase those risks entirely, but it does shift the power dynamic back toward creators. Instead of reacting to damage after it's been done, they now have a proactive mechanism to detect and address it early. In that sense, YouTube isn't just moderating content, it's mediating identity. Still, the system's limitations are clear. Access is currently restricted to those within the YouTube Partner Program, meaning smaller creators - who often lack the resources to fight impersonation - remain exposed. Extending this protection more widely could be the next logical step, but one that comes with scale and resource challenges. The tool also raises difficult questions about interpretation. AI-generated likenesses often exist in a gray zone: satire, education, and commentary can all mimic real people without malicious intent. A creator's right to remove such videos will need to be balanced carefully against freedom of expression, especially in cases that test the boundaries of parody or fair use. YouTube's likeness detection system builds on a series of recent initiatives aimed at improving transparency around AI content. Earlier this year, the platform required creators to label realistic AI-generated material, and it has supported legislative efforts like the No Fakes Act, which seeks to grant individuals legal control over their digital likeness. These moves suggest that YouTube sees AI not only as a tool for innovation, but also as a threat to the authenticity that drives its ecosystem. Platforms built on trust and creativity can't thrive if creators fear being cloned or misrepresented.
The company's partnerships with agencies and advocacy for legal frameworks signal that this issue is as much cultural as it is technological. The success of this initiative will depend on its accuracy and fairness - on whether YouTube's detection AI can distinguish imitation from inspiration without stifling creativity. But even with its imperfections, the rollout marks a turning point in how major platforms treat identity in the digital age. As deepfakes grow more sophisticated and accessible, the burden of proof, of what's real and who's real, will increasingly fall on technology itself. YouTube's new tool may not eliminate the problem, but it offers something meaningful: a sense of agency in an era where that's been slipping away. For creators who have built their livelihoods on being authentic, that's a step worth taking and perhaps the beginning of a broader shift in how platforms, policymakers, and audiences define what "real" means online.
YouTube has officially rolled out its AI-powered likeness detection technology to eligible creators in the YouTube Partner Program. This tool aims to help creators identify and remove unauthorized AI-generated content featuring their likeness, addressing growing concerns about deepfakes and misinformation.
In a significant move to address the growing concern of AI-generated deepfakes, YouTube has officially launched its likeness detection technology for eligible creators in the YouTube Partner Program. This tool, which has been in development and testing since last year, aims to help creators identify and request the removal of AI-generated content that uses their likeness without authorization.
The new feature functions similarly to YouTube's existing Content ID system, which detects copyrighted audio and video content. Creators can access the tool through the 'Likeness' tab in YouTube Studio's Content Detection menu.
To set up the protection, creators must go through an identity verification process, which includes consenting to YouTube's use of biometric technology, submitting a government-issued photo ID, and recording a brief video of their face.
Once verified, the system will flag videos from other channels that appear to feature the creator's face. Creators can then review these flagged videos and submit removal requests if they identify unauthorized use of their likeness.

The introduction of this tool comes at a critical time when AI-generated content has become increasingly sophisticated and prevalent across the internet. Creators and influencers have expressed concerns about the potential misuse of their image and voice for spreading misinformation or unauthorized endorsements. YouTube's approach aims to strike a balance between protecting creators and allowing for legitimate uses of AI-generated content. The company has stated that not all AI-generated videos featuring a creator's likeness will automatically be removed. Factors such as parody content, clearly labeled AI videos, or those with an unrealistic style may not meet the threshold for removal.
While the likeness detection tool is a significant step forward, it currently has some limitations: access is restricted to creators in the YouTube Partner Program, the system can surface a creator's own genuine videos alongside AI-generated ones, and availability is limited to select countries. YouTube plans to expand the feature to more creators over the coming months and continues to develop additional tools to manage AI-generated content on the platform. As AI video generation technologies like Google's Veo 3.1 and OpenAI's Sora 2 continue to advance, the need for robust protection against deepfakes becomes increasingly crucial. YouTube's likeness detection tool represents an important step in the ongoing effort to maintain trust and authenticity in the digital content landscape.