Sources
[1]
YouTube is expanding its AI deepfake detection tool to all adult users
YouTube is expanding its AI likeness detection program to all users over the age of 18 -- meaning just about anyone can have the platform hunt for potential deepfakes of themselves. The likeness detection feature uses a selfie-style scan of a person's face to monitor YouTube for lookalikes. If there is a match, YouTube alerts the user; the person then has the option to request that YouTube remove the content. YouTube has said in the past that it has found the number of removal requests to be "very small."

YouTube began testing the feature with content creators, and then expanded it to government officials, politicians, journalists, and finally the entertainment industry. The expansion to any user 18 years or older is a significant shift -- it essentially gives the average person the ability to constantly monitor content on YouTube that could use their likeness.

Takedown requests are evaluated using YouTube's privacy policy, and the company says it considers removals based on criteria like whether the content is realistic, is labeled as AI-generated, and if a person can be uniquely identified. There are carveouts for things like parody or satire, and the tool only covers facial likeness, not other identifying features like a person's voice. Users can withdraw from the program and have YouTube delete their data.

The news was announced on YouTube's creator forum, but spokesperson Jack Malon says there are no requirements on what constitutes a "creator" who is eligible. "With this expansion, we're making clear that whether creators have been uploading to YouTube for a decade or are just starting, they'll have access to the same level of protection," Malon said in an email.

Deepfake content often centers on celebrities, politicians, or other public figures, but the ability to create a convincing digital replica is a concern for private citizens, too.
There have been instances of teenagers being deepfaked by classmates, and three teenagers sued xAI alleging that the company's Grok chatbot generated child sexual abuse material (CSAM) of them.
[2]
YouTube's AI deepfake detection tool is now available to all creators 18 and older - Engadget
In the coming weeks, YouTube is giving all creators 18 and over access to a tool that can detect whether their likeness has been copied and used in AI videos uploaded to the website. Team YouTube made the announcement on the platform's community page, explaining that their "goal is to provide [users] with more peace of mind by giving [them] easy access to request the removal of unauthorized content."

While the likeness detection tool is technically only available to creators, spokesperson Jack Malon told The Verge that anybody can use it. "With this expansion, we're making clear that whether creators have been uploading to YouTube for a decade or are just starting, they'll have access to the same level of protection," Malon said in a statement.

It's getting harder and harder to differentiate between real and AI videos these days, and the tool's wider availability could end up helping even ordinary people who suddenly find their faces used in potentially malicious or misleading AI videos. For creators, this could help them catch brands and companies using their likenesses without permission to promote products and services.

YouTube first previewed the tool in 2024 before rolling it out in late 2025. It was launched exclusively for Partner Program members -- creators who have monetized their channels after gaining 1,000 followers and accumulating enough watch hours or Shorts views from the public within a certain span of time. YouTube then made the tool available to journalists and politicians before this expansion.

Users who want access to the new tool will have to enroll from YouTube Studio on their computer. They can start the process by going to "Likeness" under "Content detection," scan a QR code with their phone, submit a government ID and complete a selfie video verification. Once they're set up, YouTube will scan uploaded videos for possible matches of their face, and they'll see any video that potentially uses their likeness under the same tab. They can then review the video and submit a removal request, where they can provide YouTube with information on how their likeness was used. YouTube will also ask if the video copied their voice for evaluation, but the tool itself can't make detections based on voice alone.
YouTube is making its AI-powered deepfake detection tool available to all users over 18, a significant expansion from its initial rollout to Partner Program creators. The tool allows anyone to submit a selfie-style scan and monitor the platform for their likeness in AI-generated videos. Users can then request content removal if unauthorized use is detected, though YouTube evaluates requests based on its privacy policy.
YouTube is democratizing access to its AI-powered deepfake detection tool by making it available to all users aged 18 and older, marking a significant shift from its previous limited rollout [1]. The platform first previewed the tool in 2024 before launching it exclusively to YouTube Partner Program members -- creators who have monetized their channels after gaining 1,000 followers and meeting specific watch-hour requirements [2]. YouTube then gradually expanded access to journalists, politicians, government officials, and the entertainment industry before this latest expansion.

Spokesperson Jack Malon emphasized the platform's commitment to equal protection, stating that "whether creators have been uploading to YouTube for a decade or are just starting, they'll have access to the same level of protection" [1]. Notably, while the tool is technically labeled for creators, there are no requirements on what constitutes a "creator" who is eligible, effectively opening it to any adult user [1].
Source: The Verge
To access the AI deepfake detection feature, users must enroll through YouTube Studio on their computer by navigating to "Likeness" under "Content detection" [2]. The enrollment process requires users to scan a QR code with their phone, submit a government ID, and complete a selfie video verification to confirm their identity [2].
Source: Engadget
Once enrolled, users have YouTube continuously scan uploaded videos for their likeness. When the system detects a potential match, it alerts the user, who can then review the flagged videos and request content removal [1]. During the removal request process, users provide information about how their likeness was used, and YouTube also asks whether the video copied their voice, though the tool itself cannot make detections based on voice alone [2].

YouTube evaluates takedown requests against its privacy policy, considering multiple criteria before approving content removal [1]. The platform examines whether the content appears realistic, whether it is labeled as AI-generated, and whether a person can be uniquely identified in the video. Importantly, there are carveouts for parody and satire, and the tool only covers facial likeness rather than other identifying features like voice [1].

Notably, YouTube has reported that the number of removal requests has been "very small" during testing phases [1]. Users retain control over their participation and can withdraw from the program and have YouTube delete their data at any time.
While deepfake content often centers on celebrities, politicians, or other public figures, the ability to create convincing digital replicas poses serious risks for private citizens as well [1]. There have been documented instances of teenagers being deepfaked by classmates, and three teenagers recently sued xAI alleging that the company's Grok chatbot generated child sexual abuse material of them [1].

For creators, the tool could help catch brands and companies using their likenesses without permission to promote products and services [2]. As it becomes increasingly difficult to differentiate between real and AI videos, the tool's wider availability could protect ordinary people who suddenly find their faces used in potentially malicious or misleading content. The expansion represents YouTube's recognition that misuse of a person's image through AI-generated content is not just a celebrity problem but a concern affecting everyone in the digital age.

Summarized by Navi