11 Sources
[1]
YouTube expands AI deepfake detection to politicians, government officials, and journalists | TechCrunch
YouTube is expanding its likeness detection technology, which identifies AI-generated deepfakes, to a pilot group of government officials, political candidates, and journalists, the company announced Tuesday. Members of the pilot group will gain access to a tool that detects unauthorized AI-generated content and lets them request its removal if they believe it violates YouTube policy. The technology itself launched last year to roughly 4 million YouTube creators in the YouTube Partner Program, following earlier tests.

Similar to YouTube's existing Content ID system, which detects copyright-protected material in users' uploaded videos, the likeness detection feature looks for simulated faces made with AI tools. These tools are sometimes used to spread misinformation and manipulate people's perception of reality, leveraging the deepfaked personas of notable figures -- like politicians or other government officials -- to make them say and do things in AI videos that they never did in real life. With the new pilot program, YouTube aims to balance users' free expression with the risks associated with AI technology that can generate a convincing likeness of a public figure.

"This expansion is really about the integrity of the public conversation," said Leslie Miller, YouTube's Vice President of Government Affairs and Public Policy, in a press briefing ahead of Tuesday's launch. "We know that the risks of AI impersonation are particularly high for those in the civic space. But while we are providing this new shield, we're also being careful about how we use it," she noted.

Miller explained that not all detected matches would be removed when requested. Instead, YouTube would evaluate each request under its existing privacy policy guidelines to determine whether the content is parody or political critique, which are protected forms of free expression.

The company noted it's advocating for these protections at the federal level, too, with its support for the NO FAKES Act in D.C., which would regulate the use of AI to create unauthorized recreations of an individual's voice and visual likeness.

To use the new tool, eligible pilot testers must first prove their identity by uploading a selfie and a government ID. They can then create a profile, view the matches that show up, and optionally request their removal. YouTube says it plans to eventually give people the ability to prevent uploads of violating content before they go live or, possibly, allow them to monetize those videos, similar to how its Content ID system works. The company would not confirm which politicians or officials would be among its initial testers, but said the goal is to make the technology broadly available over time.

These AI videos will be labeled as such, but the placement of the labels isn't consistent. For some, the label appears in the video's description, while videos focused on more "sensitive topics" carry the label on the front of the video. This is the same approach YouTube takes with all AI-generated content. "There's a lot of content that's produced with AI, but that distinction's actually not material to the content itself," explained Amjad Hanif, YouTube's Vice President of Creator Products, as to the label's placement. "It could be a cartoon that is generated with AI. And so I think there's a judgment on whether it's a category that maybe merits from a very visible disclaimer," he said.

YouTube isn't currently sharing how many of these AI deepfakes have been removed via the detection technology in the hands of creators, but noted that the amount of content removed so far has been "very small." "I think for a lot of [creators], it's just been the awareness of what's being created, but the volume of actually removal requests is really, really low because most of it turns out to be fairly benign or additive to their overall business," Hanif said. That may not be the case with deepfakes of government officials, politicians, or journalists. In time, YouTube intends to bring its deepfake detection technology to more areas, including recognizable spoken voices and other intellectual property like popular characters.
[2]
YouTube expands AI deepfake detection for politicians, government officials, and journalists | TechCrunch
(This article duplicates source [1] verbatim.)
[3]
YouTube to Let Politicians, Journalists Request Removal of Deepfake Videos
Certain videos created for parody or satire may still be allowed, YouTube says. YouTube is expanding its AI likeness detection tool to allow political candidates, government officials, and journalists to request the removal of their deepfake videos. Launched last year in YouTube Studio, the tool lets users find videos in which their faces appear to have been altered or generated using AI. To sign up for the program, you need to be a YouTube channel owner or manager and submit a government-issued ID and a short selfie video for verification. YouTube will then search for videos featuring your likeness and list them under the Content detection tab in YouTube Studio. You can review each video and request the removal of any you find manipulative, or request the removal of all of them. YouTube, however, warns that it won't take down all videos. The company says it will continue to protect free expression and allow parody or satirical content, even if it criticizes world leaders and influential figures. "We'll continue to carefully evaluate these exceptions when we receive requests for removal," it adds. For now, YouTube will start with a "pilot group" of unnamed journalists, government officials, and candidates who are getting early access to the tool. AI deepfakes remained a major point of discussion throughout the 2024 US presidential election. There were reports that foreign governments were trying to manipulate voters by spreading deepfake videos of former VP Kamala Harris on social media. We also saw fake celebrity endorsements for then-candidate Donald Trump. AI detection won't be enough to tackle the spread of deepfakes, YouTube admits, adding that it continues to support the NO FAKES Act. The bill would require online platforms to remove unauthorized digital replicas of a person upon request from the rights holder.
[4]
YouTube Adds Tool to Help Public Figures Report Fake Videos
Sign up for the On Tech newsletter. Get our best tech reporting from the week. Get it sent to your inbox. YouTube is adding a detection tool for government officials, political candidates and journalists to catch and report videos that use artificial intelligence to display their likeness without permission. The pilot program, announced on Tuesday, is arriving as social media companies and a patchwork of new laws start to address the problem of these so-called deepfakes, which are spreading as A.I. video technology rapidly improves. But the companies have largely relied on users to report fake material. To enroll in YouTube's new program, people need to provide a video selfie and government identification, the company said. The user can then see the videos that YouTube has detected in an online dashboard. From there, there is an option to flag them for review and removal. "As new technology emerges and we participate in the debate around what's the appropriate use and controls around likeness, we feel like it's our responsibility to invest in technology to help handle that," said Leslie Miller, YouTube's vice president of government affairs and public policy. The A.I. content is not blocked from being uploaded, but after it has been detected, participants in the program can request that it be taken down. Exceptions to removal under the pilot program include videos that are clearly made in "parody, satire and public interest," Ms. Miller said. The company said the identity information would be used only to verify the person's identity and not to train Google's A.I. models. Kaylyn Jackson Schiff, a professor at Purdue University who studies A.I. deepfakes, said those depicting high-profile people such as government officials and journalists had become more prevalent. Dr. Jackson Schiff, a co-director of the university's Governance and Responsible A.I. Lab, added that new detection tools were not perfect, noting that they still relied on users to report deepfakes. 
"The speed at which reports are dealt with is really important because we know that things can go viral very, very quickly," she said, "and things that are related to high-profile political events can spread super, super rapidly and affect many individuals' opinions."
[5]
YouTube Expands AI Deepfake Detection Tool to Politicians, Won't Say If Trump Is Included
The tool lets verified users request unauthorized AI-generated videos featuring their likeness to be taken down. YouTube is making it easier for politicians and journalists to take down AI deepfakes from its platform ahead of this year's midterm elections. But it's keeping quiet on who now has access to this tool. The video streaming giant announced today that it is expanding access to its likeness detection tool to journalists, government officials, and political candidates. The tool flags videos that feature a user's likeness in AI-generated content and allows them to request unauthorized videos be taken down. "YouTube is where the world comes to understand the events shaping their lives -- from breaking news to the debates that drive civic discourse," wrote Amjad Hanif, YouTube vice president of creator products, and Leslie Miller, vice president of government affairs and public policy, in a blog post. "As AI-generated content evolves, the individuals at the center of these conversations need reliable tools to protect their identities." The expansion comes as AI deepfakes have gotten pretty impressive, raising concerns about their potential to spread misinformation, especially around elections. The news also comes as YouTube has been increasingly leaning into AI. Last year, the company brought a custom version of Google's video-generation model, Veo 3, to Shorts -- YouTube's TikTok- and Instagram Reels-like feed of quick, vertical videos. That tool, along with other AI editing features on the platform, has made it easier than ever for users to create deepfakes. At the same time, YouTube has also tried to roll out tools to mitigate the risks. The company's likeness detection tool works similarly to Content ID, YouTube's copyright-flagging system, but for people's faces. YouTube first started testing the system in 2024 with celebrities and athletes, and expanded it last year to YouTube creators in the company's Partner Program. 
To enroll in the program, eligible users must verify their identity by submitting a video selfie and a government ID. The company said any data submitted will only be used for verification purposes and not to train Google's AI. Once verified, users can check for videos that use their likeness and request that they be taken down. YouTube, however, emphasizes that a detected match and a removal request do not guarantee a video will be taken down. "YouTube has a long history of protecting free expression and content in the public interest -- including preserving content like parody and satire, even when used to critique world leaders or influential figures," the company blog post said. "We'll continue to carefully evaluate these exceptions when we receive requests for removal." A YouTube spokesperson told Gizmodo that the company is planning a "broad international rollout," with access to the tool being expanded in the coming weeks and months. YouTube declined to comment on which politicians and journalists are included in the initial pilot cohort, including whether U.S. President Donald Trump was invited. Trump himself and his administration are known for posting AI-generated content using the likenesses of his political and media adversaries.
[6]
YouTube will alert civic leaders and reporters to deepfakes that involve their likeness
AI video generation is a real concern. Even in an era where AI videos tend to sport attributes that give the game away -- like Coca-Cola's rapidly transforming semi-truck in last year's Christmas ad -- it's often good enough to fool audiences, which is why some platform owners are trying to get ahead of any potentially problematic deepfakes. Today, YouTube is expanding its likeness detection tool to support politicians, government employees, political candidates, and reporters. The company previously launched the feature last year for its YouTube partners, but now, those who fall into these newly protected categories won't need to be within that program to participate. Just as with Content ID, YouTube's likeness detection works to find a facial match in AI-generated content on the platform before allowing a matched participant to send a takedown request to that specific video. YouTube says it doesn't automatically pull all matched content, with specific carveouts for parody and satire even against world leaders, but it does look for anything that violates its pre-existing privacy guidelines. Those who qualify for this program will need to verify their identity with Google, though the company states this data is not used to train AI models. YouTube is also using this announcement to call for the passing of the NO FAKES Act in Congress, which it says "establishes a federal right of publicity and acts as a blueprint for international adoption to ensure technology serves -- and never replaces -- human creativity." Unfortunately, if you aren't in YouTube's Partner Program or in one of these supported public-facing roles, likeness detection remains out of reach for the time being.
[7]
YouTube expands deepfake detection tool to politicians and journalists
What they're saying: "This expansion is really about the integrity of the public conversation," Leslie Miller, vice president, government affairs and public policy at YouTube, said on a call with journalists. * "We know that the risks of AI impersonation are particularly high for those in the civic space." How it works: YouTube's likeness detection technology scans videos uploaded to the platform for content that appears to use someone's likeness, namely their face. * If a match is detected, individuals can review the flagged video and request to have it removed through YouTube's privacy complaint process. * Requests don't guarantee the video will be taken down. YouTube allows parody and satire. * YouTube declined to share who exactly has access in this pilot, including when asked specifically about President Trump. Participants must verify their identity by submitting a government ID and video selfie. The big picture: Tech companies are creating more safeguards against AI and impersonation. * YouTube CEO Neal Mohan said one of his top 2026 priorities is AI transparency and protections, including labeling AI content and removing harmful synthetic media. * YouTube started developing the likeness detection tool in 2024 with Creative Artists Agency and testing with top creators including MrBeast and Marques Brownlee. It expanded access to all creators last year. Between the lines: YouTube said creators using the tool over the past year have flagged relatively few videos for removal. * "Most of it turns out to be fairly benign or additive to their overall business," said Amjad Hanif, vice president, creator products at YouTube. What's next: YouTube plans to expand access to any government official, political candidate and journalist. * The likeness detection tool is focused on facial likeness, for now. But Hanif said the company is also exploring voice impersonation. 
* YouTube is also considering allowing people to monetize their likeness in detected content, as is the case for its Content ID system. 💠 Thought bubble from Axios senior tech policy reporter Ashley Gold: Deepfakes and AI impersonation are areas of concern for Congress and the Trump administration. Last year, Trump signed the TAKE IT DOWN Act, which has to do with nonconsensual intimate images (including deepfaked ones). * However, any more sweeping deepfake or election labeling legislation would be a much heavier lift on the Hill and is not likely prior to the midterms. What to watch: YouTube endorsed a federal bill called the NO FAKES Act, under which platforms must act quickly when receiving takedown requests for AI-generated likeness.
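The request lifecycle the reporting describes -- a detected match is only surfaced to the participant, and removal happens only after an explicit request survives a parody/satire review -- can be sketched as a small state model. YouTube's internal pipeline is not public; every name below is a hypothetical illustration of the reported policy flow, not an actual API.

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    PENDING = "pending"            # match surfaced on the dashboard only
    REMOVED = "removed"            # request granted
    KEPT_AS_PARODY = "kept"        # protected expression stays up

@dataclass
class FlaggedVideo:
    video_id: str
    is_parody_or_satire: bool      # outcome of a human policy review
    removal_requested: bool = False

def evaluate_request(video: FlaggedVideo) -> Decision:
    """Mirror the reported flow: detection alone never removes a video;
    a removal request is then weighed against parody/satire carve-outs."""
    if not video.removal_requested:
        return Decision.PENDING
    if video.is_parody_or_satire:
        return Decision.KEPT_AS_PARODY
    return Decision.REMOVED
```

The key property reported by every source is encoded in the first branch: without an affirmative request from the verified participant, a detected match changes nothing about the video's status.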
[8]
YouTube is finally addressing the riskiest side of deepfaked videos
A new tool will help journalists and civic leaders detect AI videos impersonating them. YouTube is stepping up its fight against one of the most troubling uses of AI: deepfake videos that impersonate real people. The company announced it is expanding its likeness detection technology to a pilot group of journalists, government officials, and political candidates. It's a move aimed at protecting public figures from AI-generated impersonation. The feature works somewhat like Content ID for faces. Participants submit a short video and a government ID so the system can learn their likeness. Once enrolled, YouTube scans uploads for AI-generated videos that mimic their appearance. If such content appears, the individual can review it and potentially request its removal. A new shield against AI impersonation YouTube first introduced likeness detection for creators in the YouTube Partner Program last year. The company now believes the next priority is protecting public figures whose identities are often used in misinformation campaigns, especially around elections and political discourse. Deepfakes have become increasingly realistic thanks to generative AI tools, making it easier to create convincing videos of people saying or doing things they never actually did. In politics and journalism, that risk can have serious consequences, from misinformation to reputational damage. However, the system isn't a simple "delete button." YouTube says removal requests will still be subject to its existing privacy and moderation guidelines, meaning some videos may remain online if they qualify as parody, satire, or legitimate commentary. Recommended Videos Interestingly, YouTube says the original rollout to creators didn't lead to many takedowns. Most detected content turned out to be relatively benign, though the company expects the situation to be different for public figures and political leaders who face a higher risk of targeted deepfake attacks. 
For now, the program will remain limited to influential individuals rather than the general public. But the expansion signals a broader shift across the tech industry: moving quickly to build guardrails before AI-generated media becomes impossible to distinguish from reality.
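The "Content ID for faces" analogy above suggests an embedding-comparison approach: a reference embedding learned from the enrollment selfie is compared against face embeddings extracted from new uploads. The sketch below is a minimal illustration of that general technique, assuming cosine similarity over generic feature vectors; the vectors, the threshold of 0.8, and all names are assumptions, not YouTube's actual model or parameters.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def find_likeness_matches(enrolled: dict[str, list[float]],
                          upload_embedding: list[float],
                          threshold: float = 0.8) -> list[str]:
    """Return enrolled participants whose reference embedding is close
    enough to a face embedding extracted from an uploaded video."""
    return [person for person, ref in enrolled.items()
            if cosine_similarity(ref, upload_embedding) >= threshold]
```

In a real system the embeddings would come from a trained face-recognition model and matches would feed the review dashboard rather than trigger any automatic action, consistent with the removal-request flow described above.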
[9]
YouTube opens deepfake detection tool to politicians and journalists
YouTube on Tuesday started offering a free tool to government officials, journalists and political candidates to help them identify and remove AI-generated videos that resemble their appearance. The company, which is owned by Alphabet, the parent company of Google, said in a blog post that the tool aims to serve individuals at the center of breaking news and civic discourse to "protect their identities." The expansion comes more than four months after the platform launched the likeness detection tool to YouTube Partner Program members. "YouTube has a long history of protecting free expression and content in the public interest -- including preserving content like parody and satire, even when used to critique world leaders or influential figures," YouTube said in its blog. The rapid development of AI has fueled the creation and widespread adoption of models that have evolved rapidly to create increasingly realistic video. And while many tech platforms including YouTube have generally embraced AI video, they have also faced challenges around deceptive content that spreads misinformation and can be used to perpetrate scams. AI videos of high-profile people -- sometimes called deepfakes -- have been particularly potent for scammers. As of Tuesday, YouTube, which first rolled out its likeness detection tool in October 2025, will reach out to politicians and journalists on the platform who can then decide if they want to enroll to use the tool, a company spokesperson said. Participants will need to provide a video of themselves along with government identification. YouTube will then notify participants on YouTube Studio of deepfake videos that show a likeness to their appearance. The participants can flag the content and request removal. Users who have not received the invitation to register for the tool can reach out to YouTube directly. 
The information provided by the participants will not be used to train AI models from Google, which owns YouTube, but will only be used to "power" the detection tool, the spokesperson told NBC News. Google's video generator trains its system using videos posted on YouTube, NBC News previously reported. "Our goal is to get this technology into the hands of the people who need it, and we have plans to significantly expand access over the coming year," the spokesperson said.
[10]
YouTube adds tool to help public figures report fake videos - The Economic Times
YouTube is adding a detection tool for government officials, political candidates and journalists to catch and report videos that use artificial intelligence to display their likeness without permission. The pilot programme, announced Tuesday, is arriving as social media companies and a patchwork of new laws start to address the problem of these so-called deepfakes, which are spreading as AI video technology rapidly improves. But the companies have largely relied on users to report fake material. To enroll in YouTube's new program, people need to provide a video selfie and government identification, the company said. The user can then see the videos that YouTube has detected in an online dashboard. From there, there is an option to flag them for review and removal. "As new technology emerges and we participate in the debate around what's the appropriate use and controls around likeness, we feel like it's our responsibility to invest in technology to help handle that," said Leslie Miller, YouTube's vice president of government affairs and public policy. The AI content is not blocked from being uploaded, but after it has been detected, participants in the program can request that it be taken down. Exceptions to removal under the pilot program include videos that are clearly made in "parody, satire and public interest," Miller said. The company said the identity information would be used only to verify the person's identity and not to train Google's AI models. Kaylyn Jackson Schiff, a professor at Purdue University who studies AI deepfakes, said those depicting high-profile people such as government officials and journalists had become more prevalent. 
Jackson Schiff, a co-director of the university's Governance and Responsible AI Lab, added that new detection tools were not perfect, noting that they still relied on users to report deepfakes. "The speed at which reports are dealt with is really important because we know that things can go viral very, very quickly," she said, "and things that are related to high-profile political events can spread super, super rapidly and affect many individuals' opinions."
[11]
YouTube Gives Political Figures and Journalists Access to AI Deepfake Detection Tool
In a significant move given obvious global events, and with the midterm elections approaching, YouTube is expanding its likeness detection tool to political and civic leaders, as well as journalists, in a bid to curb AI-generated content that may seek to misinform or mislead users of the platform. Politicos and journalists who participate will (after their identities have been verified by YouTube) be able to review videos that have been determined to feature their likeness, and request removal if they violate YouTube's privacy policies. Generative AI, of course, has made it trivially easy to fake the likeness or voice of someone else. YouTube first announced the tool in December 2024, initially rolling it out to A-list actors and athletes. Last year it expanded it to top creators, and now the company says some 4 million creators in the YouTube Partner Program have signed up to use it. "We've always known that there was a need for this tech to go beyond just creators, and so today, we're excited to announce that we're going to expand this pilot to journalists and government officials, and we're starting with a pilot group so we can learn how this group of users will use it to protect their identities online," says Amjad Hanif, VP of Creator Products for YouTube, in a briefing with members of the press ahead of the feature's launch. "And as we learn more from election cycles and how journalists use it, we'll expand it to an even broader group of folks." "This expansion is really about the integrity of the public conversation," adds Leslie Miller, VP of government affairs & public policy for YouTube. "We know that the risks of AI impersonation are particularly high for those in the civic space." The company declined to comment on which political leaders, civic leaders and journalists will be invited into the pilot program, though they said that they expect to ramp it up quickly. 
"We've been in regular conversations with folks, and we encourage policymakers and others to reach out to the out to us if they want to learn more and fold into the ways in which we are expanding this," Miller says. Importantly, YouTube is also making it clear that the principles of free expression will apply, lest politicians attempt to abuse the tool: "While we are providing this new shield, we're also being careful about how we use it," Miller says. "Detection does not mean automatic takedown. YouTube has a long history of protecting free expression, and that includes parody, satire and political critique. If a video of a world leader is clear parody, it's likely to stay up." Hanif notes that, among the celebrities and YouTube creators that already use the tool, the number of removal requests is surprisingly small. "They may see lots of matches, and I think for a lot of them, it's just been the awareness of what's being created," he says. "But the volume of actually removal requests is really, really low, because most of it turns out to be fairly benign or additive to their overall business." The video platform also released a video explaining the tool and what it will mean for those now eligible to use it.
YouTube is rolling out its likeness detection technology to government officials, political candidates, and journalists through a pilot program. The tool identifies unauthorized AI-generated content featuring their faces and allows removal requests. Initially launched to 4 million creators last year, the expansion aims to protect public conversation integrity while balancing free expression concerns around parody and satire.
YouTube announced Tuesday that it is expanding its likeness detection technology to a pilot group of government officials, political candidates, journalists, and other public figures [1]. The AI deepfake detection tool, which identifies unauthorized AI-generated content featuring a person's face, allows members of the pilot program to request removal of fake videos they believe violate YouTube policy. This expansion comes as AI-generated deepfakes grow increasingly sophisticated, raising concerns about their potential to spread misinformation, particularly around elections [5].
Source: Axios
The technology first launched last year to roughly 4 million YouTube creators in the YouTube Partner Program, following earlier tests with celebrities and athletes [1]. Now, YouTube is targeting those in the civic space who face heightened risks from AI impersonation. "This expansion is really about the integrity of the public conversation," said Leslie Miller, YouTube's Vice President of Government Affairs and Public Policy. "We know that the risks of AI impersonation are particularly high for those in the civic space" [1].

Similar to YouTube's existing Content ID system for copyright-protected material, the likeness detection feature scans for simulated faces created with AI tools [1]. To enroll in the program, eligible users must complete identity verification by uploading a video selfie and a government ID [3]. Google confirmed this data will only be used for verification purposes and not to train the company's AI models [4].

Once verified, users can access a dashboard in YouTube Studio where detected matches appear under the Content detection tab [3]. From there, they can review each video and submit removal requests for content they find manipulative. YouTube has not disclosed which specific politicians or officials are included in the initial pilot cohort, including whether U.S. President Donald Trump was invited, though the company plans a broad international rollout in the coming weeks and months [5].
Source: NBC
Not all detected deepfakes will be removed when flagged. YouTube emphasizes it will continue protecting free expression and content in the public interest, including parody and satire, even when used to critique world leaders or influential figures [5]. Miller explained that YouTube would evaluate each request under its existing privacy guidelines to determine whether the content qualifies as protected speech [1].

"YouTube has a long history of protecting free expression," the company stated, noting it will "carefully evaluate these exceptions when we receive requests for removal" [5]. This approach reflects the delicate balance between combating misinformation through digital replicas and preserving legitimate political critique. YouTube is also advocating for these protections at the federal level through its support for the NO FAKES Act, which would regulate the use of AI to create unauthorized recreations of an individual's voice and visual likeness [1].
Amjad Hanif, YouTube's Vice President of Creator Products, revealed that removal requests from creators have been "really, really low" because most detected content "turns out to be fairly benign or additive to their overall business" [1]. However, he acknowledged that the situation may differ significantly with deepfakes of government officials, politicians, or journalists, where the stakes for public discourse are considerably higher.

Kaylyn Jackson Schiff, a professor at Purdue University who studies AI deepfakes and co-directs the university's Governance and Responsible A.I. Lab, noted that deepfakes depicting high-profile people have become more prevalent. She emphasized that "the speed at which reports are dealt with is really important because we know that things can go viral very, very quickly, and things that are related to high-profile political events can spread super, super rapidly and affect many individuals' opinions" [4].

YouTube plans to eventually give people the ability to prevent uploads of violating content before they go live, or possibly to monetize those videos, similar to how its Content ID system works [1]. The company also intends to bring its deepfake detection technology to more areas, including recognizable spoken voices and other intellectual property like popular characters [1].

The expansion comes as YouTube has increasingly leaned into AI features, including bringing Google's video-generation model Veo 3 to Shorts last year, making it easier than ever for users to create AI-generated content [5]. This dual approach of enabling AI creation while building safeguards reflects the platform's attempt to navigate the complex landscape of AI-generated content. As the 2026 midterm elections approach, the effectiveness of these tools in protecting public figures while preserving legitimate discourse will face its first major test.
Source: THR