Curated by THEOUTPOST
On Fri, 6 Sept, 12:07 AM UTC
4 Sources
[1]
YouTube Makes AI Deepfake-Detection Tools for Voices, Faces
YouTube is working on multiple deepfake-detection tools to help creators find videos where AI-generated versions of their voices or faces are being used without consent, the Google-owned platform announced Thursday. Two separate tools are expected, but YouTube hasn't shared a release date for either one yet.

The first is a singing-voice detection tool that will be added to YouTube's existing Content ID system. Content ID automatically checks for instances of copyright infringement and can take down entire movies or copies of songs that belong to an established musician, for example. This first AI-detection feature will mainly be for musicians whose voices are spoofed by AI to produce new songs, though it's unclear whether the tool will work effectively for less-famous artists whose voices are not widely recognized. It will likely help big record labels keep AI impersonators off YouTube, however, and give the likes of Drake, Billie Eilish, or Taylor Swift the ability to find and take down channels posting AI songs that mimic them.

The second detection tool will help public figures like influencers, actors, athletes, or artists track down and flag AI-generated media of their faces on YouTube. But it's unclear whether YouTube will ever proactively deploy the tool to detect AI-generated images impersonating real people who aren't famous or who don't upload videos. Reached for comment, a YouTube rep didn't answer this directly but told PCMag that YouTube's recently updated privacy policy lets anyone request the removal of deepfake or AI-generated impersonation content, so it looks like deepfaked individuals will have to actively hunt down impersonations to get them removed.

YouTube also did not say whether it would consider using this tool to proactively remove the scourge of AI-generated scam videos. These videos impersonate famous figures like Elon Musk and have popped up across YouTube countless times in the past few years, often on hacked accounts. YouTube's Community Guidelines don't allow spam, scams, or deceptive content, but viewers must manually report the videos to get them taken down.

While Google and virtually every other major tech firm have evangelized AI's potential and tried to find ways to add it to every corner of their businesses, widespread cheap or free access to AI tools also means it has become much easier to make deepfake media of other people. Last year, one study found that the number of deepfake videos online had spiked 550% since 2021. It tracked over 95,000 deepfake videos on the internet, noting that 98% of them were porn and a staggering 99% of the impersonated individuals were women. The US Department of Homeland Security has also called deepfakes an "increasing threat," flagging misuse of the AI-powered "Wav2Lip" lip-syncing technology as cause for concern. Even a 15-second Instagram video can be enough material to create deepfake pornography of a person, the DHS notes.

Ultimately, YouTube says it wants AI to "enhance human creativity, not replace it," and is developing these new deepfake-detection tools to help public figures delete impersonations as they spread.
[2]
YouTube's new tools help it rat out AI singers that don't exist
Key Takeaways

- YouTube is developing AI detection tools for voices and faces to protect users from deepfakes.
- The tools aim to help users identify videos with simulated likenesses and voices, bolstering trust in content.
- Google must stay ahead of nefarious AI users to ensure YouTube's survival in the AI age.

In the age of AI, it is important to ensure the technology is used responsibly. Google may have suffered a few setbacks in Search, but the YouTube team clearly wants to stay on top of the tech and give its partners the tools to detect and manage AI content that may simulate their singing. While that tool won't be ready until next year, YouTube is also working on new technology to manage AI content that shows users' likenesses. YouTube is evidently worried about AI being used to spoof artists' voices and faces, a real concern, so it's encouraging to see the platform get out in front of the issue with plans to subdue nefarious actors.

Of the two new tools, the first will give users the ability to detect AI simulating a person's voice, and the second will identify AI-created faces. Ideally, these tools will allow users to strike down videos using simulated likenesses and voices. While Google's post is couched in language about protecting artists, actors, and athletes, one would hope the tech will reach beyond the glitterati and become available to the common user. Luckily, users can already report deepfakes of themselves.

Responsible AI generation and detection tools go hand in hand

Of course, YouTube already offers a few AI features for generating content on the platform, such as its experimental Dream Screen for Shorts, which can generate backgrounds. These lean into YouTube's goal of offering safe ways to use AI under its stated guidelines. But not everyone will stick to the included tools; some will turn to outside tech, which is why safeguards like the two proposed tools revealed today are needed.

In a world where it is growing harder and harder to separate reality from AI, YouTube's newly proposed AI detection tools are sure to be welcome additions to the Content ID system. After all, if nobody can trust the content on YouTube, the platform won't survive the AI age, so Google will have to stay on its toes, keeping one step ahead of those using AI to spoof and trick users. It won't be an easy job, but surely Google, of all companies, has the ability to see it through.
[3]
YouTube is making tools to detect face and voice deepfakes
It plans to launch a pilot program for the voice-detection tool by early next year.

YouTube is developing new tools to protect artists and creators from the unauthorized use of their likenesses. The company said on Thursday that new tech to detect AI-generated content using a person's face or singing voice is in the pipeline, with a pilot program for the voice tool starting early next year.

The upcoming face-detection tech will reportedly let people from various industries "detect and manage" content that uses an AI-generated depiction of their face. YouTube says it's building the tools to allow creators, actors, musicians, and athletes to find and choose what to do about videos that include a deepfake version of their likeness. The company hasn't yet specified a release date for the face-detection tools.

Meanwhile, the "synthetic-singing identification" tech will be part of Content ID, YouTube's automated IP-protection system. The company says the tool will let partners find and manage content that uses AI-generated versions of their singing voices.

"As AI evolves, we believe it should enhance human creativity, not replace it," Amjad Hanif, YouTube's vice president of creator products, wrote in a blog post. "We're committed to working with our partners to ensure future advancements amplify their voices, and we'll continue to develop guardrails to address concerns and achieve our common goals."
[4]
YouTube vows to protect creators from AI fakes
Incoming tools will let creators find and take down fakes of their own voices and faces, among other protections.

If you watch as much YouTube as I do, you've no doubt been inundated with AI in the last year or so. AI-generated thumbnails, AI-generated voiceovers, even full-on AI-generated video is now in the cards. Well, YouTube has taken notice and has officially promised to protect the creators on its platform with new tools.

YouTube's infamous Content ID system -- the thing that makes YouTubers freak out whenever someone starts humming a song because they don't want their video demonetized -- is being augmented with new AI-hunting tools. Content ID will be able to search for AI-generated singing voices that mimic existing artists. The tool is apparently being refined "with [YouTube's] partners," with a plan to implement it beginning in 2025.

What about the kind of AI generation that creates images or videos? YouTube says it's working on that too, "actively developing" tech that can detect and manage (read: take down) videos with AI-generated faces based on existing people. There's no timeframe for when this will reach the hands of users or partners.

YouTube also says it's working against systems that scrape its content to train AI models, which has been a hot-button topic lately. Nvidia has been known to collect publicly accessible videos from YouTube to train its models, which may violate YouTube's terms of service. Training larger video-generation models is a point of competition in the increasingly crowded AI industry, in which YouTube and Google are active participants. But individual users and artists are likely more worried about targeted scraping designed to steal and replicate their likenesses. Various tools that claim to train themselves on YouTube data are easy to find and set up, even on relatively low-power consumer hardware.

How exactly will YouTube prevent this? Is it even possible? So far, the company hasn't explicitly spelled it out: "We'll continue to employ measures to ensure that third parties respect [the terms of service], including ongoing investments in the systems that detect and prevent unauthorized access, up to and including blocking access from those who scrape."

Notably, YouTube's terms of service do not prevent YouTube itself or owner Google from processing videos on the platform for its own AI tools. And though newer restrictions require YouTube's video creators to disclose the use of AI for synthetic images, videos, and voices, Google has allowed OpenAI to scrape YouTube content without legal challenge... because it was afraid of establishing a standard for the AI tools it was developing itself, according to a New York Times report from April.
YouTube is creating new tools to identify AI-generated content, including deepfake voices and faces. This move aims to protect creators and maintain trust on the platform amid growing concerns about AI-generated misinformation.
In a significant move to address the growing concerns surrounding AI-generated content, YouTube has announced the development of new tools designed to detect deepfake voices and faces [1]. This initiative comes as part of the platform's ongoing efforts to protect creators and maintain trust among its vast user base.
YouTube's new detection tools are specifically tailored to identify AI-generated content, with a focus on synthetic voices and manipulated facial features [2]. The technology is being developed to recognize patterns and anomalies that are characteristic of AI-generated media, allowing for more accurate identification of potentially misleading content.
One of the primary motivations behind this development is to safeguard content creators from unauthorized use of their likeness or voice [3]. By implementing these detection tools, YouTube aims to prevent the creation and spread of deceptive content that could harm a creator's reputation or mislead viewers.
YouTube plans to integrate these detection tools into its platform, providing users with more information about the content they consume [4]. This move towards greater transparency is expected to empower viewers to make more informed decisions about the authenticity of the videos they watch.
While the development of these tools marks a significant step forward, experts acknowledge the ongoing challenges in the field of deepfake detection. As AI technology continues to advance, the sophistication of deepfakes is also likely to increase, necessitating continuous refinement of detection methods.
YouTube's initiative could set a precedent for other social media and content-sharing platforms. As concerns about AI-generated misinformation grow, there is an increasing demand for tech companies to take proactive measures in identifying and managing synthetic content.
As YouTube continues to develop and refine these tools, the platform is expected to work closely with creators, AI experts, and policymakers to ensure a balanced approach that protects against misuse while still allowing for legitimate creative applications of AI technology in content creation.