5 Sources
[1]
Google's Gemini app can check videos to see if they were made with Google AI
Google expanded Gemini's AI verification feature to videos made or edited with the company's own AI models. Users can now ask Gemini to determine whether an uploaded video is AI-generated by asking, "Was this generated using Google AI?" Gemini will scan the video's visuals and audio for Google's proprietary watermark, called SynthID. The response will be more than a yes or no, Google says: Gemini will point out the specific times when the watermark appears in the video or audio. The company rolled out this capability for images in November, also limited to images made or edited with Google AI. Some watermarks can be easily scrubbed, as OpenAI learned when it launched its Sora app full of exclusively AI-generated videos. Google calls its own watermark "imperceptible." Still, we don't yet know how easy it will be to remove, or how readily other platforms will detect the SynthID information and tag the content as AI-generated. Google's Nano Banana AI image generation model within the Gemini app embeds C2PA metadata, but the general lack of coordinated tagging of AI-generated material across social media platforms allows deepfakes to go undetected. Gemini can handle videos up to 100 MB and 90 seconds for verification. The feature is available in every language and location where the Gemini app is available.
[2]
Google Gemini is getting an AI video detector
Google is expanding the content transparency tools within the Gemini app: it is now possible to verify videos generated by the company's own artificial intelligence models. Keep in mind that this is exclusive to content made with Google AI, because the check relies on an identifier only Google's software can detect. It's becoming increasingly difficult to tell whether a video you receive is a genuine recording or something cooked up by a computer, but you can now upload it and find out.

The process is straightforward. You upload the video to Gemini and ask a simple question like, "Was this generated using Google AI?" Gemini then scans for something called SynthID, Google's proprietary digital watermarking technology, which embeds signals into AI-generated content that are imperceptible to humans but easily detectable by software. The tool is thorough, checking the entire file to see whether AI was used for the background music, the footage itself, or both.

The response you get isn't just a simple yes or no, either. Gemini uses its own reasoning to give you context, which I think is helpful. It even specifies which segments of the content contain AI-generated elements. For example, you might see a response that says, "SynthID detected within the audio between 10-20 secs. No SynthID detected in the visuals." That level of detail is a great feature for anyone trying to figure out what's real and what isn't in a piece of media.

There are some practical limits to know about if you plan on using this tool frequently. Right now, uploaded files can't be more than 100 MB in size, and they can't run longer than 90 seconds. This means you won't be checking full-length movies, but it's perfectly sufficient for verifying short clips and social media content.

This new video verification capability is an expansion of a tool Google launched earlier for images. The company has been pushing its SynthID technology for a while, looking to establish transparency in the content generated by its tools. Since its introduction in 2023, the tech giant claims it has watermarked more than 20 billion pieces of AI-generated content. That volume of marked content means that if an image came from a Google generator, Gemini can spot it almost immediately. This expansion brings the same level of scrutiny to video and audio.

We have to talk about the major caveat, though. This tool is strictly limited to content that was generated or edited using Google's own tools. If an image or a video was created with a non-Google AI model, Gemini won't be able to tell you anything about it, so the tool is really only useful for transparency within Google's own ecosystem. Google wants you to rely on Gemini for these checks instead of running images and videos through third-party checkers, but the lack of support for outside models means this shouldn't be considered a universal AI detection tool.

Source: Google
[3]
Gemini can now tell you whether that video is real or AI-generated
Following the November rollout of image verification within Gemini, Google is now doubling down on transparency. Back in November, Gemini gained the ability to detect whether an image is real or AI-generated; Google is now extending that same ability by giving Gemini the power to scan videos for the same invisible AI digital fingerprints.

Highlighted in a new blog post, the feature joins Google's content transparency tools, which mainly leverage SynthID watermarks to identify AI-generated content. Checking whether a video is AI-generated works the same way as checking an image: you simply upload it to the Gemini app and ask "Was this created by Google AI?" or "Is this AI-generated?"

The limitation: Google-only detection. That first prompt also gives away one of the detector's main limitations: it can only mark videos as AI-generated if the video was generated by one of Google's own tools. This limitation also extends to Gemini's ability to spot AI-generated images. "The video was not made with Google AI. However, the tool was unable to determine if it was generated with other AI tools," is what Gemini told me when I shared a random video of my room. Another limitation is that uploaded files can only be up to 100 MB in size and 90 seconds in duration.

Gemini will scan for the imperceptible SynthID watermark across both the audio and visual tracks and use its own reasoning to return a response that gives you context and specifies which segments contain elements generated using Google AI. Gemini might then say something along the lines of "SynthID detected in the visuals between 5-10 seconds. No SynthID detected within the audio," or a similar combination depending on the content in question.

Video origin verification is now available in the Gemini app in all languages and countries the app supports. It is available for me on both the web and mobile.
[4]
Google Gemini app adds video verification for AI-generated content
Investing.com -- Google has expanded its content transparency tools in the Gemini app to help users identify AI-generated videos. The new feature allows users to verify if a video was created or edited using Google AI. Users can now upload videos to the Gemini app and ask questions like "Was this generated using Google AI?" The app will scan for SynthID watermarks in both audio and visual elements of the video. These watermarks are designed to be imperceptible to viewers. After scanning, Gemini provides detailed information about which segments contain Google AI-generated content. For example, it might indicate "SynthID detected within the audio between 10-20 secs. No SynthID detected in the visuals." The video verification feature supports files up to 100 MB and 90 seconds in length. Both image and video verification capabilities are now available across all languages and countries where the Gemini app is supported. This update represents an expansion of Google's existing content transparency tools, which aim to help users distinguish between authentic and AI-generated media.
[5]
Google Gemini app can now identify AI-generated videos, here's how
Google has introduced a new feature in its Gemini app that makes it easier for people to tell whether a video was created or edited using Google's AI tools. This update is part of Google's push to improve transparency around AI-generated content and to help users better understand what they are seeing and hearing online.

With this new feature, anyone can upload a video to the Gemini app and ask a simple question such as, "Was this video made using Google AI?" Gemini will then analyse the video and provide an answer with helpful details. The feature is available in all countries and languages supported by the Gemini app.

The technology behind this feature is called SynthID, an invisible digital watermark that Google embeds into content created by its AI systems. Unlike visible labels or watermarks, SynthID cannot be seen or heard by people.

When you upload a video, Gemini scans it for the SynthID watermark across the entire file. It does not just give a yes-or-no answer; instead, it explains what it found and where. For example, Gemini might tell you that AI-generated content was detected in the audio between certain seconds, while the visual part of the video shows no signs of AI involvement. This gives users clearer context and helps them understand exactly how AI was used.

There are a few limits to keep in mind: uploaded videos must be no larger than 100 MB and no longer than 90 seconds.

This update builds on a feature Google introduced in November, when Gemini gained the ability to check whether an image was real or created by AI. By adding video support, Google is taking a big step forward, especially as AI-generated videos become more realistic and more common online.
Google expanded its Gemini app to verify AI-generated videos using SynthID watermark technology. Users can upload videos up to 100 MB and 90 seconds to check if they were made with Google AI. The tool scans both audio and visuals but only detects content from Google's own ecosystem, limiting its use as a universal deepfake detector.
Google has rolled out a new video verification feature within its Gemini app, allowing users to determine whether videos were created or edited using Google AI [1]. This expansion builds on the image verification capability Gemini introduced in November, bringing the same transparency approach to video content [2]. The process is straightforward: users upload a video and ask questions like "Was this generated using Google AI?" to receive a detailed analysis of the content's origins [3].
Source: The Verge
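If you wanted to script that upload-and-ask workflow, it might look something like the sketch below, which uses the google-genai Python SDK. Hedges apply: Google documents the feature for the Gemini app, so whether the public Gemini API returns SynthID verdicts is an assumption here, as are the model name and the placeholder file and key names.

```python
# A minimal sketch, not an official workflow: the coverage describes the
# Gemini app, so API-side SynthID answers, the model name, and the file
# name below are all assumptions.
import time

from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

# Upload the clip (the article's limits: at most 100 MB and 90 seconds).
video = client.files.upload(file="clip.mp4")
while video.state.name == "PROCESSING":  # wait for server-side processing
    time.sleep(2)
    video = client.files.get(name=video.name)

# Ask the same question the article suggests typing into the Gemini app.
response = client.models.generate_content(
    model="gemini-2.5-flash",  # assumed model name
    contents=[video, "Was this generated using Google AI?"],
)
print(response.text)
```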
The video verification feature relies on SynthID, Google's proprietary digital watermarking technology, which embeds imperceptible signals into AI-generated content [5]. When scanning uploaded videos, Gemini examines both the audio and visual components for these invisible markers. The tool provides more than a simple yes-or-no answer: it specifies which segments contain AI-generated elements, offering responses like "SynthID detected within the audio between 10-20 secs. No SynthID detected in the visuals" [4]. This level of detail helps users understand exactly how and where AI was used in creating or editing the content.

Google claims to have watermarked more than 20 billion pieces of AI-generated content since introducing the technology in 2023 [2]. The company describes the SynthID watermark as imperceptible to humans but easily detectable by its software, though questions remain about how easily it can be removed and whether other social media platforms will detect and tag this metadata appropriately [1].
Source: Android Police
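For readers who want to post-process such replies, here is a minimal parsing sketch. It assumes the reply wording matches the examples quoted in the coverage; the exact phrasing is not a documented, stable format, so the regex is illustrative only.

```python
# Illustrative parser for replies like the quoted examples; the phrasing
# is not a stable format, so treat this regex as a best-effort heuristic.
import re

PATTERN = re.compile(
    r"SynthID detected (?:within|in) the (audio|visuals?)"
    r" between (\d+)-(\d+) sec(?:ond)?s?",
    re.IGNORECASE,
)

def extract_segments(reply: str) -> list[dict]:
    """Pull (track, start, end) segments out of a verification reply."""
    return [
        {"track": track, "start": int(start), "end": int(end)}
        for track, start, end in PATTERN.findall(reply)
    ]

reply = ("SynthID detected within the audio between 10-20 secs. "
         "No SynthID detected in the visuals.")
print(extract_segments(reply))  # [{'track': 'audio', 'start': 10, 'end': 20}]
```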
The most significant constraint is that this video detection tool only works with content created using Google's own AI models. If a video was generated with non-Google AI tools, Gemini will return a message stating that it cannot determine the origin [3]. This severely restricts its effectiveness as a universal detector for deepfakes and other AI-generated content circulating online. Additionally, upload limits restrict files to 100 MB and 90 seconds in duration, making the tool suitable for short clips and social media content but not longer videos [2].
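A small pre-flight check against those published limits might look like the sketch below. Two assumptions are baked in: the 100 MB cap is interpreted as binary megabytes, and the duration is read with ffmpeg's ffprobe, which must be installed; the clip path is a placeholder.

```python
# Pre-flight check against the published upload limits. Assumptions:
# 100 MB is treated as binary megabytes, and ffprobe is available on
# PATH to read the duration; "clip.mp4" is a placeholder path.
import os
import subprocess

MAX_BYTES = 100 * 1024 * 1024  # 100 MB size cap from the article
MAX_SECONDS = 90               # 90-second duration cap

def within_limits(path: str) -> bool:
    """Return True if the clip fits Gemini's documented upload limits."""
    if os.path.getsize(path) > MAX_BYTES:
        return False
    probe = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True,
    )
    return float(probe.stdout.strip()) <= MAX_SECONDS

print(within_limits("clip.mp4"))
```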
While Google's Nano Banana AI image generation model within Gemini embeds C2PA metadata, the general lack of coordinated tagging across social media platforms allows deepfakes to go undetected [1]. This mirrors challenges faced by OpenAI when it launched its Sora app, where watermarks proved easy to scrub from AI-generated videos [1]. The video verification feature is now available in all languages and countries where the Gemini app is supported, representing Google's effort to establish content transparency within its own ecosystem [4].
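As a rough illustration of what checking a file for C2PA metadata could involve, the sketch below shells out to exiftool and scans its metadata dump for C2PA/JUMBF markers. This is a heuristic presence check, not a signature validation, and it assumes exiftool is installed; a real verifier would use a C2PA SDK to validate the manifest.

```python
# Heuristic presence check only, not real C2PA validation: it asks
# exiftool (assumed installed) for all metadata and looks for C2PA or
# JUMBF markers in the dump; "image.jpg" is a placeholder path.
import json
import subprocess

def has_c2pa_metadata(path: str) -> bool:
    """Return True if exiftool's metadata dump mentions C2PA/JUMBF."""
    probe = subprocess.run(
        ["exiftool", "-json", path],
        capture_output=True, text=True, check=True,
    )
    dump = json.dumps(json.loads(probe.stdout)).lower()
    return "c2pa" in dump or "jumbf" in dump

print(has_c2pa_metadata("image.jpg"))
```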
For users concerned about the authenticity of AI-generated content, this tool offers value primarily for verifying Google-created material. The short-term benefit lies in quickly identifying content from Google's AI tools, but the long-term implications depend on whether industry-wide standards emerge. Without broader adoption of detection technologies across platforms and AI providers, distinguishing authentic content from sophisticated AI-generated videos will remain challenging. Users should watch for potential collaboration between major AI companies on universal watermarking standards and whether social media platforms will implement consistent tagging systems.
Summarized by
Navi