Gemini now detects AI-generated videos, but only those made with Google AI

Reviewed by Nidhi Govil


Google expanded Gemini's verification capabilities to identify AI-generated videos created with its own tools. The feature scans for an invisible SynthID watermark embedded in both audio and visuals and reports specific timestamps for AI-generated content. While helpful for transparency, the tool only works with Google AI models and cannot detect videos created with other companies' tools, such as OpenAI's Sora or Adobe Firefly.

Gemini Expands Video Verification Capability

Google has rolled out a new AI video detection feature in the Gemini app that lets users verify whether videos were created or edited using the company's artificial intelligence models. The expansion builds on a similar capability for images that launched in November, extending content transparency tools to video and audio [1]. The process is straightforward: users upload a video to Gemini and ask a simple question such as "Was this generated using Google AI?" to receive a detailed analysis [2].
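The coverage describes this flow inside the consumer Gemini app; none of the sources confirm that the SynthID check is exposed through the public Gemini API. Purely as an illustration of the upload-and-ask pattern, here is a rough sketch using Google's google-genai Python SDK, where the model name and the availability of the watermark check over the API are assumptions:

```python
import time

from google import genai

client = genai.Client()  # assumes GEMINI_API_KEY is set in the environment

# Upload the clip (the reported limits: up to 100 MB and 90 seconds).
video = client.files.upload(file="clip.mp4")

# Video files are processed asynchronously; poll until the file is ready.
while video.state.name == "PROCESSING":
    time.sleep(5)
    video = client.files.get(name=video.name)

# Ask the same question the article describes users typing into the app.
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[video, "Was this video generated using Google AI?"],
)
print(response.text)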

Source: The Verge

The feature addresses a growing challenge as AI-generated videos become increasingly difficult to distinguish from genuine recordings. With AI slop flooding the internet, Google aims to give users tools to sort fact from fiction in an era when deepfakes and synthetic media proliferate across social platforms [4].

How SynthID Watermarking Works

At the core of this video verification capability lies SynthID, Google's proprietary digital watermarking technology. SynthID embeds imperceptible signals into AI-generated content that remain invisible to human viewers but can be readily identified by Gemini's scanning software [5].

Source: Gadgets 360

When analyzing uploaded content, Gemini examines both audio and visuals for these embedded markers.
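SynthID's embedding and detection algorithms are proprietary and unpublished, so the snippet below is emphatically not how SynthID works; it is only a toy least-significant-bit watermark illustrating the general concept the paragraph describes: a mark invisible to viewers but trivial for software to find. (Unlike SynthID, an LSB mark would not survive re-encoding or editing.)

```python
import numpy as np

MARK = "SYNTHID-TOY"  # hypothetical payload, not a real SynthID bitstream

def embed(pixels: np.ndarray, payload: str) -> np.ndarray:
    """Hide the payload's bits in the least significant bit of each pixel."""
    bits = np.unpackbits(np.frombuffer(payload.encode(), dtype=np.uint8))
    flat = pixels.flatten().copy()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(pixels.shape)

def detect(pixels: np.ndarray, payload: str) -> bool:
    """Check whether the payload's bit pattern is present in the LSBs."""
    bits = np.unpackbits(np.frombuffer(payload.encode(), dtype=np.uint8))
    return bool(np.array_equal(pixels.flatten()[: bits.size] & 1, bits))

frame = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # fake frame
marked = embed(frame, MARK)
print(np.max(np.abs(marked.astype(int) - frame.astype(int))))  # 1: invisible
print(detect(marked, MARK), detect(frame, MARK))  # True False (almost surely)
```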

The responses users receive go beyond simple yes-or-no answers. Gemini provides specific timestamps indicating exactly where AI-generated elements appear. For instance, a typical response might read: "SynthID detected within the video between 5-20 seconds and audio between 10-20 seconds" [1]. This level of detail helps users understand which portions of a video contain synthetic content, whether it's the background music, the footage itself, or both [5].
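Since the verdict arrives as free-form text rather than structured data, any tooling built on top of these answers would have to parse the sentence itself. A hypothetical parser for replies shaped like the single example quoted above (the exact response wording may vary):

```python
import re

# Matches spans like "video between 5-20 seconds" in Gemini's reply text.
SPAN = re.compile(r"(video|audio)\s+between\s+(\d+)\s*-\s*(\d+)\s+seconds")

def parse_spans(reply: str) -> dict[str, tuple[int, int]]:
    """Extract per-track (start, end) second ranges from a detection reply."""
    return {track: (int(a), int(b)) for track, a, b in SPAN.findall(reply)}

reply = ("SynthID detected within the video between 5-20 seconds "
         "and audio between 10-20 seconds")
print(parse_spans(reply))  # {'video': (5, 20), 'audio': (10, 20)}
```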

Since introducing watermarking technology in 2023, Google claims to have marked more than 20 billion pieces of AI-generated content [5]. The company's Nano Banana image generation model within Gemini also embeds C2PA metadata as part of broader transparency efforts [2].

Critical Limitations and File Restrictions

The most significant limitation of this tool is its exclusive focus on Google AI models. Gemini can only detect AI-generated videos created with Google's own tools; it cannot identify content produced with competing platforms such as OpenAI's Sora or Adobe Firefly [3]. In testing, when videos from Bing and Adobe Firefly were uploaded, Gemini correctly identified that they weren't made with Google tools but couldn't determine whether they were AI-generated by other means [1].

There are also practical constraints on file size and duration. Users can upload videos up to 100 MB and 90 seconds long [1][2]. While these restrictions rule out checking full-length movies, they're sufficient for verifying the short clips and social media content where misinformation typically spreads [5].
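Anyone scripting uploads against those limits would want a client-side check before sending a file. A minimal sketch, assuming the reported 100 MB / 90 second caps and using ffprobe from FFmpeg to read a clip's duration:

```python
import json
import subprocess
from pathlib import Path

MAX_BYTES = 100 * 1024 * 1024  # 100 MB limit reported for Gemini uploads
MAX_SECONDS = 90               # 90-second limit reported for Gemini uploads

def within_limits(path: str) -> bool:
    """Return True if the clip fits Gemini's reported size/duration limits."""
    if Path(path).stat().st_size > MAX_BYTES:
        return False
    # ffprobe prints container metadata as JSON, including total duration.
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return float(json.loads(out)["format"]["duration"]) <= MAX_SECONDS

print(within_limits("clip.mp4"))
```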

Questions remain about how easily the SynthID watermark can be removed. OpenAI learned hard lessons when watermarks were quickly scrubbed from its Sora app videos. Google describes its watermark as "imperceptible," but its resilience against removal attempts remains untested at scale [2].

What This Means for Content Detectors and Deepfakes

The lack of coordinated tagging of AI-generated material across social media platforms allows deepfakes to go undetected even when watermarking technology exists [2]. While Google's effort represents progress in content transparency tools, its ecosystem-specific nature means it shouldn't be considered a universal solution to deepfakes.

Interestingly, even when Gemini cannot detect a SynthID watermark in non-Google videos, the AI can still provide analysis. When asked to evaluate suspicious videos for common AI-generation artifacts, Gemini can identify telltale signs and offer an educated assessment of whether content is likely synthetic [1]. This secondary capability adds value beyond the primary watermark detection function.

The feature is now available globally, in all languages and countries where the Gemini app operates [1][2]. Users should watch whether other AI companies adopt compatible watermarking standards and how social platforms integrate these content detectors into their moderation systems. The effectiveness of image and video verification tools ultimately depends on industry-wide adoption rather than isolated implementations.
