6 Sources
[1]
Google Says Gemini Will Now Be Able to Identify AI Images, but There's a Big Catch
Google's betting invisible AI watermarks will be just as good as visible ones. The company is continuing its week of Gemini 3 news with an announcement that it's bringing its AI content detector, SynthID Detector, out of a private beta for everyone to use. This news comes in tandem with the release of Nano Banana Pro, Google's ultra-popular AI image editor. The new Pro model comes with a lot of upgrades, including the ability to create legible text and upscale your images to 4K. That's great for creators who use AI, but it also means it will be harder than ever to identify AI-generated content.

We've had deepfakes since long before generative AI. But AI tools, like the ones Google and OpenAI develop, let anyone create convincing fake content more quickly and cheaply than ever before. That's led to a massive influx of AI content online, everything from low-quality AI slop to realistic-looking deepfakes. OpenAI's viral AI video app, Sora, was another major tool that showed how easily these AI tools can be abused. It's not a new problem, but AI has led to a dramatic escalation of the deepfake crisis.

That's why SynthID was created. Google introduced SynthID in 2023, and every AI model it has released since then has attached these invisible watermarks to AI content. Google adds a small, visible, sparkle-shaped watermark, too, but neither really helps when you're quickly scrolling your social media feed rather than rigorously analyzing each post. To help keep the deepfake crisis (which the company helped create) from getting worse, Google is introducing a new tool for identifying AI content.

SynthID Detector does exactly what its name implies: it analyzes images and can pick up on the invisible SynthID watermark. So in theory, you can upload an image to Gemini and ask the chatbot whether it was created with AI. But there's a huge catch -- Gemini can only confirm whether an image was made with Google's AI, not any other company's. Because there are so many AI image and video models available, Gemini likely can't tell you whether an image was generated with a non-Google program. Right now you can only ask about images, but Google said in a blog post that it plans to expand the capability to video and audio.

No matter how limited, tools like these are still a step in the right direction. There are a number of AI detection tools, but none of them are perfect. Generative media models are improving quickly, sometimes too quickly for detection tools to keep up. That's why it's important to label any AI content you share online and to remain skeptical of any suspicious images or videos you see in your feeds.
[2]
Google Gemini is getting better at identifying AI fakes
Google is making it easier for Gemini users to detect at least some AI-generated content. From today, you'll be able to use the Gemini app to determine if an image was either created or edited by a Google AI tool, simply by asking Gemini "Is this AI-generated?" While the initial launch is limited to images, Google says verification of video and audio will come "soon," and it also intends to expand the functionality beyond the Gemini app, including into Search.

The more important expansion will come further down the line, when Google extends verification to support industry-wide C2PA content credentials. The initial image verification is based only on SynthID, Google's own invisible AI watermarking, but an expansion to C2PA would make it possible to detect the source of content generated by a wider variety of AI tools and creative software, including OpenAI's Sora. Google also announced that images generated by its Nano Banana Pro model, also revealed today, will have C2PA metadata embedded. It's the second bit of good news for C2PA this week, after TikTok confirmed it would use C2PA metadata as part of its own invisible watermarking for AI-generated content.

Manual content verification in Gemini is a useful step, but C2PA credentials and other watermarks like SynthID won't be truly useful until social media platforms get better at flagging AI-generated content automatically, rather than putting the onus on users to confirm it for themselves.
[3]
You Can Now Ask Google Gemini Whether an Image is AI-Generated or Not
Google has a new feature that allows users to find out whether an image is AI-generated -- a much-needed tool in a world of AI slop. The new feature is available via Google Gemini 3, the latest installment of the company's LLM and multimodal AI. To ascertain whether an image is AI-generated, simply open the Gemini app, upload the image, and ask something like: "Is this image AI-generated?"

Gemini will give an answer, but it is predicated on whether that image contains SynthID, Google's digital watermarking technology that "embeds imperceptible signals into AI-generated content." Images that have been generated on one of Google's models, like Nano Banana, will be flagged by Gemini as AI. "We introduced SynthID in 2023," Google says in a blog post. "Since then, over 20 billion AI-generated pieces of content have been watermarked using SynthID, and we have been testing our SynthID Detector, a verification portal, with journalists and media professionals."

While SynthID is Google's technology, the company says that it will "continue to invest in more ways to empower you to determine the origin and history of content online." It plans to incorporate the Coalition for Content Provenance and Authenticity (C2PA) standard so users will be able to check the provenance of an image created by AI models outside of Google's ecosystem. "As part of this, rolling out this week, images generated by Nano Banana Pro (Gemini 3 Pro Image) in the Gemini app, Vertex AI, and Google Ads will have C2PA metadata embedded, providing further transparency into how these images were created," Google adds. "We look forward to expanding this capability to more products and surfaces in the coming months."

I put Gemini's latest model to the test to see whether it could accurately spot an AI-generated image, and so far, so good; once C2PA is added, the system will feel much more complete. The best part is that it offers a relatively simple way to check whether an image was generated by AI. Photographers should consider adding a C2PA signature to their own photos, which can be done easily in Lightroom or Photoshop.
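A quick way to sanity-check whether a file actually carries embedded Content Credentials, before or after signing, is to look for the C2PA container in the bytes themselves. The sketch below is a heuristic presence check only, assuming a JPEG input: C2PA manifests are carried in JPEG APP11 segments as JUMBF boxes, so it scans those segments for the telltale box markers. It does not parse or cryptographically verify the manifest; for real validation, use a full C2PA implementation such as the CAI's open-source c2patool.

```python
# Heuristic presence check for C2PA Content Credentials in a JPEG.
# C2PA manifests are embedded as JUMBF boxes inside APP11 (0xFFEB) segments,
# so scanning those segments for "jumb"/"c2pa" markers is a quick,
# non-verifying way to see whether credentials appear to be present.
import struct

def has_c2pa_metadata(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if not data.startswith(b"\xff\xd8"):  # not a JPEG (missing SOI marker)
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break  # lost sync with the marker stream; give up
        marker = data[i + 1]
        if marker == 0xDA:  # SOS: compressed image data follows, stop scanning
            break
        (length,) = struct.unpack(">H", data[i + 2:i + 4])  # length includes itself
        segment = data[i + 4:i + 2 + length]
        if marker == 0xEB and (b"jumb" in segment or b"c2pa" in segment):
            return True  # APP11 segment carrying a JUMBF/C2PA box
        i += 2 + length
    return False

if __name__ == "__main__":
    print(has_c2pa_metadata("photo.jpg"))  # hypothetical file name
```

Because this is a presence check rather than verification, a True result only means a C2PA container exists in the file; it says nothing about whether the signature is intact or who issued it.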
[4]
How we're bringing AI image verification to the Gemini app
At Google, we've long invested in ways to provide you with helpful context about information you see online. Now, as generative media becomes increasingly prevalent and high-fidelity, we are deploying tools to help you more easily determine whether the content you're interacting with was created or edited using AI.

Starting today, we're making it easier for everyone to verify if an image was generated with or edited by Google AI right in the Gemini app, using SynthID, our digital watermarking technology that embeds imperceptible signals into AI-generated content. We introduced SynthID in 2023. Since then, over 20 billion AI-generated pieces of content have been watermarked using SynthID, and we have been testing our SynthID Detector, a verification portal, with journalists and media professionals.

If you see an image and want to confirm it has been made by Google AI, upload it to the Gemini app and ask a question such as: "Was this created with Google AI?" or "Is this AI-generated?" Gemini will check for the SynthID watermark and use its own reasoning to return a response that gives you more context about the content you encounter online.
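For developers who want to script the same question rather than tap through the app, something similar can be approximated against the Gemini API. The sketch below uses the google-genai Python SDK to send an image alongside the suggested prompt. Note the assumptions: the post describes the consumer Gemini app, and whether an API-served model performs the same SynthID lookup is not confirmed here; the model name, API key, and file name are placeholders.

```python
# A minimal sketch: ask a Gemini model about an image via the google-genai SDK.
# Whether API-served models run the same SynthID watermark check as the
# consumer Gemini app is an assumption; model and file names are placeholders.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # key from Google AI Studio

with open("suspect_image.jpg", "rb") as f:
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-2.5-flash",  # placeholder model name
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
        "Is this AI-generated? Was this created with Google AI?",
    ],
)
print(response.text)
```

Treat the model's answer the way the post suggests: as context about the content, not a definitive verdict, especially for images produced outside Google's ecosystem.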
[5]
Google Now Lets You Ask Gemini if an Image Was Created Using AI
Google on Thursday announced the rollout of a new feature in the Gemini app, which allows users to verify if an image was generated or edited using the company's artificial intelligence (AI) tools. The Mountain View-based tech giant said this move is aimed at increasing content transparency using SynthID, its digital watermarking technology. This capability will also be expanded to support additional formats beyond images, such as video and audio clips.

SynthID

According to Google, SynthID is embedded in all images generated by its tools. It now provides a verification feature in the Gemini app to check whether an image was generated by Google AI. The company said that visible watermarks are used for Free and Pro users, while Ultra subscribers and enterprise tools will have the option of removing visible marks for professional work. Google is also testing a verification portal called SynthID Detector with journalists and media professionals.

How to Verify if an Image Was Generated or Edited Using Gemini

As per Google, SynthID verification is valid for images generated using its proprietary AI tools and won't work with non-Google AI products. The company, notably, recently expanded this feature to academia, researchers, and several media publishers. Google said that it will expand SynthID verification to support additional formats in the future. Currently limited to images, this capability will be extended to video and audio clips, too. Further, the company also plans to add SynthID verification to more surfaces, such as Search. SynthID, notably, was first unveiled by Google DeepMind in August 2023 as a beta project aimed at correctly labelling AI-generated content.
[6]
Google adds SynthID-based AI image verification to the Gemini app
Google is expanding its tools for identifying AI-generated content by bringing SynthID-based image verification directly to the Gemini app. The update is designed to give users clear context about whether an image was created or edited using Google's AI models. Users can upload an image into the Gemini app and ask questions such as "Was this created with Google AI?" or "Is this AI-generated?" Gemini will then analyze the image, check for the SynthID watermark, and provide contextual information based on the findings.

SynthID, introduced in 2023, embeds imperceptible signals into AI-generated or AI-edited content. According to Pushmeet Kohli, VP of Science and Strategic Initiatives at Google DeepMind, more than 20 billion AI-generated items have been watermarked with SynthID to date. Google has also been testing the SynthID Detector portal with journalists and media professionals. Google states that this rollout builds on ongoing work to provide more context for images in Search and on research initiatives such as Backstory from Google DeepMind.

The company plans to extend SynthID verification to additional formats, including video and audio, and bring it to more Google surfaces like Search. Google is also collaborating with industry partners through the Coalition for Content Provenance and Authenticity (C2PA). Beginning this week, images generated by Nano Banana Pro (Gemini 3 Pro Image) in the Gemini app, Vertex AI, and Google Ads will include C2PA metadata. Google says this capability will expand to more products and surfaces in the coming months. Over time, the company will extend verification to support C2PA content credentials, allowing users to check the origins of content produced by models outside of Google's ecosystem.
Google launches a new feature allowing Gemini users to verify if images were created by Google AI tools through SynthID watermarking technology. The feature currently works only with Google-generated content, but Google plans to expand it to industry-wide standards.
Google has launched a new feature in its Gemini app that allows users to verify whether images were created or edited using the company's artificial intelligence tools. [1]
The feature utilizes SynthID, Google's digital watermarking technology that embeds imperceptible signals into AI-generated content, marking a significant step in combating the growing challenge of identifying synthetic media online.

The verification process is straightforward for users. Simply upload an image to the Gemini app and ask questions such as "Was this created with Google AI?" or "Is this AI-generated?" [4] Gemini will then check for the SynthID watermark and use its reasoning capabilities to provide context about the content's origin.
Since introducing SynthID in 2023, Google has watermarked over 20 billion AI-generated pieces of content. [4] The company has been testing its SynthID Detector verification portal with journalists and media professionals before this public rollout.

Despite the promising technology, there's a major limitation: Gemini can only confirm if an image was made with Google's AI tools, not those from other companies. [1]
This means the system cannot detect content generated by popular AI tools from OpenAI, Midjourney, or other competitors, significantly limiting its effectiveness in the broader AI content landscape.

The timing of this release coincides with Google's announcement of its new Nano Banana Pro model, an AI image editor with enhanced capabilities including legible text creation and 4K upscaling. [1] While these improvements benefit creators, they also make it increasingly difficult to identify AI-generated content through visual inspection alone.
Google has outlined ambitious plans to expand the verification capabilities beyond their current limitations. The company intends to extend the feature to support video and audio content verification in the near future. [2] More importantly, Google plans to incorporate the Coalition for Content Provenance and Authenticity (C2PA) standard, which would enable detection of content created by a wider variety of AI tools and creative software.

Images generated by Google's Nano Banana Pro model will now include C2PA metadata, providing additional transparency about their creation process. [2]
This development follows TikTok's recent announcement that it will use C2PA metadata as part of its own invisible watermarking system for AI-generated content.

The launch addresses growing concerns about the proliferation of AI-generated content online, from low-quality "AI slop" to sophisticated deepfakes. [1]
While deepfakes existed before generative AI, current tools enable anyone to create convincing fake content more quickly and affordably than ever before.

Google's approach includes both visible and invisible watermarking systems. Free and Pro users see visible watermarks on generated images, while Ultra subscribers and enterprise users can opt to remove the visible marks for professional work. [5]
The invisible SynthID watermarks remain embedded regardless of the visible watermark settings.

The effectiveness of such detection tools remains limited until social media platforms implement automatic flagging systems rather than relying on users to manually verify content. [2] Currently, the responsibility falls on individual users to remain vigilant and verify suspicious content they encounter online.