2 Sources
[1]
Google's SynthID is the latest tool for catching AI-made content. What is AI 'watermarking' and does it work?
Last month, Google announced SynthID Detector, a new tool to detect AI-generated content. Google claims it can identify AI-generated content in text, image, video or audio. But there are some caveats.

One of them is that the tool is currently only available to "early testers" through a waitlist.

The main catch is that SynthID primarily works for content that's been generated using a Google AI service - such as Gemini for text, Veo for video, Imagen for images, or Lyria for audio. If you try to use Google's AI detector tool to see if something you've generated using ChatGPT is flagged, it won't work.

That's because, strictly speaking, the tool can't detect the presence of AI-generated content or distinguish it from other kinds of content. Instead, it detects the presence of a "watermark" that Google's AI products (and a couple of others) embed in their output through the use of SynthID.

A watermark is a special machine-readable element embedded in an image, video, sound or text. Digital watermarks have been used to ensure that information about the origins or authorship of content travels with it. They have been used to assert authorship in creative works and address misinformation challenges in the media.

SynthID embeds watermarks in the output from AI models. The watermarks are not visible to readers or audiences, but can be used by other tools to identify content that was made or edited using an AI model with SynthID on board. SynthID is among the latest of many such efforts. But how effective are they?

There's no unified AI detection system

Several AI companies, including Meta, have developed their own watermarking tools and detectors, similar to SynthID. But these are "model specific" solutions, not universal ones. This means users have to juggle multiple tools to verify content. Despite researchers calling for a unified system, and major players like Google seeking to have their tool adopted by others, the landscape remains fragmented.

A parallel effort focuses on metadata - encoded information about the origin, authorship and edit history of media. For example, the Content Credentials inspect tool allows users to verify media by checking the edit history attached to the content. However, metadata can be easily stripped when content is uploaded to social media or converted into a different file format. This is particularly problematic if someone has deliberately tried to obscure the origin and authorship of a piece of content.

There are detectors that rely on forensic cues, such as visual inconsistencies or lighting anomalies. While some of these tools are automated, many depend on human judgement and common sense methods, like counting the number of fingers in AI-generated images. These methods may become redundant as AI model performance improves.

How effective are AI detection tools?

Overall, AI detection tools can vary dramatically in their effectiveness. Some work better when the content is entirely AI-generated, such as when an entire essay has been generated from scratch by a chatbot.

The situation becomes murkier when AI is used to edit or transform human-created content. In such cases, AI detectors can get it badly wrong. They can fail to detect AI or flag human-created content as AI-generated.

AI detection tools don't often explain how they arrived at their decision, which adds to the confusion. When used for plagiarism detection in university assessment, they are considered an "ethical minefield" and are known to discriminate against non-native English speakers.
Read more: Can you spot the AI impostors? We found AI faces can look more real than actual humans

Where AI detection tools can help

A wide variety of use cases exist for AI detection tools.

Take insurance claims, for example. Knowing whether the image a client shares depicts what it claims to depict can help insurers know how to respond. Journalists and fact checkers might draw on AI detectors, in addition to their other approaches, when trying to decide if potentially newsworthy information ought to be shared further.

Employers and job applicants alike increasingly need to assess whether the person on the other side of the recruiting process is genuine or an AI fake. Users of dating apps need to know whether the profile of the person they've met online represents a real romantic prospect, or an AI avatar, perhaps fronting a romance scam. If you're an emergency responder deciding whether to send help to a call, confidently knowing whether the caller is human or AI can save resources and lives.

Where to from here?

As these examples show, the challenges of authenticity are now happening in real time, and static tools like watermarking are unlikely to be enough. AI detectors that work on audio and video in real time are a pressing area of development.

Whatever the scenario, it is unlikely that judgements about authenticity can ever be fully delegated to a single tool. Understanding the way such tools work, including their limitations, is an important first step. Triangulating these with other information and your own contextual knowledge will remain essential.
[2]
Google's SynthID is the latest tool for catching AI-made content. What is AI 'watermarking,' and does it work?
This article is republished from The Conversation under a Creative Commons license.
Google introduces SynthID Detector, a tool designed to identify AI-generated content across various media formats, but with limitations in its application and effectiveness.
Google has recently announced SynthID Detector, a new tool designed to identify AI-generated content across text, image, video, and audio formats. This development comes as part of the ongoing efforts to address the challenges posed by the proliferation of AI-generated media [1][2].
SynthID operates on a principle of digital watermarking, embedding a machine-readable element into AI-generated content. This watermark is invisible to human readers or viewers but can be detected by specialized tools. The primary function of SynthID is to identify content created or edited using Google's AI services, such as Gemini for text, Veo for video, Imagen for images, and Lyria for audio [1][2].
Source: Tech Xplore
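To give a concrete feel for the idea, the short Python sketch below shows how a statistical text watermark of this general kind can be embedded and then detected. It is a deliberately simplified, hypothetical illustration, not Google's SynthID algorithm: the secret key, the hash-based "green list", the weighting factor and the scoring rule are all assumptions made for the example. The point is simply that a generator can secretly bias its word choices, and a detector that knows the secret can later test whether that bias is present.

import hashlib
import math

SECRET_KEY = "demo-key"  # illustrative; assumed to be shared by generator and detector

def is_green(prev_token: str, token: str) -> bool:
    # Deterministically place roughly half of all candidate next words on a
    # "green list" that depends on the previous word and the secret key.
    digest = hashlib.sha256(f"{SECRET_KEY}|{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def watermarked_choice(prev_token: str, candidates: dict) -> str:
    # At generation time, a watermarking sampler quietly upweights green words
    # before picking the next one (a greedy pick here, for simplicity).
    return max(candidates, key=lambda t: candidates[t] * (2.0 if is_green(prev_token, t) else 1.0))

def detection_score(tokens: list) -> float:
    # The detector needs only the key, not the model: it counts how many words
    # landed on the green list and reports how far that count sits above the
    # roughly 50% expected from unwatermarked text, as a simple z-score.
    n = len(tokens) - 1
    if n < 1:
        return 0.0
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)

# Ordinary human-written text should score near zero; heavily watermarked text
# scores much higher because most of its words come from the green list.
print(detection_score("the cat sat on the mat and looked out of the window".split()))

Real systems work on model tokens and probability distributions rather than whole words, and must keep the bias subtle enough not to degrade the output, which is part of why detection becomes less reliable once content has been edited or paraphrased.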
While SynthID represents a step forward in AI content detection, it comes with significant limitations:
Limited Scope: The tool primarily works with content generated by Google's AI services and a few others, making it ineffective for detecting content from platforms like ChatGPT [1][2].
Fragmented Landscape: The lack of a unified detection system means users must navigate multiple tools for comprehensive content verification [1][2].
Metadata Vulnerabilities: While some efforts focus on metadata for content verification, this information can be easily stripped when content is shared on social media or converted to different file formats [1][2].
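The fragility of metadata is easy to demonstrate. The short Python sketch below uses the Pillow imaging library, with a made-up "Author" field standing in for richer provenance records such as Content Credentials, to show how an ordinary format conversion of the kind an upload pipeline performs silently discards information attached to an image.

from PIL import Image
from PIL.PngImagePlugin import PngInfo

# 1. Create an image and attach a provenance note as PNG text metadata.
img = Image.new("RGB", (64, 64), "white")
meta = PngInfo()
meta.add_text("Author", "Example AI model v1")  # hypothetical provenance field
img.save("original.png", pnginfo=meta)

# 2. The note survives as long as the file is passed around unchanged.
print(Image.open("original.png").info.get("Author"))  # "Example AI model v1"

# 3. A routine re-encode to JPEG, the kind of conversion many platforms apply
#    on upload, drops the metadata without any warning.
Image.open("original.png").convert("RGB").save("reuploaded.jpg", quality=85)
print(Image.open("reuploaded.jpg").info.get("Author"))  # None

This is what makes stripped metadata such a problem: once the record is gone, there is nothing left for a verification tool such as the Content Credentials inspector to check.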
The effectiveness of AI detection tools, including SynthID, varies considerably:
Fully AI-generated content: Detection tends to work better when content, such as an essay, has been generated entirely from scratch by a chatbot [1][2].
Edited or hybrid content: When AI is used to edit or transform human-created work, detectors can get it badly wrong, failing to spot AI involvement or wrongly flagging human work as AI-generated [1][2].
Transparency: These tools often do not explain how they reached a decision, and when used for plagiarism detection in university assessment they have been described as an "ethical minefield" that discriminates against non-native English speakers [1][2].
Despite these challenges, AI detection tools have potential applications in various fields:
Insurance: Checking whether an image submitted with a claim depicts what it purports to depict [1][2].
Journalism and fact-checking: Helping decide whether potentially newsworthy material should be shared further [1][2].
Recruitment: Assessing whether the person on the other side of a hiring process is genuine or an AI fake [1][2].
Dating and emergency services: Distinguishing real people from AI avatars fronting romance scams, or AI-generated calls that waste emergency resources [1][2].
As AI-generated content becomes more sophisticated, the need for real-time detection tools, especially for audio and video, is becoming increasingly critical. However, experts caution against relying solely on any single tool for authenticity judgments [1][2].
Source: The Conversation
The development of tools like SynthID raises important questions about digital authenticity and the evolving landscape of content creation. As AI continues to advance, the challenges of distinguishing between human-created and AI-generated content are likely to become more complex, necessitating a multi-faceted approach to content verification and authentication [1][2].
In conclusion, while SynthID and similar tools represent progress in the field of AI content detection, they also highlight the need for continued research, development, and collaboration across the tech industry to address the growing challenges of digital authenticity in an AI-driven world.