Curated by THEOUTPOST
On Fri, 25 Oct, 12:03 AM UTC
19 Sources
[1]
Google Photos will label AI-generated photos
Google is introducing AI info labels for AI-edited images in Google Photos. Starting next week, Google Photos will clearly indicate when an image has been edited using generative AI tools like Magic Editor, Magic Eraser, and Zoom Enhance. This information will be visible in the image details section of the Google Photos app, providing users with clearer insight into how their photos have been edited. The new labeling feature will display an "AI info" section in the image details view, both in the app and on the web. This will sit alongside existing information like file name, location, and backup status. Until now, the metadata indicating AI editing was largely invisible to users, but Google is making it accessible to provide more clarity. The decision to make this information available is part of a broader effort to ensure that users understand when and how AI has been used in their photos. The metadata will specify which tools were used to edit the image. For example, if Magic Eraser was used to remove an object from the background, or if Magic Editor was used to enhance certain elements, these details will be included in the "AI info" section. This helps users understand the extent of AI involvement in modifying the photo, which can be particularly relevant when sharing images with others or for professional purposes. In addition to generative AI edits, Google Photos will also label images that include elements from multiple photos, such as those made using the Pixel's Best Take or Add Me features. These features allow users to create composite images by selecting the best expressions or poses from several shots. Best Take, for example, lets users choose the most flattering expressions from a series of group photos, while Add Me allows users to insert themselves into a photo where they were initially absent. Google acknowledges that the system isn't foolproof. Users with technical knowledge can still remove or alter this metadata if they choose.
Metadata can be edited or stripped from an image using various software tools, meaning that those who intend to conceal AI edits may still find ways to do so. John Fisher, engineering director at Google Photos, mentioned in a blog post that the company is still working on enhancing these transparency features and plans to gather feedback to improve them over time. Google recognizes that transparency around AI-generated content is an evolving issue, and they are committed to exploring additional measures to help users identify when AI has been used in photo editing. This could include more robust metadata standards, watermarks, or other forms of labeling that are harder to remove. The growing prevalence of AI-edited photos has led to broader discussions about the role of technology in shaping what we see. Other companies have approached this issue in different ways. For instance, Apple has taken a more cautious stance regarding generative AI in photo editing. With its upcoming iOS 18.2 release, Apple plans to avoid adding realistic AI-generated elements to images, aiming to prevent potential confusion about the accuracy of what people see. Apple's senior vice president Craig Federighi has expressed concern about AI-generated content blurring the line between what is real and what is artificially created.
[2]
Google Photos will soon label images edited with AI - here's what it'll look like
The company says its AI transparency work is not done and it's still looking for ways to disclose more information about AI edits. Google Photos has added several artificial intelligence (AI)-powered editing features over the last few months, and now it's making sure people use that power responsibly. In a blog post this week, Google announced it will add a note to photos people edited with AI tools such as Zoom Enhance, Magic Eraser, and Magic Editor. "As we bring these tools to more people," Google wrote, "we recognize the importance of doing so responsibly with our AI Principles as guidance." A photo's metadata already contains information that lets you know if someone used Google's AI tools to edit it. Now, a more visible and easier-to-find "Edited with Google AI" note will appear alongside the photo's file name, backup status, and camera info. However, there won't be a watermark or anything on the photo, so if someone shares it on social media, via text message, or even in person, the person seeing it will not know that the creator used AI. Even within Google Photos, finding this label still takes a little effort -- something most people don't usually do. Of course, if you're looking to get around this for nefarious purposes, stripping metadata is simple. It is possible, though, that social media platforms could use this metadata to provide their own labels. Facebook and Instagram are already doing this to some degree, and so is Google Search. In addition to this new label, Google says, it's using International Press Telecommunications Council (IPTC) metadata to indicate when someone created an image with non-AI editing tools like Best Take or Add Me.
John Fisher, Engineering Director for Google Photos and Google One, added that "the work is not done" around AI transparency. He says Google will continue gathering feedback and evaluating even more solutions to clearly disclose AI edits. This is far from a foolproof method, and it seems like it's more for the person who took the photo, but it's at least a start toward Google clearing up lines that AI has quickly blurred.
[3]
Google Photos increases transparency with new AI label
Users can find if an image has been altered with AI in the photo details, under file name and size. Starting next week, Google Photos will show users if an image has been edited with Google's AI tools such as Magic Editor or Magic Eraser on the Photos app. The app already noted in an image's metadata if it was edited using AI tools, which meant the information was rarely if ever seen by users; now, the software giant behind the AI chatbot Gemini will make it available in the Photos app's Details section, under other basic information such as file name and size. "As we bring these tools to more people, we recognise the importance of doing so responsibly," the software giant said in an announcement yesterday (24 October). "This work is not done, and we'll continue gathering feedback and evaluating additional solutions to add more transparency around AI edits." According to the International Press Telecommunications Council (IPTC) standards, which Google follows, the "role" of a generative AI tool should be noted in the photo contributor field. "Contributors are people and things that contributed to the creation of the image, so this includes what an AI generator does," the IPTC explains. In addition to adding this information, Google will also indicate when an image is composed of elements from different photos using non-generative features. "For example, 'Best Take' on Pixel 8 and Pixel 9, and 'Add Me' on Pixel 9 use images captured close together in time to create a blended image to help you capture great group photos," Google explained. Google's AI-powered "realistic" editing tools such as Magic Editor, Magic Eraser and Photo Unblur were made free this May for anyone using Google Photos. Google's new Pixel 9 phones have many integrated AI tools, which has worried some users who are concerned that it has become too difficult to tell real images from fake ones.
The company joined the C2PA coalition as a steering committee member earlier this year, working towards increasing online transparency, and as a result, its popular video streaming platform YouTube recently started showing if content uploaded to its service has been edited using AI, although the feature is still in its infancy. Meta and TikTok have also begun labelling AI content on their platforms.
[4]
Google Photos Will Now Highlight If an Image Was Edited Using AI
The move is aimed at preventing the spread of misinformation. Google Photos announced the introduction of specific labels to highlight if an image has been edited using artificial intelligence (AI) tools on Thursday. The Mountain View-based tech giant will start including this information within the metadata of the images to let anyone easily check whether the image was made using synthetic methods. Apart from indicating AI-edited images, Google Photos will also highlight if an image has been composed of multiple photos using non-generative tools. The latter will be used in the case of Pixel-specific features such as Best Take and Add Me. In a blog post, the company detailed its new transparency feature. These AI labels will only be added to images that have been edited using AI tools in Google Photos such as Magic Editor and Magic Eraser. However, the company has not said whether it will also label images edited using third-party AI tools. With this implementation, whenever a user enhances an image using AI tools within the app, Google will add this information to the metadata of the photo file. A benefit of this approach is that the metadata survives edits such as cropping or blurring, so the label will persist. However, it will not help if a screenshot of the image is taken, since the screenshot carries new Exchangeable Image File Format (EXIF) data. The tech giant is following the technical standards from The International Press Telecommunications Council (IPTC) to add the AI information in the metadata. This is different from the Coalition for Content Provenance and Authenticity (C2PA) standard which is used by Meta and OpenAI. Alongside the metadata, Google is also making this information visible in file information that can be viewed directly in the Photos app. This information will appear in a section at the bottom of the page titled "AI Info".
This will include credits to the tool that was used to edit the image as well as a "Digital Source Type" which will highlight whether generative AI or some other method was used to edit the image. Even for images that have been edited in sophisticated ways without the use of generative AI, such as with the Best Take or Add Me features on compatible Pixel devices, the label will include specific information about the edit.
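The "Credit" and "Digital Source Type" fields described above come from IPTC's photo-metadata schema, which is typically serialized as an XMP packet inside the image file. As a rough, hedged illustration (the sample packet below is a hand-written assumption, not what Google actually writes; the field names come from the public IPTC Extension and Photoshop XMP namespaces), a reader could pull both fields out of such a packet like this:

```python
import xml.etree.ElementTree as ET

# Hypothetical XMP packet resembling IPTC-style AI metadata. The exact
# packet Google Photos embeds may differ; this is only an assumed shape.
XMP_SAMPLE = """<x:xmpmeta xmlns:x="adobe:ns:meta/">
  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
    <rdf:Description
        xmlns:Iptc4xmpExt="http://iptc.org/std/Iptc4xmpExt/2008-02-29/"
        xmlns:photoshop="http://ns.adobe.com/photoshop/1.0/"
        photoshop:Credit="Edited with Google AI"
        Iptc4xmpExt:DigitalSourceType="http://cv.iptc.org/newscodes/digitalsourcetype/compositeWithTrainedAlgorithmicMedia"/>
  </rdf:RDF>
</x:xmpmeta>"""

IPTC_NS = "{http://iptc.org/std/Iptc4xmpExt/2008-02-29/}"
PS_NS = "{http://ns.adobe.com/photoshop/1.0/}"

def extract_ai_info(xmp: str) -> dict:
    """Pull the Credit and DigitalSourceType fields out of an XMP packet."""
    root = ET.fromstring(xmp)
    info = {}
    for elem in root.iter():
        # ElementTree exposes namespaced attributes as "{uri}LocalName" keys.
        for key in (PS_NS + "Credit", IPTC_NS + "DigitalSourceType"):
            if key in elem.attrib:
                info[key.split("}")[1]] = elem.attrib[key]
    return info

print(extract_ai_info(XMP_SAMPLE))  # prints the Credit and DigitalSourceType fields
```

This is roughly the lookup the Photos app performs before surfacing the "AI Info" section: if the fields are present, show them; if a tool stripped the packet, there is nothing to show.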
[5]
Look for the AI disclaimer from Google on photos that look a little too good to be true
We've all been using photo filters and related tools for years to make our faces, food, and fall decor look their best. AI tools arguably manipulate photos in fundamental ways well beyond better lighting and removing red eyes. Google Photos has several generative AI features that can alter an image, but Google will now mark on a photo that you've used those tools in the name of transparency. Starting next week, any photo edited with Google's Magic Editor, Magic Eraser, or Zoom Enhance tools will show a disclaimer indicating that fact within the Google Photos app. The idea is to balance out how easy it is to use AI editing tools in ways that are hard to spot by looking. Google hopes the update will reduce any confusion about image authenticity, whether innocent or done with malicious intent. Google already marks a photo's metadata if it's been edited with generative AI using technical standards created by the International Press Telecommunications Council (IPTC). The metadata is only seen when examining the data behind a photo, relevant only for investigative purposes and record-keeping. But the update digs out that bit of metadata to show along with an image's more mundane details, such as its file name and location. Google isn't singling out its AI tools for the transparency initiative either. Any blended image will have a disclaimer. For instance, the Google Pixel 8 and Pixel 9 smartphones have two photo features: Best Take and Add Me. Best Take will meld several shots of a group of people, taken moments apart, into one image to show everyone at their most photogenic, while Add Me can make it look like someone is in a picture who wasn't there. As these are in the realm of synthetic image creation, Google decided to give them a tag indicating they are built from multiple pictures, though not with AI tools.
You probably won't notice the change unless you decide to check a picture that seems a little too amazing or if you want to check everything you see out of well-founded caution. However, professionals will probably appreciate Google's move since they don't want to undermine their credibility in a dispute over whether they used AI. Trusting a photograph isn't always enough when AI tools are good enough to trick the eye. A tag or lack thereof by Google might boost trust in a photo. Google's move points to what may be the future of photography and digital media as AI tools grow more common. Of course, doing so is also a marketing move. It's a very minor change to Google Photos in many ways, but proclaiming it helps Google look like it's being responsible about AI while actually doing so.
[6]
Google Introduces New Features to Help You Identify AI-Edited Photos
Samantha Kelly is a freelance writer with a focus on consumer technology, AI, social media, Big Tech, emerging trends and how they impact our everyday lives. Her work has been featured on CNN, NBC, NPR, the BBC, Mashable and more. Google wants to make it easier for you to determine if a photo was edited with AI. In a blog post Thursday, the company announced plans to show the names of editing tools, such as Magic Editor and Zoom Enhance, in the Photos app when they are used to modify images. "As we bring these tools to more people, we recognize the importance of doing so responsibly with our AI Principles as guidance," wrote John Fisher, engineering director for Google Photos. The company's AI Principles include building "appropriate transparency" into its core design process. The move reflects a growing trend among tech companies to address the rise of AI-generated content and provide users with more transparency about how the technology may influence what they see. In the post, Google said it will also highlight when an image is composed of elements from different photos, even if non-generative features are used. For example, Pixel 8's Best Take and Pixel 9's Add Me combine images taken close together in time to create a blended group photo. "This work is not done, and we'll continue gathering feedback and evaluating additional solutions to add more transparency around AI edits," Fisher wrote. This isn't the first time Google has rolled out ways to inform users about AI use. In July, the company announced a feature called About This Image that works with its Circle to Search for phones and in Google Lens for iOS and Android. It provides context about how a photo has been used or created. OpenAI, Adobe, Microsoft, Apple and Meta are also experimenting with technologies that help people identify AI-edited images. 
In July, Meta announced plans to rename the labels it applies to social media posts that are suspected to have been manipulated with AI tools by displaying "AI info" alongside a post instead of "Made with AI." The change aims to give users access to more specific information about how AI tools were used rather than only labeling photos as AI-generated. Meanwhile, Apple's upcoming Apple Intelligence features, which let users create new emoji, edit photos and create images using AI, are expected to add code to each image for easier AI identification.
[7]
Google Photos Introduces Labels for AI-Edited Images
The introduction of labels aims to inform users whether an image has been modified using AI technology. For those who have grown increasingly concerned about authenticity in digital photography, Google Photos recently announced that it would begin labelling images and videos edited with AI. This feature, which is expected to be implemented soon, is designed to let users know whether an image may have been altered using artificial intelligence tools, as specified by the International Press Telecommunications Council (IPTC). The labels will be embedded into the metadata of photos edited in Google Photos to clearly indicate the use of AI. This metadata will remain intact even if the image is cropped or blurred. It is important to point out, however, that taking a screenshot of the edited image will generate a new Exchangeable Image File Format (EXIF) file, potentially losing the original metadata. According to Google, the labels will specifically identify images enhanced using its built-in AI features, such as Magic Editor and Magic Eraser. The labels will also indicate if an image has been created from multiple photos through non-generative tools. This is particularly relevant for Pixel device features like Best Take and Add Me, which allow for sophisticated editing without employing generative AI. Users will find the new AI labels in the Photos app under a section titled "AI Info," located at the bottom of the file information page. This section will not only note whether generative AI was involved but also credit the specific tool used for the edits. This move aligns with Google's commitment to transparency and user awareness, allowing individuals to understand the extent of AI's influence on their images.
While Google has not clarified whether images edited with third-party AI tools will receive similar labelling, the initiative marks a significant step toward fostering digital integrity in an era where image manipulation is prevalent. By implementing these labels, Google is setting a new standard for transparency in photo editing, encouraging users to think critically about the authenticity of the images they encounter. As this feature rolls out, it may well reshape how users perceive and interact with digitally edited images, promoting a more informed and responsible approach to photo sharing and consumption.
[8]
Google Photos will label images edited by AI
In acknowledgement of how generative AI makes complex image manipulation very easy and widely accessible, Google Photos is getting an info section that identifies such edits. When you swipe up on an image, the "Details" section at the bottom will show a new "AI info" section if the proper metadata is present. This joins the file name, backup status, and location, with the 'i' icon badged by a sparkle. Appearing in Google Photos for Android, iOS, and the web, the "Credit" field will note "Edited with Google AI" or "Made by Google AI" in the case of Pixel Studio and Gemini. AI edit info is rolling out to Google Photos starting next week. In adding this transparency, Google notes how "removing unwanted distractions or objects, perfecting the lighting or even creating a new composition" are no longer "time-consuming complex tasks." "As we bring these tools to more people, we recognize the importance of doing so responsibly with our AI Principles as guidance," the company adds. Looking ahead, Google is considering "more transparency around AI edits," having joined the Coalition for Content Provenance and Authenticity (C2PA) earlier this year.
[9]
Google Photos will show when images have been modified with AI
The new layer of transparency follows the company releasing loads of AI editing tools. Big tech firms have been releasing AI tools all over their software offerings over the past year. But as it becomes ever easier to manipulate images and video with generative AI, there's been a second wave of companion policies launched to better inform people when that technology has been applied to content. Google is the latest to follow the trend. After debuting editing tools last spring and incorporating more AI last month, Google Photos will begin labeling visual content that has been modified with AI. Google was already tagging AI-modified images with corresponding metadata, but now a plain-language statement will accompany edited photos. In the example the company shared in its blog post, there is a section at the bottom of the image details screen titled "AI Info." This then lists a credit for the AI tool used to adapt the image. It will also state when an image has been modified with generative AI or when an image is a composite of several photos made without the use of generative AI, such as with the Best Take and Add Me features. The new language will appear in Google Photos beginning next week.
[10]
Google Photos Will Now Let You Know If Your Images Were AI-Enhanced
With AI image generation tools getting more powerful at a rapid pace, it's getting pretty tough to differentiate which images are real and which used artificial intelligence to enhance them. In response, some companies have added ways for people to identify when an AI model has either generated or edited an image. Now, Google has added a new feature to the Photos app that reads an image's metadata and lets you know if it has been enhanced by AI in the past. Google Photos Gets an AI Identification Tool As announced on Google's blog The Keyword, the Photos app will now let you know if an image's metadata contains any information about AI usage: "Photos edited with tools like Magic Editor, Magic Eraser and Zoom Enhance already include metadata based on technical standards from The International Press Telecommunications Council (IPTC) to indicate that they've been edited using generative AI. Now we're taking it a step further, making this information visible alongside information like the file name, location and backup status in the Photos app." When looking at an image's details in Google Photos, the app will let you know at the bottom of the panel if someone edited it using AI. It'll also let you know which tools were used to edit the image, including those that don't use AI such as Best Take on Pixel 8 and Pixel 9 and Add Me on Pixel 9. This new feature seems limited to images edited through Google Photos which haven't had their metadata changed to remove all traces of AI editing. As such, it's not designed to help you spot AI-generated images online; it's more to let you know which images in your album were digitally enhanced and which weren't.
[11]
Google Photos to Show When an Image Was Edited With AI
Starting next week, Google Photos will show when a photo has been edited with Google AI in the app, a step in the right direction after the Pixel 9 series launched with powerful AI editing tools that seriously lacked transparency. Google, which has remained largely quiet regarding its lack of transparency since launching the Pixel 9 series earlier this year, now says that it "recognizes the importance" of disclosing AI edits to photos, so starting some time next week, it will more prominently show those edits as "edited with Google AI" in the Photos app. "We often make edits to our photos to make them pop. Sometimes, that means making a simple change to a photo, like cropping it. Other times, it might involve more complex changes like removing unwanted distractions or objects, perfecting the lighting or even creating a new composition. These used to be time-consuming complex tasks, but AI has changed that -- powering editing tools like Magic Editor and Magic Eraser in Google Photos," John Fisher, Engineering Director, Google Photos and Google One, says in a blog post. "As we bring these tools to more people, we recognize the importance of doing so responsibly with our AI Principles as guidance. To further improve transparency, we're making it easier to see when AI edits have been used in Google Photos. Starting next week, Google Photos will note when a photo has been edited with Google AI right in the Photos app. Photos edited with tools like Magic Editor, Magic Eraser and Zoom Enhance already include metadata based on technical standards from The International Press Telecommunications Council (IPTC) to indicate that they've been edited using generative AI. Now we're taking it a step further, making this information visible alongside information like the file name, location and backup status in the Photos app." 
While Google is using embedded IPTC metadata, it is crucially not using the Coalition for Content Provenance and Authenticity (C2PA) standard that has been published by the Content Authenticity Initiative, which is strange given that Google announced it would join the program last February and is already implementing it into Search, Ads, and on YouTube. Adding a note of any kind on photos edited by Google's AI is nice, but the lack of C2PA means there will still be issues with uploading AI-edited photos to Meta social networks, for example. It's an oddly disjointed rollout of transparency. "In addition to indicating when an image has been edited using generative AI, we will also use IPTC metadata to indicate when an image is composed of elements from different photos using non-generative features. For example, Best Take on Pixel 8 and Pixel 9, and Add Me on Pixel 9 use images captured close together in time to create a blended image to help you capture great group photos," Fisher adds. While a nice addition and a step in the right direction, considering that most of the photos published online live outside of Google Photos, there is more work to be done. Google says it is "evaluating additional solutions" to add more transparency to AI edits, but stops short of saying it will join the rest of the industry in fully implementing C2PA.
[12]
Google Photos Is Getting A New Update That Will Allow Users To See Details On AI-Edited Images
While the AI frenzy shows no sign of slowing, companies are actively seeking ways to meet their responsibility for transparency around artificially generated content and to maintain ethical boundaries as the technology keeps evolving. Google is taking note and adopting measures to mitigate concerns about digital content and the blurring line between authentic and computer-generated media. The company has announced a feature update in which Google Photos will label AI-edited images. When it comes to transparency in AI, Google is going a step further by taking a proactive approach to ensuring original content does not get mixed up with AI-edited images. To that end, Google Photos is getting an update: an "AI info" section will be added to the image details next to information such as file name and location. If the proper metadata is available, the "i" icon will be badged with a sparkle; all you have to do is swipe up for the details to show at the bottom. The addition of the "Credit" field in Google Photos spans Android, iOS, and the web, letting users clearly identify images made by Google AI or edited using Pixel Studio or Gemini. It gives more clarity about the images they share and should foster more trust through honest classification of media. The digital source will be distinguished into two categories: edits made with generative AI, which includes Zoom Enhance, Magic Eraser, and Magic Editor; and compositions of captured pictures, covering photos with elements added by Add Me or Best Take, features exclusive to Pixel phones. The interesting part is that pictures do not have to be edited with Google tools to be identified; any picture that follows the metadata standard will be labelled. This feature will start rolling out to Google Photos next week.
Google plans to implement more transparency around AI edits and has joined the Coalition for Content Provenance and Authenticity (C2PA) to underscore how seriously it is taking the issue.
[13]
Google Photos adds transparent AI metadata for edited photos
Google on Thursday announced new steps to improve transparency in Google Photos regarding AI edits, making it easier to identify when artificial intelligence has been used. This update covers editing tools such as Magic Editor and Magic Eraser in the app. John Fisher, Engineering Director at Google Photos and Google One, emphasized the company's commitment to responsible AI usage, guided by its AI Principles. He highlighted the importance of expanding access to these tools while maintaining transparency and trust. Google Photos already includes metadata for images edited with tools like Magic Editor, Magic Eraser, and Zoom Enhance, following technical standards set by the International Press Telecommunications Council (IPTC). This metadata indicates the use of generative AI in the editing process. Fisher further explained that Google is making this information even more accessible: the app will now display AI-related details alongside information such as the file name, location, and backup status. Edited photos will show labels like "Edited with Google AI." Beyond generative AI, Google will also use IPTC metadata to identify when an image has been composed of elements from different photos. This applies to features like "Best Take" on Pixel 8 and Pixel 9, and "Add Me" on Pixel 9, which blend images taken in close succession to create better group photos.
[14]
Google Photos goes official with its tool to spot AI-edited images
Earlier today, we highlighted an upcoming Google Photos feature that would add a new AI Info section to AI-manipulated images. Google has now officially announced the feature, and it will start rolling out to users next week. In its announcement, Google notes that photos edited with AI-powered tools like Magic Eraser, Magic Editor, and Zoom Enhance already include IPTC metadata to indicate that they are edited using generative AI. The upcoming change makes "this information visible alongside information like the file name, location and backup status in the Photos app." As we showcased earlier, the feature will add a new AI Info section to the image description with Credit and Digital source type fields. While Google's announcement only states that the AI info section will appear on images edited with AI, we've discovered that it also shows the IPTC metadata for AI-generated images. Google adds that it also uses IPTC metadata for images captured using tools like Best Take and Add Me on its Pixel devices, which use non-generative features to combine elements from different photos into one. However, the company has not clarified whether Google Photos will show this metadata to users.
[15]
Google adds new disclosures for AI photos, but it's still not obvious at first glance
Starting next week, the Google Photos app will add a new disclosure for when a photo has been edited with one of its AI features, such as Magic Editor, Magic Eraser, and Zoom Enhance. When you click into a photo in Google Photos, there will now be a disclosure when you scroll to the bottom of the "Details" section, noting when a photo was "Edited with Google AI." Google says it's introducing this disclosure to "further improve transparency"; however, it's still not that obvious when a photo is edited by AI. There still won't be visual watermarks within the frame of a picture indicating that a photo is AI-generated. If someone sees a photo edited by Google's AI on social media, in a text message, or even while scrolling through their photos app, they won't immediately see that the photo is synthetic. Google announced the new disclosure for AI photos in a blog post on Thursday, a little over two months after Google unveiled its new Pixel 9 phones, which are jam-packed with these AI photo-editing features. The disclosures seem to be a response to the backlash Google received for widely distributing these AI tools without any visual watermarks that are easily readable by humans. As for Best Take and Add Me -- Google's other new photo-editing features that don't use generative AI -- Google Photos will now also indicate in their metadata that those photos have been edited, but not under the Details tab. Those features edit multiple photos together to appear as one clean image. These new tags don't exactly solve the main issue people have with Google's AI editing features: visual watermarks in the frame of a photo (at least ones you can see at a glance) might help people not feel deceived, but Google doesn't offer them. Every photo edited by Google AI already discloses in its metadata that it's edited by AI. Now, there's also an easier-to-find disclosure under the Details tab on Google Photos.
But the problem is that most people don't look at the metadata or Details tab of photos they see on the internet. They just look and scroll away, without much further investigation. To be fair, visual watermarks in the frame of an AI photo are not a perfect solution either: people can easily crop or edit these watermarks out, and then we're back to square one. We reached out to Google to ask if it's doing anything to help people immediately identify whether a photo was edited by Google AI, but didn't immediately hear back.

The proliferation of Google's AI image tools could increase the amount of synthetic content people view on the internet, making it harder to discern what's real and what's fake. The approach Google has taken, using metadata watermarks, relies on platforms to indicate to users that they're viewing AI-generated content. Meta is already doing this on Facebook and Instagram, and Google says it plans to flag AI images in Search later this year. But other platforms have been slower to catch up.
[16]
So Big: Google Photos Will Tell You If a Photo Was Edited With AI
The focus on AI from the biggest tech companies in the world has brought tools into our lives that could (eventually) bring meaningful changes to the way we get things done. For now, though, a lot of the early AI ideas have really been geared toward photo and video editing, with this suggestion that AI could improve your digitally captured world. That could be seen as both a good and a bad thing, and I say that because AI edits to something like a photo can present a non-reality. AI is asking us to remove objects from pictures or add people who weren't there, and it creates an area of the digital space that could be confusing, used for bad purposes, etc. Google says today that it wants to make it more obvious when something is edited with Google AI and will start showing "AI info" in Google Photos going forward.

The announcement from Google is pretty straightforward. When a photo has been edited with Google AI, the Google Photos app will note that in the metadata or "Details" area of an image. To access that (seen above), you simply open a photo in Google Photos and then swipe up a touch to present the info that sits below the image. This is the area where you'll find location data for the image, when it was backed up, which device was used to take the photo, the size and resolution of the image, and all of the camera settings active when it was snapped. As you can see here, there is a new "AI info" section that shows that a photo was "Edited with Google AI." This could show up if you use Google's Magic Editor, Magic Eraser, or Zoom Enhance, for example.

Google mentioned today that this area will also indicate if an image was composed of elements from different photos, like if you were using Best Take or Add Me. This "AI info" section should start showing up as early as next week in Google Photos.
[17]
Google Photos will soon show you if an image was edited with AI
"Photos edited with tools like Magic Editor, Magic Eraser and Zoom Enhance already include metadata based on technical standards from The International Press Telecommunications Council (IPTC) to indicate that they've been edited using generative AI," John Fisher, engineering director of Google Photos, wrote in a blog post. "Now we're taking it a step further, making this information visible alongside information like the file name, location and backup status in the Photos app."
[18]
More transparency for AI edits in Google Photos
We often make edits to our photos to make them pop. Sometimes, that means making a simple change to a photo, like cropping it. Other times, it might involve more complex changes like removing unwanted distractions or objects, perfecting the lighting or even creating a new composition. These used to be time-consuming, complex tasks, but AI has changed that -- powering editing tools like Magic Editor and Magic Eraser in Google Photos. As we bring these tools to more people, we recognize the importance of doing so responsibly with our AI Principles as guidance.

To further improve transparency, we're making it easier to see when AI edits have been used in Google Photos. Starting next week, Google Photos will note when a photo has been edited with Google AI right in the Photos app. Photos edited with tools like Magic Editor, Magic Eraser and Zoom Enhance already include metadata based on technical standards from The International Press Telecommunications Council (IPTC) to indicate that they've been edited using generative AI. Now we're taking it a step further, making this information visible alongside information like the file name, location and backup status in the Photos app.
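The blog post above says AI-edited photos already carry IPTC-standard metadata marking generative-AI edits, typically as an XMP `DigitalSourceType` value. As a rough illustration of what checking such a marker involves, here is a minimal sketch in Python (standard library only) that scans a JPEG's bytes for an embedded XMP packet and pulls out the digital source type URI. The function name, the regex, and the sample bytes are illustrative assumptions, not Google's actual implementation; a real tool should use a proper metadata library rather than this byte scan.

```python
import re

# JPEGs embed XMP metadata in an APP1 segment that starts with this
# well-known namespace identifier.
XMP_MARKER = b"http://ns.adobe.com/xap/1.0/"

def digital_source_type(jpeg_bytes: bytes):
    """Return the IPTC DigitalSourceType URI from an embedded XMP packet, if any."""
    start = jpeg_bytes.find(XMP_MARKER)
    if start == -1:
        return None  # no XMP packet found
    # Decode a bounded window of the packet; ignore stray binary bytes.
    packet = jpeg_bytes[start:start + 65536].decode("utf-8", errors="ignore")
    # The value may appear as an XML attribute or as an element body.
    m = re.search(r'DigitalSourceType(?:="([^"]+)"|>\s*([^<\s]+))', packet)
    if not m:
        return None
    return m.group(1) or m.group(2)

# Synthetic example resembling what a generative-AI editor might write
# (hypothetical bytes, not copied from a real Google Photos file).
sample = (
    b"\xff\xd8\xff\xe1" + XMP_MARKER + b"\x00"
    b'<x:xmpmeta xmlns:Iptc4xmpExt="http://iptc.org/std/Iptc4xmpExt/2008-02-29/">'
    b'<rdf:Description Iptc4xmpExt:DigitalSourceType='
    b'"http://cv.iptc.org/newscodes/digitalsourcetype/compositeWithTrainedAlgorithmicMedia"/>'
    b"</x:xmpmeta>"
)
print(digital_source_type(sample))
```

The `compositeWithTrainedAlgorithmicMedia` term comes from the IPTC Digital Source Type vocabulary; which specific term Google writes for each tool is not stated in these articles.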
[19]
Google Photos may soon flag AI-generated images
Key takeaways: Google Photos will soon add a feature to identify AI-generated images in the metadata. The AI info section will contain tags displaying the creator and the digital source type. The tool will help combat fraudulent activities but is not yet officially available to users.

Google Photos has long been a platform where Google flexes its AI capabilities. The app has received many AI features in the past, such as Magic Editor and, more recently, a new Ask Photos feature. Now, the app appears to be adding another powerful tool to its lineup -- a feature that will let users know if the image they're viewing was generated by AI.

The report comes from Android Authority, which shared a screenshot demonstrating the feature in action. While Google Photos won't display a prominent notification in the main viewing area, it will add labels to the image's metadata, informing users if the photo has been AI-generated.

As shown in the screenshot, a new AI info section will be added to the image's metadata. This section includes two new tags: "Credit" and "Digital source type." The "Credit" tag identifies who or what AI tool created the image, while the "Digital source type" tag indicates how the image was digitally generated or processed. For example, images created using Google Gemini are typically labeled with the "Made with Google AI" credit tag.

Google Photos will make it easier to identify AI-generated images. In this day and age, when AI is increasingly used for fraudulent activities, this tool could be a valuable resource. However, we wish Google would make the AI label more visible, rather than burying it deep in the image metadata, as most users tend to overlook such details.
This feature appears to be built on Google DeepMind's recently announced AI watermarking tool, which aims to make AI-generated content easier to detect. The report indicates that the AI info section was seen running on Google Photos version 7.3, although it's not official yet. We expect an announcement soon, and this feature should be available to users in the near future.
Google Photos is implementing a new feature to label AI-edited images, promoting transparency in photo manipulation and addressing concerns about the authenticity of digital content.
Google is set to launch a new feature in Google Photos that will clearly indicate when an image has been edited using artificial intelligence (AI) tools. Starting next week, users will see an "AI info" section in the image details view, both in the app and on the web, providing transparency about AI-powered edits 1.
The new labeling system will cover a range of AI-powered editing tools, including Magic Editor, Magic Eraser, and Zoom Enhance.
Additionally, Google will label images created using non-generative AI features such as Best Take and Add Me, which combine elements from multiple photos 2.
The AI information will be displayed alongside other image details like file name, location, and backup status. While this metadata was previously hidden from users, Google is now making it easily accessible to provide clarity on how photos have been modified 3.
Google is following the International Press Telecommunications Council (IPTC) standards for adding AI information to image metadata. This approach differs from the Coalition for Content Provenance and Authenticity (C2PA) standard used by some other tech companies 4.
While this feature aims to increase transparency, it's not without limitations: the label sits in the image details rather than in the frame of the photo itself, and users with technical knowledge can still remove or alter the underlying metadata.
This move by Google is part of a broader industry trend towards transparency in AI-generated content: Meta already flags AI content on Facebook and Instagram, and Google plans to label AI images in Search later this year.
John Fisher, Engineering Director for Google Photos, stated that "the work is not done" regarding AI transparency. Google plans to continue gathering feedback and evaluating additional solutions to improve the disclosure of AI edits 2.
As AI tools become more prevalent in photo editing, this labeling system represents an important step in maintaining trust and authenticity in digital imagery. However, the effectiveness of such measures in combating misinformation and preserving photo integrity remains to be seen as the technology continues to evolve.
© 2024 TheOutpost.AI All rights reserved