2 Sources
[1]
Elon Musk teases a new image-labeling system for X...we think?
Elon Musk's X is the latest social network to roll out a feature to label edited images as "manipulated media," if a post by Elon Musk is to be believed. But the company has not clarified how it will make this determination, or whether it includes images that have been edited using traditional tools, like Adobe's Photoshop. So far, the only details on the new feature come from a cryptic X post from Elon Musk saying, "Edited visuals warning," as he reshares an announcement of a new X feature made by the anonymous X account DogeDesigner. That account is often used as a proxy for introducing new X features, as Musk will repost from it to share news.

Still, details on the new system are thin. DogeDesigner's post claimed X's new feature could make it "harder for legacy media groups to spread misleading clips or pictures." It also claimed the feature is new to X. Before it was acquired and renamed as X, the company known as Twitter had labeled tweets using manipulated, deceptively altered, or fabricated media as an alternative to removing them. Its policy wasn't limited to AI but included things like "selected editing or cropping or slowing down or overdubbing, or manipulation of subtitles," the site integrity head, Yoel Roth, said in 2020.

It's unclear if X is adopting the same rules or has made any significant changes to tackle AI. Its help documentation currently says there's a policy against sharing inauthentic media, but it's rarely enforced, as the recent deepfake debacle of users sharing non-consensual nude images showed. In addition, even the White House now shares manipulated images.

Calling something "manipulated media" or an "AI image" can be nuanced. Given that X is a playground for political propaganda, both domestically and abroad, some understanding of how the company determines what's "edited," or perhaps AI-generated or AI-manipulated, should be documented. In addition, users should know whether or not there's any sort of dispute process beyond X's crowdsourced Community Notes.

As Meta discovered when it introduced AI image labeling in 2024, it's easy for detection systems to go awry. In its case, Meta was found to be incorrectly tagging real photographs with its "Made with AI" label, even though they had not been created using generative AI. This happened because AI features are increasingly being integrated into creative tools used by photographers and graphic artists. (Apple's new Creator Studio suite, launching today, is one recent example.) As it turned out, this confused Meta's identification tools. For instance, Adobe's cropping tool was flattening images before saving them as a JPEG, triggering Meta's AI detector. In another example, Adobe's Generative AI Fill, which is used to remove objects -- like wrinkles in a shirt, or an unwanted reflection -- was also causing images to be labeled as "Made with AI," when they were only edited with AI tools. Ultimately, Meta updated its label to say "AI info," so as not to outright label images as "Made with AI" when they had not been.

Today, there's a standards-setting body for verifying the authenticity and content provenance for digital content, known as the C2PA (Coalition for Content Provenance and Authenticity). There are also related initiatives like CAI, or Content Authenticity Initiative, and Project Origin, focused on adding tamper-evident provenance metadata to media content.
Presumably, X's implementation would abide by some sort of known process for identifying AI content, but X's owner, Elon Musk, didn't say what that is. Nor did he clarify whether he's talking specifically about AI images, or just anything that's not the photo being uploaded to X directly from your smartphone's camera. It's even unclear whether the feature is brand-new, as DogeDesigner claims.

X isn't the only outlet grappling with manipulated media. In addition to Meta, TikTok has also been labeling AI content, and streaming services like Deezer and Spotify are scaling initiatives to identify and label AI music. Google Photos is using C2PA to indicate how photos on its platform were made. Microsoft, the BBC, Adobe, Arm, Intel, Sony, OpenAI, and others are on the C2PA's steering committee, while many more companies have joined as members. X is not currently listed among the members, though we've reached out to C2PA to see if that recently changed. X doesn't typically respond to requests for comment, but we asked anyway.
[2]
Elon Musk Announces 'Manipulated Media' Label on X Images
After the Grok debacle on X, which saw people -- mainly women, and even minors -- undressed with Elon Musk's AI chatbot, the platform's owner has now announced it is rolling out a label on misleading pictures. But as TechCrunch notes, details on the feature are scant. Musk shared a post from DogeDesigner saying, "Edited visuals warning." DogeDesigner, an anonymous account that often announces new features on X, says that "𝕏 now puts a clear warning on posts that use fake or edited visuals to trick people," adding, "This makes it harder for legacy media groups to spread misleading clips or pictures." In the example DogeDesigner gives, there is a "Stay informed. Manipulated Media. Find Out More" tag at the bottom of a video. But there is no other information on exactly how it works or when it is rolling out.

The recent manipulated photo shared by the White House of a protester being arrested in Minnesota doesn't have the label. Twitter, as the platform was known before Musk's takeover, previously had a similar system for labeling "deceptively altered" photos and videos that was rolled out in 2020.

But attempting to label misleading images that have been either edited or AI-generated is notoriously difficult. Meta found this out when it tried to introduce "Made With AI" labels on Instagram. As PetaPixel reported in 2024, the labels created confusion as photographers had their posts slapped with the tags for minimal editing or even just for cropping an image in Photoshop. Meta platforms now use "AI Info" rather than "Made With AI."

PetaPixel was one of the first publications to report on the Grok undressing scandal, which terrorized many women. Photos of people who aren't even on the platform were being uploaded to X just so sinister users could "put them in a bikini". The scandal prompted a ban in a few countries in Asia and an investigation by the E.U. Musk restricted the tool after the backlash.
Elon Musk has announced a new image labeling system for X that flags manipulated media with an "edited visuals warning." The feature aims to combat misleading images, but the company hasn't clarified how it determines what qualifies as manipulated or whether it addresses AI-generated content. The move comes after X faced criticism over non-consensual deepfake images created using its Grok AI chatbot.
Elon Musk has announced that X is introducing a new image labeling system designed to flag manipulated media on the platform. The feature, teased through a cryptic post saying "Edited visuals warning," was first revealed by the anonymous account DogeDesigner, which frequently serves as a proxy for announcing new X features [1].
According to DogeDesigner, the system "puts a clear warning on posts that use fake or edited visuals to trick people," claiming it will make it "harder for legacy media groups to spread misleading clips or pictures" [2]. The example provided shows a "Stay informed. Manipulated Media. Find Out More" tag at the bottom of a video, but X has not disclosed how the system determines what qualifies as manipulated content [2]. Critical questions remain unanswered: whether the feature targets AI-generated content specifically, includes images edited with traditional tools like Photoshop, or applies to content edited using Adobe's Generative AI features. The company also hasn't clarified if there's a dispute process beyond its crowdsourced Community Notes system [1].
Before Elon Musk's acquisition, Twitter had implemented policies to label tweets containing manipulated, deceptively altered, or fabricated media as an alternative to removing them. In 2020, site integrity head Yoel Roth explained that the policy covered "selected editing or cropping or slowing down or overdubbing, or manipulation of subtitles" [1]. However, X's current help documentation indicates a policy against sharing inauthentic media that is rarely enforced, as evidenced by the recent crisis involving non-consensual deepfake images created using the Grok AI chatbot [1].

The Grok debacle saw people, mainly women and even minors, undressed using the AI chatbot, prompting bans in several Asian countries and an investigation by the E.U. [2]. Photos of individuals not even on the platform were uploaded to X so users could manipulate them, creating a significant content moderation crisis. Musk eventually restricted the tool after widespread backlash [2].
Attempting to label misleading images proves notoriously difficult, as Meta discovered when it introduced "Made with AI" labels on Instagram in 2024. The system incorrectly tagged real photographs with the label, even though they hadn't been created using generative AI [1]. This happened because AI features are increasingly integrated into creative tools used by photographers and graphic artists. Adobe's cropping tool was flattening images before saving them as a JPEG, triggering Meta's AI detector, while Adobe's Generative AI Fill for removing objects like wrinkles or reflections also caused false positives [1]. Meta ultimately updated its label to say "AI info" rather than "Made with AI" to avoid mislabeling edited content [1].
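Neither source describes Meta's detector internals, so the following is a purely hypothetical sketch of the failure mode rather than Meta's documented logic: a labeler that trusts embedded provenance metadata alone (here, IPTC digital-source-type values that modern editing tools can write into a JPEG's XMP packet) will mark a photo as AI-made whenever such a value is present, no matter how small the edit was. The function name and decision rule below are illustrative assumptions.

```python
# Hypothetical illustration only -- not Meta's actual detector. The assumption
# is a naive labeler that keys solely off IPTC digital-source-type values
# written into a JPEG's embedded XMP metadata by editing software.

XMP_HEADER = b"http://ns.adobe.com/xap/1.0/\x00"  # marks an XMP packet in a JPEG APP1 segment
AI_SOURCE_TYPES = (
    b"trainedAlgorithmicMedia",                   # IPTC value for fully AI-generated media
    b"compositeWithTrainedAlgorithmicMedia",      # IPTC value for AI-assisted composites/edits
)

def naive_ai_label(path: str) -> str:
    """Label an image using embedded metadata alone (deliberately naive)."""
    with open(path, "rb") as f:
        data = f.read()
    if XMP_HEADER not in data:
        return "no XMP metadata"
    if any(tag in data for tag in AI_SOURCE_TYPES):
        # Fires even when AI was only used to remove a wrinkle or a reflection,
        # which is exactly the false-positive pattern photographers complained about.
        return "Made with AI"
    return "unlabeled"

if __name__ == "__main__":
    import sys
    for p in sys.argv[1:]:
        print(p, "->", naive_ai_label(p))
```

A rule like this is cheap and survives re-encoding of the pixels, which is why platforms lean on metadata, but it cannot distinguish a generative-fill touch-up from a fully synthetic image.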
A standards-setting body for verifying digital authenticity exists through the Coalition for Content Provenance and Authenticity (C2PA), along with related initiatives like the Content Authenticity Initiative and Project Origin, which focus on adding tamper-evident provenance metadata to media [1]. Microsoft, the BBC, Adobe, Arm, Intel, Sony, OpenAI, and others serve on the C2PA's steering committee, with many more companies joining as members [1].
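How a labeling system would actually consult that provenance data is not something X has described. As a rough illustration of what C2PA embedding looks like at the file level (assuming the common C2PA-in-JPEG layout, in which manifests travel as JUMBF boxes inside APP11 marker segments; the helper below is ours, not official tooling), this sketch only detects the presence of a manifest. Verifying signatures and tamper evidence requires the official C2PA SDKs or c2patool, not this.

```python
# Minimal sketch: detect whether a JPEG carries an embedded C2PA manifest.
# C2PA manifests are stored as JUMBF boxes inside APP11 (0xFFEB) segments,
# so we walk the JPEG marker segments and look for that signature. This is
# presence detection only; it performs no cryptographic verification.
import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if not data.startswith(b"\xff\xd8"):            # SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                         # lost sync with the marker stream
            break
        marker = data[i + 1]
        if marker == 0xDA:                          # SOS: entropy-coded image data begins
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        segment = data[i + 4:i + 2 + length]
        if marker == 0xEB and (b"jumb" in segment or b"c2pa" in segment):
            return True                             # APP11 segment carrying a JUMBF/C2PA box
        i += 2 + length
    return False

if __name__ == "__main__":
    for p in sys.argv[1:]:
        print(p, "->", "C2PA manifest present" if has_c2pa_manifest(p) else "no manifest found")
```

A check like this only answers whether provenance data exists, which is the easy half of the problem; deciding whether an image with no manifest at all has been manipulated is where platform labels tend to go wrong.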
X is not currently listed among C2PA members, raising questions about whether the platform's implementation follows established protocols for identifying AI-generated content or relies on proprietary methods [1]. Given that X serves as a playground for political propaganda both domestically and abroad, transparency about how the company determines what qualifies as "edited" content is essential. The recent manipulated photo shared by the White House of a protester being arrested in Minnesota notably doesn't carry the new label, suggesting inconsistent enforcement [2].
The announcement arrives at a critical moment when misinformation spreads rapidly across social platforms. Other companies are also tackling similar challenges: TikTok has been labeling AI content, while streaming services like Deezer and Spotify are scaling initiatives to identify and label AI music. Google Photos uses C2PA to indicate how photos were made [1].

For X users, the lack of clarity creates uncertainty about what triggers the manipulated media warning and whether legitimate photography or minimal editing could be incorrectly flagged. Without transparent documentation or a clear dispute process, photographers and content creators may face challenges similar to those experienced on Meta platforms. Users should watch for how consistently X applies these labels, whether the system can distinguish between minor edits and genuine deepfake content, and if the platform will join industry-standard initiatives like C2PA to ensure reliable digital authenticity verification.
Summarized by Navi