2 Sources
[1]
You can't trust your eyes to tell you what's real anymore, says the head of Instagram
The key risk Instagram faces is that, as the world changes more quickly, the platform fails to keep up. Looking forward to 2026, one major shift: authenticity is becoming infinitely reproducible. Everything that made creators matter -- the ability to be real, to connect, to have a voice that couldn't be faked -- is now accessible to anyone with the right tools. Deepfakes are getting better. AI generates photos and videos indistinguishable from captured media.

Power has shifted from institutions to individuals because the internet made it so anyone with a compelling idea could find an audience. The cost of distributing information is zero. Individuals, not publishers or brands, established that there's a significant market for content from people. Trust in institutions is at an all-time low. We've turned to self-captured content from creators we trust and admire.

We like to complain about "AI slop," but there's a lot of amazing AI content. Even the quality AI content has a look, though: too slick, skin too smooth. That will change -- we're going to see more realistic AI content. Authenticity is becoming a scarce resource, driving more demand for creator content, not less. The bar is shifting from "can you create?" to "can you make something that only you could create?"

Unless you are under 25, you probably think of Instagram as a feed of square photos: polished makeup, skin smoothing, and beautiful landscapes. That feed is dead. People stopped sharing personal moments to the feed years ago. The primary way people share now is in DMs: blurry photos and shaky videos of daily experiences, shoe shots, and unflattering candids. This raw aesthetic has bled into public content and across art forms.

The camera companies are betting on the wrong aesthetic. They're competing to make everyone look like a pro photographer from 2015. But in a world where AI can generate flawless imagery, the professional look becomes the tell. Flattering imagery is cheap to produce and boring to consume.
People want content that feels real. Savvy creators are leaning into unproduced, unflattering images. In a world where everything can be perfected, imperfection becomes a signal. Rawness isn't just aesthetic preference anymore -- it's proof. It's defensive. A way of saying: this is real because it's imperfect.

Relatively quickly, AI will create any aesthetic you like, including an imperfect one that presents as authentic. At that point we'll need to shift our focus to who says something instead of what is being said. For most of my life I could safely assume photographs or videos were largely accurate captures of moments that happened. This is clearly no longer the case and it's going to take us years to adapt. We're going to move from assuming what we see is real by default to starting with skepticism, paying attention to who is sharing something and why. This will be uncomfortable -- we're genetically predisposed to believing our eyes.

Platforms like Instagram will do good work identifying AI content, but they'll get worse at it over time as AI gets better. It will be more practical to fingerprint real media than fake media. Camera manufacturers will cryptographically sign images at capture, creating a chain of custody. Labeling is only part of the solution. We need to surface much more context about the accounts sharing content so people can make informed decisions. Who is behind the account?

In a world of infinite abundance and infinite doubt, the creators who can maintain trust and signal authenticity -- by being real, transparent, and consistent -- will stand out. We need to build the best creative tools. Label AI-generated content and verify authentic content. Surface credibility signals about who's posting. Continue to improve ranking for originality. Instagram is going to have to evolve in a number of ways, and fast.
[2]
Instagram chief: AI is so ubiquitous 'it will be more practical to fingerprint real media than fake media'
It's no secret that AI-generated content took over our social media feeds in 2025. Now, Instagram's top exec Adam Mosseri has made it clear that he expects AI content to overtake non-AI imagery, and he has spelled out the significant implications that shift has for the platform's creators and photographers.

Mosseri shared the thoughts in a lengthy post about the broader trends he expects to shape Instagram in 2026, and he offered a notably candid assessment of how AI is upending the platform. "Everything that made creators matter -- the ability to be real, to connect, to have a voice that couldn't be faked -- is now suddenly accessible to anyone with the right tools," he wrote. "The feeds are starting to fill up with synthetic everything."

But Mosseri doesn't seem particularly concerned by this shift. He says that there is "a lot of amazing AI content" and that the platform may need to rethink its approach to labeling such imagery by "fingerprinting real media, not just chasing fake."

On some level, it's easy to understand how this seems like a more practical approach for Meta. As we've previously reported, technologies that are meant to identify AI content, like watermarks, have proved unreliable at best. They are easy to remove and even easier to ignore altogether. Meta's own labels are far from clear, and the company, which has spent tens of billions of dollars on AI this year alone, has admitted it can't reliably detect AI-generated or manipulated content on its platform.

That Mosseri is so readily admitting defeat on this issue, though, is telling. AI slop has won. And when it comes to helping Instagram's 3 billion users understand what is real, that should largely be someone else's problem, not Meta's. Camera makers -- presumably phone makers and actual camera manufacturers -- should come up with their own system, one that sure sounds a lot like watermarking, to "verify authenticity at capture."
Mosseri offers few details about how this would work or be implemented at the scale required to make it feasible. He also doesn't really address the fact that this is likely to alienate the many photographers and other Instagram creators who have already grown frustrated with the app. The exec regularly fields complaints from the group, who want to know why Instagram's algorithm doesn't consistently surface their posts to their own followers.

But Mosseri suggests those complaints stem from an outdated vision of what Instagram even is. The feed of "polished" square images, he says, "is dead." Camera companies, in his estimation, are "betting on the wrong aesthetic" by trying to "make everyone look like a professional photographer from the past." Instead, he says that more "raw" and "unflattering" images will be how creators can prove they are real, and not AI. In a world where Instagram has more AI content than not, creators should prioritize images and videos that intentionally make them look bad.
Adam Mosseri warns that AI-generated content is overtaking Instagram, making it more practical to verify real media than chase fake imagery. The platform head admits Meta can't reliably detect AI content and suggests camera manufacturers should cryptographically sign authentic captures. As deepfakes improve, creators must lean into raw, imperfect aesthetics to prove authenticity.
Adam Mosseri, head of Instagram, has delivered a stark assessment of the platform's future: AI-generated content will become so ubiquitous that verifying authentic media will prove more practical than identifying fake imagery[1].
Source: Engadget
In a lengthy post outlining trends for 2026, Mosseri acknowledges that "everything that made creators matter -- the ability to be real, to connect, to have a voice that couldn't be faked -- is now suddenly accessible to anyone with the right tools"[2]. The admission marks a significant shift in how Meta approaches the challenge of distinguishing authentic content from synthetic media on a platform serving 3 billion users.

Mosseri's candid remarks reveal the scale of the problem facing Instagram. Deepfakes are advancing rapidly, and AI now generates photos and videos that appear indistinguishable from captured media[1]. While critics complain about "AI slop," Mosseri argues there's "a lot of amazing AI content," though he concedes even quality AI-generated content currently has a telltale look -- "too slick, skin too smooth" -- that will soon disappear as technology improves[1].

The Instagram chief's statement amounts to an acknowledgment that Meta's current approach to labeling AI-generated media has failed. As Engadget reports, technologies meant to identify AI content, like watermarking, have proved unreliable -- easy to remove and easier to ignore[2]. Meta's own labels lack clarity, and the company has admitted it cannot reliably detect AI-generated or manipulated content on its platform, despite spending tens of billions of dollars on AI development this year alone[2].

Mosseri's proposed solution shifts responsibility away from Meta. He suggests camera manufacturers should cryptographically sign images at capture, creating a chain of custody to verify authentic content[1]. However, he offers few details about implementation at the scale required to make this feasible across billions of daily posts[2].
The flood of AI-generated content is fundamentally altering what signals authenticity on Instagram. Mosseri declares that the feed of "polished makeup, skin smoothing, and beautiful landscapes" is dead, with users having stopped sharing personal moments to their main feed years ago[1]. Instead, people now share primarily in DMs -- "blurry photos and shaky videos of daily experiences" in a raw and unflattering style[1].

This aesthetic shift has profound implications for creator content and visual media. Mosseri argues that camera manufacturers are "betting on the wrong aesthetic" by trying to "make everyone look like a pro photographer from 2015"[1]. In a world where AI can generate flawless imagery, professional-looking content becomes "the tell" that something might be fake[1]. Rawness and imperfection now serve as proof of authenticity -- a defensive signal that content is real precisely because it's unpolished[1].
The Instagram chief warns that society faces a fundamental adjustment in how we process visual information. "For most of my life I could safely assume photographs or videos were largely accurate captures of moments that happened. This is clearly no longer the case and it's going to take us years to adapt," Mosseri writes[1].
Source: The Verge
This eroding trust in visual media will force users to shift from assuming content is real by default to starting with skepticism -- an uncomfortable transition given humans are "genetically predisposed to believing our eyes"[1].

Mosseri acknowledges that platforms like Instagram will "do good work identifying AI content, but they'll get worse at it over time as AI gets better"[1]. The solution, he suggests, requires surfacing more context about accounts sharing content so users can make informed decisions about credibility. In this environment, the bar shifts from "can you create?" to "can you make something that only you could create?"[1]. Creators who maintain trust through transparency and consistency will stand out in a landscape of infinite abundance and infinite doubt[1].

The implications extend beyond Instagram. As AI becomes capable of replicating any aesthetic, including imperfect ones that present as authentic, attention will need to shift from what is being said to who says it[1]. This transformation challenges the foundation of trust that built the creator economy and raises questions about how platforms will verify authentic content while continuing to improve ranking for originality[1]. For photographers and creators already frustrated with Instagram's algorithm, Mosseri's vision suggests their concerns stem from an outdated understanding of what the platform has become[2].
Summarized by Navi