Instagram Issues Warning Over AI Content And Calls Out Camera Makers
by
Aaron Leong
—
Thursday, January 01, 2026, 11:07 AM EST
Instagram head Adam Mosseri dropped a telling Threads post suggesting that we're approaching a tipping point where AI will be so deeply integrated into media creation that distinguishing fake content from the real thing will be all but impossible. Instead of chasing the infinite tide of AI-generated pixels, the future of digital trust may rest on OEMs' shoulders: watermarking or embedding provenance metadata in content the moment an image is captured.
As generative tools become standard features in phones, PCs, and editing software, the line between a real photo and an AI photo is blurring into obsolescence. If a photographer uses AI to remove a stray power line or adjust the lighting of a sunset, the resulting image is technically manipulated, yet it represents a real event. Mosseri argues that because AI is becoming ubiquitous, the industry should focus its energy on verifying human-captured media. This would involve a system of digital watermarking or cryptographic signing that occurs at the moment a camera shutter clicks, which could help create a verifiable trail of authenticity.
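To make the idea concrete, here is a minimal sketch of what sign-at-capture could look like, written in Python with the `cryptography` library. The key handling, field names, and manifest format here are illustrative assumptions, not any camera maker's actual firmware; shipping provenance systems (such as C2PA's Content Credentials) embed signed manifests in the file itself and anchor device keys in secure hardware.

```python
# Minimal sketch of sign-at-capture provenance (illustrative only, not a
# real camera firmware API). Assumes a per-device Ed25519 key pair whose
# private half would, in practice, live in a hardware secure element.
import hashlib
import json
import time

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()  # provisioned at manufacture in reality


def sign_capture(image_bytes: bytes, device_id: str) -> dict:
    """Hash the sensor output plus capture metadata, then sign the digest."""
    manifest = {
        "device_id": device_id,  # hypothetical field names
        "captured_at": time.time(),
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = device_key.sign(payload).hex()
    return manifest  # travels with the image as provenance metadata


capture = sign_capture(b"\x89...raw sensor bytes...", device_id="cam-0001")
print(capture["signature"][:16], "...")
```

Because the signature covers a hash of the pixels and the capture metadata together, any later edit to the image invalidates it, which is exactly the "verifiable trail" property Mosseri is describing.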
"The camera companies are betting on the wrong aesthetic. They're competing to make everyone look like a professional photographer from the past. Every year we see phone cameras boast about more megapixels and image processing. We are romanticising the past," Mosseri says.
He goes on to criticize how portrait mode in modern camera tech artificially blurs backgrounds, which may look good, "but flattering imagery is cheap to produce and boring to consume." He also says that, much as AI makes polish cheap to add, phone cameras have done the same for professional-looking photos, "and both trends cheapen the aesthetic."
Shrimp Jesus: an example of AI slop
A provenance-based approach is the opposite of the current detect-and-label process. Social media platforms have so far struggled to develop algorithms capable of spotting deepfakes or AI-generated propaganda, and as AI models grow more sophisticated, detection becomes a losing game of cat-and-mouse. Mosseri suggests shifting the focus to fingerprinting real media instead: content posted without a verified human fingerprint isn't necessarily a lie, but it carries less weight than media that can be traced back to a physical lens and a specific moment in time.
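On the platform side, that check reduces to signature verification against the camera maker's published public key. Here is a hedged sketch that assumes the manifest format from the signing example above; a production system would validate a certificate chain to a vendor root rather than trusting a single raw key.

```python
# Sketch of platform-side verification against a maker's public key.
# Assumes the hypothetical manifest format from the signing sketch above.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def verify_capture(image_bytes: bytes, manifest: dict,
                   public_key: Ed25519PublicKey) -> bool:
    """Return True only if the image still matches its signed capture manifest."""
    if hashlib.sha256(image_bytes).hexdigest() != manifest["image_sha256"]:
        return False  # pixels no longer match what the camera signed
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]), payload)
        return True  # verified: traceable to a physical capture
    except InvalidSignature:
        return False  # unverified: downrank rather than delete
```

Note that a failed check here maps to "carries less weight," not removal, which matches the downranking posture Mosseri describes.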
Therefore, if the burden of proof shifts to the "real," we may enter an era where unverified media is treated with immediate skepticism. While this can help combat misinformation, it also raises concerns about accessibility. For instance, not every witness to a breaking news event will have a device capable of hardware-level digital signing. There is a risk that truth could become a premium feature, accessible only to those with the latest tech, while the rest of the world’s visual output is relegated to a sea of unverified noise.
Mosseri also declares that today's social media feeds are effectively dead.
"Unless you're under 25 and use Instagram, you probably think of the app as a feed of square photos. The aesthetic is polished: lots of make up, skin smoothing, high contrast photography, beautiful landscapes.
That feed is dead. People largely stopped sharing personal moments to feed years ago," Mosseri says.
"Stories are alive and well as they provide a less pressurized way to share with your followers, but the primary way people share, even photos and videos, is in DMs. That content is unpolished; it’s blurry photos and shaky videos of people’s daily experiences. Think shoe shots and unflattering candids," he adds.
Ultimately, Mosseri's perspective reflects a surrender to the reality of AI (and AI slop, for that matter). We can no longer treat AI as an intruder in our digital spaces; it is now the foundation of those spaces. By prioritizing the protection of the authentic over the policing of the synthetic, tech leaders are attempting to build a new infrastructure for trust. What say you?