Grok AI chatbot misidentifies NYC Mayor Zohran Mamdani in bizarre Epstein conspiracy analysis


Elon Musk's Grok AI chatbot confused NYC Mayor Zohran Mamdani with Jimmy Kimmel when asked to analyze a baseless conspiracy theory linking him to Jeffrey Epstein. The incident highlights growing concerns about AI reliability challenges as millions of Epstein files trigger waves of AI-generated misinformation across social media platforms.


Grok AI Chatbot Delivers Bizarre Response to Epstein Conspiracy

Elon Musk's Grok AI chatbot produced what may be one of the most baffling AI responses in recent memory when asked to analyze whether New York City Mayor Zohran Mamdani was related to convicted sex offender Jeffrey Epstein [1]. The chatbot, marketed by Musk as "maximally truth-seeking," initially misidentified Mamdani as late-night TV host Jimmy Kimmel when presented with side-by-side headshots, despite the two looking nothing alike [1]. The incident unfolded after an X account asked Grok to calculate the probability of a genetic relationship based on a perceived facial-similarity analysis of eyebrows and mouth shapes.

AI Misinformation Spreads Across Social Media

The gaffe occurred amid a torrent of AI-generated misinformation flooding social media following the Trump administration's release of six million Jeffrey Epstein files last week [2]. Claims that Zohran Mamdani was the "secret, biological son" of Epstein were shared tens of thousands of times, fueled by AI-generated images purporting to show him as an infant with Epstein and his mother, filmmaker Mira Nair [2]. Prominent right-wing influencers and conspiracy theorist Alex Jones amplified the baseless conspiracy theory, with Jones claiming a "major investigation" was underway [2].

Elon Musk's AI Chatbot Struggles with Basic Identification

After initially confusing Mamdani with Kimmel, Grok eventually corrected itself, stating: "That's actually Zohran Mamdani on the left, not Jimmy Kimmel -- my bad on the initial ID. Similarities in eyebrows and smiles are there, but no known family ties to Epstein. Probability of relation: under one percent based on public info" [1]. The error demonstrates AI reliability challenges even for straightforward tasks like identifying well-known figures from well-lit photographs, despite xAI pouring billions of dollars into development [1]. Elon Musk's AI chatbot now operates as a de facto fact-checking machine for millions of X users, raising concerns about its role in shaping public perception [1].

Epstein Files AI Slop Creates Information Crisis

The Mamdani incident represents the latest example of how generative AI collides with news consumption patterns to create AI misinformation at scale. According to BBC Verify, the fake images originated from a parody account and carried AI watermarks, yet Grok told at least one user they were authentic [2]. The supposed connection stemmed from court documents mentioning Nair's attendance at a 2009 afterparty at Ghislaine Maxwell's home, though no evidence suggests any criminal involvement [1]. Mamdani was born in 1991, making the conspiracy theory chronologically impossible [2].

Pattern of AI Hallucination and Fake Images

This isn't an isolated case. Researchers have debunked AI-generated photos depicting Canadian Prime Minister Mark Carney with Epstein accomplice Maxwell, while fake images have targeted Oprah Winfrey, Kamala Harris, and Sam Altman [2]. The Trump administration regularly distributes artificial intelligence content through official social media channels, including manipulated images and AI-generated depictions [2]. Ironically, Mamdani recently shut down a chatbot costing around half a million dollars that former mayor Eric Adams had deployed to help small business owners, citing its unreliability in providing accurate information about labor law [1].

What This Means for Truth and Fact-Checking

The onslaught of emails, business logs, photos, testimony, videos, and personal correspondence in the Jeffrey Epstein files has spread coverage so thin that parsing truth from fiction becomes nearly impossible once AI-generated content muddies the waters [2]. As xAI continues development and social media platforms increasingly rely on chatbot technology for information verification, the Grok incident raises critical questions about deploying AI systems that struggle with basic tasks to millions of users who may trust their outputs without verification.
