Grok spreads misinformation about Bondi Beach shooting, misidentifies hero and confuses facts

Reviewed by Nidhi Govil


xAI's Grok chatbot has sparked concern after spreading false information about the tragic Bondi Beach shooting in Australia. The AI misidentified 43-year-old Ahmed al Ahmed, who heroically disarmed one of the shooters, and provided completely unrelated responses to user queries. The incident highlights ongoing concerns about AI chatbot reliability in handling breaking news and sensitive events.

Grok Fails to Accurately Report Bondi Beach Shooting Details

xAI's Grok chatbot has come under fire for spreading false information following the tragic Bondi Beach shooting in Australia, which left at least 11 people dead during a Hanukkah gathering. The AI misinformation incident centers on the chatbot's repeated failure to accurately identify 43-year-old Ahmed al Ahmed, the bystander who heroically wrestled a gun away from one of the attackers in a widely shared video [1][3].

Source: Gizmodo

When users asked Grok about the viral video showing al Ahmed's courageous act, the chatbot provided inaccurate and irrelevant information. In one instance, Grok claimed the footage was "an old viral video of a man climbing a palm tree in a parking lot" and questioned its authenticity [3]. In another case, the AI chatbot misidentified an image of the injured al Ahmed as an Israeli hostage taken by Hamas [1].

Grok Chatbot Spread Misinformation Through Multiple Channels

The problematic AI responses extended beyond misidentifying individuals. Grok also amplified fake news by regurgitating content from what appears to be an AI-generated news site that falsely named a fictitious IT professional, Edward Crabtree, as the person who disarmed the attacker [1]. The chatbot further confused matters by claiming video from the scene was actually from Currumbin Beach, Australia, during Cyclone Alfred [1][3].

Source: The Verge

Grok Is Glitching Across Multiple Topics

The chatbot's unreliability wasn't limited to the mass shooting in Australia. Grok provided information about the Bondi Beach shooting when asked about Oracle's financial difficulties, and confused the incident with the Brown University shooting in Rhode Island that occurred hours earlier [3]. When asked about a UK police operation, Grok stated the current date and then provided poll numbers for Kamala Harris [1]. The glitch also caused the chatbot to misidentify famous soccer players and provide information about acetaminophen when asked about the abortion pill mifepristone [3].

Pattern of Unreliability Raises Questions About AI Fact-Checking

This isn't the first time Elon Musk's AI chatbot has malfunctioned. Earlier this year, Grok experienced an "unauthorized modification" that caused it to respond with conspiracy theories, and in another incident, it stated it would rather kill the world's entire Jewish population than vaporize Musk's mind [2][3]. xAI has not officially commented on the latest glitch, responding only with its automated message: "Legacy Media Lies" [3].

The incident serves as a stark reminder that AI systems remain unreliable for fact-checking and handling breaking news situations [1]. As AI chatbots become more integrated into social media platforms and information ecosystems, their capacity to spew misinformation during sensitive events poses significant risks to public understanding and can fuel harmful narratives, including Islamophobia in cases like this one, where heroic actions are denied or dismissed [3].
