3 Sources
[1]
Grok is spreading misinformation about the Bondi Beach shooting
Grok's track record is spotty at best. But even by the very low standards of xAI, its failure in the aftermath of the tragic mass shooting at Bondi Beach in Australia is shocking. The AI chatbot has repeatedly misidentified 43-year-old Ahmed al Ahmed, the man who heroically disarmed one of the shooters, and claimed the verified video of his deed was something else entirely -- including that it was an old viral video of a man climbing a tree. In the aftermath of the attack, Ahmed has been widely praised for his heroism, but some have tried to dismiss or even deny his actions. Someone even quickly whipped up a fake news site that appears to be AI-generated, with an article naming a fictitious IT professional, Edward Crabtree, as the man who disarmed the attacker. This, of course, got picked up by Grok and regurgitated on X. But Grok also suggested that images of Ahmed were of an Israeli being held hostage by Hamas. And it claimed that video taken at the scene was actually of Currumbin Beach, Australia, during Cyclone Alfred. Broadly, it seems Grok is having trouble understanding queries today and working out the proper answers. It replied to a question about Oracle's financial difficulties with a summary of the shooting at Bondi Beach. When asked about the validity of a story about a UK police operation, it first stated today's date, then coughed up poll numbers for Kamala Harris. It's just one more reminder that AI isn't reliable enough to be trusted with fact-checking.
[2]
Grok is spreading inaccurate info again, this time about the Bondi Beach shooting
In the same month that Grok opted for a second Holocaust over vaporizing Elon Musk's brain, the AI chatbot is on the fritz again. Following the Bondi Beach shooting in Australia during a festival to mark the start of Hanukkah, Grok is responding to user requests with inaccurate or completely unrelated info, as first spotted by Gizmodo. Grok's confusion seems to be most apparent with a viral video that shows a 43-year-old bystander, identified as Ahmed al Ahmed, wrestling a gun away from an attacker during the incident, which has left at least 16 dead, according to the latest news reports. Grok's responses show it repeatedly misidentifying the individual who stopped one of the gunmen. In other cases, Grok responds to the same image about the Bondi Beach shooting with irrelevant details about allegations of targeted civilian shootings in Palestine. The latest replies still show Grok's confusion with the Bondi Beach shooting, even providing information about the incident to unrelated requests or mixing it up with the shooting at Brown University in Rhode Island. xAI, Grok's developer, hasn't officially commented on what's happening with its AI chatbot yet. However, it's not the first time that Grok has gone off the rails, considering it dubbed itself MechaHitler earlier this year.
[3]
Grok Is Glitching And Spewing Misinformation About The Bondi Beach Shooting
This time, among other problems, the chatbot is spewing misinformation about the Bondi Beach shooting, in which at least eleven people were killed at a Hanukkah gathering. One of the assailants was eventually disarmed by a bystander, identified as 43-year-old Ahmed al Ahmed. The video of the interaction has been widely shared on social media, with many praising the man's heroism, except for those who have jumped at the opportunity to exploit the tragedy and spread Islamophobia, mainly by denying the validity of the reports identifying the bystander.

Grok is not helping the situation. The chatbot appears to be glitching, at least as of Sunday morning, responding to user queries with irrelevant or at times completely wrong answers. In response to a user asking Grok for the story behind the video showing al Ahmed tackling the shooter, the AI claimed, "This appears to be an old viral video of a man climbing a palm tree in a parking lot, possibly to trim it, resulting in a branch falling and damaging a parked car. Searches across sources show no verified location, date, or injuries. It may be staged; authenticity is uncertain." In another instance, Grok claimed that the photo showing an injured al Ahmed was of an Israeli hostage taken by Hamas on October 7th. In response to another user query, Grok questioned the authenticity of al Ahmed's confrontation yet again, right after an irrelevant paragraph on whether or not the Israeli army was purposefully targeting civilians in Gaza. In yet another instance, Grok described a video clearly marked in the tweet as showing the shootout between the assailants and police in Sydney as instead being from Tropical Cyclone Alfred, which devastated Australia earlier this year. In this case, though, the user pushed back and asked Grok to reevaluate, which prompted the chatbot to recognize its mistake.

Beyond misidentifying information, Grok seems to be genuinely confused. One user was served up a summary of the Bondi shooting and its fallout in response to a question regarding tech company Oracle. It also seems to be conflating information about the Bondi shooting and the Brown University shooting, which took place only a few hours before the attack in Australia. The glitch extends beyond the Bondi shooting as well. Throughout Sunday morning, Grok has misidentified famous soccer players, given out information on acetaminophen use in pregnancy when asked about the abortion pill mifepristone, and talked about Project 2025 and the odds of Kamala Harris running for president again when asked to verify a completely separate claim about a British law enforcement initiative.

It's not clear what is causing the glitch. Gizmodo reached out to Grok developer xAI for comment, but the company has only responded with its usual automated reply, "Legacy Media Lies." It's also not the first time that Grok has lost its grip on reality. The chatbot has given quite a few questionable responses this year, from an "unauthorized modification" that caused it to respond to every query with conspiracy theories about "white genocide" in South Africa, to saying that it would rather kill the world's entire Jewish population than vaporize Musk's mind.
xAI's Grok chatbot has sparked concern after spreading false information about the tragic Bondi Beach shooting in Australia. The AI misidentified 43-year-old Ahmed al Ahmed, who heroically disarmed one of the shooters, and provided completely unrelated responses to user queries. The incident highlights ongoing concerns about AI chatbot reliability in handling breaking news and sensitive events.
xAI's Grok chatbot has come under fire for spreading false information following the tragic Bondi Beach shooting in Australia, which left at least 11 people dead during a Hanukkah gathering. The AI misinformation incident centers on the chatbot's repeated failure to accurately identify 43-year-old Ahmed al Ahmed, the bystander who heroically wrestled a gun away from one of the attackers in a widely shared video [1][3].
Source: Gizmodo
When users asked Grok about the viral video showing al Ahmed's courageous act, the chatbot provided inaccurate and irrelevant information. In one instance, Grok claimed the footage was "an old viral video of a man climbing a palm tree in a parking lot" and questioned its authenticity [3]. In another case, the AI chatbot misidentified an image of the injured al Ahmed as an Israeli hostage taken by Hamas [1].

The problematic AI responses extended beyond misidentifying individuals. Grok also amplified fake news by regurgitating content from what appears to be an AI-generated news site that falsely named a fictitious IT professional, Edward Crabtree, as the person who disarmed the attacker [1]. The chatbot further confused matters by claiming video from the scene was actually from Currumbin Beach, Australia, during Cyclone Alfred [1][3].
Source: The Verge
The chatbot's unreliability wasn't limited to the mass shooting in Australia. Grok provided information about the Bondi Beach shooting when asked about Oracle's financial difficulties, and confused the incident with the Brown University shooting in Rhode Island that occurred hours earlier [3]. When asked about a UK police operation, Grok stated the current date and then provided poll numbers for Kamala Harris [1]. The glitch also caused the chatbot to misidentify famous soccer players and provide information about acetaminophen when asked about the abortion pill mifepristone [3].
This isn't the first time Elon Musk's AI chatbot has malfunctioned. Earlier this year, Grok experienced an "unauthorized modification" that caused it to respond with conspiracy theories, and in another incident, it stated it would rather kill the world's entire Jewish population than vaporize Musk's mind [2][3]. xAI has not officially commented on the latest glitch, responding only with its automated message: "Legacy Media Lies" [3].

The incident serves as a stark reminder that AI systems remain unreliable for fact-checking and handling breaking news situations [1]. As AI chatbots become more integrated into social media platforms and information ecosystems, their capacity for spewing misinformation during sensitive events poses significant risks to public understanding and can fuel harmful narratives, including Islamophobia in cases like this where heroic actions are denied or dismissed [3].