8 Sources
[1]
Grok is spreading misinformation about the Bondi Beach shooting
Grok's track record is spotty at best. But even by the very low standards of xAI, its failure in the aftermath of the tragic mass shooting at Bondi Beach in Australia is shocking. The AI chatbot has repeatedly misidentified 43-year-old Ahmed al Ahmed, the man who heroically disarmed one of the shooters, and claimed the verified video of his deed was something else entirely -- including that it was an old viral video of a man climbing a tree. In the aftermath of the attack, Ahmed has been widely praised for his heroism, but some have tried to dismiss or even deny his actions. Someone even quickly whipped up a fake news site that appears to be AI-generated, with an article naming a fictitious IT professional, Edward Crabtree, as the man who disarmed the attacker. This, of course, got picked up by Grok and regurgitated on X. But Grok also suggested that images of Ahmed were of an Israeli being held hostage by Hamas. And it claimed that video taken at the scene was actually of Currumbin Beach, Australia, during Cyclone Alfred. Broadly, it seems Grok is having trouble understanding queries today and working out the proper answers. It replied to a question about Oracle's financial difficulties with a summary of the shooting at Bondi Beach. When asked about the validity of a story about a UK police operation, it first stated today's date, then coughed up poll numbers for Kamala Harris. It's just one more reminder that AI isn't reliable enough to be trusted with fact-checking.
[2]
Grok Caught Spreading Misinformation About Bondi Beach Shooting
(Credit: Thomas Fuller/SOPA Images/LightRocket via Getty Images) AI chatbots have evolved rapidly over the last couple of years, but they continue to sometimes fail to provide reliable and accurate information about current events and breaking news. On Sunday, Grok was caught spreading misinformation about the mass shooting at Bondi Beach. Two armed men opened fire at a Hanukkah gathering in Sydney, killing at least 15 people. During the attack, one of the gunmen was disarmed by a man later identified as Ahmed al Ahmed. As the video circulated widely across X, Grok was seen posting inaccurate information under multiple versions of the clip. In one instance, spotted by Gizmodo, Grok claimed the video "appears to be an old viral video of a man climbing a palm tree in a parking lot, possibly to trim it, resulting in a branch falling and damaging a parked car." In a different video of the attackers, it said, "The video appears to be from Currumbin Beach, Australia, during Cyclone Alfred in March 2025, where waves swept cars in the parking lot." No cars were swept away in the video. Grok also misidentified an image of al Ahmed, who suffered bullet wounds on his arm and hand. It said "the man in the image is Guy Gilboa-Dalal, confirmed by his family and multiple sources including Times of Israel and CNN," and that he was held hostage by Hamas for 700 days before being released in October 2025. Additionally, the chatbot has struggled to distinguish between events. It incorrectly merged details from the Brown University and Bondi Beach shootings. Many more examples of Grok's inaccuracies are still live on X. Both the chatbot and the social media platform are owned by Elon Musk, and the former has been in the headlines for the wrong reasons throughout the year. It praised Adolf Hitler in July and declared Musk fitter than LeBron James last month. President Trump called al Ahmed a "very, very brave person." The gunmen, a father and son duo, have been captured and identified. 
The father has died, while the son is receiving treatment.
[3]
Grok is spreading inaccurate info again, this time about the Bondi Beach shooting
In the same month that Grok opted for a second Holocaust over vaporizing Elon Musk's brain, the AI chatbot is on the fritz again. Following the Bondi Beach shooting in Australia during a festival to mark the start of Hanukkah, Grok is responding to user requests with inaccurate or completely unrelated info, as first spotted by Gizmodo. Grok's confusion seems to be most apparent with a viral video that shows a 43-year-old bystander, identified as Ahmed al Ahmed, wrestling a gun away from an attacker during the incident, which has left at least 16 dead, according to the latest news reports. Grok's responses show it repeatedly misidentifying the individual who stopped one of the gunmen. In other cases, Grok responds to the same image about the Bondi Beach shooting with irrelevant details about allegations of targeted civilian shootings in Palestine. The latest replies still show Grok's confusion with the Bondi Beach shooting, even providing information about the incident to unrelated requests or mixing it up with the shooting at Brown University in Rhode Island. xAI, Grok's developer, hasn't officially commented on what's happening with its AI chatbot yet. However, it's not the first time that Grok has gone off the rails, considering it dubbed itself MechaHitler earlier this year.
[4]
Grok Is Glitching And Spewing Misinformation About The Bondi Beach Shooting
This time, among other problems, the chatbot is spewing misinformation about the Bondi Beach shooting, in which at least eleven people were killed at a Hanukkah gathering. One of the assailants was eventually disarmed by a bystander, identified as 43-year-old Ahmed al Ahmed. The video of the interaction has been widely shared on social media, with many praising the man's heroism, except for those who have jumped at the opportunity to exploit the tragedy and spread Islamophobia, mainly by denying the validity of the reports identifying the bystander. Grok is not helping the situation. The chatbot appears to be glitching, at least as of Sunday morning, responding to user queries with irrelevant or at times completely wrong answers. In response to a user asking Grok for the story behind the video showing al Ahmed tackling the shooter, the AI claimed "This appears to be an old viral video of a man climbing a palm tree in a parking lot, possibly to trim it, resulting in a branch falling and damaging a parked car. Searches across sources show no verified location, date, or injuries. It may be staged; authenticity is uncertain." In another instance, Grok claimed that the photo showing an injured al Ahmed was of an Israeli hostage taken by Hamas on October 7th. In response to another user query, Grok questioned the authenticity of al Ahmed's confrontation yet again, right after an irrelevant paragraph on whether or not the Israeli army was purposefully targeting civilians in Gaza. In another instance, Grok described a video clearly marked in the tweet as showing the shootout between the assailants and police in Sydney as instead being from Tropical Cyclone Alfred, which devastated Australia earlier this year. In this case, however, the user pushed back and asked Grok to reevaluate, which prompted the chatbot to recognize its mistake. Beyond misidentifying information, Grok seems simply confused.
One user was served up a summary of the Bondi shooting and its fallout in response to a question regarding tech company Oracle. Grok also seems to be confusing information about the Bondi shooting with the Brown University shooting, which took place only a few hours before the attack in Australia. The glitch also extends beyond the Bondi shooting. Throughout Sunday morning, Grok misidentified famous soccer players, gave out information on acetaminophen use in pregnancy when asked about the abortion pill mifepristone, and talked about Project 2025 and the odds of Kamala Harris running for the presidency again when asked to verify a completely separate claim about a British law enforcement initiative. It's not clear what is causing the glitch. Gizmodo reached out to Grok-developer xAI for comment, but the company responded only with its usual automated reply, "Legacy Media Lies." It's also not the first time that Grok has lost its grip on reality. The chatbot has given quite a few questionable responses this year, from an "unauthorized modification" that caused it to respond to every query with conspiracy theories on "white genocide" in South Africa to saying that it would rather kill the world's entire Jewish population than vaporize Musk's mind.
[5]
Grok spread misinformation about Bondi Beach shooting
Grok still isn't equipped to react to breaking news. Credit: Mateusz Slodkowski / SOPA Images / LightRocket via Getty Images On the evening of Dec. 14, a large crowd gathered on Australia's Bondi Beach to celebrate the first night of Hanukkah and was instead met with violence as two gunmen opened fire on the group. As of today, 15 people have been killed. One of the assailants was taken down by bystander Ahmed Al Ahmed, whose brave decision to grapple with the shooter and take away his weapon was captured on video and shared widely across social media platforms. Informed by an epidemic of gun violence that has turned many bystanders into heroes, it's clear on camera that the man in the white shirt is potentially saving dozens of lives. The long-barreled gun is in clear view as he wrests it from the hand of a man clad in black, who then topples over and ambles away. But X's Grok, the AI chatbot designed by Elon Musk's AI venture xAI, didn't see it as such. As users stumbled across the harrowing video of Ahmed the following morning and asked the chatbot to explain, Grok described the scene as "an old viral video of a man climbing a palm tree in a parking lot, possibly to trim it." X users have since added a fact-check to the bot's reply. In another response, Grok mislabeled the video as footage from the Oct. 7 Hamas attack, and credited it to Tropical Cyclone Alfred in another, Gizmodo reported. X hasn't yet explained why this glitch occurred, or why Grok has made similar fumbles beyond queries about Bondi Beach. But watchdogs know why, and it's very simple: chatbots are bad at breaking news. In the wake of the killing of far-right commentator Charlie Kirk, Grok exacerbated conspiracy theories about the shooter and Kirk's own bodyguards, telling some users that a graphic video clearly showing Kirk's death was just a meme.
Other AI-powered search sources, including Google AI Overview, also gave false information in the immediate aftermath of Kirk's death. "Instead of declining to answer, models now pull from whatever information is available online at the given moment, including low-engagement websites, social posts, and AI-generated content farms seeded by malign actors. As a result, chatbots repeat and validate false claims during high-risk, fast-moving events," NewsGuard researcher McKenzie Sadeghi told Mashable at the time. Social media platforms have also scaled back human fact-checking across the board, and chatbots may instead prioritize frequency over accuracy in real-time news responses. AI companies know this is a glaring gap for their bots, and it's why they've courted news publications into larger and larger licensing deals to better their products. Earlier this month, Meta signed multiple commercial AI agreements with news publishers, including CNN, Fox News, and international publication Le Monde, adding to its existing partnership with Reuters. Google is running a pilot program with participating news publishers to expand AI-powered features, including article summaries, to Google News. Hallucinations and accuracy also remain a big problem for large-language models and AI chatbots in general, which often confidently provide false information to users.
[6]
Grok spews misinformation about deadly Australia shooting
Elon Musk's AI chatbot Grok churned out misinformation about Australia's Bondi Beach mass shooting, misidentifying a key figure who saved lives and falsely claiming that a victim staged his injuries, researchers said Tuesday. The episode highlights how chatbots often deliver confident yet false responses during fast-developing news events, fueling information chaos as online platforms scale back human fact-checking and content moderation. The attack during a Jewish festival on Sunday in the beach suburb of Sydney was one of Australia's worst mass shootings, leaving 15 people dead and dozens wounded. Among the falsehoods Grok circulated was its repeated misidentification of Ahmed al Ahmed, who was widely hailed as a Bondi Beach hero after he risked his life to wrest a gun from one of the attackers. In one post reviewed by AFP, Grok claimed the verified clip of the confrontation was "an old viral video of a man climbing a palm tree in a parking lot, possibly to trim it," suggesting it "may be staged." Citing credible media sources such as CNN, Grok separately misidentified an image of Ahmed as that of an Israeli hostage held by the Palestinian militant group Hamas for more than 700 days. When asked about another scene from the attack, Grok incorrectly claimed it was footage from tropical "cyclone Alfred," which generated heavy weather across the Australian coast earlier this year. Only after another user pressed the chatbot to reevaluate its answer did Grok backpedal and acknowledge the footage was from the Bondi Beach shooting. When reached for comment by AFP, Grok-developer xAI responded only with an auto-generated reply: "Legacy Media Lies."

'Crisis actor'

The misinformation underscores what researchers say is the unreliability of AI chatbots as a fact-checking tool. Internet users are increasingly turning to chatbots to verify images in real time, but the tools often fail, raising questions about their visual debunking capabilities.
In the aftermath of the Sydney attack, online users circulated an authentic image of one of the survivors, falsely claiming he was a "crisis actor," disinformation watchdog NewsGuard reported. Crisis actor is a derogatory label used by conspiracy theorists to allege that someone is deceiving the public -- feigning injuries or death -- while posing as a victim of a tragic event. Online users questioned the authenticity of a photo of the survivor with blood on his face, sharing a response from Grok that falsely labeled the image as "staged" or "fake." NewsGuard also reported that some users circulated an AI image -- created with Google's Nano Banana Pro model -- depicting red paint being applied to the survivor's face to pass it off as blood, seemingly to bolster the false claim that he was a crisis actor. Researchers say AI models can be useful to professional fact-checkers, helping to quickly geolocate images and spot visual clues to establish authenticity. But they caution that these tools cannot replace the work of trained human fact-checkers. In polarized societies, however, professional fact-checkers often face accusations of liberal bias from conservatives, a charge they reject. AFP currently works in 26 languages with Meta's fact-checking program, including in Asia, Latin America, and the European Union.
[8]
Bondi Beach terrorist attack: From Israeli hostage to cyclone Alfred, how Elon Musk's AI chatbot Grok misidentified hero bystander Ahmed al Ahmed
Elon Musk's AI chatbot Grok has come under fire after spreading false and misleading information about Australia's Bondi Beach shooting. Grok gave false information about Ahmed al Ahmed, who fought with the gunmen to save the lives of civilians. In one response, Grok identified the hero bystander as an Israeli hostage; in another, it described him as an IT professional. Following the devastating mass shooting at Bondi Beach in Sydney, Australia, which killed at least 16 people, Grok was found spreading incorrect and confusing information about the shooting. Grok has reportedly spread misinformation about the hero bystander involved in the Sydney attack, reports Gizmodo. According to the report, after the news of people being killed at a Hanukkah gathering started circulating, netizens started asking Grok questions about what had happened. When one user asked Grok to explain the video showing Ahmed al Ahmed, the man who tackled one of the gunmen as shots were being fired at civilians, the AI responded with an incorrect and false claim. The report states that Grok has been glitching since Sunday (December 14) morning, responding to user questions with unrelated or sometimes incorrect answers. Grok repeatedly got key facts wrong about the incident, especially about the man who helped stop the attacker. A bystander, 43-year-old Ahmed al Ahmed, was widely praised after videos showed him confronting and disarming one of the gunmen. However, Grok failed to identify him correctly in several responses. In one instance, the AI chatbot wrongly identified Ahmed al Ahmed as an Israeli hostage; in another response, it questioned whether the widely shared videos and photos showing al Ahmed's actions were even real.
In another instance, Grok described the video showing the shootout between the assailants and police in Sydney as being from Tropical Cyclone Alfred, which struck Australia earlier this year. "This appears to be an old viral video of a man climbing a palm tree in a parking lot, possibly to trim it, resulting in a branch falling and damaging a parked car. Searches across sources show no verified location, date, or injuries. It may be staged; authenticity is uncertain," the AI chatbot said in one of its responses. In another erroneous response, Grok identified the person who disarmed the gunman as Edward Crabtree, described as a 43-year-old IT professional and senior solutions architect. This claim was later found to be false. Grok subsequently acknowledged that the confusion may have stemmed from viral social media posts and unreliable online articles, possibly including AI-generated content published on poorly maintained news websites. To its credit, Grok has begun correcting some of its errors. One post that reportedly claimed a video of the shooting was actually footage from Cyclone Alfred was later revised following what the chatbot described as a re-evaluation. Grok has also since acknowledged Ahmed al Ahmed's identity and clarified that its earlier responses were based on misleading or inaccurate online sources. On December 14, a Pakistani-origin father and son identified as Sajid and Naveed Akram opened fire at Sydney's Bondi Beach during Hanukkah, killing 16 people, including one of the gunmen. Authorities have declared the incident a terrorist attack, ABC News reported. Police identified the attackers as a 50-year-old man and his 24-year-old son.
The father was shot dead by police, while the son was taken to hospital with critical injuries. Investigators said the father legally owned six firearms, which are believed to have been used in the shooting. Officials also said the pair had pledged allegiance to the Islamic State group. Two IS flags were recovered from their car near Bondi Beach, with one seen placed on the bonnet, the report states. Australian Prime Minister Anthony Albanese has called for stronger gun laws, stating, "The government is prepared to take whatever action is necessary, including tougher gun laws."
Elon Musk's xAI chatbot Grok repeatedly misidentified 43-year-old Ahmed al Ahmed, who disarmed a gunman during the Bondi Beach shooting in Sydney. The AI chatbot claimed verified videos showed unrelated events, including a man climbing a tree and footage from Cyclone Alfred. The incident highlights persistent problems with AI chatbots handling breaking news events.
Grok, the AI chatbot developed by Elon Musk's xAI, has come under intense scrutiny after spreading misinformation about the tragic Bondi Beach shooting in Sydney, Australia. On December 14, two gunmen opened fire at a Hanukkah gathering, killing at least 15 people [1][2]. During the attack, 43-year-old Ahmed al Ahmed heroically disarmed one of the shooters, with video of his brave act circulating widely across social media platforms [3][5].
Yet when users turned to Grok for information about the viral footage, the chatbot repeatedly misidentified individuals and events. In one particularly egregious example spotted by Gizmodo, Grok claimed the video "appears to be an old viral video of a man climbing a palm tree in a parking lot, possibly to trim it, resulting in a branch falling and damaging a parked car" [4]. The AI chatbot also suggested images of Ahmed were of an Israeli hostage held by Hamas, specifically naming Guy Gilboa-Dalal and claiming he was held for 700 days before being released in October 2025 [2].

The problems extended far beyond misidentified individuals. Grok claimed video taken at the scene was actually from Currumbin Beach, Australia, during Cyclone Alfred in March 2025, stating that "waves swept cars in the parking lot" [1][2]. No such event occurred in the actual footage. The chatbot's reliability issues became even more apparent as it struggled to distinguish between separate incidents, incorrectly merging details from the Brown University shooting with the Bondi Beach shooting [2][4].
In what appears to be a broader glitch affecting multiple queries, Grok responded to a question about Oracle's financial difficulties with a summary of the Bondi Beach shooting. When asked about a UK police operation, it provided poll numbers for Kamala Harris instead [1]. The chatbot also gave information on acetaminophen use in pregnancy when asked about the abortion pill mifepristone [4].

The situation was further complicated by fake news circulating online. Someone quickly created an AI-generated website featuring an article that named a fictitious IT professional, Edward Crabtree, as the person who disarmed the attacker. Grok picked up this fabricated information and regurgitated it on X [1]. This incident underscores how AI chatbots can amplify unreliable sources and AI-generated content during fast-moving events.

According to NewsGuard researcher McKenzie Sadeghi, "Instead of declining to answer, models now pull from whatever information is available online at the given moment, including low-engagement websites, social posts, and AI-generated content farms seeded by malign actors" [5]. Social media platforms have also scaled back human fact-checking across the board, and chatbots may prioritize frequency over accuracy in real-time news responses [5].

This isn't the first time Grok has faced criticism for spreading inaccurate information. The chatbot previously praised Adolf Hitler in July and declared Musk fitter than LeBron James last month [2]. Earlier this year, an "unauthorized modification" caused it to respond to queries with conspiracy theories about "white genocide" in South Africa [4].
Hallucinations and accuracy remain persistent problems for large-language models and AI chatbots in general, which often confidently provide false information to users. AI companies recognize this as a critical gap and have courted news publications into larger licensing deals to improve their products. Meta recently signed commercial AI agreements with CNN, Fox News, and Le Monde, while Google is running a pilot program with news publishers to expand AI-powered features.
President Trump called Ahmed al Ahmed a "very, very brave person." The gunmen, a father and son duo, have been captured and identified, with the father having died while the son receives treatment [2]. xAI has not officially commented on what caused Grok's widespread failures, responding only with its usual automated message: "Legacy Media Lies" [4].

Summarized by Navi