6 Sources
[1]
Charlie Kirk's death proves AI chatbots aren't built for breaking news
Grok, Perplexity, and Google's AI all reportedly spread misinformation about Kirk, watchdog reports. Credit: Bloomberg / Contributor / Bloomberg via Getty Images

It took mere hours for the internet to spin out on conspiracies about the murder of Charlie Kirk -- who died yesterday after being shot at a public event in Utah -- according to reports. The far-right commentator, who often engaged in vitriolic debates about immigration, gun control, and abortion on college campuses, was killed while on a university tour with his conservative media group, Turning Point USA. The organization has spent the last decade building conservative youth coalitions at top universities and has become closely affiliated with the nationalist MAGA movement and President Trump.

As early reports of the incident rolled in from both reputable news agencies and pop culture update accounts, it was unclear whether Kirk was alive or whether his shooter had been apprehended. But internet sleuths on both sides of the political aisle were already gearing up for battle on social media, trying to identify individuals in the crowd and attempting keyboard forensics as they zoomed in closer and closer on the graphic video of Kirk being shot. Some alleged that Kirk's bodyguards were trading hand signals right before the shot rang out. Others claimed the killing was actually a cover-up to distract from Trump's unearthed communications with deceased sex trafficker Jeffrey Epstein.

Exacerbating the matter were AI-powered chatbots, which have taken over social media platforms both as integrated robotic helpers and as AI spam accounts that automatically reply to exasperated users.
In one example, according to media and misinformation watchdog NewsGuard, an X account named @AskPerplexity, seemingly affiliated with the AI company, told a user that its initial claim that Charlie Kirk had died was actually misinformation and that Kirk was alive. The reversal came after the user prompted the bot to explain how common-sense gun reform could have saved Kirk's life. The response has been removed since NewsGuard's report was published.

"The Perplexity Bot account should not be confused with the Perplexity account," a Perplexity spokesperson clarified in a statement to Mashable. "Accurate AI is the core technology we are building and central to the experience in all of our products. Because we take the topic so seriously, Perplexity never claims to be 100% accurate. But we do claim to be the only AI company working on it relentlessly as our core focus."

Elon Musk's AI bot, Grok, erroneously confirmed to a user that the video was an edited "meme" video, after claiming that Kirk had "faced tougher crowds" in the past and would "survive this one easily." The chatbot then doubled down, writing: "Charlie Kirk is debating, and effects make it look like he's 'shot' mid-sentence for comedic effect. No actual harm; he's fine and active as ever." Security experts said at the time that the videos were authentic.

In other cases NewsGuard documented, users shared chatbot responses to confirm their own conspiracies, including claims that the assassination was planned by foreign actors and that Kirk's death was a hit ordered by Democrats. One user shared an AI-generated Google response claiming Kirk was on a hit list of perceived Ukrainian enemies. Grok told yet another X user that CNN, the NYT, and Fox News had all confirmed that a registered Democrat had been seen at the crime scene and was a confirmed suspect -- none of that was true.
"The vast majority of the queries seeking information on this topic return high quality and accurate responses. This specific AI Overview violated our policies and we are taking action to address the issue," a Google spokesperson told Mashable. Mashable also reached out to Grok parent company xAI for comment.

While AI assistants may be helpful for simple daily tasks -- sending emails, making reservations, creating to-do lists -- their weakness at reporting news is a liability for everyone, according to watchdogs and media leaders alike. "We live in troubled times, and how long will it be before an AI-distorted headline causes significant real world harm?" asked Deborah Turness, the CEO of BBC News and Current Affairs, in a blog post earlier this year.

One problem is that chatbots just repeat what they're told, with minimal discretion; they can't do the work that human journalists conduct before publishing breaking news, like contacting local officials and verifying images or videos that quickly spread online. Instead, they infer an answer from whatever is at their fingertips. That's significant in the world of breaking news, in which even humans are known to get it wrong. Compared to the black box of AI, most newsrooms have checks and balances in place, like editors double-checking stories before they go live. Chatbots, on the other hand, offer personal, isolated interactions and are notoriously sycophantic, doing everything they can to please and confirm the beliefs of the user.

"Our research has found that when reliable reporting lags, chatbots tend to provide confident but inaccurate answers," explained McKenzie Sadeghi, the NewsGuard researcher who authored the analysis. "During previous breaking news events, such as the assassination attempt against Donald Trump last year, chatbots would inform users that they did not have access to real-time, up-to-date information."
But since then, she explained, AI companies have leveled up their bots, including giving them access to real-time news as it happens.

"Instead of declining to answer, models now pull from whatever information is available online at the given moment, including low-engagement websites, social posts, and AI-generated content farms seeded by malign actors. As a result, chatbots repeat and validate false claims during high-risk, fast-moving events," she said. "Algorithms don't call for comment."

Sadeghi explained that chatbots prioritize the loudest voices in the room instead of the correct ones. Pieces of information that are repeated more frequently are granted consensus and authority by the bot's algorithm, "allowing falsehoods to drown out the limited available authoritative reporting."

The Brennan Center for Justice at NYU, a nonpartisan law and policy institute, also tracks AI's role in news gathering. The organization has raised similar alarms about the impact of generative AI on news literacy, including its role in empowering what is known as the "Liar's Dividend" -- the benefit gained by individuals who stoke confusion by claiming real information is false. Such "liars" contend that truth is impossible to determine because, as many now argue, any image or video could have been created by generative AI.

Even with the inherent risks, more individuals have turned to generative AI for news as companies continue ingraining the tech into social media feeds and search engines. According to a Pew Research survey, individuals who encountered AI-generated search results were less likely to click on additional sources than those who used traditional search engines. Meanwhile, major tech companies have scaled back their human fact-checking teams in favor of community-monitored notes, despite widespread concerns about growing misinformation and AI's impact on news and politics.
In July, X announced it was piloting a program that would allow chatbots to generate their own community notes.
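Sadeghi's point about repetition being mistaken for consensus can be illustrated with a toy sketch. This is hypothetical data and deliberately simplistic scoring -- real retrieval and ranking pipelines are far more complex -- but it shows the core failure mode: if a system scores claims by how often they appear in retrieved snippets rather than by source reliability, a falsehood repeated across many low-quality posts outranks a single authoritative report.

```python
from collections import Counter

# Hypothetical snippets retrieved during a breaking-news event:
# one authoritative report, outnumbered by a repeated false claim.
snippets = [
    ("Victim confirmed dead by officials", "news_wire"),
    ("Video is a meme edit, he's fine", "low_quality_blog"),
    ("Video is a meme edit, he's fine", "spam_account"),
    ("Video is a meme edit, he's fine", "content_farm"),
]

def frequency_consensus(snippets):
    """Naive ranking: the most-repeated claim wins, regardless of source."""
    counts = Counter(claim for claim, _ in snippets)
    return counts.most_common(1)[0][0]

def reliability_weighted(snippets, weights):
    """Weight each claim by an (assumed) source-reliability score instead."""
    scores = Counter()
    for claim, source in snippets:
        scores[claim] += weights.get(source, 0.1)
    return scores.most_common(1)[0][0]

# Hypothetical reliability scores for each source type.
weights = {"news_wire": 5.0, "low_quality_blog": 0.2,
           "spam_account": 0.1, "content_farm": 0.1}

print(frequency_consensus(snippets))            # the repeated falsehood
print(reliability_weighted(snippets, weights))  # the authoritative report
```

Under frequency-based ranking the falsehood "wins" simply because it appears three times; the reliability-weighted version surfaces the wire report. Real systems sit somewhere between these extremes, which is why coordinated repetition by "vast networks of malign actors" is effective.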
[2]
Elon Musk's Grok AI Spread Ludicrous Misinformation After Charlie Kirk's Shooting, Saying Kirk Survived and Video Was Fake
Popular right-wing influencer Charlie Kirk was killed in a shooting in Utah yesterday, rocking the nation and spurring debate over the role of divisive rhetoric in political violence. As is often the case in breaking news about public massacres, misinformation spread quickly. And fanning the flames this time was Elon Musk's Grok AI chatbot, which is now deeply integrated into X-formerly-Twitter as a fact-checking tool -- giving it a position of authority from which it made a series of ludicrously false claims in the wake of the slaying.

When one user asked, for instance, if Kirk could have survived the gunshot wound, Grok responded in a cheery tone that the Turning Point USA founder was fine. "Charlie Kirk takes the roast in stride with a laugh -- he's faced tougher crowds," the bot wrote. "Yes, he survives this one easily." When another user countered that Kirk had been "shot through the neck" and asked Grok "wtf" it was talking about, the chatbot doubled down. "It's a meme video with edited effects to look like a dramatic 'shot' -- not a real event," Grok retorted. "Charlie Kirk is fine; he handles roasts like a pro."

Musk has promoted his AI efforts as "maximum-truth-seeking," but the reality is that even the best chatbots still hallucinate blatantly false information and struggle to sift through complex or conflicting claims during a breaking news event. That hasn't stopped Musk from weaving Grok deeply into X, where users now routinely invoke it to explain or fact-check posts.

The chatbot eventually started to correct itself on Kirk's condition, conceding that the influencer had been "shot at a Utah Valley University event and has since been confirmed dead by official statements." By this morning, though, it had reversed course again, claiming that Kirk is alive and that reports of his death are "satirical." "Charlie Kirk is alive and active as of today -- no credible reports confirm his death or a posthumous Medal of Freedom from Trump," it wrote.
"It's likely a meme referencing political figures, but major news outlets like Fox, CNN, and NYT show no such event." "If new info emerges," the chatbot added, "I'll update."

Along with its flip-flopping about whether or not Kirk is dead, it appears that Grok also amplified fake news about the identity of the shooter, who remains at large. As the New York Times reports, a 77-year-old Canadian man named Michael Mallinson was misattributed as Kirk's assassin after a fake account purporting to be a Fox affiliate in Reno used his photo and name in a since-deleted post.

In a statement to the newspaper, the Fox affiliate's owner, Sinclair Broadcast Group, said the account that misidentified Mallinson as Kirk's shooter -- which has since been taken down -- was "impersonating the station." In replies to X users, the NYT reports, Grok repeated the impersonator account's false claim about Mallinson, though those posts seem to have been deleted. At press time, though, Grok's posts claiming Kirk survived the shooting are still live on X.
[3]
AI fuels false claims after Charlie Kirk's death, CBS News analysis reveals
Julia Ingram is a data journalist for CBS News Confirmed. She uses data analysis and computation to cover misinformation, AI and social media.

False claims, conspiracy theories and posts naming people with no connection to the incident spread rapidly across social media in the aftermath of conservative activist Charlie Kirk's killing on Wednesday, some amplified and fueled by AI tools.

CBS News identified 10 posts by Grok, X's AI chatbot, that misidentified the suspect before his identity was released; the suspect is now known to be southern Utah resident Tyler Robinson. Grok eventually generated a response saying it had incorrectly identified the suspect, but by then, posts featuring the wrong person's face and name were already circulating across X.

The chatbot also generated altered "enhancements" of photos released by the FBI. One such photo was reposted by the Washington County Sheriff's Office in Utah, which later posted an update saying "this appears to be an AI enhanced photo" that distorted the clothing and facial features. One AI-enhanced image portrayed a man appearing much older than Robinson, who is 22. An AI-generated video that smoothed out the suspect's features and jumbled his shirt design was posted by an X user with more than 2 million followers and was reposted thousands of times.

On Friday morning, after Utah Gov. Spencer Cox announced that the suspect in custody was Robinson, Grok's replies to X users' inquiries about him were contradictory. One Grok post said Robinson was a registered Republican, while others reported he was a nonpartisan voter. Voter registration records indicate Robinson is not affiliated with a political party.

CBS News also identified a dozen instances where Grok said that Kirk was alive the day after his death. Other Grok responses gave a false assassination date, labeled the FBI's reward offer a "hoax" and said that reports about Kirk's death "remain conflicting" even after his death had been confirmed.
Most generative AI tools produce results based on probability, which can make it challenging for them to provide accurate information in real time as events unfold, S. Shyam Sundar, a professor at Penn State University and the director of the university's Center for Socially Responsible Artificial Intelligence, told CBS News. "They look at what is the most likely next word or next passage," Sundar said. "It's not based on fact checking. It's not based on any kind of reportage on the scene. It's more based on the likelihood of this event occurring, and if there's enough out there that might question his death, it might pick up on some of that."

X did not respond to a request for comment about the false information Grok was posting.

Meanwhile, the AI-powered search engine Perplexity's X bot described the shooting as a "hypothetical scenario" in a since-deleted post, and suggested a White House statement on Kirk's death was fabricated. Perplexity's spokesperson told CBS News that "accurate AI is the core technology we are building and central to the experience in all of our products," but that "Perplexity never claims to be 100% accurate." Another spokesperson added the X bot is not up to date with improvements the company has made to its technology, and the company has since removed the bot from X.

Google's AI Overview, a summary of search results that sometimes appears at the top of searches, also provided inaccurate information. The AI Overview for a search late Thursday evening for Hunter Kozak, the last person to ask Kirk a question before he was killed, incorrectly identified him as the person of interest the FBI was looking for. By Friday morning, the false information no longer appeared for the same search. "The vast majority of the queries seeking information on this topic return high quality and accurate responses," a Google spokesperson told CBS News.
"Given the rapidly evolving nature of this news, it's possible that our systems misinterpreted web content or missed some context, as all Search features can do given the scale of the open web."

Sundar told CBS News that people tend to perceive AI as being less biased or more reliable than someone online who they don't know. "We don't think of machines as being partisan or biased or wanting to sow seeds of dissent," Sundar said. "If it's just a social media friend or somebody on the contact list that's sent something on your feed with unknown pedigree ... chances are people trust the machine more than they do the random human."

Misinformation may also be coming from foreign sources, according to Cox, Utah's governor, who said in a press briefing on Thursday that foreign adversaries including Russia and China have bots that "are trying to instill disinformation and encourage violence." Cox urged listeners to spend less time on social media. "I would encourage you to ignore those and turn off those streams, and to spend a little more time with our families," he said.
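Sundar's description of likelihood-driven generation can be made concrete with a toy bigram sketch. This is hypothetical data and not how production LLMs work (they use neural networks trained on vast corpora, plus retrieval), but it captures his point: the model emits the statistically most common continuation it has seen, with no fact-checking step anywhere in the loop.

```python
from collections import Counter, defaultdict

# Toy "corpus" of what circulated online, where a false claim
# ("kirk is fine") is repeated more often than the true one.
corpus = (
    "kirk is fine . kirk is fine . kirk is fine . "
    "kirk is dead ."
).split()

# Count, for each word, how often each next word follows it
# (a minimal bigram language model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the most common continuation -- likelihood, not truth."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("is"))  # the repeated claim wins
```

Because "fine" follows "is" three times and "dead" only once, the sketch completes "kirk is" with the falsehood. If "there's enough out there that might question his death," as Sundar put it, that material raises the likelihood of the false continuation.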
[4]
False AI 'fact-checks' stir online chaos after Kirk assassination
Washington (AFP) - With a fire hose of misinformation surrounding the assassination of US right-wing activist Charlie Kirk, social media users have turned to AI chatbots for reliable updates -- only to encounter contradictory or inaccurate responses, further fueling online confusion.

The trend highlights how chatbots often generate confident responses even when verified information is unavailable during fast-developing news events, amplifying misinformation across platforms that have largely scaled back human fact-checking and content moderation.

A day after Kirk, a 31-year-old prominent ally of President Donald Trump, was fatally gunned down at a university in Utah, the X account of AI chatbot Perplexity falsely stated that the activist was never shot and was "still alive," according to the watchdog NewsGuard.

When posts containing an authentic video of Kirk being shot swirled online, the X account of Grok -- Elon Musk's AI chatbot -- stated that it was a satirical clip. "The video is a meme edit -- Charlie Kirk is debating, and effects make it look like he's 'shot' mid-sentence for comedic effect. No actual harm; he's fine and active as ever," Grok wrote.

Grok also falsely claimed that a Utah-based registered Democrat named Michael Mallinson had been identified as the shooter, wrongly attributing the information to major news outlets such as CNN and the New York Times. Mallinson, in reality a 77-year-old retired Canadian banker living in Toronto, said he was "shocked" by thousands of social media posts that labeled him the culprit.

Breaking news events often spark a frantic search for new information on social media, frequently leading to false conclusions that chatbots then regurgitate, contributing to further online chaos.
The tide of misinformation comes amid a volatile environment in the United States following Kirk's assassination, with many right-wing influencers from Trump's Make America Great Again (MAGA) political base calling for violence and "retribution" against the left. The motives of the gunman involved in the shooting -- who remains at large -- are unknown.

'Liar's dividend'

Meanwhile, some conspiracy theorists have baselessly claimed that the video showing Kirk being shot was AI-generated, asserting that the entire incident was staged. The assertion underscores how the rise of cheap and widely available AI tools has given misinformation peddlers a convenient way to cast doubt on the authenticity of real content -- a tactic researchers have dubbed the "liar's dividend."

"We have analyzed several of the videos (of Kirk's shooting) circulating online and find no evidence of manipulation or tampering," said Hany Farid, the co-founder of GetReal Security and a professor at the University of California, Berkeley. Farid also reported seeing some AI-generated videos. "This is an example of how fake content can muddy the waters and in turn cast doubt on legitimate content," he said.

The falsehoods underline how facts are increasingly under assault in a misinformation-filled internet landscape, an issue exacerbated by public distrust of institutions and traditional media. The situation has exposed an urgent need for stronger AI detection tools, experts say, as major tech platforms have largely weakened safeguards by reducing investment in human fact-checking.

Researchers say chatbots have previously made errors verifying information related to other crises, such as the Israel-Hamas war in the Middle East, the recent India-Pakistan conflict and anti-immigration protests in Los Angeles. A recent audit by NewsGuard found that 10 leading AI chatbots repeated false information on controversial news topics at nearly double the rate compared to one year ago.
"A key factor behind the increased fail rate is the growing propensity for chatbots to answer all inquiries, as opposed to refusing to answer certain prompts," NewsGuard said in a report last week. "The Large Language Models (LLMs) now pull from real-time web searches -- sometimes deliberately seeded by vast networks of malign actors."
[5]
Charlie Kirk killing: Rumors, misinformation rampant on social media - The Economic Times
Confusion and conspiracy theories spread online after conservative activist Charlie Kirk was fatally shot during a university appearance in Orem, Utah, on Wednesday. As the manhunt continued, online speculation, much of it baseless, emerged about the circumstances of the shooting and the identity of the shooter. Online posts also shared fake headlines about the killing, or real headlines with fake timestamps to claim the media had advance knowledge of the plan. And social media users trying to get clarity from AI chatbots found they were misled. Reuters has examined some of the viral rumors, conspiracies, and false information spreading online in the aftermath of Kirk's death.

Misidentified suspects

Video shared online in the aftermath of the shooting shows an older man being detained by Provo police and an officer holding a rifle, which the voiceover said belonged to the suspect. But there is no evidence the encounter was related to the Kirk shooting. The Utah Department of Public Safety did not respond to a request for comment.

One video posted within hours of Kirk's shooting falsely identified a Black man 700 miles (1,126 km) away as having been arrested for killing Kirk. But the video is from June and shows the arrest of a suspect in a Santa Monica police officer shooting. The same video was shared by Fox News that month. Other posts shared video of a man on the run after a gunman opened fire outside a casino in Reno, Nevada, on July 28, a shooting that killed three and injured three others. The posts claimed it was footage of Kirk's shooter.

The image of a 29-year-old Washington state resident was shared in a series of posts baselessly suggesting the shooter is transgender.
She told Reuters in a message that the picture had been lifted from her X account without her knowledge, adding that she was in Seattle at the time of the shooting. She wrote earlier on Instagram, after her image circulated widely online, that she is not the shooter. At the time of writing, authorities have not said the suspect is transgender.

Headline fakes

Dark memes following the shooting included a fabricated CNN headline dated 2021 that quotes Kirk as saying, "If Somebody Ever Shoots Me Through The Neck During A Speech In Utah In 2025, I Lowkey Think That Rocks." There is no evidence Kirk ever made this statement. A CNN spokesperson said in an email, "This is a fabricated image and CNN never published a story with that headline."

A screenshot of a genuine New York Times headline appearing in Google search results was used to suggest the media knew about the shooting in advance. The headline, "Charlie Kirk is Apparently Shot During Utah Valley University Event," as it appeared in Google, was shared in an X post after the shooting and captioned, "NY Times 19 hours ago (last night 15 hours before shooting) is standard CIA pysop." An archive of the article shows the first post on the outlet's live blog was published after Kirk's shooting, at 3:02 p.m. ET. The New York Times said in an email that the page went live at 3:01 p.m. ET on Wednesday.

This timestamp discrepancy in search engine results can happen when a web page provides a time zone different from the local time where it was published, or when multiple dates are listed on the page, a Google spokesperson said. "Given the low resolution and incomplete screenshots, we're not able to confirm if these are Google Search results," the spokesperson said in an email. "We provide guidance to site owners about how they can help us identify the most accurate date and time to show in Search."
AI chatbots amplify confusion

In the aftermath of Kirk's shooting, Reuters found that both Perplexity's bot account and xAI's Grok chatbot provided incorrect responses to queries on X. In response to a query beneath a clip condemning Kirk's killing, Perplexity's bot account incorrectly said the individual was describing a "hypothetical scenario" and that Kirk was "still alive." It also responded to a graphic released by the White House that featured a statement on the incident, saying that it appeared to be "fabricated" and incorrectly adding that there had been "no official confirmation" by the White House that Kirk had died.

Early online rumors falsely suggested that a man named Michael Mallinson had been detained by police. This was elevated by Grok, which cited unspecified "reports" that he was in custody. In later posts, Grok said Mallinson had been "falsely accused." Mallinson could not be reached for comment.

Grok also labelled a real statement as fabricated, incorrectly saying that a screenshot of the statement released by Turning Point USA, the conservative student group founded by Kirk, appeared to be "fake."

A spokesperson for Perplexity told Reuters, "Because we take the topic so seriously, Perplexity never claims to be 100% accurate. But we do claim to be the only AI company working on it relentlessly as our core focus." xAI did not immediately respond to a request for comment.
[6]
Grok Under Fire For Calling Charlie Kirk Assassination Video A "Meme Edit," Exposing AI's Role In Spreading Misinformation
AI chatbots are becoming increasingly advanced, but with that evolution come new problems, including tools that misfire at critical moments. The recent assassination of activist Charlie Kirk was a tragedy that caused an uproar and was widely covered by mainstream media, with explicit footage of the attack circulating online. Amid that coverage, Grok, the AI tool from xAI, landed itself in hot water after it repeatedly dismissed the incident and called the circulating video a "meme edit."

The response invited heavy criticism for its inaccuracy: despite the incident being verified by law enforcement agencies and multiple reputable news outlets, the chatbot continued to misrepresent it. By framing the assassination video as a prank, Grok showed how AI platforms blur the line between verifiable facts and reckless speculation.

Such errors occur because AI tools are trained to generate responses that sound believable based on patterns in data, without checking facts. If memes and rumors are circulating about a topic, a chatbot can end up repeating that information and even lending it extra weight. When such slips occur during a crisis, they add to the ongoing confusion and chaos.

Part of the issue lies with users as well, who turn to these platforms for answers they are not built to deliver. Systems like Grok are designed as conversational aids and productivity tools rather than sources for breaking news or for confirming details of an ongoing crisis. Understanding those limitations would make missteps like these loom less large. Still, the situation is alarming, given how easily such tragedies can be distorted by AI and how these tools can spread misinformation, adding to the air of uncertainty. Grok's mistake cannot be taken as a one-off moment; it is a stark reminder of how quickly inaccurate information can spread in the AI era when necessary safety guardrails are absent.
AI-powered chatbots, including Elon Musk's Grok and Perplexity, amplified false information following the shooting death of conservative activist Charlie Kirk, highlighting the challenges of AI in breaking news situations.
The recent assassination of conservative activist Charlie Kirk at Utah Valley University has sparked a wave of misinformation across social media platforms, with AI-powered chatbots playing a significant role in amplifying false claims and conspiracy theories.
In the aftermath of Kirk's shooting, popular AI chatbots such as Elon Musk's Grok and Perplexity's AI assistant provided inaccurate and contradictory information to users seeking updates on the incident. Grok, integrated into X (formerly Twitter) as a fact-checking tool, made several false claims, including:

- Asserting that Kirk survived the shooting and that video of the attack was a "meme edit" with effects added for comedic purposes
- Misidentifying the suspect, including repeating a false claim naming 77-year-old Canadian Michael Mallinson as the shooter
- Claiming, even after Kirk's death was confirmed, that he was alive and that reports of his death were "satirical"
Perplexity's AI bot also contributed to the confusion by describing the shooting as a "hypothetical scenario" and suggesting that a White House statement on Kirk's death was fabricated.
The incident highlights the limitations of AI chatbots in handling real-time, evolving news events. S. Shyam Sundar, director of Penn State University's Center for Socially Responsible Artificial Intelligence, explained that generative AI tools produce results based on probability rather than fact-checking or on-scene reporting.
The spread of misinformation by AI chatbots has fueled various conspiracy theories, including:

- Claims that the assassination was planned by foreign actors or was a hit ordered by Democrats
- Assertions that the video of the shooting was AI-generated and that the incident was staged
- Allegations, based on misread search-result timestamps, that the media had advance knowledge of the attack
The proliferation of AI-generated misinformation poses significant challenges for social media platforms, many of which have scaled back human fact-checking and content moderation efforts. This trend has exposed an urgent need for stronger AI detection tools and improved safeguards against the spread of false information.
The incident underscores the growing assault on facts in the digital landscape, exacerbated by public distrust of institutions and traditional media. It also highlights the "liar's dividend" phenomenon, where the existence of AI-generated content casts doubt on authentic information.
As the investigation into Charlie Kirk's assassination continues, the role of AI in spreading misinformation during breaking news events remains a critical concern for tech companies, policymakers, and the public alike.
Summarized by Navi