6 Sources
[1]
BBC Verify Live: Debunking false claims about Minneapolis ICE shooting
Thomas Copeland, BBC Verify Live journalist

As footage from yesterday's deadly shooting in Minneapolis has spread online, BBC Verify has seen many attempts to use AI to unmask the ICE agent involved in the shooting. At no point in the footage reviewed by BBC Verify does this agent remove his mask, but screenshots of him have been fed into AI-generation tools in an effort to identify him. Many of these AI-manipulated images have circulated widely across social media without being labelled or explained as AI. We've reported before on misinformation fuelled by "AI-enhanced" images of President Trump, a man suspected of shooting Charlie Kirk and a picture from the Epstein files.

As you can see, attempts to use AI to generate the ICE agent's face without a mask have produced very different outcomes every time. That's because when you ask AI to generate an image, it can only make a prediction based on the images it has been trained on, Prof Thomas Nowotny, head of the AI research group at the University of Sussex, told BBC Verify. "AI will only ever be able to generate a likely image, of which there are many different equally plausible versions," Nowotny said.
[2]
Stop Using AI to Unmask the ICE Agent Who Killed Renee Good
An ICE agent shot and killed 37-year-old Renee Good in Minneapolis on Wednesday, just the latest example of federal authorities terrorizing communities with deadly force at the direction of President Donald Trump. The ICE agent can be seen shooting at Good's car in three separate viral videos, though the shooter hasn't yet been publicly identified. Internet sleuths are asking AI tools to remove the ICE agent's face mask. The problem is that AI chatbots can't do that with any accuracy. Video from the scene of the shooting on Wednesday was tough to watch, but it instantly flooded all of the major social media platforms. The video was damning, appearing to show Good initially attempting to wave the ICE agents on before the masked men give conflicting orders. They first told her to move on, according to eyewitnesses who spoke with Minnesota Public Radio, before trying to get her out of the car. Video shows Good moved the car forward, with her wheels turned away from the agents, but one of the men can be seen shooting at the car multiple times. Homeland Security Secretary Kristi Noem claimed that Good was trying to run over the ICE agents and committed an act of "domestic terrorism." Vice President JD Vance called it "classic terrorism" on Thursday. Visual investigations from Bellingcat and the New York Times contradicted their account. Not long after the videos went viral, social media users on platforms like X started to ask the AI chatbot Grok to unmask the agent who shot Good. Fake images created by unknown AI tools also spread on sites like TikTok and Instagram. But AI simply can't do that. It just creates an image from scratch that doesn't show the actual face of that person. It's roughly as useful as picking a random photo from the internet. Some of the images have gotten enormous traction, attracting over a million views in a single tweet, and have spread widely across many networks, driven by an ignorance of what AI tools are capable of generating. 
Unfortunately, AI is also not good at identifying whether images are created with AI. The image above is not real, but when Gizmodo asked Gemini whether it was created by AI, the chatbot said it wasn't. Google recently introduced the SynthID watermark detector in Gemini, but that's only useful when the image was actually created with a Google tool like Nano Banana Pro. The watermark is invisible to the naked eye, but Gemini has no way to definitively rule on an image created with a different company's tools. The image above is AI but was not created with a Google tool, and Gemini replied: "Based on my analysis, the image is likely a real photograph, not AI-generated." AI detection software similarly struggles with whether text has been created with artificial intelligence tools like ChatGPT, leading to false accusations against students who swear they didn't get AI to write their papers. Unmasking people is simply beyond the capabilities of AI tools at the moment. These fake images are currently going viral, and people appear to be running them through facial recognition and getting false positives. One common name that's cropped up on sites like Reddit and X is "Steve Grove," a real person who owns a gun shop in Springfield, Missouri. The Springfield Daily Citizen spoke with the real Steve Grove, who said that his Facebook account has been inundated with messages. "I never go by 'Steve,'" Grove told the news outlet on Thursday. "And then, of course, I'm not in Minnesota. I don't work for ICE, and I have, you know, 20 inches of hair on my head, but whatever." Steve Grove is also the name of the CEO of the Star Tribune newspaper in Minneapolis, which may be where this claim originated. Other fake images created on Wednesday tried to show Good in her car before the shooting. One AI-generated image spread widely on Bluesky in a cropped form, but also appeared on Facebook in a wider shot.
Notably, the fake image doesn't show anyone behind the wheel; the woman supposedly representing Good sits in the passenger's seat. The cropped version has been flipped so that it appears more like she's in the driver's seat. Most disturbingly, one X user took a screenshot of Good, seen slumped over lifeless in her car, and told Grok to put her in a bikini. Grok dutifully complied, mirroring the chatbot's recent pattern of producing non-consensual sexualized images of women and young girls. It's a federal crime to create child sexual abuse material, but Grok continues to do it at the request of users. We've seen this reliance on AI as an investigative tool over and over again in the past year. When security camera images of the suspect in the Charlie Kirk shooting were released by the FBI, people ran them through AI tools in an attempt to get a clearer picture of the person without sunglasses. When a suspect was eventually arrested, some people were confused because Tyler Robinson's mugshot didn't look anything like the AI-altered images they had seen circulating on social media. When Trump appeared ill over Labor Day weekend last year, social media users tried to "enhance" grainy photos of the president using generative artificial intelligence tools. The enhancement added a gigantic lump to his head. But AI is just introducing flaws, not recovering detail. All you need to do to understand what's happening is look at the flag on Trump's hat: the tool didn't render a more accurate American flag. The AI looked for patterns and extrapolated from them, sharpening the focus without producing a more faithful picture of reality. And you can't always blame internet sleuths exclusively for some of the dumbest comments in these situations. Greg Kelly, an anchor at Newsmax, tried to suggest on Wednesday that the stickers on the back of Good's car were somehow suspicious. "TOTALLY JUSTIFIED SHOOTING!!!!!! NOT EVEN CLOSE!!! 
(Curious about these Stickers on the Back of the Car. Various WACK JOB groups and affiliations? )" Kelly wrote on X. Those stickers are plainly just National Parks stickers. And a report from the Associated Press suggests she was simply dropping off her son at school when she got caught up in the ICE incident, according to her ex-husband. There's no evidence that Good was some kind of left-wing radical, and even if she were, that wouldn't have justified her killing. Good had two children from her first marriage, ages 15 and 12, according to Minnesota Public Radio, and a 6-year-old son from her second marriage. A GoFundMe fundraising campaign for Good's surviving wife and son has raised over $600,000 at the time of this writing.
[3]
See how AI images claiming to reveal Minneapolis ICE agent's face spread confusion
Armchair online detectives keep using artificial intelligence to try to identify people involved in high-profile incidents. It doesn't work. An amateur forensics mob assembled after a federal immigration officer shot and killed Renee Nicole Good in Minneapolis on Wednesday. Using images taken from witnesses' videos, the online sleuths attempted to use artificial intelligence tools to reveal the identity of the masked Immigration and Customs Enforcement officer who shot Good and who had not yet been publicly identified.
[4]
AI images and internet rumors spread confusion about ICE agent involved in shooting
An original still image from an eyewitness video shows the masked ICE agent who shot Renee Nicole Good (Left). Users on social media "unmasked" the agent using Grok (Right). Experts warn AI cannot "unmask" individuals. NPR is publishing both images to show how AI is being used to manipulate images of news events. Screenshots by NPR/Image by Courtney Theophin/NPR hide caption In the hours after a fatal shooting of Renee Good, 37, in Minneapolis, an image of the ICE agent who took the shots began to circulate. While the agent wore a mask in eyewitness videos taken of the event, he appeared to be unmasked in many of the social media posts. That image appeared to have been generated by xAI's generative AI chatbot, Grok, in response to users on X asking the bot to "unmask" the agent. NPR is publishing both images to show how AI is being used to manipulate real evidence of news events, but using AI to try to "unmask" anyone is ill-advised, according to experts. "AI-powered enhancement has a tendency to hallucinate facial details leading to an enhanced image that may be visually clear, but that may also be devoid of reality with respect to biometric identification," Hany Farid, a professor at the University of California, Berkeley who specializes in the analysis of digital images, wrote to NPR in an email. Regardless, the AI-generated image began to circulate late Wednesday, along with a name -- Steve Grove. The origin of that name was not immediately clear, but by Thursday morning, it was leading to an outpouring of anger towards at least two Steve Groves who are in no way linked to the shooting. One was the owner of a gun shop in Springfield, Mo., named Steven Grove. That Grove awoke to discover his Facebook page under attack. "I never go by 'Steve,'" Steven Grove told the Springfield Daily Citizen. "And then, of course, I'm not in Minnesota. I don't work for ICE, and I have, you know, 20 inches of hair on my head, but whatever." 
The second Steve Grove was the publisher of the Minnesota Star Tribune. In a statement, the paper said it was monitoring what it believed to be a "coordinated online disinformation campaign." "We encourage people looking for factual information reported and written by trained journalists, not bots, to follow and subscribe to the Minnesota Star Tribune," the paper wrote. Meanwhile, the Star Tribune and others, including NPR, have identified the name of the ICE agent as Jonathan Ross. Court documents show Ross was dragged by a car during another traffic stop in June of last year in Bloomington, Minn.
[5]
After Minneapolis shooting, AI fabrications of victim and shooter
Washington (United States) (AFP) - Hours after a fatal shooting in Minneapolis by an immigration agent, AI deepfakes of the victim and the shooter flooded online platforms, underscoring the growing prevalence of what experts call "hallucinated" content after major news events. The victim of Wednesday's shooting, identified as 37-year-old Renee Nicole Good, was hit at point-blank range as she apparently tried to drive away from masked agents who were crowding around her Honda SUV. AFP found dozens of posts across social media platforms, primarily the Elon Musk-owned X, in which users shared AI-generated images purporting to "unmask" the agent from the Immigration and Customs Enforcement (ICE) agency. "We need his name," Claude Taylor, who heads the anti-Trump political action committee Mad Dog, wrote in a post on X featuring the AI images. The post racked up more than 1.3 million views. Taylor later claimed he deleted the post after he "learned it was AI," but it was still visible to online users. An authentic clip of the shooting, replayed by multiple media outlets, does not show any of the ICE agents with their masks off. Many of the fabrications were created using Grok, the AI tool developed by Elon Musk's startup xAI, which has faced heavy criticism over a new "edit" feature that has unleashed a wave of sexually explicit imagery. Some X users used Grok to digitally undress an old photo of Good smiling, as well as a new photo of her body slumped over after the shooting, generating AI images showing her in a bikini. Another woman wrongly identified as the victim was also subjected to similar manipulation. 'New reality' Another X user posted the image of a masked officer and prompted the chatbot: "Hey @grok remove this person's face mask." Grok promptly generated a hyper-realistic image of the man without a mask. There was no immediate comment from X. When reached by AFP, xAI replied with a terse, automated response: "Legacy Media Lies." 
The viral fabrications illustrate a new digital reality in which self-proclaimed internet sleuths use widely available generative AI tools to create hyper-realistic visuals and then amplify them across social media platforms that have largely scaled back content moderation. "Given the accessibility of advanced AI tools, it is now standard practice for actors on the internet to 'add to the story' of breaking news in ways that do not correspond to what is actually happening, often in politically partisan ways," Walter Scheirer, from the University of Notre Dame, told AFP. "A new development has been the use of AI to 'fill in the blanks' of a story, for instance, the use of AI to 'reveal' the face of the ICE officer. This is hallucinated information." AI tools are also increasingly used to "dehumanize victims" in the aftermath of a crisis event, Scheirer said. One AI image portrayed the woman mistaken for Good as a water fountain, with water pouring out of a hole in her neck. Another depicted her lying on a road, her neck under the knee of a masked agent, in a scene reminiscent of the 2020 police killing of a Black man named George Floyd in Minneapolis, which sparked nationwide racial justice protests. AI fabrications, often amplified by partisan actors, have fueled alternate realities around recent news events, including the US capture of Venezuelan leader Nicolas Maduro and last year's assassination of conservative activist Charlie Kirk. The AI distortions are "problematic" and are adding to the "growing pollution of our information ecosystem," Hany Farid, co-founder of GetReal Security and a professor at the University of California, Berkeley, told AFP. "I fear that this is our new reality," he added.
[6]
The ICE Shooter's Face Was Everywhere Online Within Hours. There's Just 1 Problem.
"In this situation where half of the face [on the ICE agent] is obscured, AI or any other technique is not, in my opinion, able to accurately reconstruct the facial identity," Farid said. And yet, so many people continue to use AI-generated image tools because it takes seconds to do so. Solomon Messing, an associate professor at New York University in the Center for Social Media and Politics, prompted Grok, the AI chatbot created by Elon Musk, to generate two images of the apparent federal agent "without a mask," and got images of two different white men. Doing so did not even require signing in to access this service.
After a fatal shooting in Minneapolis by an ICE agent, AI-generated images flooded social media claiming to reveal the masked shooter's identity. Internet sleuths used tools like Grok to create fake unmasked images, leading to false accusations against innocent people. Experts warn that AI cannot accurately unmask individuals and only generates plausible predictions, highlighting the growing problem of hallucinated content polluting the information ecosystem.
Hours after an ICE agent shot and killed 37-year-old Renee Nicole Good in Minneapolis on Wednesday, AI-generated images claiming to unmask the agent flooded social media platforms. The masked agent, who had not yet been publicly identified through official channels, became the target of internet sleuths attempting to use AI image-generation tools to reveal his face [1][2]. Screenshots from eyewitness videos were fed into the AI chatbot Grok, developed by Elon Musk's xAI, with users prompting the tool to digitally remove the agent's mask [5].
The problem is that AI cannot accurately perform this task. "AI will only ever be able to generate a likely image, of which there are many different equally plausible versions," Prof Thomas Nowotny, head of the AI research group at the University of Sussex, told BBC Verify [1]. When asked to generate an image, AI tools can only make predictions based on the images they have been trained on, resulting in very different outcomes every time. At no point in the footage reviewed by BBC Verify does the agent remove his mask [1].
The AI fabrications quickly spread confusion across platforms like X, TikTok, and Instagram, with many of the manipulated images circulating without being labeled as AI-generated content. One image attracted over 1.3 million views in a single post [5]. Along with the fake images, a name began circulating: Steve Grove. This led to an outpouring of anger directed at two innocent men who share that name.
Steven Grove, a gun shop owner in Springfield, Missouri, awoke to find his Facebook page under attack. "I never go by 'Steve,'" Grove told the Springfield Daily Citizen. "And then, of course, I'm not in Minnesota. I don't work for ICE, and I have, you know, 20 inches of hair on my head, but whatever." The second Steve Grove, publisher of the Minnesota Star Tribune, also became a target. The paper issued a statement saying it was monitoring what it believed to be a "coordinated online disinformation campaign."
Meanwhile, news outlets including NPR and the Star Tribune have identified the ICE agent as Jonathan Ross, based on court documents showing Ross was dragged by a car during another traffic stop in June of last year in Bloomington, Minnesota.
Hany Farid, a professor at the University of California, Berkeley who specializes in the analysis of digital images and co-founder of GetReal Security, emphasized the dangers of using AI for biometric identification. "AI-powered enhancement has a tendency to hallucinate facial details leading to an enhanced image that may be visually clear, but that may also be devoid of reality with respect to biometric identification," Farid wrote to NPR [4]. He described these AI distortions as "problematic" and contributing to the "growing pollution of our information ecosystem" [5].
The issue extends beyond attempting to unmask an ICE agent. AI detection software, including tools like Gemini, struggles to identify whether images have been created with artificial intelligence. When Gizmodo tested an AI-generated image with Gemini, the chatbot incorrectly identified it as "likely a real photograph" [2]. Google's SynthID watermark detector only works for images created with Google tools, leaving content generated by other platforms undetectable [2].
The Minneapolis shooting represents a troubling pattern in which deepfakes and hallucinated content emerge immediately after major news events. Walter Scheirer from the University of Notre Dame told AFP that "given the accessibility of advanced AI tools, it is now standard practice for actors on the internet to 'add to the story' of breaking news in ways that do not correspond to what is actually happening, often in politically partisan ways" [5].
Beyond the attempts to identify the shooter, AI was also used to dehumanize Renee Nicole Good. Some X users used Grok to digitally undress photos of Good, generating images showing her in a bikini, including one of her body slumped over after the shooting [5]. Grok has faced heavy criticism over its "edit" feature, which has unleashed a wave of sexually explicit imagery. Creating such content of minors constitutes child sexual abuse material and is a federal crime, yet Grok continues to comply with these requests [2].
This incident mirrors previous cases where social media users relied on AI as an investigative tool. BBC Verify has reported on similar misinformation fueled by "AI-enhanced" images, including manipulated photos of President Trump, a suspect in the Charlie Kirk shooting, and images from the Epstein files [1]. When security camera images of the Charlie Kirk shooting suspect were released, people ran them through AI tools attempting to get clearer pictures without sunglasses. When Tyler Robinson was eventually arrested, his mugshot looked nothing like the AI-altered images circulating online [2].
Hany Farid concluded with a sobering assessment: "I fear that this is our new reality" [5]. As AI tools become more accessible and social media platforms scale back content moderation, the challenge of distinguishing authentic evidence from AI fabrications will only intensify, threatening the integrity of the information ecosystem that people rely on to understand critical news events.