AI-Altered Image of Alex Pretti Reaches US Senate Floor, Exposing Misinformation Crisis

Reviewed by Nidhi Govil


An AI-enhanced photograph of Alex Pretti, the protester killed by border agents in Minneapolis, was displayed on the US Senate floor and aired by major news outlets including MS NOW. The manipulated image contained digital distortions and sparked debate about how generative AI models are sowing confusion during breaking news events and influencing political discourse.

AI-Enhanced Photograph Spreads Across Media and Government

An AI-altered image depicting the final moments before US immigration agents shot 37-year-old intensive care nurse Alex Pretti has exposed a troubling gap in how media organizations and government officials verify visual content during breaking news events. The manipulated photograph, which purports to show Pretti surrounded by officers with one pointing a gun at his head, was not only aired by cable news channel MS NOW but also displayed on the US Senate floor by Senator Dick Durbin, a Democrat from Illinois [1][2]. The fatal shooting of Pretti in Minneapolis sparked nationwide outrage, and the subsequent spread of AI-generated misinformation has raised urgent questions about how generative AI models are being weaponized to misrepresent reality in politically charged situations.

MS NOW tells Snopes that it did not create the edit itself but rather sourced the image from the internet without realizing it had been altered. Other news organizations, including the Daily Mail and International Business Times, also ran the same AI-enhanced photograph [1]. The original photo came from Pretti's official United States Department of Veterans Affairs portrait, where he worked as an ICU nurse. An internet user apparently ran that low-quality image through a generative AI model to produce a clearer version, inadvertently introducing digital distortions that altered Pretti's appearance: his shoulders appear broader, his skin more tanned, and his nose less prominent.

Source: PetaPixel


How Generative AI Models Fabricate Details

When an image is fed into an AI model like ChatGPT or a similar tool with a request to improve its quality, the model treats the image and text as a prompt and generates an entirely new image. While the output may closely resemble the original photo, it no longer represents reality. AI models also carry inherent biases that tend to make people appear more attractive, which explains why Pretti looked more handsome in the altered version [1]. This phenomenon has serious implications for journalism and political debate, as these subtle changes can alter public perception and emotional responses to tragic events.

The AI-altered image that reached the US Senate contained several obvious errors, including a headless agent, yet it still spread rapidly across Instagram, Facebook, X, and Threads [2]. The manipulated image also led some social media users to falsely claim the object in Pretti's right hand was a weapon, when verified footage showed he was holding a phone. That footage directly challenged claims by Trump administration officials that Pretti posed a threat to officers.

Political Fallout and Senate Acknowledgment

Senator Dick Durbin displayed the AI-enhanced photograph during a Thursday speech on the Senate floor, writing on X: "I am on the Senate floor to condemn the killing of US citizens at the hands of federal immigration officers and to demand the Trump Administration take accountability for its actions" [2]. After X users demanded an apology for promoting the manipulated image, Durbin's office acknowledged the mistake on Friday. A spokesperson told AFP: "Our office used a photo on the Senate floor that had been widely circulated online. Staff didn't realize until after the fact that the image had been slightly edited and regret that this mistake occurred."

This gaffe underscores how lifelike AI visuals are seeping into everyday discourse, sowing confusion during critical moments and influencing political debate at the highest levels. Walter Scheirer from the University of Notre Dame told AFP that even subtle changes to a person's appearance can alter how an image is received. "In the recent past, creating lifelike visuals took some effort. However now, with AI, this can be done instantly, making such content available to politicians on command" [2].

Broader Pattern of Visual Misinformation

The Pretti portrait is not the only AI creation spreading misinformation from Minneapolis. Another viral image that purportedly showed the moment Pretti was shot and killed was also AI-generated, and Reuters reports it does not match multiple videos taken of the protester's death [1]. Meanwhile, the White House published a manipulated photo of a protester being arrested, altering the face of Nekima Levy Armstrong to make it appear she was crying, with tears streaming down her face, when she actually wore a stoic expression.

Source: France 24


Deepfakes and Digital Manipulation Escalate

Pretti's killing marked the second fatal shooting of a Minneapolis protester by federal agents this month. Earlier in January, AI deepfakes flooded online platforms following the killing of another protester, 37-year-old Renee Nicole Good. AFP found dozens of posts across social media in which users shared AI-generated images purporting to "unmask" the agent who shot her. Some X users even used the AI chatbot Grok to digitally undress an old photo of Good [2]. On Friday, the Trump administration charged prominent journalist Don Lemon and others with civil rights crimes over coverage of immigration protests in Minneapolis, as the president branded Pretti an "agitator."

For AI researchers and policymakers, these incidents signal an urgent need for verification protocols that can keep pace with the speed at which generative AI can produce convincing but false imagery. The challenge extends beyond detecting obvious fabrications to identifying subtle enhancements that fundamentally alter the truth while maintaining plausibility. As AI tools become more accessible and powerful, the line between documentation and digital manipulation continues to blur, threatening the integrity of visual evidence in journalism, legal proceedings, and political discourse.
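One common first-line verification check of the kind described above is a perceptual hash, which can tell "same photo, recompressed" apart from "visibly altered copy." The sketch below is illustrative only: it implements a standard difference hash (dHash) over toy 2D grayscale arrays rather than real image files, and the distance threshold mentioned in the comments is an assumption, not a published standard.

```python
# Sketch: flagging an AI-"enhanced" copy of a photo with a difference hash
# (dHash). Pure standard library; images are modeled as 2D grayscale lists
# to keep the example self-contained -- a real pipeline would decode actual
# image files with an imaging library first.

def downscale(pixels, width=9, height=8):
    """Average-pool a 2D grayscale image down to width x height."""
    src_h, src_w = len(pixels), len(pixels[0])
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            # Source block covered by this output pixel.
            y0, y1 = y * src_h // height, (y + 1) * src_h // height
            x0, x1 = x * src_w // width, (x + 1) * src_w // width
            block = [pixels[i][j]
                     for i in range(y0, max(y1, y0 + 1))
                     for j in range(x0, max(x1, x0 + 1))]
            row.append(sum(block) // len(block))
        out.append(row)
    return out

def dhash(pixels):
    """64-bit difference hash: 1 where a pixel is brighter than its right neighbor."""
    small = downscale(pixels)          # 8 rows x 9 columns
    bits = 0
    for row in small:
        for a, b in zip(row, row[1:]): # 8 comparisons per row -> 64 bits
            bits = (bits << 1) | (1 if a > b else 0)
    return bits

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

# Toy "original": a smooth gradient. "AI-enhanced" copy: the same scene with
# brightened tones on one side, mimicking the subtle shifts in skin tone and
# contrast that generative upscalers can introduce.
original = [[(x * 3 + y * 2) % 256 for x in range(64)] for y in range(64)]
altered = [[min(255, v + 30) if x > 32 else v
            for x, v in enumerate(row)] for row in original]

distance = hamming(dhash(original), dhash(altered))
# Identical images hash to distance 0; a small distance (a common heuristic
# is <= 10, an assumption here) suggests mere re-encoding, while larger
# distances are a signal -- not proof -- that pixels were changed.
print(distance)
```

In practice the same idea runs after decoding real image files; a perceptual hash is a cheap triage step that flags candidates for closer forensic review, not a definitive detector of AI manipulation.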

TheOutpost.ai

© 2026 Triveous Technologies Private Limited