AI Fakes of Cole Tomas Allen Flood Facebook With Fabricated Celebrity Ties and Sports Team Links

Reviewed by Nidhi Govil


Facebook became overrun with AI-generated deepfake images of Cole Tomas Allen within hours of his identification as the White House Correspondents' Dinner gunman. The fabricated images falsely linked the 31-year-old to over 50 celebrities and numerous sports teams, revealing how AI slop spreads rapidly during major news events and making it harder for users to distinguish reality from fabricated content.

AI Fakes Saturate Facebook After White House Incident

Within hours of authorities identifying Cole Tomas Allen as the suspect in the White House Correspondents' Dinner incident on April 26, misinformation exploded across Facebook in an unprecedented wave of AI-generated deepfake images [1][2]. The 31-year-old from Torrance, California, was charged with attempting to assassinate President Donald Trump during the Saturday event, but the real story quickly became obscured by a deluge of fabricated biographical details and celebrity connections flooding social media platforms.

Source: France 24


An AFP investigation documented more than 50 public figures falsely associated with the accused gunman, including actors Tom Hanks and Sydney Sweeney, musicians Chris Brown and Taylor Swift, politicians such as former President Barack Obama and Canada's Pierre Poilievre, Pope Leo XIV, and NBC News anchor Savannah Guthrie [2]. These fabricated images depicted Allen as their "former driver," "assistant," or "production crew member," creating false narratives that ricocheted across Facebook with alarming speed.

Sports Teams Become Primary Vector for AI Slop

A separate category of AI fakes emerged simultaneously, falsely claiming Cole Tomas Allen had worked for more than 40 professional and collegiate sports teams. The visual fakes dressed him in gear for teams across the NFL, NHL, NBA, WNBA, and NASCAR, targeting fans of specific franchises to maximize virality [2]. Many of these deepfake images appeared to originate from a Facebook page called West Coast Sluggers, which posted variations claiming Allen had worked as security staff for teams including the L.A. Dodgers, Montreal Canadiens, Oregon Ducks, and Michigan State Spartans [1].

Source: Gizmodo


The posts typically featured captions stating: "BREAKING: The shooter at the White House Correspondents' Dinner has been identified as 30-year-old Cole Allen from Torrance, California. Prior to the incident, he worked as a security staff member for the Los Angeles Dodgers and had appeared multiple times at their games." None of this was true. Allen is reportedly a teacher and engineer, and many of the renderings appear to be based on a photo from a tutoring company's post recognizing him as "teacher of the month" in December 2024 [1][2].

Content Farms Exploit AI Technologies for Profit

The template-driven format of these fabricated images resembles the output of content farm operations that mass-produce clickbait stories. Digital literacy expert Mike Caulfield told AFP: "This looks a lot like the same content farm behavior, just with AI" [2]. Other accounts propagating the misinformation included The Ohio Spirit, which linked to slop articles featuring ads from Capital One Shopping that prompted users to install potentially malicious browser extensions [1].

Hany Farid, a University of California, Berkeley professor and chief science officer at GetReal Security, explained the technological shift: "Two years ago, you probably wouldn't have been able to make those images of him, because we could only really make compelling fakes of celebrities who had a large digital footprint from which the AI systems had been trained. Now, all I need is a single image of you" [2]. This advance in generative content creation means anyone, even someone with a limited online footprint, can now become a target for deepfake manipulation.

Distinguishing Reality From Fabricated Content Grows Harder

Recent improvements in AI technologies have made visual fakes easier to create and more convincing, with once-telltale mishaps such as six-fingered hands increasingly rare. Jen Golbeck, a professor at the University of Maryland's College of Information, noted: "AI makes it trivially easy to take existing photos and change their clothes, environment, or to swap out someone else's face. As soon as someone gets an idea, they can make it a visual reality. Five years ago, it would not have been unusual to see people manually photoshopping pictures like the ones we are seeing, but it would never have been at this volume" [2].

Independent journalist Aaron Parnas, whose likeness appeared in AI-enabled posts claiming Allen had worked for him, pleaded on Facebook for people to report the "completely fake" images, warning, "This is extremely dangerous" [2]. Meta did not immediately respond to AFP's request for comment about the AI slop flooding its platform [2].

Pattern Emerges Across Major News Events

Researchers expressed concern that the sheer quantity of misinformation is wearing down social media users, who may tire of constantly working out what is real. AFP documented similar bursts of fakes after other major events, including the US capture of Venezuelan leader Nicolas Maduro in January and Charlie Kirk's assassination last year [2]. Farid warned: "These things are being designed for virality, and then of course the algorithms pick up on them. It's super profitable. Every time there's a world event, we are just flooded with this kind of nonsense. I don't think that's going away" [2].
