Curated by THEOUTPOST
On Fri, 10 Jan, 8:03 AM UTC
2 Sources
[1]
Los Angeles Is Burning -- And AI Deepfakes Are Fueling Misinformation - Decrypt
As firefighters continue to battle wildfires across Los Angeles, another scourge threatens recovery efforts: AI-generated deepfakes. On Wednesday, images began circulating on X (formerly Twitter) showing the iconic Hollywood sign engulfed in flames as fire raged across Mount Lee. However, these images were entirely fabricated using generative artificial intelligence tools.

The wildfire and looting deepfakes are part of a growing trend of misinformation and conspiracy theories that spread during crises. Similar tactics were seen during past disasters, such as Hurricane Helene last September, when scammers disseminated AI-generated images of destruction in areas untouched by the storm.

"It's likely trolling, and they think it's funny," Tim Weninger, professor of computer science and engineering at the University of Notre Dame, told Decrypt. "It could also be social or political, like implying California deserves to burn, criticizing Governor Newsom, or reacting to DEI in firefighting. These are the main reasons, but there could be others."

On Thursday, U.S. President-elect Donald Trump again posted on Truth Social to take aim at California Governor Gavin Newsom and the handling of the state's water. Trump called him "Governor Gavin Newscum" after previously calling on the governor to resign over his handling of the fires in Southern California.

Thanks to the connectivity of social media, these AI-generated images can spread rapidly. Unsuspecting users, especially those unfamiliar with the area or the specifics of the crisis, are particularly susceptible to believing that AI deepfakes are real. Compounding the issue is a lack of communication from government officials, which can make it harder for the public to distinguish fact from fiction.

Beyond inciting panic during crises, AI deepfakes are increasingly weaponized for political manipulation. Last year, as the East Coast of the United States grappled with the effects of Hurricanes Milton and Helene, AI deepfakes began surfacing online showing destroyed buildings amid reports that areas predominantly housing supporters of then-candidate Trump would not receive aid, further polarizing an already tense political climate.

Weninger emphasized that the spread of misinformation often stems from societal values rather than technological limitations, noting that by sharing or liking content, individuals are also endorsing it. "Social media problems are sometimes technology or fact-checking problems, but mostly they are values issues," Weninger said. "As a society, we don't value sharing accurate, truthful information enough. Every social media user must decide this for themselves. Technology can't do it for them, nor can anyone else."

A spokesperson for the Hollywood Sign Trust -- a nonprofit organization that preserves, maintains, and promotes the Hollywood Sign -- confirmed to Decrypt that the sign is, in fact, undamaged and standing. "Griffith Park is closed for precautionary reasons," they said. "The sign is safe and sound, and there is no validity to these false rumors."
[2]
People Think the Hollywood Sign Is on Fire Because of AI Slop
Far-right hate speech incubator X is riddled with disinformation about the ongoing and devastating wildfires surrounding Los Angeles right now. The iconic Hollywood sign adorning the hills behind West Hollywood has quickly become a particularly popular inspiration for AI slop. A quick search on X reveals numerous fictional images and even videos showing the decades-old cultural icon up in flames. In reality, the landmark remains unaffected by the fires, with a wide freeway separating it from the still-burning Sunset Fire miles away -- though the blaze has engulfed more than 40 acres in the hills north of downtown LA.

Other AI slop relating to the ongoing natural disaster is even more insidious. In an apparent effort to sow discord and score points with the platform's more gullible and potentially racist users, a user going by the name Kevin Dalton shared an image of shadowy figures that appear to be looting burning buildings. "The remains of Pacific Palisades will get picked clean tonight," Dalton wrote in the caption.

That's a common refrain among far-right pundits, who stoke fear by tapping into racial animosity and baselessly accusing minorities of ransacking businesses following natural disasters. Except, in this case, the imagery is literally fake.

Understandably, the tasteless post drew the ire of other users on the platform. "I cannot describe how enraging and utterly despicable it is to see people spreading AI-generated images of the LA fires when everyone around you is terrified, devastated, and trying to find accurate information to keep themselves and their loved ones safe," Wired senior business editor Louise Matsakis tweeted in response.

Dalton has spent much of the last couple of days filling the platform with rage-inciting and -inducing posts about the ongoing situation, taking potshots at California governor Gavin Newsom and arguing that Trump should "fire" him, which isn't something the US president can do. It's a firehose of disinformation, in other words. In one post, Dalton furthered unsubstantiated conspiracy theories about a Newsom-affiliated arsonist being to blame for the blazes.

It's far from the first time we've seen the spread of AI-generated content during or in the wake of a natural disaster. In October, an image depicting president-elect Donald Trump wading through Hurricane Helene floodwaters went viral. Other sloppily AI-generated images of devastation following the historic storm spread across a number of social media platforms, including Facebook.

Particularly following a chaotic presidential election, there are plenty of reasons to believe that natural disaster AI slop is here to stay. Platforms have made it as easy as possible not only to create these images but to spread them. And now that Meta CEO Mark Zuckerberg has chosen to closely follow in the footsteps of X owner Elon Musk and largely give up on content moderation, there's more than just a chance we'll see plenty more disinformation spread on social media in the future.

It's a sad state of affairs, especially considering the real-world toll of these natural disasters. The wildfires the city of Los Angeles is currently battling have already left a devastating path of destruction. Preying on the victims to score cheap political points with AI-generated images isn't just tasteless; it's a worrying glimpse into the future of social media and how trust in the news cycle is being dismantled bit by bit.
As wildfires rage in Los Angeles, AI-generated deepfakes are spreading misinformation, fueling panic and enabling political manipulation. The incident highlights the growing challenge of distinguishing fact from fiction in crisis situations.
As firefighters battle wildfires across Los Angeles, a new threat has emerged: AI-generated deepfakes fueling misinformation. Social media platforms, particularly X (formerly Twitter), have been inundated with fabricated images and videos depicting the iconic Hollywood sign engulfed in flames [1].

AI-generated content showing the Hollywood sign on fire has rapidly circulated online, despite the landmark remaining unaffected. A spokesperson for the Hollywood Sign Trust confirmed to Decrypt that "the sign is safe and sound, and there is no validity to these false rumors" [1]. The actual Sunset Fire, which has consumed over 40 acres north of downtown LA, is miles away from the sign [2].
Some users have exploited the crisis to sow discord and manipulate public opinion. One user shared an AI-generated image of shadowy figures seemingly looting amid burning buildings, tapping into racial animosity and baselessly accusing minorities of ransacking businesses following natural disasters [2]. This incident is part of a broader trend of using AI-generated content for political manipulation, as seen during previous disasters like Hurricanes Milton and Helene [1].
Tim Weninger, professor of computer science and engineering at the University of Notre Dame, suggests that the spread of such misinformation often stems from societal values rather than technological limitations. "Social media problems are sometimes technology or fact-checking problems, but mostly they are values issues," Weninger told Decrypt [1].
The rapid spread of AI-generated content during crises poses significant challenges:

- Unsuspecting users, especially those unfamiliar with the affected area, can readily mistake fabricated images for real ones [1].
- Gaps in official communication make it harder for the public to separate fact from fiction [1].
- Fabricated imagery incites panic and is increasingly weaponized for political manipulation [1].
- Platforms that have scaled back content moderation allow such images to spread largely unchecked [2].
As AI technology advances and social media platforms struggle with content moderation, the spread of misinformation during crises is likely to persist. The situation in Los Angeles serves as a stark reminder of the need for improved digital literacy and responsible sharing practices among social media users [1][2].

The proliferation of AI-generated deepfakes during natural disasters not only preys on victims but also erodes trust in the news cycle. Louise Matsakis, senior business editor at Wired, expressed her frustration: "I cannot describe how enraging and utterly despicable it is to see people spreading AI-generated images of the LA fires when everyone around you is terrified, devastated, and trying to find accurate information" [2].
As the line between reality and fiction becomes increasingly blurred, the incident in Los Angeles serves as a cautionary tale about the potential misuse of AI technology and the importance of critical thinking in the digital age.