AI-Generated Death Threats Become Frighteningly Realistic as Technology Advances

Reviewed by Nidhi Govil


Artificial intelligence is being weaponized to create highly personalized and realistic death threats, with activists and public figures facing unprecedented levels of digital harassment. The technology now requires minimal data to generate convincing violent imagery.

The New Face of Digital Harassment

Artificial intelligence has transformed the landscape of online threats, making death threats and violent imagery more realistic and psychologically damaging than ever before. Caitlin Roper, an activist with the Australian organization Collective Shout, experienced this firsthand when she became the target of AI-generated harassment that depicted her in horrifically violent scenarios [1].

Source: The Seattle Times

The images showed Roper hanging from a noose, burning alive, and subjected to other forms of violence. What made these threats particularly disturbing was their attention to detail: in some videos, she was wearing a blue floral dress that she actually owns, having worn it in a newspaper photo years earlier [2].

Technology Lowering Barriers to Abuse

The democratization of AI tools has dramatically reduced the technical expertise required to create convincing threatening content. Hany Farid, a computer science professor at UC Berkeley and co-founder of GetReal Security, explains that what once required extensive data and technical skills can now be accomplished with minimal resources [1].

Previously, AI could only replicate individuals with a substantial online presence, such as celebrities with thousands of publicly available photos. Now, a single profile image suffices to generate realistic depictions. Similarly, voice cloning technology that once required hours of audio samples now needs less than a minute of source material [2].

Escalating Incidents Across Platforms

The problem extends beyond individual cases. A YouTube channel contained over 40 realistic, likely AI-generated videos, each showing women being shot; the platform terminated the channel only after The New York Times brought it to its attention [1]. In another incident, a deepfake video of a student with a gun prompted a high school lockdown, while a Minneapolis lawyer reported that xAI's Grok chatbot provided detailed instructions for breaking into his home and committing violent crimes [2].

OpenAI's Sora Raises New Concerns

The introduction of OpenAI's Sora text-to-video application has intensified worries about AI-assisted threats. The tool allows users to upload personal images and incorporate them into hyperrealistic scenes, quickly enabling the creation of frightening scenarios featuring real people. Testing by The New York Times demonstrated the tool's ability to generate disturbing content, including scenes of violence in classrooms and stalking scenarios [1].

Inadequate Safety Measures

Despite claims of robust safety systems, experts argue that current protections are insufficient. Alice Marwick from Data & Society describes most AI guardrails as "more like a lazy traffic cop than a firm barrier," noting that users can often circumvent these protections [2]. Jane Bambauer, who teaches AI and law at the University of Florida, warns that virtually anyone with malicious intent can now use these tools to cause harm, regardless of their technical skills [1].
