2 Sources
[1]
A.I. Is Making Death Threats Way More Realistic
Tiffany Hsu has written for years about the benefits and abuses of generative artificial intelligence.

Even though she was toughened by years spent working in internet activism, Caitlin Roper found herself traumatized by the online threats she received this year. There was the picture of herself hanging from a noose, dead. And another of herself ablaze, screaming.

The posts were part of a surge of vitriol directed at Ms. Roper and her colleagues at Collective Shout, an Australian activist group, on X and other social media platforms. Some of it, including images of the women flayed, decapitated or fed into a wood chipper, was seemingly enabled -- and given a visceral realism -- by generative artificial intelligence. In some of the videos, Ms. Roper was wearing a blue floral dress that she does, in fact, own.

"It's these weird little details that make it feel more real and, somehow, a different kind of violation," she said. "These things can go from fantasy to more than fantasy."

Artificial intelligence is already raising concerns for its ability to mimic real voices in service of scams or to produce deepfake pornography without a subject's permission. Now, the technology is also being used for violent threats -- priming them to maximize fear by making them far more personalized, more convincing and more easily delivered.

"Two things will always happen when technology like this gets developed: We will find clever and creative and exciting ways to use it, and we will find horrific and awful ways to abuse it," said Hany Farid, a professor of computer science at the University of California, Berkeley. "What's frustrating is that this is not a surprise."

Digitally generated threats have been possible for at least a few years. A judge in Florida was sent a video in 2023, most likely made using a character customization tool in the Grand Theft Auto 5 video game, that featured an avatar who looked and walked like her being hacked and shot to death.

But threatening images are rapidly becoming easier to make, and more persuasive. One YouTube page had more than 40 realistic videos -- most likely made using A.I., according to experts who reviewed the channel -- each showing a woman being shot. (YouTube, after The New York Times contacted it, said it had terminated the channel for "multiple violations" of its guidelines.) A deepfake video of a student carrying a gun sent a high school into lockdown this spring. In July, a lawyer in Minneapolis said xAI's Grok chatbot had provided an anonymous social media user with detailed instructions on breaking into his house, sexually assaulting him and disposing of his body.

Until recently, artificial intelligence could replicate real people only if they had a huge online presence, such as film stars with throngs of publicly accessible photos. Now, a single profile image will suffice, said Dr. Farid, who co-founded GetReal Security, a service that identifies malicious digital content. (Ms. Roper said she had worn the blue floral dress in a photo published a few years ago in an Australian newspaper.) The same is true of voices -- what once took hours of example data to clone now requires less than a minute.

"The concern is that now, almost anyone with no skills but with motive or lack of scruples can easily use these tools to do damage," said Jane Bambauer, a professor who teaches about A.I. and the law at the University of Florida.

Worries about A.I.-assisted threats and extortion intensified with the introduction this month of Sora, a text-to-video app from OpenAI. The app, which allows users to upload images of themselves to be incorporated into hyper-realistic scenes, quickly depicted actual people in frightening situations. The Times tested Sora and produced videos that appeared to show a gunman in a bloody classroom and a hooded man stalking a young girl. Grok also readily added a bloody gunshot wound to a photo of a real person.

"From the perspective of identity, everyone's vulnerable," Dr. Farid said.

An OpenAI spokeswoman said the company relied on multiple defenses, including guardrails to block unsafe content from being created, experiments to uncover previously unknown weaknesses and automated content moderation systems. (The Times sued OpenAI in 2023, claiming copyright infringement of news content related to A.I. systems, an assertion that OpenAI has denied.)

Experts in A.I. safety, however, said companies had not done nearly enough. Alice Marwick, director of research at Data & Society, a nonprofit organization, described most guardrails as "more like a lazy traffic cop than a firm barrier -- you can get a model to ignore them and work around them."

Ms. Roper said the torrent of online abuse starting this summer -- including hundreds of harassing posts sent specifically to her -- was linked to her work on a campaign to shut down violent video games glorifying rape, incest and sexual torture. On X, where most of the abuse appeared, she said, some harassing images and accounts were taken down. But the company also told her repeatedly that other posts depicting her violent death did not violate the platform's terms of service. In fact, X once included one of her harassers on a list of recommended accounts for her to follow. Some of the harassers also claimed to have used Grok not just to create the images but to research how to find the women at home and at local cafes.

Fed up, Ms. Roper decided to post some examples. Soon after, according to screenshots, X told her that she was in breach of its safety policies against gratuitous gore and temporarily locked her account. Neither X nor xAI, the company that owns Grok, responded to requests for comment.

A.I. is also making other kinds of threats more convincing. For example: swatting, the practice of placing false emergency calls with the aim of inciting a large response from the police and emergency personnel. A.I. "has significantly intensified the scale, precision and anonymity" of such attacks, the National Association of Attorneys General said this summer. On a lesser scale, a spate of A.I.-generated videos showing supposed home invasions has caused the targeted residents to call police departments around the country.

Now, perpetrators of swatting can compile convincing false reports by cloning voices and manipulating images. One serial swatter used simulated gunfire to suggest that a shooter was in the parking lot of a Washington State high school. The campus was locked down for 20 minutes; police officers and federal agents showed up.

A.I. was already complicating schools' efforts to protect students, raising concerns about personalized sexual images or rumors spread via fake videos, said Brian Asmus, a former police chief who was working as the senior manager of safety and security for the school district when the swatter called. Now, the technology is adding an extra security challenge, making false alarms harder to distinguish from true emergency calls.

"How does law enforcement respond to something that's not real?" Mr. Asmus asked. "I don't think we've really gotten ahead of it yet."

Stuart A. Thompson contributed reporting.
[2]
AI is making death threats way more realistic
Even though she was toughened by years spent working in internet activism, Caitlin Roper found herself traumatized by the online threats she received this year. There was the picture of herself hanging from a noose, dead. And another of herself ablaze, screaming.

The posts were part of a surge of vitriol directed at Roper and her colleagues at Collective Shout, an Australian activist group, on X and other social media platforms. Some of it, including images of the women flayed, decapitated or fed into a wood chipper, was seemingly enabled -- and given a visceral realism -- by generative artificial intelligence. In some of the videos, Roper was wearing a blue floral dress that she does, in fact, own.

"It's these weird little details that make it feel more real and, somehow, a different kind of violation," she said. "These things can go from fantasy to more than fantasy."

AI is already raising concerns for its ability to mimic real voices in service of scams or to produce deepfake pornography without a subject's permission. Now the technology is also being used for violent threats -- priming them to maximize fear by making them far more personalized, more convincing and more easily delivered.

"Two things will always happen when technology like this gets developed: We will find clever and creative and exciting ways to use it, and we will find horrific and awful ways to abuse it," said Hany Farid, a professor of computer science at the University of California, Berkeley. "What's frustrating is that this is not a surprise."

Digitally generated threats have been possible for at least a few years. A judge in Florida was sent a video in 2023, most likely made using a character customization tool in the Grand Theft Auto 5 video game, that featured an avatar who looked and walked like her being hacked and shot to death.

But threatening images are rapidly becoming easier to make, and more persuasive. One YouTube page had more than 40 realistic videos -- most likely made using AI, according to experts who reviewed the channel -- each showing a woman being shot. (YouTube, after The New York Times contacted it, said it had terminated the channel for "multiple violations" of its guidelines.) A deepfake video of a student carrying a gun sent a high school into lockdown this spring. In July, a lawyer in Minneapolis said xAI's Grok chatbot had provided an anonymous social media user with detailed instructions on breaking into his house, sexually assaulting him and disposing of his body.

Until recently, artificial intelligence could replicate real people only if they had a huge online presence, such as film stars with throngs of publicly accessible photos. Now a single profile image will suffice, said Farid, who co-founded GetReal Security, a service that identifies malicious digital content. (Roper said she had worn the blue floral dress in a photo published a few years ago in an Australian newspaper.) The same is true of voices -- what once took hours of example data to clone now requires less than a minute.

"The concern is that now, almost anyone with no skills but with motive or lack of scruples can easily use these tools to do damage," said Jane Bambauer, a professor who teaches about AI and the law at the University of Florida.

Worries about AI-assisted threats and extortion intensified with the September introduction of Sora, a text-to-video app from OpenAI. The app, which allows users to upload images of themselves to be incorporated into hyperrealistic scenes, quickly depicted actual people in frightening situations. The Times tested Sora and produced videos that appeared to show a gunman in a bloody classroom and a hooded man stalking a young girl. Grok also readily added a bloody gunshot wound to a photo of a real person.

"From the perspective of identity, everyone's vulnerable," Farid said.

An OpenAI spokesperson said the company relied on multiple defenses, including guardrails to block unsafe content from being created, experiments to uncover previously unknown weaknesses and automated content moderation systems. (The Times sued OpenAI in 2023, claiming copyright infringement of news content related to AI systems, an assertion that OpenAI has denied.)

Experts in AI safety, however, said companies had not done nearly enough. Alice Marwick, director of research at Data & Society, a nonprofit organization, described most guardrails as "more like a lazy traffic cop than a firm barrier; you can get a model to ignore them and work around them."

Roper said the torrent of online abuse starting this summer -- including hundreds of harassing posts sent specifically to her -- was linked to her work on a campaign to shut down violent video games glorifying rape, incest and sexual torture. On X, where most of the abuse appeared, she said, some harassing images and accounts were taken down. But the company also told her repeatedly that other posts depicting her violent death did not violate the platform's terms of service. In fact, X once included one of her harassers on a list of recommended accounts for her to follow. Some of the harassers also claimed to have used Grok not just to create the images but to research how to find the women at home and at local cafes.

Fed up, Roper decided to post some examples. Soon after, according to screenshots, X told her that she was in breach of its safety policies against gratuitous gore and temporarily locked her account. Neither X nor xAI, the company that owns Grok, responded to requests for comment.

AI is also making other kinds of threats more convincing -- for example, swatting, the practice of placing false emergency calls with the aim of inciting a large response from the police and emergency personnel. AI "has significantly intensified the scale, precision and anonymity" of such attacks, the National Association of Attorneys General said this summer. On a lesser scale, a spate of AI-generated videos showing supposed home invasions has caused the targeted residents to call police departments around the country.

Now perpetrators of swatting can compile convincing false reports by cloning voices and manipulating images. One serial swatter used simulated gunfire to suggest that a shooter was in the parking lot of a Washington state high school. The campus was locked down for 20 minutes; police officers and federal agents showed up.

AI was already complicating schools' efforts to protect students, raising concerns about personalized sexual images or rumors spread via fake videos, said Brian Asmus, a former police chief who was working as the senior manager of safety and security for the school district when the swatter called. Now the technology is adding an extra security challenge, making false alarms harder to distinguish from true emergency calls.

"How does law enforcement respond to something that's not real?" Asmus asked. "I don't think we've really gotten ahead of it yet."
Artificial intelligence is being weaponized to create highly personalized and realistic death threats, with activists and public figures facing unprecedented levels of digital harassment. The technology now requires minimal data to generate convincing violent imagery.
Artificial intelligence has transformed the landscape of online threats, making death threats and violent imagery more realistic and psychologically damaging than ever before. Caitlin Roper, an activist with the Australian organization Collective Shout, experienced this firsthand when she became the target of AI-generated harassment that depicted her in horrifically violent scenarios [1].
The images showed Roper hanging from a noose, burning alive, and subjected to other forms of violence. What made these threats particularly disturbing was their attention to detail -- in some videos, she was wearing a blue floral dress that she actually owns, having worn it in a newspaper photo years earlier [2].

The democratization of AI tools has dramatically reduced the technical expertise required to create convincing threatening content. Hany Farid, a computer science professor at UC Berkeley and co-founder of GetReal Security, explains that what once required extensive data and technical skills can now be accomplished with minimal resources [1].

Previously, AI could only replicate individuals with substantial online presence, such as celebrities with thousands of publicly available photos. Now, a single profile image suffices to generate realistic depictions. Similarly, voice cloning technology that once required hours of audio samples now needs less than a minute of source material [2].

The problem extends beyond individual cases. A YouTube channel contained over 40 realistic videos, likely AI-generated, each showing women being shot. The platform terminated the channel only after The New York Times brought it to its attention [1]. In another incident, a deepfake video of a student with a gun prompted a high school lockdown, while a Minneapolis lawyer reported that xAI's Grok chatbot provided detailed instructions for breaking into his home and committing violent crimes [2].
The introduction of OpenAI's Sora text-to-video application has intensified worries about AI-assisted threats. The tool allows users to upload personal images and incorporate them into hyperrealistic scenes, quickly enabling the creation of frightening scenarios featuring real people. Testing by The New York Times demonstrated the tool's ability to generate disturbing content, including scenes of violence in classrooms and stalking scenarios [1].

Despite claims of robust safety systems, experts argue that current protections are insufficient. Alice Marwick from Data & Society describes most AI guardrails as "more like a lazy traffic cop than a firm barrier," noting that users can often circumvent these protections [2]. Jane Bambauer, who teaches AI and law at the University of Florida, warns that virtually anyone with malicious intent can now use these tools to cause harm, regardless of their technical skills [1].

Summarized by Navi