3 Sources
3 Sources
[1]
A.I. Is Making Death Threats Way More Realistic
Tiffany Hsu has written for years about benefits and abuses of generative artificial intelligence. Even though she was toughened by years spent working in internet activism, Caitlin Roper found herself traumatized by the online threats she received this year. There was the picture of herself hanging from a noose, dead. And another of herself ablaze, screaming. The posts were part of a surge of vitriol directed at Ms. Roper and her colleagues at Collective Shout, an Australian activist group, on X and other social media platforms. Some of it, including images of the women flayed, decapitated or fed into a wood chipper, was seemingly enabled -- and given a visceral realism -- by generative artificial intelligence. In some of the videos, Ms. Roper was wearing a blue floral dress that she does, in fact, own. "It's these weird little details that make it feel more real and, somehow, a different kind of violation," she said. "These things can go from fantasy to more than fantasy." Artificial intelligence is already raising concerns for its ability to mimic real voices in service of scams or to produce deepfake pornography without a subject's permission. Now, the technology is also being used for violent threats -- priming them to maximize fear by making them far more personalized, more convincing and more easily delivered. "Two things will always happen when technology like this gets developed: We will find clever and creative and exciting ways to use it, and we will find horrific and awful ways to abuse it," said Hany Farid, a professor of computer science at the University of California, Berkeley. "What's frustrating is that this is not a surprise." Digitally generated threats have been possible for at least a few years. A judge in Florida was sent a video in 2023, most likely made using a character customization tool in the Grand Theft Auto 5 video game, that featured an avatar who looked and walked like her being hacked and shot to death. But threatening images are rapidly becoming easier to make, and more persuasive. One YouTube page had more than 40 realistic videos -- most likely made using A.I., according to experts who reviewed the channel -- each showing a woman being shot. (YouTube, after The New York Times contacted it, said it had terminated the channel for "multiple violations" of its guidelines.) A deepfake video of a student carrying a gun sent a high school into lockdown this spring. In July, a lawyer in Minneapolis said xAI's Grok chatbot had provided an anonymous social media user with detailed instructions on breaking into his house, sexually assaulting him and disposing of his body. Until recently, artificial intelligence could replicate real people only if they had a huge online presence, such as film stars with throngs of publicly accessible photos. Now, a single profile image will suffice, said Dr. Farid, who co-founded GetReal Security, a service that identifies malicious digital content. (Ms. Roper said she had worn the blue floral dress in a photo published a few years ago in an Australian newspaper.) The same is true of voices -- what once took hours of example data to clone now requires less than a minute. "The concern is that now, almost anyone with no skills but with motive or lack of scruples can easily use these tools to do damage," said Jane Bambauer, a professor who teaches about A.I. and the law at the University of Florida. Worries about A.I.-assisted threats and extortion intensified with the introduction this month of Sora, a text-to-video app from OpenAI. The app, which allows users to upload images of themselves to be incorporated into hyper-realistic scenes, quickly depicted actual people in frightening situations. The Times tested Sora and produced videos that appeared to show a gunman in a bloody classroom and a hooded man stalking a young girl. Grok also readily added a bloody gunshot wound to a photo of a real person. "From the perspective of identity, everyone's vulnerable," Dr. Farid said. An OpenAI spokeswoman said the company relied on multiple defenses, including guardrails to block unsafe content from being created, experiments to uncover previously unknown weaknesses and automated content moderation systems. (The Times sued OpenAI in 2023, claiming copyright infringement of news content related to A.I. systems, an assertion that OpenAI has denied.) Experts in A.I. safety, however, said companies had not done nearly enough. Alice Marwick, director of research at Data & Society, a nonprofit organization, described most guardrails as "more like a lazy traffic cop than a firm barrier -- you can get a model to ignore them and work around them." Ms. Roper said the torrent of online abuse starting this summer -- including hundreds of harassing posts sent specifically to her -- was linked to her work on a campaign to shut down violent video games glorifying rape, incest and sexual torture. On X, where most of the abuse appeared, she said, some harassing images and accounts were taken down. But the company also told her repeatedly that other posts depicting her violent death did not violate the platform's terms of service. In fact, X once included one of her harassers on a list of recommended accounts for her to follow. Some of the harassers also claimed to have used Grok not just to create the images but to research how to find the women at home and at local cafes. Fed up, Ms. Roper decided to post some examples. Soon after, according to screenshots, X told her that she was in breach of its safety policies against gratuitous gore and temporarily locked her account. Neither X nor xAI, the company that owns Grok, responded to requests for comment. A.I. is also making other kinds of threats more convincing. For example: swatting, the practice of placing false emergency calls with the aim of inciting a large response from the police and emergency personnel. A.I. "has significantly intensified the scale, precision and anonymity" of such attacks, the National Association of Attorneys General said this summer. On a lesser scale, a spate of A.I.-generated videos showing supposed home invasions have caused the targeted residents to call police departments around the country. Now, perpetrators of swatting can compile convincing false reports by cloning voices and manipulating images. One serial swatter used simulated gunfire to suggest that a shooter was in the parking lot of a Washington State high school. The campus was locked down for 20 minutes; police officers and federal agents showed up. A.I. was already complicating schools' efforts to protect students, raising concerns about personalized sexual images or rumors spread via fake videos, said Brian Asmus, a former police chief who was working as the senior manager of safety and security for the school district when the swatter called. Now, the technology is adding an extra security challenge, making false alarms harder to distinguish from true emergency calls. "How does law enforcement respond to something that's not real?" Mr. Asmus asked. "I don't think we've really gotten ahead of it yet." Stuart A. Thompson contributed reporting.
[2]
AI is making death threats way more realistic
Even though she was toughened by years spent working in internet activism, Caitlin Roper found herself traumatized by the online threats she received this year. There was the picture of herself hanging from a noose, dead. And another of herself ablaze, screaming. The posts were part of a surge of vitriol directed at Roper and her colleagues at Collective Shout, an Australian activist group, on X and other social media platforms. Some of it, including images of the women flayed, decapitated or fed into a wood chipper, was seemingly enabled -- and given a visceral realism -- by generative artificial intelligence. In some of the videos, Roper was wearing a blue floral dress that she does, in fact, own. "It's these weird little details that make it feel more real and, somehow, a different kind of violation," she said. "These things can go from fantasy to more than fantasy." AI is already raising concerns for its ability to mimic real voices in service of scams or to produce deepfake pornography without a subject's permission. Now the technology is also being used for violent threats -- priming them to maximize fear by making them far more personalized, more convincing and more easily delivered. "Two things will always happen when technology like this gets developed: We will find clever and creative and exciting ways to use it, and we will find horrific and awful ways to abuse it," said Hany Farid, a professor of computer science at the University of California, Berkeley. "What's frustrating is that this is not a surprise." Digitally generated threats have been possible for at least a few years. A judge in Florida was sent a video in 2023, most likely made using a character customization tool in the Grand Theft Auto 5 video game, that featured an avatar who looked and walked like her being hacked and shot to death. But threatening images are rapidly becoming easier to make, and more persuasive. One YouTube page had more than 40 realistic videos -- most likely made using AI, according to experts who reviewed the channel -- each showing a woman being shot. (YouTube, after The New York Times contacted it, said it had terminated the channel for "multiple violations" of its guidelines.) A deepfake video of a student carrying a gun sent a high school into lockdown this spring. In July, a lawyer in Minneapolis said xAI's Grok chatbot had provided an anonymous social media user with detailed instructions on breaking into his house, sexually assaulting him and disposing of his body. Until recently, artificial intelligence could replicate real people only if they had a huge online presence, such as film stars with throngs of publicly accessible photos. Now a single profile image will suffice, said Farid, who co-founded GetReal Security, a service that identifies malicious digital content. (Roper said she had worn the blue floral dress in a photo published a few years ago in an Australian newspaper.) The same is true of voices -- what once took hours of example data to clone now requires less than a minute. "The concern is that now, almost anyone with no skills but with motive or lack of scruples can easily use these tools to do damage," said Jane Bambauer, a professor who teaches about AI and the law at the University of Florida. Worries about AI-assisted threats and extortion intensified with the September introduction of Sora, a text-to-video app from OpenAI. The app, which allows users to upload images of themselves to be incorporated into hyperrealistic scenes, quickly depicted actual people in frightening situations. The Times tested Sora and produced videos that appeared to show a gunman in a bloody classroom and a hooded man stalking a young girl. Grok also readily added a bloody gunshot wound to a photo of a real person. "From the perspective of identity, everyone's vulnerable," Farid said. An OpenAI spokesperson said the company relied on multiple defenses, including guardrails to block unsafe content from being created, experiments to uncover previously unknown weaknesses and automated content moderation systems. (The Times sued OpenAI in 2023, claiming copyright infringement of news content related to AI systems, an assertion that OpenAI has denied.) Experts in AI safety, however, said companies had not done nearly enough. Alice Marwick, director of research at Data & Society, a nonprofit organization, described most guardrails as "more like a lazy traffic cop than a firm barrier; you can get a model to ignore them and work around them." Roper said the torrent of online abuse starting this summer -- including hundreds of harassing posts sent specifically to her -- was linked to her work on a campaign to shut down violent video games glorifying rape, incest and sexual torture. On X, where most of the abuse appeared, she said, some harassing images and accounts were taken down. But the company also told her repeatedly that other posts depicting her violent death did not violate the platform's terms of service. In fact, X once included one of her harassers on a list of recommended accounts for her to follow. Some of the harassers also claimed to have used Grok not just to create the images but to research how to find the women at home and at local cafes. Fed up, Roper decided to post some examples. Soon after, according to screenshots, X told her that she was in breach of its safety policies against gratuitous gore and temporarily locked her account. Neither X nor xAI, the company that owns Grok, responded to requests for comment. AI is also making other kinds of threats more convincing -- for example, swatting, the practice of placing false emergency calls with the aim of inciting a large response from the police and emergency personnel. AI "has significantly intensified the scale, precision and anonymity" of such attacks, the National Association of Attorneys General said this summer. On a lesser scale, a spate of AI-generated videos showing supposed home invasions have caused the targeted residents to call police departments around the country. Now perpetrators of swatting can compile convincing false reports by cloning voices and manipulating images. One serial swatter used simulated gunfire to suggest that a shooter was in the parking lot of a Washington state high school. The campus was locked down for 20 minutes; police officers and federal agents showed up. AI was already complicating schools' efforts to protect students, raising concerns about personalized sexual images or rumors spread via fake videos, said Brian Asmus, a former police chief who was working as the senior manager of safety and security for the school district when the swatter called. Now the technology is adding an extra security challenge, making false alarms harder to distinguish from true emergency calls. "How does law enforcement respond to something that's not real?" Asmus asked. "I don't think we've really gotten ahead of it yet."
[3]
AI is making death threats way more realistic
Caitlin Roper, an Australian activist, was traumatised by AI-generated death threats and violent images of herself shared online. The abuse targeted her and colleagues from Collective Shout, showing disturbingly realistic scenes of harm. Details like her real clothing made the threats feel personal, deepening the emotional impact of the attacks. Even though she was toughened by years spent working in internet activism, Caitlin Roper found herself traumatised by the online threats she received this year. There was the picture of herself hanging from a noose, dead. And another of herself ablaze, screaming. The posts were part of a surge of vitriol directed at Roper and her colleagues at Collective Shout, an Australian activist group, on X and other social media platforms. Some of it, including images of the women flayed, decapitated or fed into a wood chipper, was seemingly enabled -- and given a visceral realism -- by generative artificial intelligence. In some of the images, Roper was wearing a blue floral dress that she does, in fact, own. "It's these weird little details that make it feel more real and, somehow, a different kind of violation," she said. "These things can go from fantasy to more than fantasy." AI is already raising concerns for its ability to mimic real voices in service of scams or to produce deepfake pornography without a subject's permission. Now the technology is also being used for violent threats -- priming them to maximize fear by making them far more personalized, more convincing and more easily delivered. "Two things will always happen when technology like this gets developed: We will find clever and creative and exciting ways to use it, and we will find horrific and awful ways to abuse it," said Hany Farid, a professor of computer science at the University of California, Berkeley. "What's frustrating is that this is not a surprise." Digitally generated threats have been possible for at least a few years. A judge in Florida was sent a video in 2023, most likely made using a character customization tool in the Grand Theft Auto 5 video game, that featured an avatar who looked and walked like her being hacked and shot to death. But threatening images are rapidly becoming easier to make, and more persuasive. One YouTube page had more than 40 realistic videos -- most likely made using AI, according to experts who reviewed the channel -- each showing a woman being shot. (YouTube, after The New York Times contacted it, said it had terminated the channel for "multiple violations" of its guidelines.) A deepfake video of a student carrying a gun sent a high school into lockdown this spring. In July, a lawyer in Minneapolis said xAI's Grok chatbot had provided an anonymous social media user with detailed instructions on breaking into his house, sexually assaulting him and disposing of his body. Until recently, artificial intelligence could replicate real people only if they had a huge online presence, such as film stars with throngs of publicly accessible photos. Now a single profile image will suffice, said Farid, who co-founded GetReal Security, a service that identifies malicious digital content. (Roper said she had worn the blue floral dress in a photo published a few years ago in an Australian newspaper.) The same is true of voices -- what once took hours of example data to clone now requires less than a minute. "The concern is that now, almost anyone with no skills but with motive or lack of scruples can easily use these tools to do damage," said Jane Bambauer, a professor who teaches about AI and the law at the University of Florida. Worries about AI-assisted threats and extortion intensified with the September introduction of Sora, a text-to-video app from OpenAI. The app, which allows users to upload images of themselves to be incorporated into hyperrealistic scenes, quickly depicted actual people in frightening situations. The Times tested Sora and produced videos that appeared to show a gunman in a bloody classroom and a hooded man stalking a young girl. Grok also readily added a bloody gunshot wound to a photo of a real person. "From the perspective of identity, everyone's vulnerable," Farid said. An OpenAI spokesperson said the company relied on multiple defenses, including guardrails to block unsafe content from being created, experiments to uncover previously unknown weaknesses and automated content moderation systems. (The Times sued OpenAI in 2023, claiming copyright infringement of news content related to AI systems, an assertion that OpenAI has denied.) Experts in AI safety, however, said companies had not done nearly enough. Alice Marwick, director of research at Data undefined you can get a model to ignore them and work around them." Roper said the torrent of online abuse starting this summer -- including hundreds of harassing posts sent specifically to her -- was linked to her work on a campaign to shut down violent video games glorifying rape, incest and sexual torture. On X, where most of the abuse appeared, she said, some harassing images and accounts were taken down. But the company also told her repeatedly that other posts depicting her violent death did not violate the platform's terms of service. In fact, X once included one of her harassers on a list of recommended accounts for her to follow. Some of the harassers also claimed to have used Grok not just to create the images but to research how to find the women at home and at local cafes. Fed up, Roper decided to post some examples. Soon after, according to screenshots, X told her that she was in breach of its safety policies against gratuitous gore and temporarily locked her account. Neither X nor xAI, the company that owns Grok, responded to requests for comment. AI is also making other kinds of threats more convincing -- for example, swatting, the practice of placing false emergency calls with the aim of inciting a large response from the police and emergency personnel. AI "has significantly intensified the scale, precision and anonymity" of such attacks, the National Association of Attorneys General said this summer. On a lesser scale, a spate of AI-generated videos showing supposed home invasions have caused the targeted residents to call police departments around the country. Now perpetrators of swatting can compile convincing false reports by cloning voices and manipulating images. One serial swatter used simulated gunfire to suggest that a shooter was in the parking lot of a Washington state high school. The campus was locked down for 20 minutes; police officers and federal agents showed up. AI was already complicating schools' efforts to protect students, raising concerns about personalized sexual images or rumors spread via fake videos, said Brian Asmus, a former police chief who was working as the senior manager of safety and security for the school district when the swatter called. Now the technology is adding an extra security challenge, making false alarms harder to distinguish from true emergency calls. "How does law enforcement respond to something that's not real?" Asmus asked. "I don't think we've really gotten ahead of it yet."
Share
Share
Copy Link
Artificial intelligence is being weaponized to create hyper-realistic death threats and violent imagery, with activists and public figures facing increasingly personalized and convincing digital harassment that can be generated from a single photo.
Artificial intelligence has entered a disturbing new phase of misuse, transforming online death threats from crude text messages into hyper-realistic, personalized attacks that can traumatize victims in unprecedented ways. Australian activist Caitlin Roper experienced this firsthand when she became the target of AI-generated imagery showing herself hanging from a noose, burning alive, and subjected to other forms of graphic violence
1
.
Source: Seattle Times
The attacks against Roper and her colleagues at Collective Shout included disturbingly accurate details that made the threats feel more real and personally violating. In some videos, she was depicted wearing a blue floral dress that she actually owns, based on a photograph published years earlier in an Australian newspaper
2
.The ease with which these threatening images can now be created represents a fundamental shift in the landscape of digital harassment. Until recently, artificial intelligence could only replicate individuals with extensive online presence, such as celebrities with thousands of publicly available photographs. Today, a single profile image suffices to generate convincing deepfakes, according to Hany Farid, a computer science professor at UC Berkeley who co-founded GetReal Security .
Voice cloning technology has undergone similar advancement. What previously required hours of sample audio can now be accomplished with less than a minute of voice data. This dramatic reduction in technical barriers means that "almost anyone with no skills but with motive or lack of scruples can easily use these tools to do damage," warns Jane Bambauer, a University of Florida professor specializing in AI and law
1
.The impact of AI-generated threats extends far beyond individual harassment cases. A deepfake video showing a student with a gun forced a high school into lockdown this spring, demonstrating how these technologies can trigger real-world emergency responses
2
. In another incident, a Minneapolis lawyer reported that xAI's Grok chatbot provided detailed instructions to an anonymous user on breaking into his home, sexually assaulting him, and disposing of his body .The introduction of OpenAI's Sora text-to-video application has intensified concerns about AI-assisted threats. The platform allows users to upload personal images and incorporate them into hyper-realistic scenes, quickly enabling the creation of frightening scenarios featuring real people
1
.Related Stories
Despite growing awareness of these risks, experts argue that technology companies have failed to implement adequate safeguards. Alice Marwick, director of research at Data & Society, characterizes most current guardrails as "more like a lazy traffic cop than a firm barrier," noting that users can easily circumvent these protections
2
.OpenAI maintains that it employs multiple defensive strategies, including content blocking guardrails, vulnerability testing, and automated moderation systems. However, testing by The New York Times revealed that both Sora and Grok could readily produce disturbing content, including videos of gunmen in bloody classrooms and graphic wounds added to photographs of real people .
Summarized by
Navi
[2]
1
Business and Economy

2
Technology

3
Policy and Regulation
