3 Sources
[1]
Police Issue Warning About "AI Homeless Man" Prank
Unsurprisingly, teens have been using the tech to prank their friends and family. The latest hoax involves kids sending their parents AI-manipulated pictures of them welcoming homeless men into their houses -- prompting widespread alarm and even 911 calls, as NBC News reports. The trend shows how widespread highly sophisticated generative AI tools have become, foreshadowing a future in which you can't believe even imagery from friends and family.

Case in point: countless videos on social media involve youngsters boasting about how they terrified their parents or friends with the help of AI. "No, I don't know him," one disgruntled father messaged his son in one TikTok video that got millions of views. "What does he want?" "He said you guys went to school together, I invited him in," the son replied, posting an AI-edited image of a man sitting on what was presumably the family's couch. "JOE PICK UP THE PHONE," the alarmed parent replied. "I DON'T KNOW HIM!!!!!"

The prank has gained so much momentum that law enforcement is now issuing warnings, NBC News reports. "Besides being in bad taste, there are many reasons why this prank is, to put it bluntly, stupid and potentially dangerous," the Salem, Massachusetts police department wrote in a statement. "This prank dehumanizes the homeless, causes the distressed recipient to panic and wastes police resources."

A press release by the Oak Harbor police department in Washington also warned of "safety concerns" related to an "'AI homeless man' prank." Circulating images made it look like a "homeless individual was present on the Oak Harbor High School Campus." "AI tools can create highly convincing images, and misinformation can spread quickly, causing unnecessary fear or diverting public safety resources," the statement reads. Some are warning that the prank may go far beyond wasting police resources as well.
"We want to be clear: this behavior is not a 'prank' -- it is a crime," the Brown County, Ohio sheriff's office wrote in a post on Facebook, following a separate incident. "Both juveniles involved have been criminally charged for their roles in these incidents."

It's not just the United States, either. British teens are also using AI tools to prank their parents or friends, according to the BBC, with local law enforcement urging people to check whether distressing pictures are a prank before calling the police.

"You know, pranks, even though they can be innocent, can have unintended consequences," Round Rock, Texas, police department commander Andy McKinney told NBC. "And oftentimes young people don't think about those unattended consequences and the families they may impact, maybe their neighbors, and so a real-life incident could be happening with one of their neighbors, and they're draining resources, thinking this is going to be fun."
[2]
Relax, That's Not a Stranger in Your House -- It's Just an AI Prank - Decrypt
Critics say the prank dehumanizes unhoused people while exposing risks of AI-generated deception. Police departments from Massachusetts to Texas are warning residents about a viral TikTok prank that uses AI-generated images to make it appear that a homeless man has entered someone's home, prompting panicked calls to 911.

"Besides being in bad taste, there are many reasons why this prank is, to put it bluntly, stupid and potentially dangerous," said the Salem, Mass. Police Department in a statement. That police department issued a public alert and detailed cases in which recipients "sincerely believed that there was an actual intruder" and dialed 911, necessitating immediate response by officers.

In Texas, police have also confronted the fallout from the viral prank. The Houston Chronicle reported that officers in Round Rock, a suburb of Austin, responded to multiple 911 calls after residents were shown AI-generated photos depicting a "homeless man" inside their homes. Investigators determined the images were fabricated as part of the TikTok trend. The Round Rock Police Department warned that such hoaxes "tie up emergency resources and create unnecessary fear," according to FOX 7 Austin. Local authorities said they are reviewing whether those who knowingly share the doctored images to provoke panic could face false-reporting charges under Texas law.

The trend, known on social media as the "AI homeless man prank," has been documented by tech and local outlets as it spread across TikTok and Snapchat. The Verge first reported that teens are generating realistic images of a stranger in kitchens and hallways and sending them to parents to capture their reactions, with some videos drawing millions of views.
Broadcasters and local newsrooms have echoed law-enforcement concerns: ABC's Good Morning America highlighted warnings from departments that the prank wastes emergency resources and can lead to dangerous misunderstandings; stations in Michigan and Minnesota reported similar advisories. TikTok said it had added labels to videos NBC had flagged to clarify that they are AI-generated. The company's transparency filings to California under AB 587 also outline enforcement steps, including content removal and account bans when posts violate those rules. Celebrity attention has amplified the trend's reach. People chronicled examples of viral posts, including a father who called his son 21 times after receiving a doctored image, and noted that GMA co-host Michael Strahan said he briefly "freaked out" when an assistant sent him a manipulated photo.
[3]
What is the 'AI homeless man prank'? Police say it's dangerous
BIG RAPIDS, Mich. (WOOD) - An AI-driven TikTok trend is resulting in 911 calls from panicked people who think a man has broken into their homes. The prank uses artificial intelligence to create a picture or video of a "homeless man" entering a person's home, going through their fridge, or lying in their bed. The prankster sends the fake video to a loved one, who thinks the convincing images are real.

Police departments in at least four states have received calls about reported home intrusions only to find out the "intruder" was an AI-generated person, The New York Times reports. The West Bloomfield Police near Detroit, Michigan, said it has received reports of people being fooled by the videos. They warn the "AI homeless man prank" wastes emergency responders' resources.

"Here's the problem: officers are responding FAST using lights-and-sirens to what sounds like a call of a real intruder -- and only getting called off once everyone realizes it was a joke," said New York's Yonkers Police Department in a Facebook post. "That's not just a waste of resources... it's a real safety risk for officers who are responding and for the family members who are home if our officers get there before the prank is revealed and rush into the home to apprehend this 'intruder' that doesn't exist."

"It's frustratingly easy to do," said Greg Gogolin, a professor and the director of cyber security and data science at Ferris State University. He created a program in a couple of hours to show how AI technology can manipulate images. "This is a natural language processing machine learning program called a face swapping," Gogolin said. The program was able to make the images look realistic, taking features from a person's face and combining them with other images. Once a technology like this is developed, it often gets used in ways the original creators never intended. "They share that out or sell it. ... It's dispersed, and that's where the real danger is, because people without any technical background can then utilize that the way they wish," Gogolin said.

In some cases, there are things you can look for that could indicate an image is AI-generated. "You might generate something and an arm will be off, the elbows are in the wrong place. It used to be you would often see people with like three arms. A long arm, a long leg, the dynamics were not correct. A lot of that has been corrected or at least drastically improved with the newer versions," Gogolin said.

Gogolin said investigators and law enforcement also need more advanced training: "There are very few degreed investigators that have a cyber security background, let alone a computer science background, particularly at the local level, even at the state level."
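To make the face-swapping idea above concrete without reproducing Gogolin's program (which is not published in these reports), here is a deliberately simplified sketch of the compositing step such tools rely on: copying a region from a source image into a target image and alpha-blending it so the seam is less visible. Real face-swap systems use learned models and facial landmarks rather than a fixed rectangle; the function name, images, and blend factor below are illustrative assumptions, not anything from the source articles.

```python
# Toy illustration of the paste-and-blend compositing behind face-swap
# tools. Real systems align faces with landmarks and use learned models;
# this sketch only shows region blending with NumPy arrays.
import numpy as np

def blend_region(target, source, top, left, alpha=0.85):
    """Blend `source` into `target` at (top, left) with uniform alpha."""
    h, w = source.shape[:2]
    region = target[top:top + h, left:left + w].astype(float)
    blended = alpha * source.astype(float) + (1 - alpha) * region
    out = target.copy()
    out[top:top + h, left:left + w] = blended.astype(target.dtype)
    return out

# Tiny synthetic grayscale "images" standing in for photos.
scene = np.zeros((8, 8), dtype=np.uint8)        # dark background
face = np.full((3, 3), 200, dtype=np.uint8)     # bright patch = "face"
composite = blend_region(scene, face, top=2, left=2)

print(composite[3, 3])   # inside pasted region: 0.85 * 200 = 170
print(composite[0, 0])   # untouched background stays 0
```

The point of the sketch is the one Gogolin makes: the mechanics are simple enough that, once packaged, they require no technical background to use.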
A viral TikTok trend using AI to create fake images of intruders in homes is causing panic and wasting police resources. Law enforcement agencies across multiple states are issuing warnings about the dangerous implications of this prank.
A new TikTok trend has emerged, causing alarm among parents and law enforcement agencies across multiple states. Teenagers are using artificial intelligence (AI) tools to generate realistic images of 'homeless men' inside their homes, sending these fabricated pictures to their parents or friends as a prank [1][2]. This trend has gained significant traction on social media platforms, with some videos garnering millions of views.

The prank involves using advanced AI technology to create convincing images or videos of strangers in various locations within a home, such as the living room, kitchen, or even bedrooms [3]. These AI-generated images are then sent to unsuspecting recipients, often parents, who believe the intruder to be real. The resulting panic has led to numerous 911 calls and unnecessary deployment of law enforcement resources.

Police departments from Massachusetts to Texas have issued warnings about the dangerous implications of this prank [2]. The Salem, Massachusetts police department stated, 'Besides being in bad taste, there are many reasons why this prank is, to put it bluntly, stupid and potentially dangerous' [1]. Law enforcement agencies are concerned about the waste of emergency resources and the potential for dangerous misunderstandings.

Some authorities are considering the legal ramifications of the prank. The Brown County, Ohio sheriff's office has gone as far as to state, 'We want to be clear: this behavior is not a "prank" -- it is a crime' [1]. There are also ethical concerns about the dehumanization of homeless individuals and the broader implications of using AI for deception.

Greg Gogolin, a professor and director of cybersecurity and data science at Ferris State University, demonstrated how easily such AI-manipulated images can be created. He explained that the technology uses a 'natural language processing machine learning program called a face swapping' [3]. This highlights the accessibility and potential misuse of advanced AI tools.

This trend underscores the growing challenge of distinguishing between real and AI-generated content. As AI technology becomes more sophisticated and widely available, there are concerns about its potential for spreading misinformation and causing real-world harm [2]. The incident also highlights the need for improved digital literacy and more advanced training for law enforcement in dealing with AI-related incidents.

Summarized by Navi
10 Oct 2025•Technology