2 Sources
[1]
Creators use AI to prank family with fake 'homeless intruders'
TikTok's latest trend uses AI to fake home intrusions -- and it's dangerous. Credit: TikTok / mmmjoemele / julieandcorey

There's a new TikTok trend, and it's dangerous, manipulative, and feeds off the dehumanization of people facing housing insecurity. People are using AI to generate false images of "homeless" men entering their houses to trick their parents, roommates, or partners.

In one viral video, creator Joe Mele used AI to create an image of someone who looks unhoused standing on the other side of his screened front door. He sent the picture to his dad with the text: "Hey dad there's this guy at the front door, he says he knows you?"

"No I don't know him," his dad seemingly said. "What does he want?"

"He said you guys went to school together, I invited him in," Mele responded, along with another AI-generated photo of the man sitting on his couch.

"JOE PICK UP THE PHONE," his dad responded. "I DON'T KNOW HIM!!!!!!!!" Followed by, "Hello???" along with three missed calls.

"He said he's hungry, grabbing a quick snack," Mele sent again with another AI-generated photo of the same AI-generated man taking food out of an open refrigerator.

"PICK UP THE PHONE," his dad said. "Are you getting my calls?" along with a screenshot of seven missed calls.

This goes on for some time, as Mele tells it. Mele sends AI-generated photos of the man using his dad's toothbrush and sleeping in his dad's bed.

The video has racked up over 10.4 million views, and it's not the only one. There are dozens of videos with thousands of views all following the same trend, many of which use Google Gemini AI, according to one user. Google recently added its new Nano Banana AI image tool to Gemini, which makes it easy to edit photos.

Of course, Mele's entire video could be some kind of scripted skit, but Mele's hardly the only one making videos like this. Not all parents, roommates, and partners respond with panicked texts and phone calls, as intended.
Some respond with an immediate call to the police. The BBC reported that Dorset Police have received calls based on the prank and asked people to "please attempt to check it isn't a prank before [dialing] 999" if they "receive a message and pictures similar to the above antics from friends or family." The Salem Police Department in Massachusetts also posted a news release about the trend, calling the prank "stupid and potentially dangerous."

Not only does the prank involve manipulating loved ones, but it's also a pretty blatant dehumanization of people facing housing insecurity, depicting them as scary, dirty, or invasive -- all harmful stereotypes -- and using them as a prop for a joke.

"This prank dehumanizes the homeless, causes the distressed recipient to panic and wastes police resources," the City of Salem Police Department wrote. "Police officers who are called upon to respond do not know this is a prank and treat the call as an actual burglary in progress thus creating a potentially dangerous situation."
[2]
'AI homeless man prank' on social media prompts concern from local authorities
He's gray-haired, bearded, wearing no shoes and standing at Rae Spencer's doorstep. "Babe," the content creator wrote in a text to her husband. "Do you know this man? He says he knows you?"

Spencer's husband, who responded immediately with "no," appeared to express shock when she then sent more images of the man pictured inside their home, sitting on their couch and taking a nap on their bed. She said he FaceTimed her, "shaking" in fear.

But the man wasn't real. Spencer, based in St. Augustine, Florida, had created the images using an artificial intelligence-based image generator. She sent them to her husband and posted their exchange to TikTok as part of a viral trend that some online refer to as the "AI homeless man prank."

Over 5 million people have liked Spencer's video on TikTok, where more than 1,200 videos carry the hashtag #homelessmanprank, most of them related to the recent trend. Some have also used the hashtag #homelessman to post their videos, all of which center on the idea of tricking people into believing that there is a stranger inside their home. Several people have also posted tutorials on how to make the images. The trend has also spread to other social media platforms, including Snapchat and Instagram.

As the prank gains traction online, local authorities have started issuing warnings to participants -- who they say are primarily teens -- about the dangers of misusing AI to spread false information.

"Besides being in bad taste, there are many reasons why this prank is, to put it bluntly, stupid and potentially dangerous," police officials in Salem, Massachusetts, wrote on their website this month. "This prank dehumanizes the homeless, causes the distressed recipient to panic and wastes police resources. Police officers who are called upon to respond do not know this is a prank and treat the call as an actual burglary in progress thus creating a potentially dangerous situation."
Even overseas, some local officials have reported false home invasions tied to the trend. In the United Kingdom, Dorset Police issued a warning after the force deployed resources when it received a "call from an extremely concerned parent" last week, only to learn it was a prank, according to the BBC. An Garda Síochána, Ireland's national police department, also wrote a message on its Facebook and X pages, sharing two recent images authorities received that were made using generative AI tools.

The prank is the latest example of AI's ability to deceive through fake imagery. The proliferation of photorealistic AI image and video generators in recent years has given rise to an internet full of AI-made "slop": media of fake people and scenarios that -- despite exhibiting telltale signs of AI -- fool many people online, especially older internet users. As the technologies grow more sophisticated, many find it even harder to distinguish between what's real and what's fake. Last year, Katy Perry shared that her own mother was tricked by an AI-generated image of her attending the Met Gala.

Even if most such cases don't involve nefarious intent, the pranks underscore how easily AI can potentially manipulate real people. With the recent release of Sora 2, an OpenAI employee touted the video generator's ability to create realistic security video of CEO Sam Altman stealing from Target -- a clip that drew concern from some who worry about how AI might be used to carry out mass manipulation campaigns.

AI image and video generators typically put watermarks on their outputs to indicate the use of AI. But users can easily crop them out. It's unclear which specific AI models were used in many of the video pranks.
When NBC News asked OpenAI's ChatGPT to "generate an image of a homeless man in my home," the bot replied, "I can't create or edit an image like that -- it would involve depicting a real or implied person in a situation of homelessness, which could be considered exploitative or disrespectful." Asked the same question, Gemini, Google's AI assistant, replied: "Absolutely. Here is the image you requested."

OpenAI and Google didn't immediately respond to requests for comment. Representatives for Snap and Meta (which owns Instagram) didn't provide comments. Reached for comment, TikTok said it added labels to videos that NBC News had flagged related to the trend to clarify that they are AI-generated content. TikTok also referred NBC News to its Community Guidelines, which require creators to label AI-generated or significantly edited content that shows realistic-looking scenes or people.

Oak Harbor, Washington, police officials warned that "AI tools can create highly convincing images, and misinformation can spread quickly, causing unnecessary fear or diverting public safety resources." The police department issued a statement after a social media post appeared to show "a homeless individual was present on the Oak Harbor High School Campus." The claim turned out to be a hoax, officials said. The police department said it's working with the school district to investigate the incident and "address the dissemination of this fabricated content."

No specific laws address that type of AI misuse directly. But in at least one instance, in Brown County, Ohio, charges were brought. "We want to be clear: this behavior is not a 'prank' -- it is a crime," the sheriff's department, which reported two separate recent incidents related to the trend this month, wrote recently on Facebook. "Both juveniles involved have been criminally charged for their roles in these incidents." The sheriff's department didn't say what the suspects were charged with.
It didn't respond to a request for comment.

In its message in Massachusetts, the Salem Police Department advised "pranksters" to "Think Of The Consequences Before You Prank," citing a state law that penalizes people who engage in "Willful and malicious communication of false information to public safety answering points."

In Round Rock, Texas, Andy McKinney, commander of the Round Rock Police Department's Patrol Division, warned NBC News that such cases "could have consequences" in the future. The department recently responded to two home invasion calls in a single weekend, both of which stemmed from prank texts. One of the calls was from a mom who "believed it was real."

"You know, pranks, even though they can be innocent, can have unintended consequences," he said. "And oftentimes young people don't think about those unintended consequences and the families they may impact, maybe their neighbors, and so a real-life incident could be happening with one of their neighbors, and they're draining resources, thinking this is going to be fun."

For now, Round Rock police are treating such incidents as educational opportunities. "We want to encourage parents and family members to have open conversations and talk about" these things with their kids, McKinney said. "Like: 'Hey, I know things are funny. I know that sometimes online trends are fun, but we need to think about the dangers before we can do them.'"
A new TikTok trend using AI to create fake images of 'homeless intruders' has gone viral, prompting warnings from law enforcement and raising ethical concerns about AI misuse and the dehumanization of homeless individuals.
A new TikTok trend has emerged, causing concern among law enforcement and raising ethical questions about AI misuse. Content creators are using artificial intelligence to generate fake images of 'homeless intruders' in their homes, which they then use to prank family members, roommates, or partners [1][2].

The trend typically involves creators using AI image generators to produce realistic photos of unkempt, shoeless individuals at their doorstep or inside their homes. These images are then sent to unsuspecting family members or friends, often accompanied by text messages claiming the person is an acquaintance or has entered the home [1].

One viral video by creator Joe Mele, which garnered over 10.4 million views, showcased this prank in action. Mele sent his father a series of AI-generated images depicting a 'homeless man' progressively making himself at home, causing his father to panic and repeatedly attempt to call him [1].

The trend has gained significant traction on social media platforms, with the hashtag #homelessmanprank appearing in over 1,200 TikTok videos. Some users have reported using Google's Gemini AI, which recently added the Nano Banana AI image tool, to create these deceptive images [1][2].

The prank has prompted responses from police departments in multiple countries. Authorities in Salem, Massachusetts; Dorset, UK; and Ireland have issued warnings about the trend, citing concerns over wasted police resources and potential dangers [1][2]. The Salem Police Department called the prank "stupid and potentially dangerous," noting that officers responding to these calls treat them as actual burglaries in progress, creating potentially hazardous situations [1].

Beyond the immediate safety concerns, the trend has been criticized for its dehumanizing portrayal of individuals facing housing insecurity. By depicting homeless people as scary, dirty, or invasive, the prank reinforces harmful stereotypes and uses vulnerable populations as props for entertainment [1].

This trend also highlights broader concerns about AI's potential for deception and manipulation. As AI-generated content becomes increasingly realistic and accessible, distinguishing between real and fake imagery grows more challenging, particularly for older internet users [2].

In response to the trend, TikTok has stated that it requires creators to label AI-generated or significantly edited content that shows realistic-looking scenes or people. The platform has also added labels to some videos related to the trend, clarifying that they contain AI-generated content [2].
Summarized by Navi