2 Sources
[1]
Crims using social media images in virtual kidnapping scams
Criminals are altering social media and other publicly available images of people to use as fake proof of life photos in "virtual kidnapping" and extortion scams, the FBI warned on Friday. In these truly heinous extortion attempts, miscreants contact their victims via text messages and claim to have kidnapped their loved one. Some of these are totally fake, and don't involve any abducted people. However, the FBI's Friday alert also warns about posting real missing person info online, indicating that scammers may also be scraping these images and contacting the missing person's family with fake information.

The moves are similar to the age-old grandparent scams, in which fraudsters call seniors and impersonate their children or grandchildren, purporting to be in great physical danger if the grandparent doesn't send needed money ASAP. The FBI classifies this type of fraud as "emergency scams" [PDF] and says it received 357 complaints about them last year, costing victims $2.7 million.

This newer version, however, adds a 2025 twist: In addition to sending a text, the criminals typically send what appears to be a real image or video of the "kidnapped" person to show proof of life. Plus, to increase the pressure on the victims to pay, the scammers often "express significant claims of violence towards the loved one if the ransom is not paid immediately," the federal cops said. The FBI didn't immediately respond to The Register's questions, including how many complaints and/or cases of these fake kidnappings it has received.

It's easy enough to find photos and videos of people - and connect potential victims to family and friends - via social media, and then use AI tools to doctor this footage, or create entirely new images or videos. However, these proof-of-life images, "upon close inspection often [reveal] inaccuracies when compared to confirmed photos of the loved one," the FBI notes. For example, the supposed kidnapped victim may be missing a tattoo or scar, or the body proportions might be a bit off. "Criminal actors will sometimes purposefully send these photos using timed message features to limit the amount of time victims have to analyze the images," the alert adds.

To protect yourself - and your family and friends - from falling victim to these types of scams, the FBI recommends not providing personal information to strangers while traveling, and setting a code word that only you and your loved ones know. Also, screenshot or record proof-of-life scam images if possible, and report any incidents to the FBI's Internet Crime Complaint Center at www.ic3.gov. Include as much information as possible about the interaction, including phone numbers, payment information, text and audio communications, and photos or videos. And always attempt to contact the supposed victim before paying any ransom demand.

Criminals are also using fake images and videos to scam corporations, typically with an AI boost. The technique is perhaps most prevalent in the ongoing fake IT worker scams that have hit companies across multiple sectors, including one high-profile scheme that generated at least $88 million over about six years, the Department of Justice said last year. These scammers largely originate from North Korea - or at least funnel money back to Pyongyang after fraudulently obtaining a remote worker job, generally in a software development role. Increasingly, they also rely on AI tools to not only write resumes and cover letters, but also to help with video call interviews with software that changes the interviewee's appearance in real time. ®
[2]
AI is helping criminals create virtual kidnapping scams
How it works: In these scams, the FBI says criminals text victims claiming to have kidnapped a loved one and demand payment, often escalating violent threats if the ransom is not paid immediately.
* When victims ask for proof, scammers send what appears to be a real photo or video of the loved one, sometimes using timed messaging features so the recipient has limited time to scrutinize it.

Yes, but: Upon closer view, many AI-generated photos and videos contain inaccuracies.
* Victims should look for missing tattoos or scars and imprecise body proportions compared with real images of the person, the FBI said.

By the numbers: A deepfake attack occurred every five minutes globally in 2024, while digital document forgeries jumped 244% year-over-year, according to the Entrust Cybersecurity Institute.
* U.S. losses from fraud that relies on generative AI are projected to reach $40 billion by 2027, according to the Deloitte Center for Financial Services.

Zoom out: Anyone targeted by a suspected AI scam should always contact their loved one before agreeing to any terms or payments. Establishing a family or friend safe word can also help distinguish real from fake communications.
* The agency notes that criminals act quickly to induce panic, so it's important to pause and question whether the kidnapper's claims are legitimate.
The FBI issued an alert about criminals using AI to alter social media images for virtual kidnapping scams. Scammers text victims claiming to have kidnapped loved ones, then send AI-generated photos as proof. Emergency scams of this kind cost victims $2.7 million last year, deepfake attacks now occur every five minutes globally, and U.S. losses from generative AI-enabled fraud are projected to hit $40 billion by 2027.
The FBI issued a stark warning about criminals using social media images in increasingly sophisticated virtual kidnapping scams that leverage AI to create fake proof of life photos [1]. These extortion scams begin when miscreants contact victims via text messages, claiming to have kidnapped a loved one and demanding immediate ransom payment. What makes these schemes particularly alarming is how AI is helping criminals fabricate seemingly authentic images and videos of supposed victims to pressure families into paying quickly.
Source: The Register
The scammers typically express significant claims of violence toward the loved one if the ransom is not paid immediately, creating intense psychological pressure on victims [1]. When victims request proof, criminals send what appears to be a real photo or video of the person, sometimes using timed messaging features so recipients have limited time to scrutinize the content [2]. These AI-generated images are created by scraping publicly available photos from social media platforms and manipulating them with AI tools to show the person in distress or captivity.

The FBI classifies these schemes as emergency scams and received 357 complaints about them last year, costing victims $2.7 million [1]. However, the broader threat landscape is expanding rapidly. Deepfake attacks occurred every five minutes globally in 2024, while digital document forgeries jumped 244% year-over-year, according to the Entrust Cybersecurity Institute [2]. The Deloitte Center for Financial Services projects that U.S. losses from fraud relying on generative AI will reach $40 billion by 2027, signaling a dramatic escalation in AI-powered criminal activity [2].

These virtual kidnapping scams share similarities with grandparent scams, in which fraudsters call seniors impersonating children or grandchildren in danger. The newer version, however, adds AI-altered images as a critical component that makes the deception more convincing and harder to detect immediately [1].

Despite their sophistication, fake proof of life photos often reveal inaccuracies upon close inspection when compared to confirmed photos of the loved one, the FBI notes [1]. Victims should look for missing tattoos or scars and imprecise body proportions compared with real images of the person [2]. Criminal actors deliberately send these photos using timed message features to limit the amount of time victims have to analyze the images, exploiting panic to prevent careful scrutiny [1].
Source: Axios
Anyone targeted should screenshot or record proof-of-life scam images if possible and report incidents to the FBI's Internet Crime Complaint Center at www.ic3.gov, including as much information as possible about the interaction: phone numbers, payment information, text and audio communications, and photos or videos [1].
To protect against these deepfake-enabled schemes, the FBI recommends several precautions. Establishing a code word or safe word that only you and your loved ones know can help distinguish real from fake communications [1][2]. Always attempt to contact the supposed victim before paying any ransom demand, and avoid providing personal information to strangers while traveling [1].

The agency emphasizes that criminals act quickly to induce panic, so it's critical to pause and question whether the kidnapper's claims are legitimate [2]. As AI tools become more accessible and sophisticated, these extortion scams are expected to become harder to detect, making public awareness and preventive measures increasingly important for families and individuals who maintain active social media profiles.

Summarized by Navi