3 Sources
[1]
Crims using social media images in virtual kidnapping scams
Criminals are altering social media and other publicly available images of people to use as fake proof of life photos in "virtual kidnapping" and extortion scams, the FBI warned on Friday.

In these truly heinous extortion attempts, miscreants contact their victims via text messages and claim to have kidnapped their loved one. Some of these are totally fake, and don't involve any abducted people. However, the FBI's Friday alert also warns about posting real missing person info online, indicating that scammers may also be scraping these images and contacting the missing person's family with fake information.

The moves are similar to the age-old grandparent scams, in which fraudsters call seniors and impersonate their children or grandchildren, purporting to be in great physical danger if the grandparent doesn't send needed money ASAP. The FBI classifies this type of fraud as "emergency scams" [PDF], and says it received 357 complaints about them last year, costing victims $2.7 million.

This newer version, however, adds a 2025 twist: In addition to sending a text, the criminals typically send what appears to be a real image or video of the "kidnapped" person to show proof of life. Plus, to increase the pressure on the victims to pay, the scammers often "express significant claims of violence towards the loved one if the ransom is not paid immediately," the federal cops said.

The FBI didn't immediately respond to The Register's questions, including how many complaints and/or cases of these fake kidnappings it has received.

It's easy enough to find photos and videos of people - and connect potential victims to family and friends - via social media, and then use AI tools to doctor this footage, or create entirely new images or videos. However, these proof-of-life images, "upon close inspection often [reveal] inaccuracies when compared to confirmed photos of the loved one," the FBI notes. For example, the supposed kidnap victim may be missing a tattoo or scar, or the body proportions might be a bit off.

"Criminal actors will sometimes purposefully send these photos using timed message features to limit the amount of time victims have to analyze the images," the alert adds.

To protect yourself - and your family and friends - from falling victim to these types of scams, the FBI recommends not providing personal information to strangers while traveling, and setting a code word that only you and your loved ones know. Also, screenshot or record proof-of-life scam images if possible, and report any incidents to the FBI's Internet Crime Complaint Center at www.ic3.gov. Include as much information as possible about the interaction, including phone numbers, payment information, text and audio communications, and photos or videos. And always attempt to contact the supposed victim before paying any ransom demand.

Criminals are also using fake images and videos to scam corporations, typically with an AI boost. The technique is perhaps most prevalent in the ongoing fake IT worker scams that have hit companies across multiple sectors, including one high-profile scheme that generated at least $88 million over about six years, the Department of Justice said last year. These scammers largely originate from North Korea - or at least funnel money back to Pyongyang after fraudulently obtaining a remote worker job, generally in a software development role.
Increasingly, they also rely on AI tools to not only write resumes and cover letters, but also to help with video call interviews with software that changes the interviewee's appearance in real time. ®
[2]
FBI warns of kidnapping scams as hackers turn to AI to provide 'proof of life'
Citizens advised to limit online exposure, set family code words, and verify loved ones before paying

Hackers are using Generative Artificial Intelligence (GenAI) to create convincing deepfake videos which are then used as proof of life in kidnapping and extortion scams. This is according to the US Federal Bureau of Investigation (FBI), which recently released a new Public Service Announcement (PSA) warning citizens not to fall for the trick.

Here is how the scam works: the criminals will pick a target and scour social media and other sources for images and videos. If they find enough material, they will feed it into an AI tool to create videos and images depicting their target's loved ones as kidnapped. Then, they will reach out to the victims and demand an immediate ransom payment in order to "release" their hostage.

The scam might not be that widespread, but it's been around for a little while - The Guardian reported on it two years ago. Still, with AI getting better by the minute, it's safe to assume these scams are getting more common, prompting a reaction from the FBI.

The FBI also said that these photos and videos are not perfect. With a little pixel hunting, they can be identified as fake. However, crooks know this too, so the messages they send are usually timed and expire before any meaningful analysis can be done: "Examples of these inaccuracies include missing tattoos or scars and inaccurate body proportions," the PSA reads. "Criminal actors will sometimes purposefully send these photos using timed message features to limit the amount of time victims have to analyze the images."

To defend against these attacks, the FBI first suggests citizens be more mindful about their privacy, both when posting photos online and when providing personal information to strangers while traveling. Then, it suggests establishing a code word that only they and their loved ones know and, most importantly, trying to contact the loved one before making any payments.
[3]
AI is helping criminals create virtual kidnapping scams
How it works: In these scams, the FBI says criminals text victims claiming to have kidnapped a loved one and demand payment, often escalating violent threats if the ransom is not paid immediately.
* When victims ask for proof, scammers send what appears to be a real photo or video of the loved one, sometimes using timed messaging features so the recipient has limited time to scrutinize it.
Yes, but: Upon closer view, many AI-generated photos and videos contain inaccuracies.
* Victims should compare the images against real photos of the person and look for missing tattoos or scars and imprecise body proportions, the FBI said.
By the numbers: A deepfake attack occurred every five minutes globally in 2024, while digital document forgeries jumped 244% year-over-year, according to the Entrust Cybersecurity Institute.
* U.S. losses from fraud that relies on generative AI are projected to reach $40 billion by 2027, according to the Deloitte Center for Financial Services.
Zoom out: Anyone targeted by a suspected AI scam should always contact their loved one before agreeing to any terms or payments. Establishing a family or friend safe word can also help distinguish real from fake communications.
* The agency notes that criminals act quickly to induce panic, so it's important to pause and question whether the kidnapper's claims are legitimate.
The FBI issued a public alert about virtual kidnapping scams where criminals use AI to doctor social media images into fake proof of life photos. Fraudsters text victims claiming to have kidnapped loved ones and send AI-generated images to extort ransom payments. With deepfake attacks occurring every five minutes globally and U.S. fraud losses projected to hit $40 billion by 2027, the agency urges families to establish code words and verify claims before paying.
The FBI released a public service announcement warning that criminals are exploiting AI to orchestrate virtual kidnapping scams, using doctored social media images as fake proof of life to extort victims [1]. These extortion scams involve fraudsters contacting victims via text messages, claiming to have kidnapped a family member and demanding immediate ransom payment [2]. What makes these schemes particularly insidious is the use of Generative Artificial Intelligence to create convincing deepfake videos and images that appear to show the supposed victim in captivity [3].
Source: TechRadar
The scammers typically source photographs and videos from social media platforms and other publicly available sources, then feed this material into AI tools to generate fabricated proof of life content. To amplify pressure on victims, the criminals often express significant threats of violence toward loved ones if the ransom demand is not met immediately [1]. These extortion attempts represent a modern evolution of traditional emergency scams, which the FBI reported cost victims $2.7 million across 357 complaints last year [1].
Source: The Register
The threat landscape is expanding rapidly. Deepfake attacks occurred every five minutes globally in 2024, while digital document forgeries jumped 244% year-over-year, according to the Entrust Cybersecurity Institute [3]. Even more alarming, U.S. financial losses from fraud leveraging generative AI are projected to reach $40 billion by 2027, according to the Deloitte Center for Financial Services [3]. These figures underscore the urgency for individuals and families to understand how hackers are weaponizing AI technology.

The FBI's alert also highlights that scammers may scrape real missing person information posted online, then contact families with fabricated updates to extract money [1]. This tactic preys on the most vulnerable moments, when families are desperately seeking information about their loved ones.

While AI-generated images have become increasingly sophisticated, they still contain detectable inaccuracies upon close examination. The FBI notes that these fabricated photos often reveal discrepancies when compared to confirmed images of the supposed victim, such as missing tattoos or scars, or inaccurate body proportions [1][2]. However, fraudsters have adapted to this vulnerability by purposefully sending these photos using timed message features that limit the amount of time victims have to analyze the images [1][2].

This tactical choice exploits the panic and urgency these situations naturally create, preventing victims from conducting the thorough scrutiny that might reveal the deception. The psychological manipulation is deliberate: criminals act quickly to induce panic, making it critical for potential victims to pause and question whether the kidnapper's claims are legitimate [3].
The FBI recommends several protective measures to defend against these schemes. First, individuals should be more mindful about online privacy, limiting what personal information they share on social media and avoiding providing details to strangers while traveling [1][2]. Establishing family code words that only you and your loved ones know can serve as an authentication method to distinguish real emergencies from fabricated ones [1][3].
Source: Axios
Most importantly, always attempt to contact the supposed victim directly before making any ransom payment [1][3]. If possible, screenshot or record the scam images and report incidents to the FBI's Internet Crime Complaint Center at www.ic3.gov, including phone numbers, payment information, text and audio communications, and any photos or videos received [1].

The same AI technology enabling virtual kidnapping scams is being deployed in corporate scams, particularly in fake IT worker schemes. These operations have hit companies across multiple sectors, with one high-profile scheme generating at least $88 million over approximately six years, according to the Department of Justice [1]. Scammers, many originating from North Korea, increasingly rely on AI tools not only to write resumes and cover letters but also to manipulate video call interviews with software that changes the interviewee's appearance in real time [1]. As AI capabilities advance, both individuals and organizations should expect scammers to develop more sophisticated methods that blur the line between authentic and fabricated content, making vigilance and verification protocols essential defenses against these evolving threats.