FBI warns AI is helping criminals create virtual kidnapping scams with fake proof-of-life photos

Reviewed by Nidhi Govil

The FBI issued an alert about criminals using AI to alter social media images for virtual kidnapping scams. Scammers text victims claiming to have kidnapped loved ones, then send AI-generated photos as proof of life. These extortion scams cost victims $2.7 million last year, deepfake attacks now occur every five minutes globally, and U.S. losses from generative AI-enabled fraud are projected to hit $40 billion by 2027.

FBI Issues Alert on AI-Powered Virtual Kidnapping Scams

The FBI issued a stark warning about criminals using social media images in increasingly sophisticated virtual kidnapping scams that leverage AI to create fake proof-of-life photos [1]. These extortion scams begin when miscreants contact victims via text message, claiming to have kidnapped a loved one and demanding immediate ransom payment. What makes these schemes particularly alarming is how AI helps criminals fabricate seemingly authentic images and videos of supposed victims to pressure families into paying quickly.

Source: The Register

The scammers typically threaten violence against the loved one if the ransom is not paid immediately, creating intense psychological pressure on victims [1]. When victims request proof, criminals send what appears to be a real photo or video of the person, sometimes using timed messaging features so recipients have limited time to scrutinize the content [2]. These AI-generated images are created by scraping publicly available photos from social media platforms and manipulating them with AI tools to show the person in distress or captivity.

The Scale and Financial Impact of AI-Driven Fraud

The FBI classifies these schemes as emergency scams and received 357 complaints about them last year, with losses to victims totaling $2.7 million [1]. However, the broader threat landscape is expanding rapidly. Deepfake attacks occurred every five minutes globally in 2024, while digital document forgeries jumped 244% year-over-year, according to the Entrust Cybersecurity Institute [2]. The Deloitte Center for Financial Services projects that U.S. losses from fraud relying on generative AI will reach $40 billion by 2027, signaling a dramatic escalation in AI-powered criminal activity [2].

These virtual kidnapping scams share similarities with grandparent scams, in which fraudsters call seniors impersonating children or grandchildren in danger. The newer version, however, adds AI-altered images as a critical component that makes the deception more convincing and harder to detect immediately [1].

How to Identify AI-Generated Photos in Scams

Despite their sophistication, fake proof-of-life photos often reveal inaccuracies upon close inspection when compared with confirmed photos of the loved one, the FBI notes [1]. Victims should compare the images against real photos of the person, looking for details such as missing tattoos or scars and imprecise body proportions [2]. Criminal actors deliberately send these photos using timed message features to limit the time victims have to analyze the images, exploiting panic to prevent careful scrutiny [1].

Source: Axios

The Internet Crime Complaint Center recommends that anyone targeted screenshot or record proof-of-life scam images if possible and report the incident to www.ic3.gov, including phone numbers, payment information, text and audio communications, and any photos or videos [1].

Protection Strategies and What to Watch

To protect against these deepfake-enabled schemes, the FBI recommends several precautions. Establishing a code word or safe word that only you and your loved ones know can help distinguish real from fake communications [1][2]. Always attempt to contact the supposed victim before paying any ransom demand, and avoid providing personal information to strangers while traveling [1].

The agency emphasizes that criminals act quickly to induce panic, so it's critical to pause and question whether the kidnapper's claims are legitimate [2]. As AI tools become more accessible and sophisticated, experts anticipate these extortion scams will become harder to detect, making public awareness and preventive measures increasingly important for families and individuals who maintain active social media profiles.
