Neal O'Farrell is considered one of the world's longest-serving cybersecurity and fraud experts, with more than 40 years in the field and counting. As head of the Identity Theft Council, he worked with law enforcement agencies across the country and around the world, counseled thousands of victims of fraud, and interviewed professional identity thieves and scammers. He served as a member of the Federal Communications Commission's cybersecurity roundtable, and as security and privacy advisor to President Barack Obama's STOCK Act panel, an effort to prevent insider trading by members of Congress and senior federal employees.
Artificial intelligence helps us get information faster, but it's also making it easier for cybercriminals to get hold of your personal data. If you've been targeted by any type of cybercrime, scam or fraud attempt in the last 12 months, chances are AI played a role in it.
AI-generated content contributed to more than $12 billion in fraud losses in 2023, according to a Deloitte digital fraud study. That figure could more than triple, to over $40 billion in the US, by 2027.
Artificial intelligence has already been woven into our everyday lives. Need a recipe? Ask ChatGPT. Run a Google search? You were probably served an AI-generated summary in response to your question. Even financial apps are starting to leverage AI to help you stick to your budget or find ways to save.
With some of the most advanced AI technologies in the world now in the hands of cybercriminals, we're witnessing a completely new world of digital fraud, one we all need to be better prepared for.
Cybercriminals have only so many hours in their day, and only so many people they can hire to help carry out their schemes. AI helps to solve both of those problems.
Now, a handful of coded instructions is often all it takes to create a global phishing attack that can be translated into multiple languages and stripped of many telltale giveaways that the message you're reading is a scam. AI can fix bad grammar, correct spelling mistakes and rewrite awkward greetings to make phishing messages seem more legitimate.
AI can also help cybercriminals better orchestrate phishing attacks on a specific industry or company, or around a specific event, like a conference, a trade show, or a national holiday.
Researchers at the University of Illinois Urbana-Champaign recently used voice-enabled AI bots to pull off some of the most common scams reported to the federal government, safely returning the money to victims afterward.
In some of the scams, the bots not only had a success rate of more than 60%, they were also able to pull off the scam in a matter of seconds.
AI lets criminals sift through trillions of data points quickly; before, they struggled to work through the troves of data (think billions upon billions of personal records) stolen in data breaches or purchased on the dark web.
Scammers can now use AI to spot exploitable patterns and other valuable information in large data sets, and to help orchestrate their attacks. AI is also bolstering other forms of fraud.
Synthetic identity theft involves stealing a Social Security number -- usually from a child, the elderly or the homeless -- and combining it with other stolen or falsified information such as names and birthdates to create a new, false identity.
Hackers then use this falsified identity to apply for credit, leaving the SSN's original owner with the bill.
AI helps facilitate this popular form of fraud by making it much easier to create highly realistic forged identity documents and synthetic imagery that mimics real faces and can bypass biometric verification systems like those found on an iPhone.
An AI-assisted deepfake scam occurred every five minutes in 2024, according to an estimate from security firm Entrust.
There are countless stories of fraudsters using AI to successfully scam businesses and everyday people out of millions of dollars. Bad actors deploy highly realistic but completely fake videos and voices of people the victims know, fakes that can fool even the most cautious among us.
Less than a year ago, an employee at Arup, a British design and engineering firm, was tricked into transferring $25 million to scammers who used a deepfake video call impersonating the company's CFO.
AI isn't just cloning voices and faces though -- it's also capable of copying human personalities, according to a recent study from researchers at Stanford University and Google DeepMind.
With little information about their subjects, the artificial personalities were able to mimic political beliefs, personality traits and likely responses to questions, the study found. Scammers could exploit that ability to fool victims.
These results -- coupled with advancements in deepfake video and voice cloning already in use by cybercriminals -- could make it even more difficult to tell if the person you're speaking to online or on the phone is real or an AI doppelganger.
Despite the world's reliance on technology, physical documentation is still predominantly used to verify your identity.
AI has become skilled at creating believable versions of passports, driver's licenses, birth certificates and more, leaving businesses and governments scrambling to find better ways to confirm identities in the future.
The same tips that protect you from human scammers can also help protect you from AI-assisted scams. That means being observant, protecting your bank accounts with multiple layers of security, embracing multifactor authentication, freezing your credit and monitoring your credit report, and enrolling in identity theft protection.
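One of those layers, multifactor authentication, usually boils down to a short, time-limited one-time code from an authenticator app. Purely as an illustration (not something from this article), here is a minimal Python sketch of how a standard RFC 6238 time-based one-time password (TOTP) is computed from a shared secret:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, interval: int = 30, at=None) -> str:
    """Compute an RFC 6238 time-based one-time password (TOTP)."""
    key = base64.b32decode(secret_b32.upper())
    # The moving factor is the number of elapsed time steps (default 30s).
    counter = int((time.time() if at is None else at) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Demo with the RFC 6238 test secret "12345678901234567890" (base32-encoded).
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, digits=8, at=59))  # prints 94287082, the RFC's SHA-1 test vector
```

Because the code depends on both a secret only you hold and the current time, a scammer who phishes your password still can't log in without it, which is why enabling MFA remains one of the most effective defenses.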
AI-assisted scams will only become more convincing as the technology advances. Staying aware of common scam tactics, and applying common sense and caution, remain your best defense against these attempts.