Curated by THEOUTPOST
On Wed, 18 Dec, 4:03 PM UTC
7 Sources
[1]
How to protect yourself from AI scams this holiday season
Scammers are using generative artificial intelligence tools to create more convincing fake text and voices to commit fraud, according to a recent FBI warning to the public. Olivier Morin/AFP via Getty Images

Don't be duped by a scam made with artificial intelligence tools this holiday season. The FBI issued a public service announcement earlier this month, warning that criminals are exploiting AI to run bigger frauds in more believable ways.

While AI tools can be helpful in our personal and professional lives, they can also be used against us, said Shaila Rana, a professor at Purdue Global who teaches cybersecurity. "[AI tools are] becoming cheaper [and] easier to use. It's lowering the barrier of entry for attackers so scammers can create really highly convincing scams."

There are some best practices for protecting yourself against scams in general, but with the rise of generative AI, here are five specific tips to consider.

The most common AI-enabled scams are phishing attacks, according to Eman El-Sheikh, associate vice president of the Center for Cybersecurity at the University of West Florida. Phishing is when bad actors attempt to obtain sensitive information to commit crimes or fraud. "[Scammers are using] generative AI to create content that looks or seems authentic but in fact is not," said El-Sheikh.

"Before, we would tell people, 'look for grammatical errors, look for misspellings, look for something that just doesn't sound right.' But now with the use of AI ... it can be extremely convincing," Rana told NPR. However, you should still check for subtle tells that an email or text message could be fraudulent. Check for misspellings in the domain name of email addresses and look for variations in the company's logo. "It's very important to pay attention to those details," said El-Sheikh.

AI-cloned voice scams are on the rise, Rana told NPR. "Scammers just need a few seconds of your voice from social media to create a clone," she said.
Combined with personal details found online, scammers can convince targets that they are their loved ones. Family emergency scams, or "grandparent scams," involve calling a target, creating an extreme sense of urgency by pretending to be a loved one in distress, and asking for money to get them out of a bad situation. One common scheme is telling the target their loved one is in jail and needs bail money.

Rana recommends coming up with a secret code word to use with your family. "So if someone calls claiming to be in trouble or they're unsafe, ask for the code word and then [hang up and] call their real number [back] to verify," she said.

You can also buffer yourself against these types of scams by screening your calls. "If someone's calling you from a number that you don't recognize that is not in your contacts, you can go ahead and automatically send it to voicemail," says Michael Bruemmer, head of the global data breach resolution group at the credit reporting company Experian.

"Social media accounts can be copied or screen scraped," warned Bruemmer. To prevent impersonation, reduce your digital footprint. "Set social media accounts to private, remove phone numbers from public profiles. And just be careful and limit what personal information you share publicly," said Rana. Leaving your social media profiles public "makes it easier for scammers to get a better picture of who you are, [and] they can use [that] against you," she said.

Scammers can use AI to make fake websites that seem legitimate. The FBI notes AI can be used to generate content for fraudulent websites for cryptocurrency scams and other types of investment schemes. Scammers have also been reported to embed AI-powered chatbots in these websites, in an effort to prompt people to click on malicious links. "You should always check your browser window ... and make sure that [you're on] an encrypted site. It [will start] with https://," said Bruemmer.
He also said to make sure the website domain is spelled correctly: "[fraudulent websites] can have a URL that is just one letter or character off."

If you're still on the fence about whether the website you're using is legit, you can try looking up the age of a site in a WHOIS domain lookup database. Rana said to be extremely wary of websites that were only recently created. Amazon, for example, was founded in 1994. If the WHOIS database says the "Amazon" site you're looking up was created this millennium, you know you're in the wrong place.

The FBI warns generative AI tools have been used to create images of natural disasters and global conflict in an attempt to secure donations for fraudulent charities. They have also been used to create deepfake images or videos of famous people promoting investment schemes and non-existent or counterfeit products.

When you come across a photo or video prompting you to spend money, use caution before engaging. Look for common telltale signs that a piece of media could be a deepfake. As Shannon Bond reported for NPR in 2023, when it comes to creating photos, AI generators "can struggle with creating realistic hands, teeth and accessories like glasses and jewelry." AI-generated videos often have tells of their own, "like slight mismatches between sound and motion and distorted mouths. They often lack facial expressions or subtle body movements that real people make," Bond wrote.

"It's very important for all of us to be responsible in a digital AI-enabled world and do that on a daily basis ... especially now around the holidays when there's an uptick in such crimes and scams," said El-Sheikh.
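The WHOIS age check described above can be partly automated. The sketch below only parses the "Creation Date" field out of raw WHOIS output and computes a domain's age in days; actually fetching that output (for example, via the `whois` command-line tool or a registrar's lookup page) is left out, and the sample record here is an illustrative assumption, not live data. Registrars also vary in how they format this field, so a real tool would need more forgiving parsing.

```python
from datetime import datetime, timezone

def creation_date(whois_text: str) -> datetime:
    """Pull the 'Creation Date' field out of raw WHOIS output.

    Handles only the common 'Creation Date: 1994-11-01T05:00:00Z'
    style; real WHOIS records vary by registrar.
    """
    for line in whois_text.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() == "creation date":
            # Normalize the trailing 'Z' so fromisoformat accepts it.
            return datetime.fromisoformat(value.strip().replace("Z", "+00:00"))
    raise ValueError("no Creation Date field found")

def domain_age_days(whois_text: str, now: datetime) -> int:
    """Age of the domain, in whole days, as of 'now'."""
    return (now - creation_date(whois_text)).days

# Illustrative sample record (not a live lookup).
sample = "Domain Name: AMAZON.COM\nCreation Date: 1994-11-01T05:00:00Z\n"
```

A domain claiming to be a decades-old retailer but registered a few weeks ago is exactly the red flag Rana describes.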
[2]
AI impersonators will wreck online security in 2025. Here's what to watch out for
Picture this: You receive an audio message from your sister. She says she's lost her wallet and asks if you can send some cash so she can pay a bill on time.

You're scrolling through social media. A video appears from a celebrity you follow. In it, they ask for contributions toward their latest project.

You receive a video of yourself, showing you in a physically intimate situation.

Just a few years ago, these situations would likely be genuine. But now, thanks to artificial intelligence, a scammer could be contacting you -- and if you can't tell real from fake, you may easily fall for a plea for cash or a blackmail threat.

For 2025, experts are sounding the alarm about AI and its effect on online security. The technology is supercharging the speed and sophistication of attacks -- and in particular, it's making scams that use the likenesses of both famous people and everyday citizens far, far easier. Worse, security groups say this trend will continue to accelerate.

Further reading: Top 9 phishing scams to watch out for in 2024

Here's what to watch out for, why the landscape is changing, and how to protect yourself until more help arrives.

Digital impersonation used to be hard to do. Scammers needed skill or large computational resources to pull such a feat off even moderately well, so your likelihood of encountering fraud in this vein was small. AI has changed the game. Models have been specifically developed to replicate how a person writes, speaks, or looks -- and can then be used to mimic you and others through email, audio messages, or rendered physical appearance. It's more complex and polished than most people expect, especially if you think of online scams as foreign princes asking for money, or clicking on a link to reroute a misdirected package. This style of copycatting is known as a deepfake, though the term is most commonly used to describe the video and image variations.
You may have already heard about well-known or popular individuals being victims of these attacks, but now the scope has spread. Anyone can be a target. Deepfakes make scam and fraud attempts much more convincing -- whether you're the one encountering them or unknowingly providing your likeness to a scam. AI-generated messages, audio clips, images, and videos can show up in these common schemes:

The result is an online world where you can't as easily trust what you see -- and to stay safe, you can't act as immediately on information.

For the coming months, cybersecurity experts echo one another in their advice: Trust more slowly. Put another way, you should question the intent of messages and media that you encounter, says Siggi Stefnission, Cyber Safety CTO of Gen Digital (parent company of Norton, Avast, and other antivirus software). Think of it as an evolution of wondering if an image is photoshopped -- now you should ask yourself if it's fully generated, and for what purpose. The same goes for text-based communications. In Stefnission's words: "Is it benign or trying to manipulate?"

But beyond becoming generally wary of online content, you can still use existing indicators of a scam to evaluate the situation. Ultimately, verify before you trust. You may not be able to control whether a scammer uses your likeness (much less someone else's), but a cool head will let you deftly navigate even high-pressure situations.

What does that look like pragmatically? Being ready and committed to continuing interactions on your own terms, rather than the other party's. You can do this gracefully -- and even go one step further and take proactive measures. It can save you a world of panic and headache, as one family discovered when targeted by an AI phone scam.
This is particularly true if you actively post video, photos, or even audio clips of yourself -- or, on the flip side, if someone you interact with often does.

These suggested steps help defend against impersonation fraud by requiring authentication (a code word), putting up stronger barriers to easy account access (passkeys, passwords plus 2FA, email masks), and getting outside help to evaluate communication and media (antivirus apps). You'll also be better prepared against "hyperpersonal" scams, as Gen Digital's Stefnission calls them -- where an attacker takes the increasing amount of leaked or stolen data available, then feeds it to AI to craft highly personalized scam campaigns. Minimizing the data others have about you reduces their ability to target you with such precision. If that fails, you can guard your blind spots more easily if you know what they are.

AI impersonation isn't the only online threat to expect in the coming year and beyond. AI is also fueling increased attacks on organizations -- we individuals can't control the resulting data leaks and breaches at unprepared groups. Likewise, we can't influence whether bad actors manipulate AI to behave in malicious ways (e.g., autonomous spread of malware), or feed bad data to AI to weaken security defenses.

However, AI cuts two ways -- like any tool, it can be helpful or harmful. Security software developers are also putting it to use, specifically to combat the rising attacks -- for example, leaning on Neural Processing Units (NPUs) to help consumers detect audio deepfakes. Meanwhile, on an enterprise level, AI is being used to help corporate IT and security teams identify threats (like a holiday uptick in scam emails sent to Gmail addresses). Consumers may see the results as, say, better phishing detection in their antivirus software.
Outside of these more direct responses, industry executives have visions for 2025 (and beyond) that tackle problems like the millions of Social Security numbers on the dark web. In Experian's 2025 data breach forecast, the company suggests dynamic personally identifying information to replace fixed identifiers like driver's licenses and Social Security cards. (The forecast comes from the Experian Data Breach group, which offers mitigation services to companies experiencing a data breach.) AI would likely be part of such a solution.

But as you can see, this kind of help is still in its early stages -- audio deepfake detection only gets partway there, for example. And the software won't ever be able to do all the work. Vigilance will always be needed -- especially as we wait for the good guys to better duke it out with the bad guys.
[3]
How to avoid the latest generation of scams this holiday season
Imagine this: Two days before your family holiday party, you get a text about an online order you placed a week ago, saying the package is at your door. It comes with a photo - of someone else's door. When you click the attached link, it takes you to the online store, where you enter your username and password. Somehow that doesn't work, even though you answered your security questions. Frustrated, you call customer service. They tell you not to worry since your package is still on the way. You receive your package a day later and forget all about the earlier hassle. In the end, it was just a mistake.

You are unaware of the terrifying thing happening in the background. You've fallen for a classic package-delivery scam, and a form of "smishing," or SMS phishing. And you're not alone. One in three Americans has fallen victim to cybercrime, according to a 2023 poll. That's up from 1 in 4 in 2018. As cybersecurity researchers, we want to spread the word to help people protect themselves.

Old-fashioned threats haven't disappeared - identity thieves still steal wallets, dumpster dive for personal information and skim cards at ATMs - but the internet has made scamming easier than ever. Digital threats include phishing attacks that use fake emails and websites, data breaches at major companies, malware that steals your information, and unsecured Wi-Fi networks in public places.

A whole new world of scams

Generative AI - which refers to artificial intelligence that generates text, images and other content - has improved dramatically over the past few years. That's been great for scammers trying to make a buck during the holiday season.

Consider online shopping. In some cases, scammers craft deepfake videos of fake testimonials from satisfied "customers" to trick unsuspecting shoppers. Scam victims can encounter these videos on cloned versions of legitimate sites, social media platforms, messaging apps and forums.
Scammers also generate AI-cloned voices of social media influencers appearing to endorse counterfeit products, and create convincing but fraudulent shopping websites populated with AI-generated product photos and reviews. Some scammers use AI to impersonate legitimate brands through personalized phishing emails and fake customer service interactions. Since AI-generated content can appear remarkably authentic, it's become harder for consumers to distinguish legitimate online stores from sophisticated scam operations.

But it doesn't stop there. "Family emergency scams" exploit people's emotional vulnerability through deepfake technology. Scammers use AI to clone the voices of family members, especially children, and then make panic-inducing calls to relatives in which they claim to be in serious trouble and need immediate financial help. Some scammers combine voice deepfakes with AI-generated video clips showing the "loved one" in apparent distress.

These manufactured emergency scenarios often involve hospital bills, bail money or ransom demands that must be paid immediately. The scammer may also use AI to impersonate authority figures like doctors, police officers and lawyers to add credibility to the scheme. Since the voice sounds authentic and the emotional manipulation is intense, even cautious people can be caught off guard and make rushed decisions.

How to protect yourself

Protecting yourself against scams requires a multilayered defense strategy. When shopping, verify retailers through official websites by checking the URL carefully - it should start with the letters "HTTPS" - and closely examining the site design and its content. Since fake websites often provide fake contact information, checking the "Contact Us" section can be a good idea. Before making purchases from unfamiliar sites, cross-reference the business on legitimate review platforms and verify its physical address.
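The URL-checking advice above can be turned into a rough filter. This is a heuristic sketch, not a real safety check - the trusted-host list is an illustrative assumption, and scam sites can serve valid HTTPS too - but it shows how an encryption requirement and a "one or two characters off" comparison (the kind of near-miss domain scammers register) combine:

```python
from urllib.parse import urlsplit

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution
        prev = cur
    return prev[-1]

# Illustrative allow-list; a real one would hold the stores you shop at.
TRUSTED_HOSTS = {"www.amazon.com", "www.paypal.com"}

def check_link(url: str) -> str:
    """Classify a shopping link as 'ok', 'lookalike', or 'unknown'."""
    parts = urlsplit(url)
    host = (parts.hostname or "").lower()
    if parts.scheme != "https":
        return "lookalike"      # unencrypted: treat as hostile
    if host in TRUSTED_HOSTS:
        return "ok"
    if any(edit_distance(host, t) <= 2 for t in TRUSTED_HOSTS):
        return "lookalike"      # one or two characters off a trusted host
    return "unknown"
```

A hostname flagged as "lookalike" (say, `www.amaz0n.com`) is exactly the one-letter-off trick the article warns about; "unknown" simply means the site isn't on your list and deserves the cross-referencing described above.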
It's essential to keep all software updated, including your operating system, browser, apps and antivirus software. Updates often include security patches that fix vulnerabilities hackers could exploit. For more information on the importance of software updates and how to manage them, check out resources like StaySafeOnline or your device manufacturer's official website. Regular updates are a crucial step in maintaining a secure online shopping experience.

Make sure you provide only the information necessary for purchases - remember, no one needs your Social Security number to sell you a sweater. And keeping an eye on your bank statements will help you catch any unauthorized activity early. It may seem like another chore, and it probably is, but this is the reality of our digital world.

To protect against family emergency scams, establish family verification codes, a safe word, or security questions that only real family members would know. If you do get a distressed call from a loved one, remain calm and take time to verify the situation by contacting family members directly through known and trusted phone numbers. Educate your relatives about these scams and encourage them to never send money without first confirming the emergency with other family members or authorities through verified channels.

If you discover that your identity has been stolen, time is critical. Your first steps should be to immediately contact your banks and credit card companies, place a fraud alert with the credit bureaus, and file a report with the Federal Trade Commission and your local police. In the following days, you'll need to change all passwords, review your credit reports, consider a credit freeze, and document everything. While this process can be overwhelming - and extremely cumbersome - taking quick action can significantly limit the damage.

Staying informed about AI scam tactics through reputable cybersecurity resources is essential.
Reporting suspected scams to relevant authorities not only protects you, but it also helps safeguard others. A key takeaway is that staying vigilant is critical to defending against these threats. Awareness helps communities push back against digital threats. More importantly, it's key to understand how today's scams aren't like yesteryear's. Recognizing the signs of scams can provide stronger defense during this holiday season. And as you develop your threat identification techniques, don't forget to share with your family and friends. Who knows? You could save someone from becoming a victim.
[4]
Scammers now can use voice-cloning AI to impersonate us or others and steal money
With the emergence of AI and other new technologies, people have become more susceptible to scams online, especially older people.

Just as you're ready to mingle and jingle, it's time for a warning about how a holiday-themed TikTok or Facebook reel that you post now could end up being used by scammers with AI-cloning tools to steal money from Grandma. Even more scary, the same could be said about that friendly message you're leaving on your voicemail.

Yep, we're now being told that it's wise to ditch the "Hi, this is Sally, can't come to the phone right now" custom message and go with the boring, pre-recorded default greeting offered on your cell phone that uses a voice that isn't yours. It's not exactly the cheery kind of stuff we want to hear as the calendar moves closer to 2025. But it's not exactly the kind of message we can afford to ignore, either.

Cyber criminals have a few new tools that experts say will open the door for even more fraud in the next few years -- AI-powered voice and video cloning techniques. Scammers want our voices and videos so that they can do a more convincing job of impersonating us when they're out to steal money. Such cloning can be wrongly used when crooks make a call pretending to be a grandson who claims to need money to get out of jail, a boss who wants you to pay some mysterious invoice, a romantic interest met on social media and a host of others.

The FBI is warning that artificial intelligence tools pose an escalating threat to consumers and businesses as cyber criminals use AI to conduct sophisticated phishing and social engineering attacks. Michigan Attorney General Dana Nessel in early December warned residents that rapid advancements in AI are being misused to create "deepfake audio and video scams so realistic that they can even fool those who know us best."

We're not hearing from local law enforcement about a ton of such voice-impersonation scams taking place yet.
But experts say people need to be prepared for an onslaught of activity and take precautions.

Those operating sophisticated fraud rings need only roughly three seconds of your voice to duplicate who you are -- replicating the pitch of your voice, your tone, the pace at which you speak -- when the crooks use some readily available, low-cost AI tools, according to Greg Bohl, chief data officer for Transaction Network Services. The company provides services to the telecommunications industry, including cell phone companies. Bohl's work focuses on developing AI technologies that can be used to combat fraud.

Many times, Bohl said, criminals will take information that's already readily available on social media or elsewhere, such as your cell phone, to clone a voice. "The longer the greeting, the more accurate they can be with that voice replication," Bohl told me via a video conference call. He called a 30-second snippet on a voicemail or a social media post a "gold mine for bad actors."

Many scams already spoof a legitimate phone number to make it appear like the call is coming from a well-known business or government agency. Often, real names are even used to make it seem like you're really hearing from someone who works at that agency or business. But this new AI-cloning development will take scams to an entirely new level, making it harder for consumers to spot fraudulent robocalls and texts.

The Federal Communications Commission warns that AI can be used to "make it sound like celebrities, elected officials, or even your own friends and family are calling." The FCC has been working, along with state attorneys general, to shut down illegal AI voices and texts.

People unknowingly make the problem worse with social media posts by identifying family members -- say, your son Leo or your daughter Kate -- in videos or photos. The crooks, of course, need to know who cares about you enough to try to help you in an emergency.
So, the scammers first must identify who they might target among your real friends and family before staging a crisis call to ask for money. During the holidays, Bohl said, anything you do on social media to connect with family and friends can trigger some risk and make you more open to fraud.

Scam calls will sound even more real using replicated voices of those we know, experts say. So, we will want to be able to calmly figure out if we're talking to a crook. You want a safe word or security question in place long before any of these calls start. Questions can help, such as: What five tricks can the dog do in the morning? What was your favorite memory as a child? What was the worst golf score you ever posted? You want something that a scammer won't be able to easily guess -- or quickly look up online. (And if you don't have a dog or play golf, well, you might have a good trick question there.)

"We can expect a significant uptick in AI-powered fraudulent activities by 2025," said Katalin Parti, an associate professor of sociology and a cybercrime expert at Virginia Tech. The combination of social media and generative AI will create more sophisticated and dangerous attacks, she said.

As part of the fraud, she said, scammers also can create robocalls to collect voice samples from potential victims. It can be best not to engage with these types of calls, even by responding with a simple "hello." Parti offers more tips: Don't contact any telephone number received via pop-up, text or email. Do not answer cold calls, even if you see a local area code. And if you do not recognize the caller but decide to answer the call anyhow, let the caller talk first.

AI voice-cloning is a significant threat as part of financial scams targeting older adults, as well as for misinformation in political campaigns, according to Siwei Lyu, professor of computer science and engineering at the University at Buffalo and director of the UB Media Forensic Lab.
What's troubling, he said, is that AI-generated voices can be extremely difficult to detect, especially when they are played over the phone and when the message elicits emotional reactions, such as when you think a close family member is hurt. Take time to step back and double-check whether the call is real, Lyu said, and listen carefully for clues that a sound is AI-generated. "Pay attention to abnormal characteristics, such as overly quiet background, lack of emotional tone in the voice or even the lack of breathing in between utterances," he said.

But remember, new technology is evolving. Today, more types of phishing emails and texts look legitimate, thanks to AI. The old advice to look for bad grammar or spelling mistakes to spot a fake email or text, for example, could one day prove useless, as AI tools help foreign criminals translate the phrases they're using to target U.S. businesses and consumers. The FBI has warned that cyber crooks could exploit these tools in a variety of other ways, too.

Many times, we cannot even imagine how cyber criminals thousands of miles away could know how our voices sound. But much is out there already -- more than even a simple voicemail message. School events are streamed. Business conferences are available online. Sometimes, our jobs require that we post information online to market the brand.

And "there's growing concern that bad guys can hack into voicemail systems or even phone companies to steal voicemail messages that might be left with a doctor's office or financial advisor," said Teresa Murray, who directs the Consumer Watchdog office for U.S. PIRG, a nonprofit advocacy group. Such threats become more real, she said, in light of incidents such as the massive data breach suffered by National Public Data, which aggregates data to provide background checks. The breach was announced in August.

Yep, it's downright sickening.
Murray said the proliferation of scams makes it essential to have conversations with our loved ones to make sure everyone understands that computers can impersonate the voices of people we know. Talk about how you cannot trust Caller ID to show that a legitimate government agency is calling you, too. Michigan Attorney General Nessel's alert about potential holiday scams using artificial intelligence recommended several precautions along these lines.
[5]
Five cybersecurity tips to protect yourself from scams and deepfakes
In an age when misinformation and deepfakes blur the lines between fact and fiction, identifying scams has never been more challenging. Falling for a scam can have devastating social, financial, and personal consequences. Over the past year, victims of cybercrime reported losing an average of $30,700 per incident.

As Christmas and Boxing Day approach, shoppers face heightened risks, particularly millennials and Gen Z consumers. In the U.S., one in five people have unknowingly purchased a product promoted by deepfake celebrity endorsements. This figure climbs to one in three among those aged 18-34.

Sharif Abuadbba, a deepfake expert in our Data61 team, highlighted how technology like AI has made deception easier than ever. "Scammers can quickly and easily create imitations of popular social media influencers. Deepfakes can manipulate a person's voice, gaze, mouth, expressions, pauses -- basically putting words in their mouth that they've never said," Sharif said. "On social media, attackers rely on the viewers believing fake content and sharing it widely," he added.

You might think you have nothing valuable for a hacker to steal. However, cybercriminals often exploit individuals as gateways to larger targets, including family members, friends or organizations. Identity fraud can also severely damage your professional relationships and your reputation with financial services. As technology becomes more integral to our daily lives, how can we protect ourselves and those we care about from these cyber threats? Here are five expert tips:

1. Have a family safe word

Scammers are increasingly using texts, calls and even video to impersonate loved ones and request money. With AI voice cloning on the rise, these schemes are becoming more and more believable. Jamie Rossato, our Chief Information Security Officer, advises setting up a pre-agreed safe word to verify who you're speaking to.
This word should remain private and not be easily discovered through social media or other online sources. "Use this proactively, rather than waiting until you are suspicious," Jamie said. "If my children asked me for money, unless they said our special safe word, I would never transfer funds to them."

2. Don't be afraid to hang up

With advances in voice-spoofing technology, fraudsters can convincingly mimic organizations like banks to steal money. Lauren Ferro, Human-centric Security Research Scientist with our Data61 team, recommends verifying caller identities before sharing any information. "If something seems a bit off, hang up and call the organization directly using their official number, or go and visit them in person," Lauren advised. "They would prefer you to be cautious. It's far easier to address concerns up front than to recover stolen money or repair reputational damage later."

3. Enable multi-factor authentication

Identity fraud is the most common self-reported cybercrime this year, making it crucial to protect your personal data online -- for example, private or sensitive information stored in Medicare and government accounts. One effective way to protect an account is enabling multi-factor authentication (MFA) to log in. MFA requires a password plus a one-time verification code. Often, this code is sent as a text message, but Jamie suggests using authentication apps like Microsoft Authenticator for added security. "One of the benefits of app-based authenticators is they often use biometric controls, such as face ID or thumbprints, to get into the app before you get to the actual code itself," Jamie explained. "This creates an extra layer of protection beyond SMS codes."

4. Turn on banking push notifications

With most people using card and online payments, staying informed about your transactions can help you detect scams. While banks monitor suspicious activity, scammers can bypass these measures by mimicking your usual spending patterns.
Enabling real-time notifications through your banking app allows you to track transactions immediately, adding another layer of security.

5. Be aware of what you are sharing online

Most of us have an online and social media presence, but the photos, videos and information we share can be exploited. These assets can be used to train deepfakes, which, once created and shared, are difficult to detect and remove. Liming Zhu, Research Director in our Data61 team, stresses the importance of being mindful of what we share online and who can access it. This is especially critical for children.

Education is your best form of protection

Ultimately, awareness and proactive protection are key to staying safe online. Educating yourself about cybersecurity is your first line of defense against scams.
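For the curious, the one-time codes that app-based authenticators (tip 3) display come from a published algorithm, TOTP (RFC 6238): an HMAC over the current 30-second time window, keyed by a secret shared with the service. A minimal sketch:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1 flavor)."""
    # The moving factor is the number of 30-second steps since the epoch.
    counter = struct.pack(">Q", unix_time // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code depends only on the shared secret and the clock, nothing travels over SMS, which is why app-based codes sidestep the interception risks of text-message MFA. (The RFC's published test secret, `b"12345678901234567890"`, yields "287082" at Unix time 59.)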
[6]
Get ready for these scams in 2025
Scammers are getting more sophisticated in 2025. Credit: Ian Moore / Mashable Composite; Rakdee / DigitalVision Vectors / AndreyPopov / iStock / Getty

Hari Ravichadran, founder and CEO of the online safety product Aura, recently got a front-row seat to a sophisticated scam designed to turn him into a victim of fraud. He and his team are regularly targeted by scammers, but this scheme was so well-conducted that it gave the normally skeptical Ravichadran pause.

The play went like this: A scammer seemingly stole someone's identity and used it to share a heart-wrenching story. The pretend victim claimed that the bank had mistakenly wired Ravichadran her family's down payment for their new home. They urgently wanted Ravichadran to wire back the money he had supposedly received. The would-be thief contacted not just Ravichadran but also multiple company executives. The scammer communicated through what looked like a legitimate email address and social media profiles, and had accurate information about Ravichadran. They even invited him and his legal counsel to meet on Zoom with a bank employee in order to sort things out.

Ravichadran knew better than to reflexively trust what he'd been told. But, like most people, he also worried about what might happen to the woman if she was telling the truth. Of course, things quickly fell apart when his lawyer joined a Zoom call with the purported bank employee. The individual appeared on camera for a split second and the conversation eventually became nonsensical. The lawyer suspected they were speaking to an AI-powered deepfake.

Ravichadran says the attempt demonstrates just how savvy scammers have become. Unlike past generic efforts, schemes are now often very personal. That's because scammers may have access to information from data breaches as well as details available on social media or other public platforms, including where you work and the identity of your friends.
Scammers are also leveraging technology to reach people faster and more efficiently. They can now use auto-dialing software connected to an AI chatbot, complete with a local or regional accent, to call your phone number.

In general, Ravichandran advises consumers to assume they're not being told the truth when assessing inquiries like the one he received. "I think you go to it from a place of distrust," Ravichandran says. "If you go from a place of, 'Hey, this is probably true, let me see how to make it work,' you're going to get taken."

Ravichandran says technological advancements will be one of the defining features of how people are scammed in 2025. But bad actors have also focused on certain types of fraud that are likely to become even more prevalent next year. Here's what you need to know:

If there's a way to steal people's money through a cryptocurrency scheme, thieves will find it. That's increasingly true as crypto becomes more mainstream and hits milestones, like Bitcoin passing the $100,000 mark, says Nick Biasini, head of outreach for Cisco Talos Intelligence Group.

One well-established con is so-called "pig butchering," in which a scammer grooms someone digitally over a period of time and then asks them for crypto. (Interpol recently recommended abandoning that term and instead adopting the phrase "romance baiting," which carries less stigma.) The scammers' stated purposes vary, from helping the victim invest in crypto to helping the scammer pay for fictitious costs, like medical care. These bad actors are typically looking to score a windfall over time or all at once; average losses run to hundreds of thousands of dollars, according to the Internal Revenue Service.

Less well-known crypto scams revolve around confusion about the currency, says Biasini. Newcomers to buying or investing in crypto might fall for a scam that starts on social media, when they ask for help learning more.
Scammers are waiting for posts just like these and will reply with friendly offers that end in financial losses. Bad actors also take advantage of people who've already lost money by posing as experts who can help them get it back. Their end game, though, is the same: make off with more of the victim's cash.

Celebrity-backed crypto can be a dangerous investment, too. The recent Hawk Tuah memecoin pump-and-dump scheme demonstrated what happens when a famous person encourages their followers to purchase a memecoin: insiders close to the celebrity, who bought in privately for less, sell as soon as the price spikes, ultimately crashing its value.

Since crypto is rife with scam risk, Biasini recommends exploring the currency with the guidance of a certified professional who can help you invest safely. In general, it's best to stick with well-known exchanges and avoid social media discussions about crypto in which you share any personal information or data.

Multifactor authentication is a security measure designed to give consumers greater protection for their personal accounts. But Cisco Talos' Security Intelligence & Research team has noted more attempts to fraudulently bypass that security step. Some criminals do so by stealing cookies, the bits of data a website stores on your computer that can contain login credentials, allowing them to access a victim's email, according to a recent warning from the FBI Atlanta division. Once the thieves can view the email account, they can try logging into that victim's various other online services, including bank and shopping accounts. When those services send a multifactor authentication code via email, the criminal can use it.
The FBI Atlanta division encourages people to regularly clear their internet browser cookies, consider the risks of checking "remember me" when logging into a website, and only visit sites with a secure connection, in order to prevent their data from being intercepted. Criminals use other methods, including phishing, to relay multifactor authentication codes to themselves in order to access victims' financial and consumer accounts. Beware of digital messages and phone calls that ask you to provide critical login information that you would otherwise enter yourself.

Scams that target workplaces are on the rise, according to both Ravichandran and Biasini. Typically these efforts focus on higher-level employees, like the CEO or CFO. Much like Ravichandran experienced, the fraudulent requests involve urgently wiring money into the bad actor's account. Biasini says the emergence of large language models (LLMs), like the kind that power OpenAI's ChatGPT, has made it easier for scammers to create prompts that sound very convincing.

Ravichandran notes that these scams often leverage multiple channels of communication, like email and social media messaging. They may use stolen accounts, so that the bad actor appears to be legitimate. They've also typically collected enough information about their target to demonstrate some level of familiarity, and with it credibility. These tactics are becoming widespread, which means employees have to be on guard for suspicious messages and quickly report them to their information security teams.

LLMs and deepfake technology have given scammers frighteningly powerful tools. With software that can write scam scripts in seconds or minutes and then conduct a real-time conversation with a victim as a chatbot, bad actors can rapidly scale their schemes to reach far more people than they could in the past.
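The FBI's "secure connection" advice above can be partly automated. As a rough, hypothetical sketch (the function name and checks are illustrative, not FBI guidance), a script can refuse any link that isn't HTTPS or whose host doesn't match the domain you expect, which also catches the misspelled lookalike domains scammers favor:

```python
from urllib.parse import urlparse

def looks_safe(url: str, expected_domain: str) -> bool:
    """Reject links that aren't HTTPS or whose host isn't the expected domain."""
    parts = urlparse(url)
    if parts.scheme != "https":          # insist on a secure connection
        return False
    host = parts.hostname or ""
    # Exact match or a subdomain of it; lookalikes such as "examp1e.com" fail both.
    return host == expected_domain or host.endswith("." + expected_domain)

print(looks_safe("https://www.example.com/login", "example.com"))  # → True
print(looks_safe("https://examp1e.com/login", "example.com"))      # → False
```

A real browser extension or mail filter would add certificate and reputation checks, but even this simple comparison illustrates the habit worth building: verify the scheme and the exact domain before entering credentials.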
Ravichandran says scammers can even program a chatbot to use a regional accent, a detail that could help persuade a potential victim to hand over their personal data. Bad actors can also use deepfake technology to create a vocal or visual clone of someone. If you think you're speaking to someone you know, or someone you could look up online, it can be very difficult to remain skeptical of a scammer's story or their requests. As this technology becomes more widely available and easier to use, it will make scams that much easier to execute.

In addition to routinely approaching interactions involving your money and data with skepticism, Ravichandran recommends protecting yourself with basic steps, like changing your password if you know it's been breached and using a password manager so you can rely on complex passphrases you don't need to remember. He also suggests more sophisticated strategies, including services that monitor your financial accounts and credit for signs of fraud and identity theft.

He adds that anyone can become a scam victim, despite the perception that bad actors typically target certain people, like seniors. Ravichandran has spoken to people with advanced degrees who were shocked to be duped by a scammer. Though many victims feel embarrassed and ashamed, he encourages people to share their experiences with others, and certainly to report them to authorities so that investigators can pursue the criminals. The FBI recommends reporting scams to law enforcement and to the FBI's Internet Crime Complaint Center. If you think you've been scammed by a registered business, you can also report suspected fraud to your state attorney general, to the state in which the company is registered, and to the Federal Trade Commission.
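The password-manager tip rests on one idea: long random passphrases you never have to memorize. A minimal sketch in Python, assuming a toy wordlist (real generators use lists of thousands of words, such as the EFF diceware list), shows how a cryptographically secure random source picks them:

```python
import secrets

# Toy wordlist for illustration only; a real tool would use a large list
# (e.g. the ~7,776-word EFF diceware list) for far more entropy.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "lantern",
         "meadow", "quartz", "velvet", "anchor", "cinder", "plume"]

def passphrase(n_words: int = 5, sep: str = "-") -> str:
    """Join randomly chosen words using the cryptographically secure secrets module."""
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())  # e.g. "quartz-anchor-horse-plume-orbit"
```

The point is not this particular script but the principle behind it: `secrets` draws from the operating system's secure random source, unlike the predictable `random` module, which is why password managers can safely generate credentials you never see or reuse.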
[7]
The scams you'll regret ignoring in 2025
Scammers are becoming increasingly sophisticated, with new tactics expected to surface in 2025. Recent reports indicate that older Americans, particularly those aged 60 and above, lost over $3.4 billion to scams in 2023, a trend that continues to escalate. According to AARP's Fraud Watch Network, scams built on emotional manipulation, such as fake distress messages involving pets, have emerged, leveraging sad or cute imagery to hook victims. Michael Bruemmer, vice president at Experian, warns that the growth of artificial intelligence (AI) will make it easier to repurpose established scams, particularly impersonation frauds and schemes that use deepfake technology.

In 2024, employment scams surged, with organizations including the Better Business Bureau (BBB) and the Identity Theft Resource Center (ITRC) reporting a rise in fraudulent job ads and recruitment pitches. Scammers lure job seekers by convincing them that an offer requires sensitive data such as Social Security numbers. More elaborate variations involve charging applicants for training or equipment shipments.

Cryptocurrency scams remain a significant threat, particularly as Bitcoin recently surpassed the $100,000 mark. The FBI received nearly 17,000 cryptocurrency complaints from people aged 60 and older in 2023, with losses totaling $1.6 billion. Scammers often employ tactics such as "pig butchering," building trust with victims over time before soliciting large investments; the manipulation is often compounded by misleading profit reports and hefty withdrawal fees. Rising alongside cryptocurrency scams are celebrity impostor scams, in which criminals pose as well-known figures to extract money from victims.
AARP notes that these scams are frequently aimed at individuals who may be emotionally vulnerable, exploiting personal connections with phony love stories or urgent financial needs. The FTC has introduced new rules to combat misleading endorsements, so consumers are urged to thoroughly research claims made on social media.

Tech support scams are also prevalent, particularly among people aged 60 and over, who are five times more at risk than younger people. The FTC reported that seniors lost over $175 million to these frauds in 2023. Scammers often initiate contact through pop-up messages claiming computer issues, then exploit victims by gaining remote access to their devices. Legitimate tech companies will not reach out unannounced, so caution is advised.

Another concerning trend is the rise of card-declined scams, in which consumers receive false notifications during online transactions indicating card issues. Many victims are unaware that their card may have been charged despite seeing these messages. BBB's Scam Tracker has flagged an increase in such incidents, where consumers unknowingly expose themselves to fraud while attempting to fix transaction errors on counterfeit sites.

Scammers now frequently employ advanced technology to execute business phishing and impersonation scams targeting high-level employees within organizations. Using information gathered from social media or data breaches, criminals impersonate trusted contacts to request urgent fund transfers. This method has gained traction because it blends sophisticated social engineering with convincing AI-driven scripts. As AI continues to evolve, its applications in scam development grow correspondingly.
This includes creating convincing chatbots that can mimic local accents and steer conversations to extract sensitive information from victims. The FBI also warns of cookie theft schemes that bypass multifactor authentication, giving criminals access to sensitive online accounts. In light of these growing threats, experts recommend several protective measures: regularly clear browser cookies, use strong passwords, and consider a password manager for additional security. Above all, approach communications with skepticism and thoroughly investigate unsolicited requests.
As AI technology advances, scammers are using sophisticated tools to create more convincing frauds. Learn about the latest AI-enabled scams and how to safeguard yourself during the holidays.
As the holiday season approaches, cybersecurity experts are sounding the alarm about a new generation of scams powered by artificial intelligence (AI). The FBI has warned that AI tools pose an escalating threat to consumers and businesses, with criminals using these technologies to conduct sophisticated phishing and social engineering attacks [1][2].
Generative AI has dramatically improved over the past few years, making it easier for scammers to create convincing frauds. Some of the AI-enabled scams to watch out for include:
Deepfake videos and images: Scammers can create fake testimonials from "satisfied customers" or impersonate celebrities to endorse counterfeit products [3].
AI-cloned voices: Criminals can now replicate a person's voice with just a few seconds of audio, enabling more convincing impersonation scams [4].
Personalized phishing: AI is being used to craft highly personalized phishing emails and fake customer service interactions [3].
Family emergency scams: Scammers use AI to clone the voices of family members, creating panic-inducing calls requesting immediate financial help [3][5].
To stay safe in this evolving threat landscape, experts recommend several strategies:
Verify before trusting: Always double-check the authenticity of messages, calls, or websites before taking action [1][2].
Use family verification codes: Establish a safe word or security questions that only real family members would know [3][4].
Be cautious with personal information: Limit what you share publicly on social media and be wary of unsolicited requests for information [2][5].
Enable multi-factor authentication: Use app-based authenticators for added security on your accounts [5].
Stay informed: Keep up-to-date with the latest AI scam tactics through reputable cybersecurity resources [3].
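The "app-based authenticator" tip above refers to TOTP (RFC 6238), the time-based one-time password scheme behind most authenticator apps. A minimal sketch shows why those six-digit codes are harder to steal than emailed codes: each one is derived on your own device from a shared secret and the current time, so no code travels over email or SMS where a thief with a stolen cookie or inbox could read it.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, digits: int = 6, step: int = 30) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    counter = unix_time // step                      # 30-second time window
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: the ASCII secret "12345678901234567890" at time 59
# yields the 8-digit code "94287082".
print(totp(b"12345678901234567890", 59, digits=8))  # → 94287082
```

Because the code changes every 30 seconds and never leaves your phone until you type it, an attacker who hijacks your email account gains nothing, which is exactly the weakness in email-delivered codes that the cookie-theft schemes described above exploit.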
The consequences of falling for these scams can be severe. Over the past year, victims of cybercrime reported losing an average of $30,700 per incident [5]. Young consumers are particularly vulnerable, with one in three people aged 18-34 having unknowingly purchased products promoted by deepfake celebrity endorsements [5].
Experts predict a significant uptick in AI-powered fraudulent activities by 2025 [4]. The combination of social media and generative AI is expected to create more sophisticated and dangerous attacks, making it crucial for individuals to remain vigilant and adopt robust cybersecurity practices [4][5].
As AI technology continues to advance, the line between genuine and fraudulent content is becoming increasingly blurred. By staying informed, adopting best practices, and maintaining a healthy skepticism towards online interactions, individuals can better protect themselves from the growing threat of AI-powered scams during the holiday season and beyond.
Reference
[3]
The FBI has issued an alert about the increasing use of generative AI by criminals to enhance fraud schemes, urging the public to adopt new protective measures against these sophisticated threats.