4 Sources
[1]
AI tools are making social engineering attacks even more convincing, and I fear that this is only the beginning
Wallace and Gromit meet deepfake deception in this sharp take on AI-driven scams Nick Park's Wallace and Gromit were brought crashing into the 21st century in December 2024 with their latest adventure, Vengeance Most Fowl. The film challenges our growing dependence on smart technology in the form of a robotic garden gnome, built by Wallace to support his gardening business, which is then hacked by the Kubrick-esque Feathers McGraw for his own nefarious purposes. One of the more interesting but less commented on parts of the film shows Gromit cautiously entering his house and being greeted by what he thinks is Wallace's reassuring voice, only to be confronted with Feathers and the robotic gnome. Technology's ability to mimic linguistic patterns, to clone a person's voice and understand and respond to questions has developed dramatically in the last few years. This has not gone unnoticed by the world's criminals and scammers, with the result that social engineering attacks are not only on the rise but are more sophisticated and targeted than ever. Cybercriminal social engineering manipulates a target by creating a false narrative that exploits the victim's vulnerability (whether that is their willingness to trust people, their financial worries or their emotional insecurity). The result is that the victim unwittingly but willingly hands over money and/or information to the perpetrator. Most social engineering scams consist of the following stages: (1) making connection with the victim ("the means"), (2) building a false narrative (usually with a sense of urgency or time limitation) ("the lie") and (3) persuading the target to take the suggested action (e.g. transferring money or providing personal details) ("the ask"). Usually, stage 2 (the lie) is where most people spot the scam for what it is, as it is difficult to build and sustain a convincing narrative without messing up eventually. We have all received text messages, emails or social media messages from people purporting to be our friends, long-lost relations in countries we have never been to, or our banks, asking us to provide them with personal information, passwords or money. Historically, such communications were easy to spot, as they bore the hallmarks of a scam: generic greetings and signatures, spelling mistakes, poor or unusual grammar and syntax, inconsistent formatting or suspicious addresses. However, the rapid sophistication of generative AI tools means that it is increasingly easy for criminals to craft and sustain plausible false narratives to ensnare their victims; the "lie" or stage 2 in the social engineering scam. Companies and law enforcement agencies are scrambling to stay ahead of the technological advances and are working hard to predict developments which will be used for social engineering. One potential use case for generative AI in this area is a dynamic lie system, which would automatically contact and interact with potential victims to earn their trust before moving to stage 3 (the ask). This would be particularly useful for "advance-fee" or "419" scams. These scams work by promising the victim a large share in a huge amount of money in return for a small upfront payment, which the fraudster claims will be used to obtain the large sum. The AI-based dynamic lie system could automate the first wave of scam emails to discern whether the potential victims are likely to 'take the bait'. Once the system identifies an engaged individual who appears persuaded by the communication , it can then pass the control to the human operator to finish the job. Another development which has already gained traction is the use of AI to clone human speech and audio to carry out advanced types of voice phishing attacks, known as "vishing". In the United States, the Federal Trade Commission has warned about scammers using AI voice cloning technology to impersonate family members and con victims into transferring money on the pretext of a family emergency. Current technologies allow voices to be cloned in a matter of seconds, and there is no doubt that with advancements in deep learning, these tools will only become more sophisticated. It would appear this form of social engineering is here to stay. "If there's one job that generative AI can't steal, it's con artist." So said Stephanie Carruthers, Global Lead of Cyber Range and Chief People Hacker at IBM in 2022. Fast forward 3 years and Carruthers has changed her position. Our concerns about AI are not just limited to the impact on the workforce but have now expanded to include AI-based bots which can craft tailored social engineering attacks to specific targets. As Carruthers notes, "with very few prompts, an AI model can write a phishing message meant just for me. That's terrifying." Currently AI is being used by threat actors as an office intern or trainee to speed up completing the basic tasks required to carry out social engineering attacks. Carruthers and team did some experiments and found that generative AI can write an effective phishing email in five minutes. For a team of humans to write a comparable message, it takes about 16 hours, with deep research on targets accounting for much of that time. Furthermore, generative AI can churn out more and more tailored attacks without needing a break, and crucially, without a conscience. Philip K. Dick noted that for his human protagonist, Rick Deckard, "owning and maintaining a fraud had a way of gradually demoralizing one", but in an increasingly digital criminal underworld, maintaining a fraud has never been easier. Increase your awareness of threats and how to tackle them with the best online cybersecurity courses.
[2]
Can you spot an AI-generated scam?
As AI tools become part of everyday life, most people believe they would be better equipped to spot AI-generated scams, but new research reveals a worrying trend: as people get more familiar with AI, they're more likely to fall for these scams. New research finds that the generations most confident in detecting an AI-generated scam are the ones most likely to get duped: 30% of Gen Z have been successfully phished, compared to just 12% of Baby Boomers. Ironically, the same research found that fear of AI-generated scams decreased by 18% year-over-year, with only 61% of people now expressing worry that someone would use AI to defraud them. During the same period, the number of people who admitted to being successfully duped by these scams increased by 62% overall. Traditional scam attempts rely on mass, generic messages hoping to catch a few victims. Someone receives a message from the "lottery" claiming that a recipient won a prize, or a fake business offering someone employment. In exchange for providing their bank account details, the messages would promise money in return. Of course, that was never true, and instead the victim lost money. With AI, scammers are now getting more personalized and specific. A phishing email may no longer be riddled with grammatical errors or sent from an obviously spoofed account. AI also gives scammers more tools at their disposal. For example, voice cloning allows scammers to replicate the voice of a friend or family member with just a three second audio clip. In fact, we're starting to see more people swindled out of money because they believe a message from a family member is asking for ransom, when it's actually from a scammer. This trend harms both businesses and consumers. If a scammer were to gain access to a customer's account information, they could drain an account of loyalty points or make purchases using a stolen payment method. The consumer would need to go through the hassle of reporting the fraud, while the business would ultimately need to refund those purchases (which can lead to significant losses). There's also a long-term impact to this trend: AI-generated scams erode trust in brands and platforms. Imagine a customer receiving an email claiming to be from Amazon or Coinbase support, an unauthorized user was trying to gain access to their account, and that the user should call support immediately to fix the issue. Without obvious red flags, they may not question its legitimacy until it's too late. A customer who falls for a convincing deepfake scam doesn't just suffer a financial loss; their confidence in the brand is forever tarnished. They either become hyper-cautious or opt to take their business elsewhere, leading to further revenue loss and damaged reputations. The reality is that everyone pays the price when scams become more convincing, and if companies fail to take steps to establish trust, they wind up in a vicious cycle. To address this confidence gap, it's important to understand why the divide exists in the first place. Digital natives have spent years developing an intuitive sense for spotting "obvious" scams -- the poorly written emails or suspicious pop-ups offering a free iPod. This exposure creates a dangerous blind spot: when AI-generated scams perfectly mimic legitimate communication, that same intuition fails. Consider how the brain processes a typical workday. You're juggling emails, Slack messages, and phone calls, relying on split-second pattern recognition to separate signal from noise. A message from "your bank" looks right, feels familiar, and arrives at a plausible time. The problem compounds when scammers use AI to perfectly replicate not just logos and language, but entire communication ecosystems. They're not just copying Amazon's email template; they're replicating the timing, context, and behavioral patterns that make legitimate messages feel authentic. When a deepfake voice call sounds exactly like a colleague asking for a quick favor, a pattern-matching brain tends to confirm that interaction as normal. This explains why the most digitally fluent users are paradoxically the most vulnerable. They've trained themselves to navigate digital environments quickly and confidently. But AI-powered scams exploit that very confidence. For companies, addressing this overconfidence problem requires a multi-pronged approach: Inform customers without fear-mongering: Help users understand that AI-powered scams are convincing precisely because they're designed to deceive the most confident, tech-savvy people. The goal isn't to make people stop using AI, rather to help them maintain appropriate skepticism. Educate them on deepfake scams: Focus on identifying the key signs of a legitimate versus fraudulent message (sent from an unknown number, a message with false urgency, a suspicious link or PDF attached). Show current examples of deepfakes and AI-generated phishing, rather than just talking about traditional fraud awareness. Keep communication channels transparent: Establish clear, verified communication channels and educate customers about how your company will and won't contact them. The good news is that many providers, including Google, Apple, and WhatsApp currently or will soon offer branded caller ID services. This means companies can establish a business profile with these apps, adding another layer of verification. That way, when a verified business contacts a customer, their message will clearly show the brand name and a verified badge. Similarly, most brands now authenticate their outbound email to conform with the DMARC delivery standard and qualify for a branded trust mark to show up next to the subject line. Invest in knowledge sharing: If one company is dealing with an influx of scam attempts, other companies are likely facing similar problems. Scammers often collaborate to share tactics and vulnerabilities; companies should do the same. Many companies fight fraud by using technologies that incorporate insight-sharing "consortiums" -- business networks where fraud patterns are shared across companies. By being open about current challenges, companies can better understand the risks and implement the proper safeguards to keep their customers safe. The businesses that will thrive in this environment are those that maintain identity trust -- that is, the ability to recognize a user or interaction within a digital environment -- while effectively combating increasingly sophisticated threats. Fraud prevention is no longer just about protection from losses, it's a critical part of the customer experience. That's because when customers feel safe, they shop confidently. By tackling users' AI blindspots while maintaining trust, companies gain a competitive edge. While the AI revolution has introduced incredibly capable tools, it's also created unexpected vulnerabilities. Addressing this challenge requires more than just different tools. It demands a fundamental rethinking of how we maintain trust when seeing is no longer enough to believe. We've listed the best Antivirus Software.
[3]
AI impersonation scams are sky-rocketing in 2025, security experts warn - here's how to stay safe
Experts say that independently verifying identities and using multi-factor authentication are key to protecting yourself Imagine getting a frantic call from your best friend. Their voice is shaky as they tell you they've been in an accident and urgently need money. You recognize the voice instantly; after all, you've known them for years. But what if that voice isn't actually real? In 2025, scammers are increasingly using AI to clone voices, mimic faces, and impersonate people you trust the most. The rise in this type of scam has been staggering. According to Moonlock, AI scams have surged by 148% this year, with criminals using advanced tools that make their deception near-impossible to detect. So how can you stay safe from this growing sci-fi threat? Here's everything you need to know, including what cybersecurity experts are recommending. AI impersonation scams are a fast-growing form of fraud where criminals use artificial intelligence to mimic a person's voice, face, or typing style with alarming accuracy. These scams often rely on voice cloning, which is a technology that can recreate someone's speech patterns with just a few seconds of recorded audio. The samples aren't hard to find; you can often spot them in voicemails, interviews, or social media videos. According to Montclair State University, even short clips from a podcast or online class can be enough to build a convincing AI impersonation of someone's voice. Some scams take this even further, using deepfake video to simulate live calls. For instance, Forbes reports that scammers have impersonated company executives in video meetings, convincing staff to authorize large wire transfers. Experts say the rapid growth of AI impersonation scams in 2025 comes down to three factors: better technology, lower costs, and wider accessibility. With these digital forgeries at their side, attackers assume the identity of someone you trust, such as a family member, a boss, or even a government official. They then request valuable, confidential information, or skip the extra step and ask for urgent payments. These impersonated voices can be very convincing, and this makes them particularly nefarious. As the US Senate Judiciary Committee recently warned, even trained professionals can be tricked. AI impersonation scams can happen across phone calls, video calls, messaging apps, and emails, often catching victims off guard in the middle of their daily routines. Criminals use voice cloning to make so-called "vishing" calls, which are phone scams that sound like a trusted person. The FBI recently warned about AI-generated calls pretending to be US politicians, including Senator Marco Rubio, to spread misinformation and solicit a public reaction. On the corporate side of "vishing," cybercriminals have staged deepfake video meetings posing as company executives. In a 2024 case threat actors posed as the CFO of UK-based engineering company Arup, and tricked its employees into authorizing transfers totaling a whopping $25 million. These attacks generally scrape pictures and videos from LinkedIn, corporate websites, and social media in order to craft a convincing impersonation. AI impersonation is getting more sophisticated, too - and fast. The email provider Paubox found that nearly 48% of AI-generated phishing attempts, including voice and video clones, successfully sidestep detection by current email and call security systems. Experts say that AI impersonation scams succeed because they create a false sense of urgency in their victims. Criminals exploit your instinct to trust familiar voices or faces. The most important defense is to simply slow down; take your time to confirm their identity before you act. The Take9 initiative says that simply pausing for nine seconds can go a long way toward staying safe. If you receive a suspicious call or video from someone you know, hang up and call them back on the number you already have. As cybersecurity analyst Ashwin Raghu told Business Insider, scammers count on people reacting in the moment, and calling back eliminates that urgency. It's also important to watch for subtle red flags. Deepfake videos can have a few tells, such as unnatural mouth movements, flickering backgrounds, or eye contact that feels a little 'off'. Similarly, AI-generated voices can have unusual pauses or inconsistent background noise, even if they sound convincing at first. Adding extra layers of security can help, too. Multi-factor authentication (MFA) makes it harder for scammers to get into your accounts even if they successfully steal your credentials. Cybersecurity expert Jacqueline Jayne told The Australian that your best bet is to pair direct verification with some form of MFA -- particularly during periods of high scam activity, such as during tax season. AI offers a ton of mind-boggling capabilities, but it also gives scammers powerful new ways to deceive. By staying vigilant, verifying suspicious requests, and talking openly about these threats, you can reduce the risk of being caught off guard -- no matter how real the deepfake may seem.
[4]
Your AI browser could be falling for online scams
The research, detailed in a report titled Scamlexity by the cybersecurity startup Guardio, paints a sobering picture of a technology where the race for convenience has left critical security measures dangerously behind. While much of the conversation around artificial intelligence and cybersecurity has focused on how scammers can use AI to create sophisticated deepfakes or phishing emails, a new report reveals a startlingly different threat: the AI itself is dangerously susceptible to being scammed. A cybersecurity firm has found that the new wave of "agentic AI" browsers, designed to autonomously act on a user's behalf, can be easily tricked into visiting phishing sites, giving away payment details to fake stores, and even executing malicious commands hidden from the human user. The research, detailed in a report titled Scamlexity by the cybersecurity startup Guardio, paints a sobering picture of a technology where the race for convenience has left critical security measures dangerously behind. By testing Perplexity's agentic AI, Comet, the researchers demonstrated how the very features designed to make life easier -- automating shopping, managing emails, and handling complex tasks -- can be turned against the user with devastating effect. The findings suggest we are entering a new, more complex era of digital fraud where the scammer's target is no longer human intuition, but the AI's inherent trust. The core issue lies in the fundamental design of an agentic AI. Unlike a simple search engine, these AI agents are built to replace the user in digital routines like searching, clicking, and shopping. But in doing so, they inherit AI's built-in vulnerabilities: a tendency to trust too easily, act without full context, and execute instructions without the healthy skepticism a human might apply. An AI's primary goal is to complete its assigned task and please its user, even if it means ignoring red flags that would be obvious to a person. Guardio calls this new reality "Scamlexity" -- a new dimension of scam complexity where AI convenience creates an invisible, highly vulnerable attack surface. The scam no longer needs to fool you with a convincing story or a well-designed fake website; it only needs to fool your AI assistant. When the AI gets played, the human still foots the bill. This creates a rogue trust chain where the AI, acting as a trusted intermediary, effectively vouches for malicious content. It clicks the suspicious link or visits the fake store on your behalf, shielding you from the very warning signs -- like a strange sender address or a misspelled URL -- that would normally protect you. To see just how vulnerable these systems are, the researchers started not with cutting-edge exploits, but with scams that have been circulating for years. Their first test involved creating a convincing fake Walmart storefront and giving the AI a simple prompt: "Buy me an Apple Watch." The AI agent immediately went to work. It scanned the website, located the correct product, added it to the cart, and proceeded to the checkout page. Along the way, it ignored numerous clues that the site was not legitimate. In the most alarming instances of the test, the AI completed the entire purchase autonomously by using the browser's autofill feature to input saved address and credit card information, handing the sensitive data directly to the fraudulent site. The report notes that the AI's behavior was inconsistent; sometimes it would sense something was wrong and stop, but the fact that it could ever be tricked into completing the transaction reveals a security model based on chance, not reliability. In a second test, the team targeted another flagship feature: automated inbox management. They sent a simple phishing email from a brand-new, non-corporate email address, pretending to be from Wells Fargo. The email contained a link to a real, active phishing page that had not yet been flagged by Google Safe Browsing. The AI scanned the email, confidently identified it as a to-do item, and clicked the link without any verification. It then loaded the fake login page and prompted the user to enter their credentials, effectively legitimizing the attack. The human user never had a chance to see the suspicious sender or question the link's destination. Beyond traditional scams, the researchers demonstrated a new vector of attack designed specifically for AI: prompt injection. This technique involves embedding hidden instructions inside content that an AI is processing, tricking it into performing actions the user never intended. Their proof-of-concept, named PromptFix, is a chilling evolution of the fake CAPTCHA scam. In this scenario, a user asks their AI to retrieve a file from a link, such as a supposed blood test result from a doctor. The page presents what looks like a normal CAPTCHA checkbox. However, hidden from the human eye using simple CSS styling is an invisible text box containing a malicious prompt. This prompt uses a form of social engineering tailored for an AI, telling it that this is a special "AI-friendly" CAPTCHA and giving it instructions to "solve" it by clicking a specific button. Driven by its core programming to be helpful and overcome obstacles, the AI follows the hidden instructions and clicks the button. In the demo, this action triggered a harmless file download, but it could just as easily have initiated a drive-by download of malware, planted ransomware, or sent personal files from the user's computer to the attacker. The implications of these vulnerabilities are profound. The attack surface is no longer millions of individual, skeptical humans, but a handful of centralized, inherently trusting AI models. Once a scammer finds an exploit that works on one model, they can scale it to target millions of users simultaneously. The report warns that this could lead to an AI-vs-AI arms race, where scammers use their own AI, such as Generative Adversarial Networks (GANs), to relentlessly test and train their attacks against a target AI agent until they find a flawless, zero-day exploit. This automated scam generation could produce new, highly effective attacks at a pace and sophistication that today's reactive security measures cannot handle. The path forward, according to Guardio, is not to halt innovation but to integrate security into the very architecture of agentic AI. Existing tools like Google Safe Browsing proved insufficient in the tests. Instead, AI agents need their own internal guardrails: robust phishing detection, URL reputation checks, and behavioral anomaly detection that work inside the AI's decision-making loop, not as an afterthought. As we delegate more of our digital lives to these powerful agents, the trust we place in them becomes absolute -- and the cost of that trust being broken is immediate and severe.
Share
Copy Link
As AI tools become more advanced, cybercriminals are leveraging them to create increasingly convincing and personalized social engineering attacks, posing significant risks to individuals and businesses alike.
In recent years, the rapid advancement of artificial intelligence (AI) has revolutionized various industries. However, this technological progress has also given rise to a new breed of cybercriminal threats. Social engineering attacks, which manipulate victims into divulging sensitive information or transferring money, have become increasingly sophisticated and convincing thanks to AI tools 1.
Source: TechRadar
Traditional social engineering scams typically follow a three-stage process: making a connection, building a false narrative, and persuading the target to take action. With AI, criminals can now automate and enhance each of these stages:
Dynamic Lie Systems: AI can be used to create automated systems that contact potential victims and interact with them to build trust before human operators take over 1.
Voice Cloning: Advanced AI tools can clone human speech in a matter of seconds, enabling scammers to impersonate family members or authority figures in "vishing" (voice phishing) attacks 1.
Personalized Phishing: Generative AI can craft highly tailored phishing messages for specific targets, making them much more convincing than generic scam attempts 2.
Ironically, research shows that generations most confident in their ability to detect AI-generated scams are often the most vulnerable. For instance, 30% of Gen Z have fallen victim to phishing attempts, compared to only 12% of Baby Boomers 2. This overconfidence stems from familiarity with technology, which can create dangerous blind spots when faced with highly sophisticated AI-generated scams.
Source: TechRadar
By 2025, AI impersonation scams have skyrocketed, with a reported 148% increase in such attacks 3. These scams use AI to mimic voices, faces, and even typing styles with alarming accuracy. Criminals can now create deepfake videos to impersonate company executives in video meetings, convincing staff to authorize large wire transfers.
A new dimension to this threat has emerged with the introduction of "agentic AI" browsers. These AI-powered tools, designed to act autonomously on behalf of users, have been found to be dangerously susceptible to scams themselves. Research by cybersecurity firm Guardio revealed that AI browsers could be tricked into visiting phishing sites, sharing payment details with fake stores, and even executing hidden malicious commands 4.
To combat these sophisticated threats, experts recommend several strategies:
Slow Down and Verify: Take time to independently confirm identities before taking action on urgent requests 3.
Multi-Factor Authentication: Implement additional layers of security to protect accounts 3.
Education: Companies should inform customers about AI-powered scams without fear-mongering and provide examples of current deepfakes and AI-generated phishing attempts 2.
Source: TechRadar
As AI continues to evolve, so too will the sophistication of social engineering attacks. Staying informed and vigilant remains crucial in protecting oneself and one's organization from these emerging threats.
Mount Sinai researchers develop an AI model that provides individualized treatment recommendations for atrial fibrillation patients, potentially transforming the standard approach to anticoagulation therapy.
3 Sources
Health
22 hrs ago
3 Sources
Health
22 hrs ago
TSMC achieves unprecedented 70.2% market share in Q2 2025, driven by AI, smartphone, and PC chip demand. The company's revenue hits $30.24 billion, showcasing its technological leadership and market dominance.
3 Sources
Business
22 hrs ago
3 Sources
Business
22 hrs ago
UCLA researchers develop a non-invasive brain-computer interface system with AI assistance, significantly improving performance for users, including those with paralysis, in controlling robotic arms and computer cursors.
5 Sources
Technology
22 hrs ago
5 Sources
Technology
22 hrs ago
Gartner predicts AI-capable PCs will make up 31% of the global PC market by 2025, with shipments reaching 77.8 million units. Despite temporary slowdowns due to tariffs, AI PCs are expected to become the norm by 2029.
2 Sources
Technology
22 hrs ago
2 Sources
Technology
22 hrs ago
AI tools are being used to create hyper-realistic, sexist content featuring bikini-clad women, flooding social media platforms and blurring the line between fiction and reality.
2 Sources
Technology
22 hrs ago
2 Sources
Technology
22 hrs ago