Curated by THEOUTPOST
On Fri, 7 Feb, 4:01 PM UTC
3 Sources
[1]
AI-Powered Social Engineering: Ancillary Tools and Techniques
Social engineering is advancing fast, at the speed of generative AI. This offers bad actors multiple new tools and techniques for researching, scoping, and exploiting organizations. In a recent communication, the FBI pointed out: 'As technology continues to evolve, so do cybercriminals' tactics.' This article explores some of the impacts of this GenAI-fueled acceleration, and examines what it means for IT leaders responsible for managing defenses and mitigating vulnerabilities.

More realism, better pretexting, and multi-lingual attack scenarios

Traditional social engineering methods usually involve impersonating someone the target knows. The attacker may hide behind email to communicate, adding psychological triggers to boost the chances of a successful breach: perhaps a request to act urgently, so the target is less likely to pause and develop doubts, or an email that appears to come from the employee's CEO, in the hope that respect for authority stops the employee questioning the message. If using voice, the attacker instead pretends to be someone the target hasn't spoken to before, and whose voice they wouldn't recognize: perhaps someone from another department or an external partner.

Of course, these methods often fall apart when the target wants to verify the sender's identity in some way, whether that's checking their appearance or how they write in a real-time chat. However, now that GenAI has entered the conversation, things have changed.

The rise of deepfake videos means that adversaries no longer need to hide behind keyboards. These tools analyze genuine recordings to recreate a person's mannerisms and speech. Then it's simply a case of directing the deepfake to say anything, or using it as a digital mask that reproduces what the attacker says and does in front of the camera. The rise of digital-first work, with remote workers used to virtual meetings, makes it easier to explain away possible warning signs. Unnatural movements, or a voice that sounds slightly different? Blame it on a bad connection. Speaking face-to-face adds a layer of authenticity that plays on our natural instinct that 'seeing is believing'.

Voice cloning technology means attackers can speak in any voice too, carrying out voice phishing, also known as vishing, attacks. The growing capability of this technology is reflected in OpenAI's recommendation that banks start 'Phasing out voice based authentication as a security measure for accessing bank accounts and other sensitive information.'

Text-based communication is also transformed by GenAI. The rise of LLMs allows malicious actors to operate at near-native-speaker level, with outputs that can be trained on regional dialects for even greater fluency. This opens the door to new markets for social engineering attacks, with language no longer a blocker when selecting targets.

Bringing order to unstructured OSINT with GenAI

If someone has ever been online, they'll have left a digital footprint somewhere. Depending on what they share, this can sometimes reveal enough information to impersonate them or compromise their identity. They may share their birthday on Facebook, post their place of employment on LinkedIn, and put pictures of their home, family, and life on Instagram. These actions offer ways to build up profiles for social engineering attacks on the individuals and the organizations they're connected to. In the past, gathering all this information would be a long and manual process.
Searching each social media channel, trying to join the dots between people's posts and public information. Now, AI can do all this at hyperspeed, scouring the internet for unstructured data to retrieve, organize, and classify all possible matches. This includes facial recognition systems, where it's possible to upload a photo of someone and let the search engine find all the places they appear online. What's more, because the information is publicly available, it's possible to access and aggregate it anonymously. Even when using paid-for GenAI tools, stolen accounts are for sale on the dark web, giving attackers another way to hide their activity, usage, and queries.

Turning troves of data into troves of treasure

Large-scale data leaks are a fact of modern digital life, from the over 533 million Facebook users who had details (including birthdays, phone numbers, and locations) compromised in 2021, to the more than 3 billion Yahoo users whose sensitive information was exposed in its 2013 breach. Of course, manually sifting through these data troves isn't practical or possible. Instead, people can now harness GenAI tools to autonomously sort through high volumes of content. These can find any data that could be used maliciously, such as for extortion, weaponizing private discussions, or stealing intellectual property hidden in documents. The AI also maps the creators of the documents (using a form of Named Entity Recognition; a minimal illustration appears at the end of this article) to establish any incriminating connections between different parties, including wire transfers and confidential discussions.

Many tools are open source, allowing users to customize them with plugins and modules. For example, Recon-ng can be configured for use cases such as email harvesting and OSINT gathering. Other tools aren't for public use, such as Red Reaper. This is a form of espionage AI, capable of sifting through hundreds of thousands of emails to detect sensitive information that could be used against organizations.

The GenAI genie is out of the bottle - is your business exposed?

Attackers can now use the internet as a database. They just need a piece of data as a starting point, such as a name, email address, or image. GenAI can then get to work, running real-time queries to mine, uncover, and process connections and relationships. Then it's about choosing the appropriate tool for exploits, often at scale and running autonomously, whether that's deepfake videos and voice cloning, or LLM-based conversation-driven attacks. Previously, these attacks would have been limited to a select group of specialists with the necessary knowledge. Now, the landscape is democratized by the rise of 'hacking as a service' that does much of the hard work for cybercriminals.

So how can you know what potentially compromising information is available about your organization? We've built a threat monitoring tool that tells you. It crawls every corner of the internet, letting you know what data is out there that could be exploited to build effective attack pretexts, so you can take action before an attacker gets to it first.
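To make the document-mapping step above concrete, here is a minimal sketch of Named Entity Recognition in Python using the open-source spaCy library. The sample documents, names, and organization are invented for illustration; this is not the code behind any specific attack tool, just the basic technique under simple assumptions.

```python
# Minimal sketch of Named Entity Recognition (NER) over a document set,
# in the spirit of the entity mapping described above. Illustrative only:
# the documents and entities below are made up.
import itertools
from collections import Counter

import spacy  # pip install spacy && python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")  # small general-purpose English model

documents = [
    "Alice Bergman approved the wire transfer to Acme Holdings on Friday.",
    "Per our confidential discussion, Bob Tran will brief Acme Holdings.",
]

# Count how often each (PERSON, ORG) pair appears in the same document --
# a crude proxy for "who is connected to which organization".
pair_counts = Counter()
for text in documents:
    doc = nlp(text)
    people = {ent.text for ent in doc.ents if ent.label_ == "PERSON"}
    orgs = {ent.text for ent in doc.ents if ent.label_ == "ORG"}
    pair_counts.update(itertools.product(sorted(people), sorted(orgs)))

for (person, org), n in pair_counts.most_common():
    print(f"{person} <-> {org}: {n} shared document(s)")
```

Running this kind of pass over an organization's own public documents is one way for defenders to preview the connections an attacker's tooling could surface.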
[2]
AI-Powered Social Engineering: Reinvented Threats
The foundations of social engineering attacks - manipulating humans - might not have changed much over the years. It's the vectors - how these techniques are deployed - that are evolving. And, like most industries these days, AI is accelerating that evolution. This article explores how these changes are impacting business, and how cybersecurity leaders can respond.

Impersonation attacks: using a trusted identity

Traditional forms of defense were already struggling to solve social engineering, the 'cause of most data breaches' according to Thomson Reuters. The next generation of AI-powered cyber attacks and threat actors can now launch these attacks with unprecedented speed, scale, and realism.

The old way: Silicone masks

By impersonating a French government minister, two fraudsters were able to extract over €55 million from multiple victims. During video calls, one would wear a silicone mask of Jean-Yves Le Drian. To add a layer of believability, they also sat in a recreation of his ministerial office, with photos of the then-President François Hollande. Over 150 prominent figures were reportedly contacted and asked for money for ransom payments or anti-terror operations. The biggest transfer made was €47 million, when the target was urged to act because of two journalists supposedly held in Syria.

The new way: Video deepfakes

Many of the requests for money failed. After all, silicone masks can't fully replicate the look and movement of skin on a person. AI video technology offers a new way to step up this form of attack. We saw this last year in Hong Kong, where attackers created a video deepfake of a CFO to carry out a $25 million scam. They invited an employee to a videoconference call, where the deepfake CFO persuaded them to make the multi-million-dollar transfer to the fraudsters' account.

Live calls: voice phishing

Voice phishing, often known as vishing, uses live audio to build on the power of traditional phishing, where people are persuaded to give up information that compromises their organization.

The old way: Fraudulent phone calls

The attacker may impersonate someone, perhaps an authoritative figure or someone from another trustworthy background, and make a phone call to a target. They add a sense of urgency to the conversation, requesting that a payment be made immediately to avoid negative outcomes such as losing access to an account or missing a deadline. Victims lost a median of $1,400 to this form of attack in 2022.

The new way: Voice cloning

Traditional vishing defense recommendations include asking people not to click on links that come with requests, and to call the person back on an official phone number. It's similar to the Zero Trust approach of Never Trust, Always Verify. Of course, when the voice comes from someone the person knows, it's natural for trust to bypass any verification concerns. That's the big challenge with AI: attackers now use voice cloning technology, often built from just a few seconds of a target speaking. In one case, a mother received a call from someone who'd cloned her daughter's voice, claiming she'd been kidnapped and demanding a $50,000 ransom.

Phishing email

Most people with an email address have been a lottery winner. At least, they've received an email telling them that they've won millions, perhaps with a reference to a King or Prince who needs help to release the funds in return for an upfront fee.

The old way: Spray and pray

Over time these phishing attempts have become far less effective, for multiple reasons.
They're sent in bulk with little personalization and lots of grammatical errors, and people are more aware of '419 scams' and their requests to use specific money transfer services. Other versions, such as fake login pages for banks, can often be blocked by web browsing protection and spam filters, along with educating people to check the URL closely (a technique partly automated in the sketch at the end of this article). However, phishing remains the biggest form of cybercrime. The FBI's Internet Crime Report 2023 found phishing/spoofing was the source of 298,878 complaints. To give that some context, the second-highest category (personal data breach) registered 55,851 complaints.

The new way: Realistic conversations at scale

AI allows threat actors to produce word-perfect lures by harnessing LLMs, instead of relying on basic translations. They can also use AI to launch these at multiple recipients at scale, with customization enabling the more targeted form of spear phishing. What's more, they can use these tools in multiple languages, opening the door to a wider number of regions, where targets may not be as aware of traditional phishing techniques and what to check. The Harvard Business Review warns that 'the entire phishing process can be automated using LLMs, which reduces the costs of phishing attacks by more than 95% while achieving equal or greater success rates.'

Reinvented threats mean reinventing defenses

Cybersecurity has always been an arms race between defense and attack. But AI has added a different dimension. Now, targets have no way of knowing what's real and what's fake when an attacker is trying to manipulate what they see, hear, and trust. These are essential parts of human nature and instinct that have evolved over thousands of years, and they can't evolve at the same speed as malicious actors' methods or the progress of AI. Traditional forms of awareness training, with online courses and question-and-answer sessions, aren't built for this AI-powered reality. That's why part of the answer - especially while technical protections are still catching up - is to make your workforce experience simulated social engineering attacks. Your employees might not remember what you say about defending against a cyber attack, but they will remember how a simulated one makes them feel, so that when a real attack happens, they're aware of how to respond.
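The advice above to 'check the URL closely' can be partly automated. Below is a minimal sketch, assuming a small hypothetical allowlist of known-good domains, that flags lookalike domains by string similarity. Real browsing protection and spam filters combine far more signals (domain age, reputation, punycode checks), so treat this as an illustration of the idea rather than a production filter.

```python
# Minimal sketch of a lookalike-domain check. The allowlist and the
# similarity threshold are illustrative assumptions, not a real product.
from difflib import SequenceMatcher
from urllib.parse import urlparse

KNOWN_GOOD = {"paypal.com", "microsoft.com", "google.com"}  # hypothetical allowlist

def is_suspicious(url: str, threshold: float = 0.8) -> bool:
    """Flag domains that are close to, but not exactly, a known-good domain."""
    domain = (urlparse(url).hostname or "").removeprefix("www.")
    if domain in KNOWN_GOOD:
        return False
    return any(
        SequenceMatcher(None, domain, good).ratio() >= threshold
        for good in KNOWN_GOOD
    )

print(is_suspicious("https://www.paypa1.com/login"))  # True  (one-character swap)
print(is_suspicious("https://www.paypal.com/login"))  # False (exact match)
print(is_suspicious("https://example.org/"))          # False (not similar)
```

Even a check this crude catches the single-character swaps that AI-written lures now pair with otherwise flawless copy.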
[3]
AI-powered cyber threats demand enhanced security awareness for SMEs and supply chains
The cybersecurity landscape enters a new era of sophisticated threats in 2025. Already, AI is reshaping cyber-attack strategies and, in turn, defense mechanisms - from threat detection, automated incident response, and intelligent vulnerability management to data and infrastructure protection. In 2025, as organizations wrestle with the further evolution of the cyber-attack environment, the need for comprehensive security awareness training becomes increasingly crucial, especially in the face of mounting data breach costs and stricter regulations.

Greater adoption of more sophisticated and stealthier AI-powered phishing presents a significant cybersecurity challenge for small and medium enterprises (SMEs). Cybercriminals are leveraging AI to craft highly personalized attacks, using publicly available data and advanced language capabilities, making these scams increasingly difficult to detect. Their approach involves multi-stage attack chains in which initial communications appear innocent, gradually building trust before delivering malicious payloads. These cybercriminals are specifically targeting widely used platforms such as Microsoft 365 and Google Workspace, exploiting their inherent limitations for credential harvesting. Ransomware operators are also refining email as a delivery mechanism, using carefully obfuscated file attachments and links. Their tactics now include "hybrid ransomware" campaigns that combine traditional phishing techniques with highly refined social engineering to manipulate recipients into unwittingly downloading dangerous files. SMEs are particularly vulnerable due to their often limited cybersecurity resources, putting them at risk of becoming prime targets, not only for direct attacks but also as entry points for wider supply chain attacks against larger enterprises.

Mis-delivery-related data breaches are an escalating risk as organizations increasingly adopt AI-driven email drafting tools. Misdirected emails are already the most common cyber incident reported to the UK's Information Commissioner's Office (ICO) from a GDPR compliance standpoint. The widespread adoption of hybrid work models and the use of personal devices for work-related tasks exacerbate this risk, leading to misdirected emails, incorrect file attachments, and miscommunication. The integration of advanced email writing assistants, while undoubtedly boosting productivity, also introduces additional complexity through features that suggest recipients based on historical patterns. This automation, combined with the existing auto-complete and auto-correct features in popular email clients, significantly increases the risk of sensitive information being exposed to unintended recipients. The consequences of such accidental exposure of sensitive information are often costly and severe.
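One practical mitigation for this mis-delivery risk is a pre-send check on recipient domains. The sketch below is a minimal illustration: the organization domain, the partner allowlist, and the standalone function are all hypothetical, and a real data loss prevention control would hook into the mail client or gateway rather than run as a script.

```python
# Minimal sketch of a pre-send guard against misdirected email.
# "example.com" and the trusted-partner list are hypothetical.
INTERNAL_DOMAIN = "example.com"
TRUSTED_PARTNERS = {"partner.example.net"}  # hypothetical allowlist

def external_recipients(recipients: list[str]) -> list[str]:
    """Return recipients outside the organization and its trusted partners."""
    flagged = []
    for address in recipients:
        domain = address.rsplit("@", 1)[-1].lower()
        if domain != INTERNAL_DOMAIN and domain not in TRUSTED_PARTNERS:
            flagged.append(address)
    return flagged

to = ["alice@example.com", "bob@exarnple.com"]  # note the lookalike typo
if flagged := external_recipients(to):
    # In a real client this would trigger an "are you sure?" prompt before send.
    print("Warning: external or unrecognized recipients:", flagged)
```

A prompt like this adds one deliberate pause exactly where auto-complete would otherwise send sensitive material to the wrong address.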
The cybersecurity landscape in 2024 witnessed a noticeable increase in malware attacks levelled at corporate networks, leading to widely publicized data leaks and reputational damage for the organizations involved. Simultaneously, bad actors exploited supply chain vulnerabilities to infiltrate systems and cause severe disruptions, highlighting the far-reaching consequences of software integrity failures. As we move through 2025, cybercriminals are advancing their tactics by deploying AI-generated malware against both corporate networks and supply chain ecosystems. These tools are highly evasive and can bypass traditional detection methods, while also automating vulnerability scanning and phishing attempts.

The cost implications of data breaches have reached unprecedented levels, with the global average now estimated at $4.88 million per incident. Human error continues to be the primary factor in successful breaches, as cybercriminals successfully exploit the most advanced technologies currently available to breach organizations and cause chaos.

In response to this continuously intensifying situation, regulation is becoming more demanding. The EU AI Act has already taken effect, bringing significant implications for organizations using AI in their operations, including for cybersecurity and privacy. In the United States, many states are either enforcing or enacting data privacy laws in 2025, all focusing on the collection, use, and disclosure of personal data. These laws impose various obligations on businesses, including data protection, breach notification, and consumer rights.

As we navigate 2025, the combination of rapidly evolving technology, sophisticated cyber threats, and an increasingly strict regulatory environment emphasizes the vital need for enhanced security awareness and training across the board. Technological solutions, of course, remain crucial in defending against cyber-attacks, and security professionals are responding with proactive and innovative defensive strategies, including seamlessly integrating zero-trust architecture, embedding AI-powered tools, and implementing rigorous software development practices into their operational workflows. However, given the stealthy nature of the bad actors, heightened employee vigilance and understanding of the threat landscape have become ever more indispensable components of effective cybersecurity risk mitigation and regulatory compliance.

Organizations, especially SMEs, must recognize that investing in comprehensive, up-to-date security awareness training is no longer optional, but a fundamental requirement for survival in today's cybersecurity threat landscape. This training needs to be in line with the latest adult learning trends and best practices: learner motivation, high engagement, and a focus on information retention are essential to preparing employees to face today's threats. Without them, the training will prove inadequate. Employees must be made aware of the latest AI threats, including AI-based phishing, and of how these attacks can lead to the leakage of confidential information and compliance failures. Phishing simulation campaigns must better reflect this new AI threat reality. For instance, email phishing templates could be designed to mirror real-life attacks, with fewer grammatical and other obvious errors, to better prepare employees for these scenarios (a minimal sketch of such a campaign script appears below). In addition to courses, security awareness programs should include further options to reinforce the training in the workplace, such as posters, digital signage, and cybersecurity events. Any opportunity to make employees more aware and better prepared will have an impact on de-risking the organization.

Security teams would also do well to define and implement an AI policy in addition to their broader information security policies. Employees must understand the policies and procedures that align with their organization's AI security strategy.
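To ground the phishing-simulation recommendation above, here is a minimal sketch of how an internal awareness team might send a consented simulation email carrying a per-recipient tracking ID. The mail host, addresses, and template are hypothetical; in practice, purpose-built simulation platforms (such as the open-source GoPhish) handle scheduling, consent, click tracking, and reporting.

```python
# Minimal sketch of an internal phishing-simulation send, as recommended
# above. Everything here (host, addresses, template) is hypothetical;
# real programs run on purpose-built platforms with consent and opt-outs.
import smtplib
import uuid
from email.message import EmailMessage

SMTP_HOST = "mail.example.com"  # hypothetical internal relay

def send_simulation(recipient: str) -> str:
    """Send one simulated phish; returns the per-recipient tracking ID."""
    tracking_id = uuid.uuid4().hex  # ties later clicks back to this send
    msg = EmailMessage()
    msg["From"] = "it-support@example.com"  # internal look-alike sender
    msg["To"] = recipient
    msg["Subject"] = "Action required: password expires today"
    # Note the fluent, error-free copy, mirroring AI-written lures; the
    # link points at the organization's own training landing page.
    msg.set_content(
        "Your password expires in 2 hours. Review your account here:\n"
        f"https://training.example.com/landing?tid={tracking_id}\n"
    )
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)
    return tracking_id

tid = send_simulation("employee@example.com")
print("Sent simulation, tracking id:", tid)
```

Clicks on the landing page can then be matched back to tracking IDs, turning each simulation into a teachable moment rather than a disciplinary one.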
As AI continues to evolve and shape both offensive and defensive capabilities in cybersecurity, the human element remains both the greatest vulnerability and the strongest potential defense against emerging threats.
As AI technology advances, cybercriminals are leveraging it to create more sophisticated and personalized social engineering attacks, posing significant challenges for organizations, especially SMEs and supply chains.
In 2025, the cybersecurity landscape is witnessing a significant transformation as artificial intelligence (AI) reshapes both attack strategies and defense mechanisms. Social engineering, a long-standing threat, has evolved rapidly with the advent of generative AI, offering bad actors new tools and techniques for exploiting organizations [1].
Traditional social engineering methods often involved impersonating known individuals through email or voice calls. However, AI has dramatically improved the realism of these attacks. Deepfake videos now allow adversaries to create convincing visual impersonations, while voice cloning technology enables attackers to mimic any voice for vishing (voice phishing) attacks [2].
AI has revolutionized Open Source Intelligence (OSINT) gathering. Malicious actors can now use AI to rapidly collect and analyze vast amounts of unstructured data from social media and other online sources. This allows for the creation of detailed profiles for targeted attacks. AI tools can also sift through large-scale data leaks, identifying sensitive information that could be used for extortion or intellectual property theft [1].
AI-powered phishing has become more sophisticated and harder to detect. Large Language Models (LLMs) enable attackers to craft highly personalized and grammatically correct phishing emails in multiple languages. This has opened up new markets for social engineering attacks and increased the success rates of phishing campaigns [2].
Small and medium enterprises (SMEs) are particularly vulnerable to these advanced threats due to their often limited cybersecurity resources. Cybercriminals are specifically targeting widely used platforms like Microsoft 365 and Google Workspace for credential harvesting. Moreover, SMEs are increasingly being used as entry points for wider supply chain attacks against larger enterprises [3].
Human error continues to be a primary factor in successful breaches. The adoption of AI-driven email drafting tools has increased the risk of mis-delivery-related data breaches. Features that suggest recipients based on historical patterns, combined with auto-complete and auto-correct functions, significantly increase the risk of sensitive information being exposed to unintended recipients [3].
In response to these evolving threats, regulations are becoming more demanding. The EU AI Act has taken effect, bringing significant implications for organizations using AI in their operations. In the United States, many states are enforcing or enacting data privacy laws focusing on the collection, use, and disclosure of personal data [3].
As AI-powered threats continue to evolve, organizations must recognize that investing in comprehensive, up-to-date security awareness training is no longer optional. This is particularly crucial for SMEs and companies involved in complex supply chains. Training needs to align with the latest adult learning trends, focusing on high engagement and information retention to effectively prepare employees for today's sophisticated threats [3].
References
[1] AI-Powered Social Engineering: Ancillary Tools and Techniques
[2] AI-Powered Social Engineering: Reinvented Threats
[3] AI-powered cyber threats demand enhanced security awareness for SMEs and supply chains