Curated by THEOUTPOST
On Wed, 12 Feb, 8:03 AM UTC
2 Sources
[1]
5 sneaky ways hackers are utilizing generative AI
Artificial Intelligence (AI) can be a force for good in our future, that much is obvious from the fact that it's being used to advance things like medical research. But what about it being a force for bad? The thought that somewhere out there, there's a James Bond-like villain in an armchair stroking a cat and using generative AI to hack your PC may seem like fantasy but, quite frankly, it's not.

Cybersecurity experts are already scrambling to thwart millions of threats from hackers who have used generative AI to hack PCs and steal money, credentials, and data, and, with the rapid proliferation of new and improved AI tools, it's only going to get worse. The types of cyberattacks hackers are using aren't necessarily new. They're just more prolific, sophisticated, and effective now that they have weaponized AI. Here's what to look out for...

Next time you see a pop-up, you may want to hit Ctrl-Alt-Delete real quick! Why? Because hackers are using AI tools to write malware like there's no tomorrow, and it's showing up in browsers. Security experts can tell when malware has been written by generative AI by looking at its code. Malware written by AI tools is quicker to make, can be better targeted against victims, and is more effective at bypassing security platforms than code written by hand, according to a paper in the journal Artificial Intelligence Review.

One example is malware discovered by HP's threat research team, which it highlights in its September 2024 Threat Insights Report. The company said it discovered malicious code hidden in an extension that hackers used to take over browser sessions and direct users to websites flogging fake PDF tools. The team also found SVG images harboring malicious code that could launch infostealer malware. The malware in question featured "native language and variables that were consistent with an AI generative tool," a clear indicator of its AI origin.

It's one thing to write malware with AI tools; it's quite another to keep it effective at bypassing security. Hackers know that cybersecurity companies move quickly to detect and block new malware, which is why they're using Large Language Models (LLMs) to obfuscate or slightly change it. AI can be used to blend code into known malware or create whole new variants that security detection systems won't recognize. Doing this is most effective against security software that recognizes known patterns of malicious activity, cybersecurity professionals say. In fact, it's actually quicker to do this than to create malware from scratch, according to Palo Alto Networks Unit 42 researchers.

The Unit 42 researchers demonstrated how this is possible. They used LLMs to generate 10,000 variants of known malicious JavaScript code, each with the same functionality as the original. These variants were highly successful at avoiding detection by machine learning detection models such as Innocent Until Proven Guilty (IUPG), the researchers found. They concluded that with enough code transformations it was possible for hackers to "degrade the performance of malware classification systems" enough to avoid detection.

Two other kinds of malware that hackers are using to evade detection are possibly even more alarming because of their smart capabilities. Dubbed "adaptive malware" and "dynamic malware payloads," these types evade security systems by learning and adjusting their coding, encryption, and behavior in real time, cybersecurity experts say.
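To see why this works so well against pattern-based defenses, here's a minimal, deliberately benign sketch (our own illustration in Python, not code from the Unit 42 research; the placeholder scripts and signatures are made up). A detector that relies on exact hashes or fixed substrings catches the one sample it knows, but a functionally identical rewrite slips straight past it, which is the weakness that LLM-generated variants exploit at scale:

```python
import hashlib
import re

# Two functionally identical, deliberately harmless placeholder scripts.
# The second is the kind of trivial rewrite (renamed identifiers, different
# keywords and quoting) that an LLM can churn out by the thousands.
ORIGINAL = 'function showAd() { var msg = "You won!"; alert(msg); }'
REWRITTEN = "function displayPromo() { const text = 'You won!'; alert(text); }"

# A caricature of signature-based detection: exact file hashes plus fixed substrings.
KNOWN_HASHES = {hashlib.sha256(ORIGINAL.encode()).hexdigest()}
KNOWN_PATTERNS = [re.compile(r"showAd"), re.compile(r'var msg = "You won!"')]

def flagged(script: str) -> bool:
    """Return True if the script matches a known hash or textual signature."""
    if hashlib.sha256(script.encode()).hexdigest() in KNOWN_HASHES:
        return True
    return any(pattern.search(script) for pattern in KNOWN_PATTERNS)

print(flagged(ORIGINAL))   # True: the known sample is caught
print(flagged(REWRITTEN))  # False: same behavior, but no signature matches
```

Behavior-based and machine learning classifiers fare better than this caricature, but, as the Unit 42 results show, even those can be degraded once attackers generate enough transformed variants.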
While these types predate LLMs and AI, generative AI is making them more responsive to their environments and therefore more effective, they explain.

AI software and algorithms are also being used to steal user passwords and logins and unlawfully access accounts more successfully, according to cybersecurity firms. Cybercriminals generally use three techniques to do this: credential stuffing, password spraying, and brute force attacks, and AI tools are useful for all three, they say. Predictive biometric algorithms are making it easier for hackers to spy on users as they type passwords, and in turn easier to break into large databases containing user information. Additionally, hackers deploy scanning and analysis algorithms to quickly map networks, identify hosts and open ports, and identify the software in operation in order to discover vulnerabilities.

Brute force attacks have long been a favorite of amateur hackers. This attack type involves bombarding a large number of companies or individuals with trial-and-error login attempts in the hope that just a few will succeed. Traditionally, only one in 10,000 attacks is successful thanks to the effectiveness of security software. But that software is becoming less effective due to the rise of password algorithms that can quickly analyze large data sets of leaked passwords and direct brute force attacks more effectively. Algorithms can also automate hacking attempts across multiple websites or platforms at once, cybersecurity experts warn.

Conventional generative AI tools like Gemini and ChatGPT, as well as their dark web counterparts like WormGPT and FraudGPT, are being used by hackers to mimic the language, tone, and writing styles of individuals, making social engineering and phishing attacks more personalized to victims. Hackers are also using AI algorithms and chatbots to harvest data from user social media profiles, search engines, and other websites (and directly from the victims themselves) to create dynamic phishing pitches based on an individual's location, interests, or responses. With AI modelling, hackers can even predict the likelihood that their hacks and scams will be successful. Again, this is another area where hackers are deploying smart bots that can learn from attacks and change their behavior to make future attempts more likely to succeed.

Phishing emails generated with AI software are more successful at fooling people, research shows. One reason is that they tend to contain fewer red flags, like grammatical errors or spelling mistakes, that give them away. Singapore's Government Technology Agency (GovTech) demonstrated this at the Black Hat USA cybersecurity convention in 2021, where it reported on an experiment in which spear phishing emails generated by OpenAI's GPT-3 and ones written by hand were sent to participants. The participants were much more likely to click on the AI-generated emails than the hand-written ones.

The use of generative AI for impersonation gets a little science-fictiony when you start talking about deepfake videos and voice clones. Even so, hackers are using AI tools to copy the likenesses and voices of people known to victims in videos and recordings (a technique known as voice phishing, or vishing, when audio is involved) in order to pull off their swindles.
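Because these credential attacks lean so heavily on previously leaked password data, one simple defender-side counterpart is to check whether a password already appears in known breach corpora before using it anywhere. The sketch below is our own illustration, not something described in the article's sources: it queries the free Pwned Passwords range API, which uses k-anonymity so only the first five characters of the password's SHA-1 hash ever leave your machine, and it assumes network access plus the third-party requests package.

```python
import hashlib

import requests  # third-party: pip install requests

def times_pwned(password: str) -> int:
    """Return how many times a password appears in known breach data (0 = not found).

    Only the first five hex characters of the SHA-1 hash are sent to the API
    (k-anonymity), so the password itself never leaves this machine.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    # The response is one "SUFFIX:COUNT" pair per line for every hash sharing the prefix.
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # A frequently reused password; breach-corpus analysis tools will try it early.
    print(times_pwned("password123"))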
One high-profile case happened in 2024, when a finance worker was conned into paying out $25 million to hackers who used deepfake video technology to pose as the company's chief financial officer and other colleagues. These aren't the only AI impersonation techniques, though. In our article "AI impersonators will wreak havoc in 2025. Here's what to watch out for," we cover eight ways AI impersonators are trying to scam you, so be sure to check it out for a deeper dive on the topic.
[2]
5 sneaky ways hackers use generative AI to scam you
Cybersecurity experts warn of the increasing use of generative AI by hackers to create more effective malware, bypass security systems, and conduct personalized phishing attacks, posing significant threats to individuals and organizations.
Cybersecurity experts are raising alarms about the increasing use of generative AI by hackers to create sophisticated malware. According to a paper in the Artificial Intelligence Review, AI-generated malware is quicker to produce, better targeted, and more effective at bypassing security platforms than manually written code 1. HP's threat research team discovered malicious code hidden in browser extensions and SVG images, featuring characteristics consistent with generative AI tools 1.
Large Language Models (LLMs) are being employed to obfuscate and modify existing malware, making it harder for security systems to detect. Palo Alto Networks Unit 42 researchers demonstrated this by using LLMs to create 10,000 variants of known malicious JavaScript code, which successfully evaded detection algorithms like Innocent Until Proven Guilty (IUPG) 1.
Even more concerning is the emergence of "adaptive malware" and "dynamic malware payloads." These AI-enhanced threats can learn and adjust their coding, encryption, and behavior in real time to bypass security systems. While these malware types are not entirely new, generative AI is making them more responsive and effective 1.
Cybercriminals are leveraging AI to enhance traditional hacking techniques such as credential stuffing, password spraying, and brute force attacks. Predictive biometric algorithms are making it easier for hackers to spy on users typing passwords, while AI-powered scanning and analyzing algorithms quickly map networks and identify vulnerabilities 1.
Brute force attacks, traditionally successful in only one out of 10,000 attempts, are becoming more effective due to AI algorithms that can analyze large datasets of leaked passwords and direct attacks more efficiently. These algorithms can also automate hacking attempts across multiple platforms simultaneously 1.
Mainstream AI tools like Gemini and ChatGPT, as well as dark web alternatives such as WormGPT and FraudGPT, are being used to create highly personalized phishing attacks. Hackers are employing these tools to mimic the language, tone, and writing styles of individuals, making their social engineering attempts more convincing 1.
AI algorithms and chatbots are also being used to harvest data from social media profiles, search engines, and other online sources to create dynamic phishing pitches tailored to an individual's location, interests, or responses. This level of personalization significantly increases the chances of successful scams 1.
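On the defensive side, even well-written and highly personalized phishing messages still tend to share structural tells, such as lookalike sender domains, urgency language, and link text that hides a different destination. The following heuristic sketch is purely our own illustration (made-up keyword list and scoring, with markdown-style links standing in for HTML anchors) and is no substitute for real email security tooling, but it shows the kind of signals automated filters and wary readers can look for:

```python
import re

# Toy keyword list and scoring, illustrative only, not a vetted ruleset.
URGENCY = ("act now", "urgent", "verify your account", "password expires", "wire transfer")

def phishing_score(sender: str, claimed_org: str, body: str) -> int:
    """Crude red-flag count: higher means the message deserves more suspicion."""
    score = 0
    body_lower = body.lower()
    # 1. Sender domain doesn't contain the organization the mail claims to be from.
    domain = sender.rsplit("@", 1)[-1].lower()
    if claimed_org.lower() not in domain:
        score += 1
    # 2. Urgency or payment-pressure language.
    score += sum(phrase in body_lower for phrase in URGENCY)
    # 3. Link text that points somewhere other than what it displays.
    for text, target in re.findall(r"\[([^\]]+)\]\((https?://[^)]+)\)", body):
        if text.replace("www.", "") not in target:
            score += 1
    return score

print(phishing_score(
    sender="it-support@secure-paypa1.example",
    claimed_org="paypal",
    body="URGENT: verify your account within 24 hours: [paypal.com](http://login.secure-paypa1.example)",
))  # prints 4: mismatched domain, two urgency phrases, and a deceptive link
```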
The rapid adoption of AI tools by hackers presents a significant challenge for cybersecurity professionals and ordinary users alike. As these AI-powered attacks become more sophisticated, traditional security measures may become less effective. This evolving threat landscape calls for increased vigilance, advanced security solutions, and ongoing education about emerging cyber risks 1.
Cybersecurity firms and researchers are working to develop AI-powered defenses to counter these threats, but the cat-and-mouse game between hackers and security experts is likely to intensify as AI technology continues to advance 1.
The misuse of AI for malicious purposes highlights the urgent need for regulations and ethical guidelines in AI development and deployment. As AI tools become more accessible, policymakers and tech companies must collaborate to establish frameworks that promote responsible AI use while mitigating potential harm 1.
In conclusion, while AI offers tremendous benefits across various fields, its potential for misuse in cybercrime cannot be ignored. As the threat landscape evolves, individuals and organizations must stay informed and adopt proactive measures to protect themselves against these AI-powered cyber threats.