Curated by THEOUTPOST
On Wed, 25 Sept, 12:07 AM UTC
6 Sources
[1]
Hackers are now using AI-generated code for malware attacks
Two separate attacks have been spotted using code that was probably written by AI. Software developers have embraced "artificial intelligence" language models for code generation in a big way, with huge gains in productivity but also some predictably dubious developments. It's no surprise that hackers and malware writers are doing the same. According to recent reports, several active malware attacks have been spotted with code that's at least partially generated by AI.
BleepingComputer chronicles multiple attacks using suspected AI-written code, with reports from Proofpoint and HP making the case that the malware was generated in ways that no longer require the technical expertise normally needed for large-scale malware attacks. You could call it the democratization of hacking. The attacks used fairly straightforward vectors (HTML, VBScript, and JavaScript), with code that was broader and less targeted. So these attacks are most effective when set up as a download hidden within a ZIP file or some other conventional delivery method. It's the kind of thing power users are already wary of -- or at least should be -- after decades of similar attacks long before AI-generated code emerged.
Complex and specifically targeted attacks, like the recent PKfail disaster, are probably beyond the reach of broad code generation like this for now. But there's still cause for concern. These tools could sharply increase the prevalence of simpler attacks on web users, requiring extra diligence from users (especially on Windows) and making virus and malware protection even more crucial. I'm more worried about the combination of skilled malware developers and AI generation tools: even if you can't train an AI to write brilliant code, a talented developer could use AI to automate their processes and become far more efficient. As always, keep those antivirus scanners up to date and don't download from unknown sources.
[2]
AI-written malware is here, and going after victims already
HP Wolf Security researchers claim to have found evidence that hackers are using Generative Artificial Intelligence (GenAI) tools to create malware and other malicious code. GenAI tools such as ChatGPT or Gemini are being used left and right to create convincing phishing emails, professional-looking landing pages, and the like, the researchers say, and the evidence for that is overwhelming. When it comes to spotting malware code written by machines, however, it's a different story: "To date there has been limited evidence of threat actors using GenAI tools to write code," HP said.
Whether HP is actually the first is hard to tell, as security firm Proofpoint made a similar claim back in April 2024 concerning a PowerShell malware strain. Regardless of the timing, HP says it identified a campaign targeting the French-speaking community with VBScript and JavaScript that was probably written with the help of GenAI. The researchers believe these findings are a big deal: "Speculation about AI being used by attackers is rife, but evidence has been scarce, so this finding is significant," commented Patrick Schläpfer, Principal Threat Researcher in the HP Security Lab. "Such capabilities further lower the barrier to entry for threat actors, allowing novices without coding skills to write scripts, develop infection chains, and launch more damaging attacks." That may be overstating it somewhat, since one would still need significant knowledge to pull off a malware campaign, but GenAI would definitely help. "The structure of the scripts, comments explaining each line of code, and the choice of native language function names and variables are strong indications that the threat actor used GenAI to create the malware," the researchers said. "The attack infects users with the freely available AsyncRAT malware, an easy-to-obtain infostealer which can record victim's screens and keystrokes. The activity shows how GenAI is lowering the bar for cybercriminals to infect endpoints."
[3]
HP Spots a Malware Attack That Was Likely Built With Generative AI
HP security researchers discovered the suspected AI use in June when the company's anti-phishing system, Sure Click, flagged an unusual email attachment meant for French-language users. The attachment contained an HTML file that asked the user to type in a password to open it. The researchers managed to "brute-force" the protection and guess the right password, which revealed that the HTML file produced a ZIP archive secretly containing a piece of malware known as AsyncRAT. AsyncRAT is an open-source remote access management tool that can easily be abused as malware; in this case, the hackers behind the email attachment used it to remotely control victims' computers. But while investigating the attack, HP's security researchers noticed something odd: the malicious code within the email attachment's JavaScript and in the ZIP archive -- the two components used to deliver the attack -- wasn't scrambled or obfuscated at all. Instead, the code was easily readable. "In fact, the attacker had left comments throughout the code, describing what each line does - even for simple functions," HP's report says. "Genuine code comments in malware are rare because attackers want to make their malware as difficult to understand as possible." The comments also suggest that generative AI developed the code used to deliver the AsyncRAT malware. That's because chatbot programs such as OpenAI's ChatGPT and Google's Gemini will typically explain each line of code if you ask them to write a program. "Based on the scripts' structure, consistent comments for each function and the choice of function names and variables, we think it's highly likely that the attacker used GenAI to develop these scripts," HP's report adds. The company's findings arrive as other companies, including OpenAI and Microsoft, have also spotted state-sponsored hackers using generative AI to refine their phishing attacks and conduct research.
But using generative AI to develop actual malware is rare. In April, cybersecurity provider Proofpoint discovered a separate case of hackers possibly using generative AI to develop a PowerShell script to deliver malware. In a statement, HP security researcher Patrick Schläpfer said: "Speculation about AI being used by attackers is rife, but evidence has been scarce, so this finding is significant." The company's report adds that generative AI has the potential to "lower the bar" for cybercriminals to spread malware. But others, like Google's VirusTotal, are more skeptical, saying it's still hard to tell whether a malware attack can be traced to a generative AI program. "How do I know if you're copying the code from your neighbor, from [coding site] Stack Overflow, from some AI, it's very difficult to say," VirusTotal researcher Vicente Diaz said in May. "So it's already a hard question."
[4]
Hackers deploy AI-written malware in targeted attacks
In an email campaign targeting French users, researchers discovered malicious code believed to have been created with the help of generative artificial intelligence services to deliver the AsyncRAT malware. While cybercriminals have used generative AI technology to create convincing emails, government agencies have warned about the potential abuse of AI tools to create malicious software, despite the safeguards and restrictions that vendors have implemented. Suspected cases of AI-created malware have now been spotted in real attacks. Earlier this year, cybersecurity company Proofpoint discovered a malicious PowerShell script that was likely created using an AI system. As less technical malicious actors increasingly rely on AI to develop malware, HP security researchers found a malicious campaign in early June that used code commented in the same way a generative AI system would write it. HP Wolf Security reports that cybercriminals with lower technical skills are increasingly using generative AI to develop malware, with one example provided in the 'Threat Insights' report for Q2 2024. The phishing campaign, which targeted French users, employed HTML smuggling to deliver a password-protected ZIP archive containing VBScript and JavaScript code. After brute-forcing the password, the researchers analyzed the code and found "that the attacker had neatly commented the entire code," something that rarely happens with malware, because threat actors want to hide how it works. The VBScript established persistence on the infected machine, creating scheduled tasks and writing new keys to the Windows Registry.
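HTML smuggling, as used in this campaign, hides the payload inside the HTML file itself and reconstructs it with script once the page is opened, so nothing suspicious crosses the network as a separate download. A defender-side sketch of the idea, using only the Python standard library, is a crude scan for its two tell-tale ingredients; the threshold and indicator strings here are illustrative, not a production signature:

```python
import re

# A large run of base64 characters (the embedded payload) ...
BASE64_BLOB = re.compile(r"[A-Za-z0-9+/]{500,}={0,2}")
# ... plus script APIs commonly used to decode it and drop a file client-side.
DROPPER_APIS = ("atob(", "new Blob(", "msSaveOrOpenBlob", "URL.createObjectURL")

def looks_like_html_smuggling(html: str) -> bool:
    """Heuristic only: benign pages also inline base64 (e.g. images),
    so real detection needs detonation in a sandbox, not string matching."""
    has_blob = BASE64_BLOB.search(html) is not None
    has_api = any(api in html for api in DROPPER_APIS)
    return has_blob and has_api
```

Requiring both signals together keeps pages that merely inline a base64 image from being flagged, at the cost of missing smuggling that obfuscates its decoder.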
The researchers note that indicators pointing to AI-generated malicious code include the structure of the scripts, the comments explaining each line, and the choice of the native language for function names and variables. In its later stages, the attack downloads and executes AsyncRAT, an open-source and freely available malware that can log keystrokes on the victim's machine and provide an encrypted connection for remote monitoring and control. The malware can also deliver additional payloads. The HP Wolf Security report also highlights that, based on its visibility, archives were the most popular delivery method in the first half of the year. Generative AI can help lower-level threat actors write malware in minutes and customize it for attacks targeting various regions and platforms (Linux, macOS). Even when they are not using AI to build fully functional malware, hackers are relying on the technology to speed up their work when creating more advanced threats.
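One of those indicators, the unusually dense commenting, is easy to quantify. A minimal sketch of the idea follows; the function and its default markers are our own illustration, not HP's methodology, and a high ratio on its own proves nothing without the other signals:

```python
def comment_density(script: str, markers: tuple[str, ...] = ("'", "//", "#")) -> float:
    """Fraction of non-empty lines containing a line comment.

    The default markers cover VBScript ('), JavaScript (//), and
    shell/Python (#). Near-total comment coverage was one of the
    signals HP's analysts associated with GenAI-written scripts.
    Note the crude matching: a marker inside a string literal also counts.
    """
    lines = [ln.strip() for ln in script.splitlines() if ln.strip()]
    if not lines:
        return 0.0
    commented = sum(1 for ln in lines if any(m in ln for m in markers))
    return commented / len(lines)
```

Hand-written malware typically scores near zero here, since authors strip comments precisely to slow down analysis.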
[5]
Fears realised? Gen AI being used to create malware, confirms HP security report
An ever-improving generative artificial intelligence (AI) is all fun and games till it isn't. Deepfakes and realistic-looking morphed content are problems the world is grappling with, and we can now add online security threats to that list. HP Inc., in its latest Threat Insights Report released at the company's annual HP Imagine keynote, suggests generative AI is being deployed to help write malicious code. HP's threat research team detected an instance of this -- what they call a large and refined ChromeLoader campaign spread through 'malvertising' that leads to professional-looking rogue PDF tools. They also logged instances of cybercriminals embedding malicious code in SVG images. While the threat of AI being used to create malware isn't new, with some instances documented previously, HP's researchers are worried about the acceleration in the creation of malware. "Threat actors have been using generative artificial intelligence (GenAI) to create convincing phishing lures for some time, but there has been limited evidence of attackers using this technology to write malicious code in the wild. In Q2, however, the HP Threat Research team identified a malware campaign spreading AsyncRAT using VBScript and JavaScript that was highly likely to have been written with the help of GenAI," says the report. "The activity shows how GenAI is accelerating attacks and lowering the bar for cybercriminals to infect endpoints," the researchers point out. ChromeLoader, as it is called, refers to a family of web browser malware that enables attackers to take over a computing device's browsing session and redirect searches to their own websites. "In Q2, ChromeLoader campaigns were larger and more polished, relying on malvertising to direct victims to websites offering productivity tools like PDF converters," say the researchers.
The applications these browsing sessions were directed to hid malicious code, alongside what appear to be valid code-signing certificates that helped the malware bypass Windows security policies. The HP Threat Insights Report says that in just the second quarter of this year, as many as 12% of all threats delivered via email managed to evade the gateway security used by businesses and enterprises for their networks and workstations. Cybercriminals used as many as 122 file formats to deliver malware to devices, including PDF files as well as Scalable Vector Graphics (SVG), which are widely used in graphic design as well as in web layouts. Though .exe remains the most popular extension for malware (39%), other formats being increasingly used include .pdf, .rar, .zip, .docx, .gz, and .img. In terms of delivery method, email remained the top vector for getting malware onto endpoints (61% of threats), up 8% compared with the threat landscape in Q1. Malicious web browser downloads declined by 7% to make up 18% of threats in Q2. Earlier this year, HT had reported that Large Language Models (LLMs), which sit at the very core of generative AI's utility, are being used by threat actors to generate phishing attacks, malware, and deepfakes. It is no longer possible to treat consumers and enterprises as separate streams, as we often do with technology and solutions, since generative AI has blurred those lines: similar toolsets, such as Google Gemini and Microsoft's Copilot, are available to consumers and enterprise subscribers alike, and any improvements to LLMs for enterprise and cloud systems will benefit consumers too. Banks and payment platforms are worried as well, and they are increasingly relying on AI solutions to counter the threat of sophisticated malware.
"The integration of AI and machine learning has further increased the complexity of cyberattacks. Cybercriminals can now leverage these technologies to automate tasks, enhance their evasion techniques, and develop customised malware," Joy Sekhri, Vice President for Cyber & Intelligence Solutions for South Asia at Mastercard, explained to us. Manish Agrawal, HDFC Bank's head of credit intelligence and control, told HT that every credit card transaction is monitored by AI, and any unusual patterns or swipes at known-dodgy or unknown merchants are flagged for human intervention. The next steps include blocking transactions and contacting the cardholder.
[6]
HP Wolf Security Uncovers Evidence of Attackers Using AI to Generate Malware
Latest report points to AI use in creating malware scripts, threat actors relying on malvertising to spread rogue PDF tools, and malware embedded in image files:
- HP threat researchers identified a campaign targeting French-speakers using malware believed to have been written with the help of GenAI.
- The malware's structure, comments explaining each line of code, and native language function names and variables all indicate the threat actor used GenAI to create the malware.
- The activity shows how GenAI is accelerating attacks and lowering the bar for cybercriminals to infect endpoints.
- HP also found ChromeLoader campaigns are getting bigger and more polished, using malvertising to direct victims to well-designed websites offering fake tools like PDF converters.
- Installing the fake applications, delivered as MSI files, causes malicious code to run on endpoints.
- The malware loads a browser extension that enables attackers to take over the victim's browsing session and redirect searches to attacker-controlled sites.
- Another campaign showed some cybercriminals are bucking the trend by shifting from HTML files to SVG vector images to smuggle malware.
At HP Imagine, HP Inc. (NYSE: HPQ) today issued its latest Threat Insights Report revealing how attackers are using generative AI to help write malicious code. HP's threat research team found a large and refined ChromeLoader campaign spread through malvertising that leads to professional-looking rogue PDF tools, and identified cybercriminals embedding malicious code in SVG images. The report provides an analysis of real-world cyberattacks, helping organizations to keep up with the latest techniques cybercriminals are using to evade detection and breach PCs in the fast-changing cybercrime landscape.
Based on data from millions of endpoints running HP Wolf Security, notable campaigns identified by HP threat researchers include:
- Generative AI assisting malware development in the wild: Cybercriminals are already using GenAI to create convincing phishing lures, but to date there has been limited evidence of threat actors using GenAI tools to write code. The team identified a campaign targeting French-speakers using VBScript and JavaScript believed to have been written with the help of GenAI. The structure of the scripts, comments explaining each line of code, and the choice of native language function names and variables are strong indications that the threat actor used GenAI to create the malware. The attack infects users with the freely available AsyncRAT malware, an easy-to-obtain infostealer which can record victim's screens and keystrokes. The activity shows how GenAI is lowering the bar for cybercriminals to infect endpoints.
- Slick malvertising campaigns leading to rogue-but-functional PDF tools: ChromeLoader campaigns are becoming bigger and increasingly polished, relying on malvertising around popular search keywords to direct victims to well-designed websites offering functional tools like PDF readers and converters. These working applications hide malicious code in an MSI file, while valid code-signing certificates bypass Windows security policies and user warnings, increasing the chance of infection. Installing these fake applications allows attackers to take over the victim's browsers and redirect searches to attacker-controlled sites.
- This logo is a no-go - hiding malware in Scalable Vector Graphics (SVG) images: some cybercriminals are bucking the trend by shifting from HTML files to vector images for smuggling malware. Vector images, widely used in graphic design, commonly use the XML-based SVG format. As SVGs open automatically in browsers, any embedded JavaScript code is executed as the image is viewed.
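The SVG risk comes down to browsers executing any script embedded in the image when it is opened directly. Checking a file for that is straightforward with Python's standard XML parser; this is a heuristic sketch of our own, not a complete scanner (obfuscated or externally referenced script would evade it):

```python
import xml.etree.ElementTree as ET

def svg_has_script(svg_text: str) -> bool:
    """Flag SVG images that embed executable content.

    Browsers run both <script> elements and on* event-handler attributes
    (e.g. onload) in an SVG viewed directly, which is what makes the
    format usable for smuggling. Checks are namespace-agnostic.
    """
    root = ET.fromstring(svg_text)
    for el in root.iter():
        tag = el.tag.rsplit("}", 1)[-1]  # strip any {namespace} prefix
        if tag == "script":
            return True
        if any(attr.lower().startswith("on") for attr in el.attrib):
            return True
    return False
```

Mail gateways that treat SVG as "just an image" miss exactly this case, which is why the shift away from HTML attachments buys attackers time.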
While victims think they're viewing an image, they are interacting with a complex file format that leads to multiple types of infostealer malware being installed. [Figure: example of code likely written with the help of GenAI] [Figure: example of a fake PDF converter tool website, leading to ChromeLoader]
Patrick Schläpfer, Principal Threat Researcher in the HP Security Lab, comments: "Speculation about AI being used by attackers is rife, but evidence has been scarce, so this finding is significant. Typically, attackers like to obscure their intentions to avoid revealing their methods, so this behavior indicates an AI assistant was used to help write their code. Such capabilities further lower the barrier to entry for threat actors, allowing novices without coding skills to write scripts, develop infection chains, and launch more damaging attacks."
By isolating threats that have evaded detection tools on PCs - but still allowing malware to detonate safely - HP Wolf Security has specific insight into the latest techniques used by cybercriminals. To date, HP Wolf Security customers have clicked on over 40 billion email attachments, web pages, and downloaded files with no reported breaches. The report, which examines data from calendar Q2 2024, details how cybercriminals continue to diversify attack methods to bypass security policies and detection tools, such as:
- At least 12% of email threats identified by HP Sure Click bypassed one or more email gateway scanners, the same as the previous quarter.
- The top threat vectors were email attachments (61%), downloads from browsers (18%), and other infection vectors, such as removable storage - like USB thumb drives - and file shares (21%).
- Archives were the most popular malware delivery type (39%), 26% of which were ZIP files.
Dr. Ian Pratt, Global Head of Security for Personal Systems at HP Inc., comments: "Threat actors are constantly updating their methods, whether it's using AI to enhance attacks, or creating functioning-but-malicious tools to bypass detection. So, businesses must build resilience, closing off as many common attack routes as possible. Adopting a defense-in-depth strategy -- including isolating high-risk activities like opening email attachments or web downloads -- helps to minimize the attack surface and neutralize the risk of infection."
HP Wolf Security runs risky tasks in isolated, hardware-enforced virtual machines on the endpoint to protect users without impacting their productivity. It also captures detailed traces of attempted infections. HP's application isolation technology mitigates threats that can slip past other security tools and provides unique insights into intrusion techniques and threat actor behavior.
About the Data
This data was gathered from consenting HP Wolf Security customers from April-June 2024.
Cybersecurity experts have identified malware attacks using AI-generated code, marking a significant shift in the landscape of digital threats. This development raises concerns about the potential for more sophisticated and harder-to-detect cyberattacks.
In a concerning development for cybersecurity professionals, hackers have begun deploying malware created with the assistance of artificial intelligence (AI). This marks a significant evolution in the cybercrime landscape, potentially ushering in a new era of more sophisticated and challenging-to-detect digital threats [1].
Recent reports indicate that AI-generated malware has been used in targeted attacks against victims in France. This revelation underscores the global nature of the threat and the potential for geographically focused campaigns utilizing advanced AI technologies [2].
HP's threat intelligence team has identified a malware attack that appears to have been constructed using generative AI. This discovery provides concrete evidence of cybercriminals leveraging AI tools to enhance their malicious software capabilities. The attack arrived as a phishing email with a password-protected HTML attachment that unpacked a malicious ZIP archive, demonstrating the integration of AI-generated code with traditional attack vectors [3].
Notably, the AI-written code deployed in these attacks exhibited none of the obfuscation typical of malware: it was plainly readable, with comments explaining nearly every line. That unusual transparency is itself what tipped off researchers, since attackers normally scramble their code to make it as hard as possible to analyze [4].
The emergence of AI-generated malware represents a significant shift in the cybersecurity landscape. Experts warn that this development could lead to an increase in the volume and sophistication of cyberattacks. The ability of AI to rapidly generate and modify code may enable hackers to create malware variants at an unprecedented pace, potentially overwhelming existing defense mechanisms [5].
As the threat of AI-generated malware becomes more apparent, cybersecurity firms and researchers are ramping up efforts to develop countermeasures. This includes exploring the use of AI in defensive capacities to detect and neutralize AI-generated threats. However, the rapid evolution of AI technologies presents an ongoing challenge, necessitating continuous adaptation and innovation in cybersecurity strategies [1].
The discovery of AI-generated malware in active attacks serves as a wake-up call for individuals, businesses, and governments alike. It underscores the need for enhanced cybersecurity measures, increased awareness, and proactive strategies to combat this emerging threat landscape. As AI continues to advance, the cybersecurity community must remain vigilant and adaptive to stay ahead of malicious actors exploiting these powerful technologies.
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved