2 Sources
[1]
Manage attack infrastructure? AI agents can now help
Crims 'will do what gets them their objective easiest and fastest,' Microsoft threat intel boss tells The Reg

Interview AI agents allow cybercriminals and nation-state hackers to outsource the "janitorial-type work" needed to plan and carry out cyberattacks, according to Sherrod DeGrippo, Microsoft's GM of global threat intelligence. North Korea is taking advantage.

This includes tasks such as performing reconnaissance on compromised computers, and standing up and managing attack infrastructure. These jobs may not sound as thrilling as plotting and carrying out digital intrusions, but they are real-world criminal use cases for agentic AI that should make threat hunters sit up and take notice.

"Agentic, automated reconnaissance against systems is something that is worth taking a look at," DeGrippo said during an interview with The Register. "Go find out about XYZ, and come back to me with everything you've seen. Go scan the net blocks owned by this particular entity."

An attacker could do this manually, but it would take a lot more time than asking an agent to do it for them. It's "a great example of AI that can be used for regular, standard business purposes and can also be used by threat actors for malicious purposes," she said.

In a Friday blog, Microsoft says this is one of the ways miscreants are using AI to improve the efficiency and productivity of their criminal operations, resulting in attacks that are better, bigger, and faster.

Infrastructure management is another area where AI agents come in handy, DeGrippo said. "We have always seen threat actors stand up the infrastructure, whether that means compromising existing legitimate infrastructure and using it for malicious purposes, or purchasing accounts and setting up their own infrastructure to launch threat campaigns," she said.
Microsoft Threat Intelligence has observed North Korea's Coral Sleet, one of the crews behind the fake IT worker scam, using development platforms to quickly create and manage their attack infrastructure at scale, allowing more rapid campaign staging, testing, and command-and-control operations, according to the Friday blog.

"From an agentic AI use case, this is very interesting because you can talk to your malicious infrastructure with natural language and convey your ideas just by expressing them," DeGrippo said.

Both uses save attackers time and effort, and also lower barriers for less technically savvy criminals, especially when it comes to building infrastructure that won't be detected by defenders.

"Threat actors will do what works, and they will do what gets them their objective easiest and fastest," DeGrippo said. "And so handing threat actors these really powerful tools is going to allow them to do more of that."

While Microsoft's threat intel team and other security researchers have documented attackers using agentic AI to generate malware, agents' code-writing skills can't yet rival those of humans, DeGrippo told us. But, she added, there are two parts to this use case.

"When we detect AI-generated or AI-enabled malware, traditionally, we have noticed that it's different from regular malware," she said. "It does have those hallmarks that when a human looks at the code, they can say, 'I think this was AI generated.'"

The second part, which involves malware that can call different AI functions and libraries, is the more interesting, "and more sophisticated," use, according to DeGrippo.

"Anybody who has a software development background, regardless of if they're developing benign software or malicious software, is thinking about how to better enhance their workflows with AI," she said.
"It doesn't matter if you're building the next SaaS CRM application, a phone app to manage your kids' soccer games, or malware that's intended to steal money or do espionage. Anyone developing any kind of code is thinking about how to use an AI assistant to do that." ®
[2]
Microsoft: Hackers abusing AI at every stage of cyberattacks
Microsoft says threat actors are increasingly using artificial intelligence in their operations to accelerate attacks, scale malicious activity, and lower technical barriers across all aspects of a cyberattack.

According to a new Microsoft Threat Intelligence report, attackers are using generative AI tools for a wide range of tasks, including reconnaissance, phishing, infrastructure development, malware creation, and post-compromise activity. In many cases, AI is used to draft phishing emails, translate content, summarize stolen data, debug malware, and assist with scripting or infrastructure configuration.

"Microsoft Threat Intelligence has observed that most malicious use of AI today centers on using language models for producing text, code, or media. Threat actors use generative AI to draft phishing lures, translate content, summarize stolen data, generate or debug malware, and scaffold scripts or infrastructure," warns Microsoft. "For these uses, AI functions as a force multiplier that reduces technical friction and accelerates execution, while human operators retain control over objectives, targeting, and deployment decisions."

Microsoft has observed multiple threat groups incorporating AI into their cyberattacks, including North Korean actors tracked as Jasper Sleet (Storm-0287) and Coral Sleet (Storm-1877), who use the technology as part of remote IT worker schemes. In these operations, AI tools help generate realistic identities, resumes, and communications to gain employment at Western companies and maintain access once hired.

The report also describes how AI is being used to assist with malware development and infrastructure creation, with threat actors using AI coding tools to generate and refine malicious code, troubleshoot errors, or port malware components to different programming languages. Some malware experiments show signs of AI-enabled malware that dynamically generates scripts or modifies behavior at runtime.
Microsoft also observed Coral Sleet using AI to quickly generate fake company sites, provision infrastructure, and test and troubleshoot their deployments. When AI safeguards attempt to block these tasks, Microsoft says threat actors are using jailbreaking techniques to trick LLMs into generating malicious code or content.

In addition to generative AI use, Microsoft researchers have begun to see threat actors experiment with agentic AI to perform tasks autonomously and adapt to results. However, Microsoft says AI is currently used primarily to support human decision-making rather than for autonomous attacks.

Because many IT worker campaigns rely on the abuse of legitimate access, Microsoft advises organizations to treat these schemes and similar activity as insider risks. Furthermore, as these AI-powered attacks mirror conventional cyberattacks, defenders should focus on detecting abnormal credential use, hardening identity systems against phishing, and securing AI systems that may become targets in future attacks.

Microsoft is not alone in seeing threat actors increasingly using artificial intelligence to power attacks and lower barriers to entry. Google recently reported that threat actors are abusing Gemini AI across all stages of cyberattacks. Amazon and the Cyber and Ramen security blog also recently reported on a threat actor using multiple generative AI services as part of a campaign that breached more than 600 FortiGate firewalls.
Microsoft reveals that cybercriminals and nation-state hackers are deploying AI agents to automate reconnaissance, manage attack infrastructure, and scale malicious operations. North Korea's Coral Sleet group is using development platforms to rapidly create and control attack systems, while AI tools lower technical barriers for less sophisticated criminals.
Cybercriminals and nation-state hackers are increasingly deploying AI agents to outsource what Sherrod DeGrippo, Microsoft's GM of global threat intelligence, describes as "janitorial-type work" in cyberattacks [1]. This shift represents a fundamental change in how threat actors operate, with artificial intelligence functioning as a force multiplier that reduces technical friction and accelerates execution across every stage of malicious campaigns [2].
Source: BleepingComputer
According to Microsoft Threat Intelligence, attackers are leveraging generative AI tools for reconnaissance, phishing, infrastructure development, malware creation, and post-compromise activity [2]. The technology enables criminals to draft phishing lures, translate content, summarize stolen data, debug malware, and assist with scripting or infrastructure configuration, tasks that previously required significant time and technical expertise.

Microsoft has observed North Korea's Coral Sleet group, one of the crews behind fake IT worker schemes, using development platforms to quickly create and manage their attack infrastructure at scale [1]. This capability allows more rapid campaign staging, testing, and command-and-control operations. DeGrippo explained that agentic AI enables attackers to "talk to your malicious infrastructure with natural language and convey your ideas just by expressing them" [1].

The report also identifies Jasper Sleet, another North Korean group tracked by Microsoft, as incorporating AI into remote IT worker schemes, where AI tools help generate realistic identities, resumes, and communications to gain employment at Western companies [2].

Agentic, automated reconnaissance against systems represents a significant evolution in how threat actors gather intelligence. "Go find out about XYZ, and come back to me with everything you've seen. Go scan the net blocks owned by this particular entity," DeGrippo said, describing typical commands an attacker might give an AI agent [1]. While attackers could perform these tasks manually, AI agents complete them far more quickly, exemplifying how AI can serve both legitimate business purposes and malicious objectives.

The ability to manage attack infrastructure through natural-language interfaces and automate cyberattacks saves attackers time and effort while lowering barriers for less technically savvy criminals, particularly when building infrastructure that evades detection [1]. "Threat actors will do what works, and they will do what gets them their objective easiest and fastest," DeGrippo noted, adding that "handing threat actors these really powerful tools is going to allow them to do more of that" [1].
Source: The Register
When AI safeguards attempt to prevent malicious use, nation-state hackers and cybercriminals are employing jailbreaking techniques to trick large language models (LLMs) into generating malicious code or content [2].

While AI-generated malware still exhibits distinctive characteristics that human analysts can identify, DeGrippo told The Register that the more sophisticated use case involves malware that can call different AI functions and libraries [1]. "Anyone developing any kind of code is thinking about how to use an AI assistant to do that," she explained, whether they're building legitimate applications or malware intended for theft or espionage [1].

Some malware experiments show signs of AI-enabled capabilities that dynamically generate scripts or modify behavior at runtime [2]. Microsoft researchers have also begun observing threat actors experiment with agentic AI to perform tasks autonomously and adapt to results, though AI is currently used primarily for decision-making rather than fully autonomous attacks [2].

Because many IT worker campaigns rely on abuse of legitimate access, Microsoft advises organizations to treat these schemes as insider risks [2]. As these AI-powered attacks mirror conventional cyberattacks, defenders should focus on detecting abnormal credential use, hardening identity systems against phishing, and securing AI systems that may become targets in future attacks [2]. The trend extends beyond Microsoft's observations: Google recently reported similar abuse of Gemini AI across all attack stages, and Amazon documented campaigns using multiple generative AI services to breach more than 600 FortiGate firewalls [2].

Summarized by
Navi