Threat actors now use AI agents to manage attack infrastructure and accelerate cyberattacks

Reviewed by Nidhi Govil


Microsoft reveals that cybercriminals and nation-state hackers are deploying AI agents to automate reconnaissance, manage attack infrastructure, and scale malicious operations. North Korea's Coral Sleet group is using development platforms to rapidly create and control attack systems, while AI tools lower technical barriers for less sophisticated criminals.

AI in Cyberattacks Transforms Criminal Operations

Cybercriminals and nation-state hackers are increasingly deploying AI agents to outsource what Sherrod DeGrippo, Microsoft's GM of global threat intelligence, describes as "janitorial-type work" in cyberattacks [1]. This shift represents a fundamental change in how threat actors operate, with artificial intelligence functioning as a force multiplier that reduces technical friction and accelerates execution across every stage of malicious campaigns [2].

Source: BleepingComputer

According to Microsoft Threat Intelligence, attackers are leveraging generative AI tools for reconnaissance and phishing, infrastructure development, malware creation, and post-compromise activities [2]. The technology enables criminals to draft phishing lures, translate content, summarize stolen data, debug malware, and assist with scripting or infrastructure configuration, tasks that previously required significant time and technical expertise.

North Korea's AI-Powered Attack Infrastructure

Microsoft has observed North Korea's Coral Sleet group, one of the crews behind fake IT worker schemes, using development platforms to quickly create and manage attack infrastructure at scale [1]. This capability allows more rapid campaign staging, testing, and command-and-control operations. DeGrippo explained that agentic AI enables attackers to "talk to your malicious infrastructure with natural language and convey your ideas just by expressing them" [1].

The report also identifies Jasper Sleet, another North Korean group tracked by Microsoft, as incorporating AI into remote IT worker schemes, where AI tools help generate realistic identities, resumes, and communications to gain employment at Western companies [2].

Automated Reconnaissance Accelerates Threat Operations

Agentic, automated reconnaissance against systems represents a significant evolution in how threat actors gather intelligence. "Go find out about XYZ, and come back to me with everything you've seen. Go scan the net blocks owned by this particular entity," DeGrippo said, describing typical commands an attacker might give an AI agent [1]. While attackers could perform these tasks manually, AI agents complete them far more quickly, exemplifying how AI can serve both legitimate business purposes and malicious objectives.

AI Agents Lower Barriers for Less Sophisticated Criminals

The ability to manage attack infrastructure through natural language interfaces and automate cyberattacks saves attackers time and effort while lowering barriers for less technically savvy criminals, particularly when building infrastructure that evades detection [1]. "Threat actors will do what works, and they will do what gets them their objective easiest and fastest," DeGrippo noted, adding that "handing threat actors these really powerful tools is going to allow them to do more of that" [1].

Source: The Register

When AI safeguards attempt to prevent malicious use, nation-state hackers and cybercriminals are employing jailbreaking techniques to trick large language models (LLMs) into generating malicious code or content [2].

Malware Development Gets AI Assistance

While AI-generated malware still exhibits distinctive characteristics that human analysts can identify, DeGrippo told The Register that the more sophisticated use case involves malware that can call different AI functions and libraries [1]. "Anyone developing any kind of code is thinking about how to use an AI assistant to do that," she explained, whether they're building legitimate applications or malware intended for theft or espionage [1].

Some malware experiments show signs of AI-enabled capabilities that dynamically generate scripts or modify behavior at runtime [2]. Microsoft researchers have also begun observing threat actors experimenting with agentic AI to perform tasks autonomously and adapt to results, though AI is currently used primarily to support decision-making rather than to run fully autonomous attacks [2].

Defending Against AI-Powered Threats

Because many IT worker campaigns rely on abuse of legitimate access, Microsoft advises organizations to treat these schemes as insider risks [2]. As these AI-powered attacks mirror conventional cyberattacks, defenders should focus on detecting abnormal credential use, hardening identity systems against phishing lures, and securing AI systems that may become targets in future attacks [2]. The trend extends beyond Microsoft's observations, with Google recently reporting similar abuse of Gemini AI across all attack stages, and Amazon documenting campaigns using multiple generative AI services to breach more than 600 FortiGate firewalls [2].


TheOutpost.ai


© 2026 Triveous Technologies Private Limited