3 Sources
[1]
AI platforms can be abused for stealthy malware communication
AI assistants like Grok and Microsoft Copilot with web browsing and URL-fetching capabilities can be abused to intermediate command-and-control (C2) activity. Researchers at cybersecurity company Check Point discovered that threat actors can use AI services to relay communication between the C2 server and the target machine. Attackers can exploit this mechanism to deliver commands and retrieve stolen data from victim systems. The researchers created a proof-of-concept to show how it all works and disclosed their findings to Microsoft and xAI.

Instead of malware connecting directly to a C2 server hosted on the attacker's infrastructure, Check Point's idea was to have it communicate with an AI web interface, instructing the agent to fetch an attacker-controlled URL and receive the response in the AI's output.

In Check Point's scenario, the malware interacts with the AI service using the WebView2 component in Windows 11. The researchers say that even if the component is missing on the target system, the threat actor can deliver it embedded in the malware. WebView2 is used by developers to show web content in the interface of native desktop applications, eliminating the need for a full-featured browser.

The researchers created "a C++ program that opens a WebView pointing to either Grok or Copilot." This way, the attacker can submit instructions to the assistant that include commands to execute or information to extract from the compromised machine. The webpage responds with embedded instructions that the attacker can change at will, which the AI extracts or summarizes in response to the malware's query. The malware parses the AI assistant's response in the chat and extracts the instructions.

This creates a bidirectional communication channel via the AI service, which is trusted by internet security tools and can thus carry data exchanges without being flagged or blocked. Check Point's PoC, tested on Grok and Microsoft Copilot, does not require an account or API keys for the AI services, making traceability and infrastructure-level blocking less of a problem.

"The usual downside for attackers [abusing legitimate services for C2] is how easily these channels can be shut down: block the account, revoke the API key, suspend the tenant," explains Check Point. "Directly interacting with an AI agent through a web page changes this. There is no API key to revoke, and if anonymous usage is allowed, there may not even be an account to block."

The researchers note that safeguards exist on these AI platforms to block obviously malicious exchanges, but the safety checks can be bypassed by encrypting the data into high-entropy blobs. Check Point argues that using AI as a C2 proxy is just one of multiple options for abusing AI services; others could include operational reasoning, such as assessing whether the target system is worth exploiting and how to proceed without raising alarms.

BleepingComputer has contacted Microsoft to ask whether Copilot is still exploitable in the way demonstrated by Check Point and what safeguards could prevent such attacks. A reply was not immediately available, but we will update the article when we receive one.
[2]
'AI assistants are no longer just productivity tools; they are becoming part of the infrastructure that malware can abuse': Experts warn Copilot and Grok can be hijacked to spread malware
* Check Point warns GenAI tools can be abused as C2 infrastructure
* Malware can hide traffic by encoding data into attacker-controlled URLs via AI queries
* AI assistants may act as decision engines, enabling stealthy, adaptive malware operations

Hackers can use some generative artificial intelligence (GenAI) tools as command-and-control (C2) infrastructure, hiding malicious traffic in plain sight and even using them as decision-making engines, experts have warned. Research from Check Point claims Microsoft Copilot's and xAI Grok's web browsing capabilities can be leveraged for malicious activity, although some prerequisites remain.

Deploying malware on a device is only half the work. That malware still needs to be told what to do, and the results of those instructions still need to be sent out over the internet. Security solutions can pick up on this traffic and use it to determine whether a device is compromised, which is why "blending with legitimate traffic" is one of the key features of high-quality malware. Now, Check Point says there is a way to do that through AI assistants.

Harvesting sensitive data and getting further instructions

If a threat actor infects a device with malware, it can harvest sensitive data and system information, encode it, and insert it into a URL controlled by the attacker, for example http://malicious-site.com/report?data=12345678, where the data= part contains the sensitive information. Then, the malware can instruct the AI: "Summarize the contents of this website." Since this is legitimate AI traffic, it doesn't trigger any security alarms. However, the information gets logged on the attacker-controlled server, successfully relaying it in plain sight. To make matters worse, the website can respond with a hidden prompt that the AI executes.

The problem can escalate further if the malware asks the AI what to do next. For example, based on the system information it harvested, it can ask whether it is running on a high-value enterprise system or in a sandbox. If it's the latter, the malware can stay dormant; if not, it can proceed to stage two.

"Once AI services can be used as a stealthy transport layer, the same interface can also carry prompts and model outputs that act as an external decision engine, a stepping stone toward AI-Driven implants and AIOps-style C2 that automate triage, targeting, and operational choices in real time," Check Point concluded.
[3]
Using AI for Covert Command-and-Control Channels
* Check Point Research identified a potential future attack technique in which AI assistants with web-browsing capabilities could be abused as covert command-and-control (C2) channels.
* As AI services become widely adopted and implicitly trusted, their network traffic increasingly blends into normal enterprise activity, expanding the attack surface.
* AI-enabled C2 could allow attacker communications to evade traditional detection by hiding inside legitimate-looking AI interactions.
* The same building blocks point toward a broader shift to AI-driven malware, where AI systems influence targeting, prioritization, and operational decisions rather than serving only as development tools.

Check Point Research has identified a potential new abuse pattern: AI assistants with web-browsing capabilities could, in the future, be repurposed as covert command-and-control (C2) relays. While we have not observed threat actors exploiting this technique in active campaigns, the growing adoption of AI services expands the attack surface available to adversaries. In effect, AI services could be used as a proxy layer that hides malicious communication inside legitimate-looking AI traffic. More broadly, this research points to a growing shift toward AI-driven malware, where AI is no longer just a development aid but an active component of malware operations.

From AI-Assisted Attacks to AI-Driven Malware

AI has already lowered the barrier to entry for cybercrime. Attackers routinely use it to generate malware code, craft phishing messages, translate lures, write scripts, and summarize stolen data. These uses reduce cost and speed up operations, allowing even low-skill actors to execute more sophisticated campaigns.

The Change: Where AI Is Used

Decision making in AI-driven malware is no longer fully hardcoded. Instead of following a fixed sequence of instructions, malware can collect information about its environment and rely on AI output to decide what to do next. This may include determining whether a system is worth targeting, which actions to prioritize, how aggressively to operate, or when to remain dormant. The result is malware that behaves less like a script and more like an adaptive operator. This makes campaigns harder to predict, harder to model, and less reliant on the repeatable patterns that defenders typically detect.

AI Assistants as a Covert C2 Channel

Abusing legitimate cloud services for command and control is not new. Attackers have long hidden communications inside platforms such as email, cloud storage, and collaboration tools. The weakness of those approaches is also well known: accounts can be blocked, API keys revoked, and tenants suspended. AI assistants accessed through web interfaces change that equation.

Check Point Research demonstrated that AI platforms offering web-browsing or URL-fetch capabilities could be abused as intermediaries between malware and attacker-controlled infrastructure. By prompting an AI assistant to fetch and summarize content from a specific URL, malware can send data out and receive commands back without ever directly contacting a traditional C2 server.

[Figure: Proposed flow for malware using an AI webchat to communicate with a C2 server]

This technique was demonstrated in a controlled research setting against Grok and Microsoft Copilot, both of which allow web access through their interfaces. Crucially, the interaction can occur without API keys or authenticated user accounts, reducing the effectiveness of common takedown mechanisms.

From a network perspective, the traffic appears similar to normal AI usage. From an attacker's perspective, the AI service becomes a stealthy relay that blends into allowed enterprise communications.

Why This Matters Beyond One Technique

On its own, using AI assistants as a C2 proxy is a service-abuse technique. Its real significance lies in what it enables next. Once AI services can be used as a transport layer, they can also carry instructions, prompts, and decisions, not just raw commands. This opens the door to malware that relies on AI for operational guidance rather than static logic. Instead of embedding complex decision trees, malware could send a short description of the infected system, such as user context, environment indicators, or software profile, and receive guidance on how to proceed. Over time, this allows campaigns to adapt dynamically across victims without changing code.

This shift mirrors trends already seen in legitimate IT operations, where automation and AI-driven decision systems increasingly guide workflows. In malicious operations, the same ideas translate into AIOps-style command and control, where AI helps manage infections, prioritize targets, and optimize outcomes.

The Near-Future Impact of AI-Driven Attacks

While today's AI-driven malware remains largely experimental, there is one area where AI is likely to have a decisive impact: targeting and prioritization. Instead of encrypting everything, stealing everything, or spreading indiscriminately, future attacks could use AI to identify what actually matters. This may include determining whether a system belongs to a high-value user or organization, prioritizing sensitive files or databases, avoiding sandboxes and analysis environments, or reducing the noisy activity that typically triggers detection.

For ransomware and data-theft operations, this is particularly important. Many defensive tools rely on volume-based indicators, such as how fast files are encrypted or how much data is accessed. AI-driven targeting allows attackers to achieve impact with far fewer observable events, shrinking the window for detection.

A Shift Defenders Can't Ignore

This is not a traditional software vulnerability. It is a service-abuse problem rooted in how trusted AI platforms are integrated into enterprise environments. Any AI service that can fetch external content or browse the web inherits a degree of abuse potential. As AI becomes more embedded in daily workflows, defenders can no longer treat AI traffic as inherently benign.

Mitigations will require action on both sides. AI providers need stronger controls around web-fetch capabilities, clearer guardrails for anonymous usage, and better enterprise visibility. Defenders need to treat AI domains as high-value egress points, monitor for automated or abnormal usage patterns, and incorporate AI traffic into threat hunting and incident response.

Looking Ahead

Following responsible disclosure, Microsoft confirmed our findings and implemented changes to address the behavior in Copilot's web-fetch flow. From a defensive standpoint, organizations need visibility and control over AI-bound traffic. Check Point's AI Security leverages agentic AI capabilities to inspect and contextualize traffic to and from AI services and block malicious communication attempts before they can be abused as covert channels. As enterprises accelerate AI adoption, security controls must evolve in parallel to ensure that trusted AI platforms do not become blind spots in the network.
Cybersecurity researchers at Check Point discovered that AI assistants with web browsing capabilities can be abused as covert command-and-control channels. Threat actors can exploit Microsoft Copilot and Grok to relay malicious traffic disguised as legitimate AI queries, bypassing traditional security tools without requiring API keys or user accounts.
Cybersecurity researchers at Check Point have identified a troubling abuse pattern in how AI assistants can be weaponized by threat actors. The research demonstrates that AI platforms with web browsing capabilities, specifically Microsoft Copilot and Grok from xAI, can be exploited as covert command-and-control channels for malware operations [1]. This discovery marks a shift from AI-based command and control being merely theoretical to a demonstrated proof-of-concept that expands the attack surface available to adversaries [3].
The technique allows malicious software to communicate with attacker infrastructure without directly connecting to traditional command-and-control servers. Instead, the malware interacts with AI assistants through legitimate-looking queries, instructing the agent to fetch attacker-controlled URLs and receive responses embedded in the AI's output [1]. This creates a bidirectional communication channel that security tools typically trust, enabling stealthy malware communication that evades detection.

Check Point's proof-of-concept demonstrates a sophisticated abuse mechanism. The researchers created a C++ program using Windows 11's WebView2 component to open interfaces pointing to either Grok or Microsoft Copilot [1]. Even if WebView2 is missing from a target system, threat actors can embed it within the malware itself, removing a potential barrier to exploitation.

The attack flow works through encoded data exchanges. Malware on an infected device can harvest sensitive information and system details, encode this data, and insert it into URLs controlled by attackers, for example http://malicious-site.com/report?data=12345678 [2]. The malware then instructs the AI assistant to "summarize the contents of this website." Since this appears as legitimate AI traffic, it doesn't trigger security alarms, yet the sensitive information gets logged on the attacker-controlled server [2].
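The encoded-URL step lends itself to a simple content-level check on the defender's side. The sketch below is a minimal illustration, not part of Check Point's proof-of-concept: it scans prompt text for URLs and flags query-string values that are long and high in Shannon entropy, the signature of the encoded or encrypted payloads described above. The regular expression, thresholds, and sample payload are illustrative assumptions.

import math
import re
from collections import Counter
from urllib.parse import parse_qsl, urlsplit

# Hypothetical heuristic: flag URLs inside AI prompts whose query-string
# values look like long, high-entropy encoded blobs (the "data=..."
# smuggling pattern described in the article). Thresholds are illustrative.
URL_RE = re.compile(r"https?://\S+")

def shannon_entropy(value: str) -> float:
    """Average bits of entropy per character of the string."""
    if not value:
        return 0.0
    counts = Counter(value)
    total = len(value)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def suspicious_query_values(prompt: str, min_len: int = 24, min_entropy: float = 3.5):
    """Return (url, parameter, entropy) for query values that look like encoded payloads."""
    findings = []
    for url in URL_RE.findall(prompt):
        for key, value in parse_qsl(urlsplit(url).query):
            if len(value) >= min_len and shannon_entropy(value) >= min_entropy:
                findings.append((url, key, round(shannon_entropy(value), 2)))
    return findings

if __name__ == "__main__":
    # A longer, higher-entropy payload than the article's short example value.
    prompt = ("Summarize the contents of this website: "
              "http://malicious-site.example/report?data=aGlnaC1lbnRyb3B5LWJsb2ItZXhhbXBsZS0xMjM0NTY3OA")
    print(suspicious_query_values(prompt))

In practice a check like this would sit wherever AI-bound prompts are already visible, such as a secure web gateway or an enterprise AI proxy, and its thresholds would need tuning against legitimate traffic to keep false positives manageable.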
The webpage can respond with hidden prompts that the AI extracts or summarizes, which the malware parses to extract instructions. This enables data exfiltration and command delivery that bypass traditional security tools through what appears to be normal AI usage [1].

What makes exploiting AI web browsing capabilities particularly concerning is the absence of typical takedown mechanisms. Check Point explains that when attackers abuse legitimate services for command-and-control operations, defenders can usually block accounts, revoke API keys, or suspend tenants [1]. However, directly interacting with AI assistants through web pages eliminates these safeguards.

The proof-of-concept tested on Grok and Microsoft Copilot requires no API keys or authenticated user accounts, making traceability and infrastructure blocking significantly more difficult [1]. While AI platforms have safeguards to block obviously malicious exchanges, these safety checks can be circumvented by encrypting data into high-entropy blobs that appear innocuous [1].

From a network perspective, the malicious traffic looks like the AI queries that enterprises increasingly consider trusted and normal. As generative AI tools become widely adopted across organizations, their network traffic naturally blends into enterprise activity, making detection even more challenging [3].
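Because the demonstrated proof-of-concept drives the chat page from a WebView2 host rather than a full browser, network-side telemetry offers another angle on this blended traffic. The sketch below is a minimal illustration under stated assumptions, not an actual vendor detection rule: it assumes a hypothetical feed of (timestamp, process, destination) egress events plus an illustrative list of AI-assistant domains, and flags AI-bound connections that do not come from an expected browser process or that arrive at a machine-like, near-constant cadence.

from collections import defaultdict
from dataclasses import dataclass

# Illustrative lists only; real deployments would maintain their own inventories.
AI_ASSISTANT_DOMAINS = {"copilot.microsoft.com", "grok.com"}
EXPECTED_BROWSERS = {"chrome.exe", "msedge.exe", "firefox.exe"}

@dataclass
class EgressEvent:
    timestamp: float   # seconds since epoch
    process: str       # image name of the process that opened the connection
    destination: str   # SNI / Host header seen at the egress point

def triage(events):
    """Flag AI-bound egress from unexpected processes or at scripted cadence."""
    alerts = []
    by_pair = defaultdict(list)
    for event in events:
        host = event.destination.lower()
        if any(host == d or host.endswith("." + d) for d in AI_ASSISTANT_DOMAINS):
            by_pair[(event.process.lower(), host)].append(event.timestamp)

    for (process, host), times in by_pair.items():
        if process not in EXPECTED_BROWSERS:
            # A WebView2-hosted chat would typically surface as a WebView2
            # runtime process (e.g. msedgewebview2.exe) rather than a browser.
            alerts.append(f"non-browser process {process} talking to AI domain {host}")
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        if len(gaps) >= 5 and max(gaps) - min(gaps) < 2.0:
            alerts.append(f"{process} -> {host}: {len(times)} requests at a near-constant interval")
    return alerts

Neither signal is conclusive on its own; plenty of legitimate desktop software embeds WebView2, so checks like these would feed triage and threat hunting rather than automatic blocking.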
Beyond serving as a C2 proxy, the research points toward a more concerning development: AI-driven malware that uses AI assistants as decision engines. Check Point researchers warn that malware can query AI about operational choices, asking whether a compromised system runs in a high-value enterprise environment or a security sandbox [2]. Based on the AI's assessment, malware can remain dormant or proceed to more aggressive stages of attack.

This represents a shift from AI-assisted attacks to truly AI-driven malware. Instead of following fixed instruction sequences, malicious software can collect environmental information and rely on AI output to determine targeting priorities, operational intensity, and timing [3]. The result is adaptive malware that behaves less like a script and more like a human operator, making campaigns harder to predict and detect through traditional pattern recognition.

Check Point notes this mirrors trends in legitimate IT operations, where automation and AI-driven systems increasingly guide workflows. Applied to malicious operations, this translates into AIOps-style command and control that helps manage infections, prioritize targets, and optimize outcomes dynamically [3].

While Check Point has not observed threat actors actively exploiting this technique in campaigns, the research demonstrates its feasibility and the expanding attack surface created by AI adoption [3]. The researchers disclosed their findings to both Microsoft and xAI; Check Point reports that, following responsible disclosure, Microsoft confirmed the findings and changed Copilot's web-fetch behavior [3], while BleepingComputer's request for comment had not been answered at publication time [1].

Experts emphasize that "AI assistants are no longer just productivity tools; they are becoming part of the infrastructure that malware can abuse" [2]. Organizations deploying AI assistants should monitor for unusual patterns in AI service usage, particularly repeated requests to fetch external URLs or interactions that follow scripted patterns rather than human behavior (a minimal illustration of such a check appears below).

The technique's significance extends beyond a single abuse vector. Once AI services function as a stealthy transport layer, they can carry instructions, prompts, and decisions that enable malware to adapt across victims without code changes [3].
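As a rough illustration of that monitoring advice, the sketch below assumes a hypothetical log of (session_id, prompt) pairs, for example from an enterprise AI gateway, and flags sessions that repeatedly issue near-identical fetch-and-summarize prompts differing only in the embedded URL, the kind of scripted pattern a human user is unlikely to produce. The field names, verb list, and threshold are assumptions for illustration.

import re
from collections import defaultdict

URL_RE = re.compile(r"https?://\S+")
FETCH_VERBS = ("summarize", "fetch", "open", "visit", "read")

def flag_scripted_sessions(log, min_repeats: int = 5):
    """Flag sessions whose URL-fetch prompts keep repeating the same template."""
    sessions = defaultdict(list)
    for session_id, prompt in log:
        if URL_RE.search(prompt) and any(verb in prompt.lower() for verb in FETCH_VERBS):
            # Strip the URL so repeated prompt templates collapse to one form.
            template = URL_RE.sub("<URL>", prompt).strip().lower()
            sessions[session_id].append(template)

    return [
        session_id
        for session_id, templates in sessions.items()
        if len(templates) >= min_repeats and len(set(templates)) == 1
    ]

if __name__ == "__main__":
    log = [("s1", f"Summarize the contents of http://site.example/report?data={i}") for i in range(6)]
    print(flag_scripted_sessions(log))  # expected: ['s1']

A heuristic like this complements rather than replaces content inspection: a patient attacker can vary prompt wording, so defenders would combine behavioral, content, and network signals.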
Security teams should prepare for a future where bypassing security through legitimate AI channels becomes a standard component of sophisticated attacks, requiring new detection methodologies that account for AI-mediated threats.

Summarized by Navi