3 Sources
[1]
Microsoft: OpenAI API moonlights as malware HQ
Redmond uncovers SesameOp, a backdoor hiding its tracks by using OpenAI's Assistants API as a command channel

Hackers have found a new use for OpenAI's Assistants API - not to write poems or code, but to secretly control malware. Microsoft this week detailed a previously unseen backdoor dubbed "SesameOp," which abuses OpenAI's Assistants API as a command-and-control channel to relay instructions between infected systems and the attackers pulling the strings.

First spotted in July during a months-long intrusion, the campaign hid in plain sight by blending its network chatter with legitimate AI traffic - an ingenious way to stay invisible to anyone assuming "api.openai.com" meant business as usual. According to Microsoft's Incident Response team, the attack chain starts with a loader that uses a trick known as ".NET AppDomainManager injection" to plant the backdoor. The malware doesn't talk to ChatGPT or do anything remotely conversational; it simply hijacks OpenAI's infrastructure as a data courier. Commands come in, results go out, all via the same channels millions of users rely on every day.

By piggy-backing on a legitimate cloud service, SesameOp avoids the usual giveaways: no sketchy domains, no dodgy IPs, and no obvious C2 infrastructure to block. "Instead of relying on more traditional methods, the threat actor behind this backdoor abuses OpenAI as a C2 channel as a way to stealthily communicate and orchestrate malicious activities within the compromised environment," Microsoft said. "This threat does not represent a vulnerability or misconfiguration, but rather a way to misuse built-in capabilities of the OpenAI Assistants API."
Microsoft's analysis shows the implant uses payload compression and layered encryption to hide commands and exfiltrated results; the DLL is heavily obfuscated with Eazfuscator.NET and is loaded at runtime via .NET AppDomainManager injection, after which the backdoor fetches encrypted commands from the Assistants API, decrypts and executes them locally, then posts the results back - techniques Microsoft describes as sophisticated and designed for stealth.

For defenders, this is where things get messy. Seeing a connection to OpenAI's API on your network doesn't exactly scream "compromise." Microsoft even published a hunting query to help analysts spot unusual connections to OpenAI endpoints by process name - an early step toward distinguishing genuine chatbot activity from malicious use.

The Assistants API itself is scheduled for deprecation in August 2026, which may close this particular loophole. But the pattern is here to stay: if it's cloud-hosted and trusted, it's fair game.

Microsoft hasn't said who's behind the campaign, but noted that it shared its findings with OpenAI, which identified and disabled an API key and account believed to have been used by the attackers. OpenAI didn't respond to The Register's request for comment.

In an age where everything from HR chatbots to help-desk scripts talks to an API, this won't be the last time a threat actor turns your favorite cloud tool into their getaway car. ®
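The hunting idea mentioned above - flag processes that talk to OpenAI endpoints but aren't expected to - can be sketched offline. This is a hedged, simplified analog, not Microsoft's actual query: the connection-log record format, the endpoint set, and the process allowlist are all assumptions for illustration.

```python
# Sketch of the hunting logic: flag any process contacting an OpenAI
# endpoint that is not on an expected allowlist. Record format and the
# allowlist contents are hypothetical.

OPENAI_ENDPOINTS = {"api.openai.com"}
EXPECTED_PROCESSES = {"chrome.exe", "python.exe", "slack.exe"}  # hypothetical baseline

def flag_unusual_openai_connections(conn_log):
    """conn_log: iterable of (process_name, destination_host) tuples."""
    return [
        (proc, host)
        for proc, host in conn_log
        if host in OPENAI_ENDPOINTS and proc.lower() not in EXPECTED_PROCESSES
    ]

log = [
    ("chrome.exe", "api.openai.com"),     # expected browser traffic
    ("w3wp.exe", "api.openai.com"),       # a web-server process calling OpenAI: worth a look
    ("svchost.exe", "update.microsoft.com"),
]
print(flag_unusual_openai_connections(log))  # → [('w3wp.exe', 'api.openai.com')]
```

In practice the same filter would run over EDR or proxy telemetry rather than an in-memory list, and the allowlist would be built from each environment's own baseline.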
[2]
Microsoft Detects "SesameOp" Backdoor Using OpenAI's API as a Stealth Command Channel
Microsoft has disclosed details of a novel backdoor dubbed SesameOp that uses OpenAI Assistants Application Programming Interface (API) for command-and-control (C2) communications. "Instead of relying on more traditional methods, the threat actor behind this backdoor abuses OpenAI as a C2 channel as a way to stealthily communicate and orchestrate malicious activities within the compromised environment," the Detection and Response Team (DART) at Microsoft Incident Response said in a technical report published Monday. "To do this, a component of the backdoor uses the OpenAI Assistants API as a storage or relay mechanism to fetch commands, which the malware then runs."

The tech giant said it discovered the implant in July 2025 as part of a sophisticated security incident in which unknown threat actors had managed to maintain persistence within the target environment for several months. It did not name the impacted victim.

Further investigation into the intrusion activity has led to the discovery of what it described as a "complex arrangement" of internal web shells, which are designed to execute commands relayed from "persistent, strategically placed" malicious processes. These processes, in turn, leverage Microsoft Visual Studio utilities that were compromised with malicious libraries, an approach referred to as AppDomainManager injection.

SesameOp is a custom backdoor engineered to maintain persistence and allow a threat actor to covertly manage compromised devices, indicating that the attack's overarching goal was to ensure long-term access for espionage efforts.

The OpenAI Assistants API enables developers to integrate artificial intelligence (AI)-powered agents directly into their applications and workflows. The API is scheduled for deprecation by OpenAI in August 2026, with the company replacing it with a new Responses API.
The infection chain, per Microsoft, includes a loader component ("Netapi64.dll") and a .NET-based backdoor ("OpenAIAgent.Netapi64") that leverages the OpenAI API as a C2 channel to fetch encrypted commands, which are subsequently decoded and executed locally. The results of the execution are sent back to OpenAI as a message.

"The dynamic link library (DLL) is heavily obfuscated using Eazfuscator.NET and is designed for stealth, persistence, and secure communication using the OpenAI Assistants API," the company said. "Netapi64.dll is loaded at runtime into the host executable via .NET AppDomainManager injection, as instructed by a crafted .config file accompanying the host executable." The message supports three types of values in the description field of the Assistants list retrieved from OpenAI.

It's currently not clear who is behind the malware, but the development signals continued abuse of legitimate tools for malicious purposes to blend in with normal network activity and sidestep detection. Microsoft said it shared its findings with OpenAI, which identified and disabled an API key and associated account believed to have been used by the adversary.
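The ".config file" mechanism quoted above is a documented .NET Framework feature: a configuration file placed beside an executable can tell the runtime to load a custom AppDomainManager assembly at startup. The fragment below is a hedged illustration of what such a file looks like (the assembly and type names here are invented placeholders, not SesameOp's); defenders can treat an unexpected `appDomainManagerAssembly` entry in a .config next to a legitimate signed binary as a red flag.

```xml
<!-- Illustrative .NET Framework app.config; names are hypothetical. -->
<configuration>
  <runtime>
    <!-- Directs the CLR to load this assembly and instantiate this type
         as the AppDomainManager before the host's Main() runs. -->
    <appDomainManagerAssembly value="ExampleManager, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
    <appDomainManagerType value="Example.ManagerType" />
  </runtime>
</configuration>
```

Because the host executable itself is unmodified (and may be signed), this technique sidesteps naive integrity checks; the tell is the sidecar .config and the unfamiliar DLL it points at.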
[3]
Microsoft: A key OpenAI API is being used for 'espionage' by bad actors
On Monday, Microsoft Detection and Response Team (DART) researchers warned that an OpenAI API was being abused as a backdoor for malware. The researchers concluded that bad actors were using the novel backdoor to conduct long-term espionage operations. Specifically, Microsoft's cybersecurity researchers discovered that cybercriminals were taking advantage of the OpenAI Assistants API as a clever way to hide their illicit activities, according to Bleeping Computer.

"Instead of relying on more traditional methods, the threat actor behind this backdoor abuses OpenAI as a [command-and-control] channel as a way to stealthily communicate and orchestrate malicious activities within the compromised environment. To do this, a component of the backdoor uses the OpenAI Assistants API as a storage or relay mechanism to fetch commands, which the malware then runs," the researchers wrote in a Microsoft Incident Response report published on Nov. 3. Read on to find out how the exploit worked and how to guard against it.

In July, the researchers say they discovered a new backdoor within OpenAI's Assistants API while investigating a "sophisticated security incident." They named the backdoor SesameOp. (Cybersecurity researchers often give catchy names to new strains of malware or cybersecurity exploits.)

The Assistants API is a developer tool that lets OpenAI's enterprise clients build AI assistants within their own apps. Essentially, it brings OpenAI tools like ChatGPT and Code Interpreter into other third-party apps. We should also note that this system is set to be replaced by OpenAI's Responses API.

The DART researchers found that the covert backdoor enabled threat actors to manage compromised devices undetected, using the Assistants API to piggyback malicious commands and encrypted data. While the incident response report is short on specifics, the backdoor allowed the bad actors to harvest data for "espionage-type purposes."
By using the OpenAI API, the cybercriminals were able to mask their activities. "This threat does not represent a vulnerability or misconfiguration, but rather a way to misuse built-in capabilities of the OpenAI Assistants API," the researchers concluded.

Along with an in-depth technical analysis of the threat, Microsoft researchers provided a list of recommendations to mitigate the impact of the exploit. You can read the full list of recommendations in the Microsoft Incident Response report. Some suggestions include "Audit and review firewalls and web server logs frequently," and "Review and configure your perimeter firewall and proxy settings to limit unauthorized access to services, including connections through non-standard ports."

Because the OpenAI Assistants API is set to be deprecated next year anyway, developers may also want to go ahead and migrate to the Responses API that replaces it. OpenAI has a migration guide on its website.
Microsoft's cybersecurity team discovered a sophisticated backdoor called SesameOp that abuses OpenAI's Assistants API as a command-and-control channel. The malware hides malicious activities by blending with legitimate AI traffic, enabling long-term espionage operations while evading traditional detection methods.
Microsoft's Detection and Response Team (DART) has uncovered a sophisticated backdoor operation that represents a concerning evolution in cybercriminal tactics. The malware, dubbed SesameOp, was first detected in July 2025 during an investigation into what Microsoft described as a "sophisticated security incident" where unknown threat actors had maintained persistence within a target environment for several months [1][2].
Source: Mashable
What makes SesameOp particularly noteworthy is its innovative approach to command-and-control communications. Rather than establishing traditional malicious infrastructure that security teams typically monitor, the backdoor exploits OpenAI's Assistants API as a covert communication channel [3].

The SesameOp backdoor operates through a sophisticated multi-component system designed for maximum stealth and persistence. The malware consists of a loader component called "Netapi64.dll" and a .NET-based backdoor named "OpenAIAgent.Netapi64" that leverages the OpenAI API for command-and-control operations [2].
Source: The Register
The attack chain begins with a technique known as ".NET AppDomainManager injection," which allows the malware to plant itself within legitimate processes. The DLL component is heavily obfuscated using Eazfuscator.NET and is loaded at runtime through this injection method, making detection significantly more challenging [1].

Once operational, the backdoor fetches encrypted commands from OpenAI's Assistants API, decrypts and executes them locally, then posts the results back through the same channel. This creates what Microsoft describes as a "complex arrangement" of internal web shells designed to execute commands relayed from persistent, strategically placed malicious processes [2].

The genius of SesameOp lies in its ability to hide in plain sight. By piggy-backing on OpenAI's legitimate cloud infrastructure, the malware avoids traditional detection methods that look for suspicious domains, questionable IP addresses, or obvious command-and-control infrastructure. Network traffic to "api.openai.com" appears entirely normal to security monitoring systems [1].

Microsoft's analysis reveals that the implant uses payload compression and layered encryption to further obscure commands and exfiltrated data. The malware doesn't interact with ChatGPT or perform any conversational AI functions; instead, it simply hijacks OpenAI's infrastructure as a data courier, blending malicious communications with the millions of legitimate API calls made daily [1].
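The "payload compression and layered encryption" idea can be illustrated generically. The sketch below is not SesameOp's actual scheme (Microsoft hasn't published one to reproduce, and this uses no real encryption at all); it simply layers zlib compression under base64 encoding to show why stacked transforms defeat naive string matching on captured traffic while remaining fully reversible for the operator.

```python
import base64
import zlib

# Generic "layered transform" illustration: compress, then encode.
# Real malware would add an encryption layer; this sketch deliberately
# stops at reversible encodings so it demonstrates only the concept.

def wrap(payload: bytes) -> bytes:
    """Apply the layers inside-out: compress first, then base64-encode."""
    return base64.b64encode(zlib.compress(payload))

def unwrap(blob: bytes) -> bytes:
    """Peel the layers outside-in: decode, then decompress."""
    return zlib.decompress(base64.b64decode(blob))

msg = b"dir C:\\Users"           # a hypothetical command string
blob = wrap(msg)
# The wire blob bears no direct resemblance to the plaintext,
# yet the round trip is lossless:
assert unwrap(blob) == msg
print(blob != msg)  # → True
```

A scanner grepping traffic for known command strings sees only the outer encoding; recovering the plaintext requires knowing (and reversing) every layer in order, which is exactly what makes layered payloads cheap for attackers and expensive for defenders.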
Source: Hacker News
Microsoft has shared its findings with OpenAI, which subsequently identified and disabled the API key and associated account believed to have been used by the attackers [2]. The company emphasized that this threat "does not represent a vulnerability or misconfiguration, but rather a way to misuse built-in capabilities of the OpenAI Assistants API" [3].

To help security teams identify similar threats, Microsoft has published hunting queries designed to spot unusual connections to OpenAI endpoints by process name, providing an early detection method for distinguishing legitimate chatbot activity from malicious use [1].

The researchers have also provided comprehensive mitigation recommendations, including frequent auditing of firewalls and web server logs, and reviewing perimeter firewall and proxy settings to limit unauthorized access to services [3].

Summarized by Navi