2 Sources
[1]
Whisper Leak exposes how your encrypted AI conversation can be stolen
Attackers can track conversation topics using packet size and timing

Microsoft has revealed a new type of cyberattack it has called "Whisper Leak", which is able to expose the topics users discuss with AI chatbots, even when conversations are fully encrypted. The company's research suggests attackers can study the size and timing of encrypted packets exchanged between a user and a large language model to infer what is being discussed. "If a government agency or internet service provider were monitoring traffic to a popular AI chatbot, they could reliably identify users asking questions about specific sensitive topics," Microsoft said.

This means "encrypted" doesn't necessarily mean invisible: the vulnerability lies in how LLMs send responses. These models do not wait for a complete reply, but transmit data incrementally, creating small patterns that attackers can analyze. Over time, as attackers collect more samples, these patterns become clearer, allowing more accurate guesses about the nature of conversations. This technique doesn't decrypt messages directly but exposes enough metadata to make educated inferences, which is arguably just as concerning.

Following Microsoft's disclosure, OpenAI, Mistral, and xAI all said they moved quickly to deploy mitigations. One solution adds a "random sequence of text of variable length" to each response, disrupting the consistency of token sizes that attackers rely on. Even so, Microsoft advises users to avoid sensitive discussions over public Wi-Fi, use a VPN, or stick with non-streaming LLMs.

The findings come alongside new tests showing that several open-weight LLMs remain vulnerable to manipulation, especially during multi-turn conversations. Researchers from Cisco AI Defense found that even models built by major companies struggle to maintain safety controls once the dialogue becomes complex. Some models, they said, displayed "a systemic inability... to maintain safety guardrails across extended interactions."

In 2024, reports surfaced that an AI chatbot leaked over 300,000 files containing personally identifiable information, and hundreds of LLM servers were left exposed, raising questions about how secure AI chat platforms truly are. Traditional defenses, such as antivirus software or firewall protection, cannot detect or block side-channel leaks like Whisper Leak, and these discoveries show AI tools can unintentionally widen exposure to surveillance and data inference.
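To make the packet-size-and-timing pattern described above concrete, here is a minimal, self-contained Python sketch of the kind of metadata an on-path observer would work with: hypothetical (timestamp, packet size) observations for one streamed reply, reduced to a size-and-timing trace. The capture itself, the field names, and the sample values are illustrative assumptions, not Microsoft's tooling, and nothing here decrypts any traffic.

```python
# Hypothetical sketch: summarizing observed encrypted-packet metadata for one
# streamed chatbot reply. Inputs are (timestamp, packet_size) pairs such as a
# passive observer could log; no decryption is involved.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class StreamTrace:
    packet_sizes: List[int]        # bytes per encrypted packet
    inter_arrival_ms: List[float]  # gaps between consecutive packets


def summarize(observations: List[Tuple[float, int]]) -> StreamTrace:
    """observations: (timestamp_seconds, packet_size_bytes) for one reply."""
    observations = sorted(observations)
    sizes = [size for _, size in observations]
    gaps = [
        (t2 - t1) * 1000.0
        for (t1, _), (t2, _) in zip(observations, observations[1:])
    ]
    return StreamTrace(packet_sizes=sizes, inter_arrival_ms=gaps)


# Toy example: in a streamed reply, packet sizes loosely track token lengths,
# which is exactly the signal a Whisper Leak-style observer would analyze.
sample = [(0.00, 512), (0.12, 87), (0.21, 93), (0.33, 80), (0.47, 141)]
print(summarize(sample))
```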
[2]
This Security Flaw Can Let Attacker See Your Chats With AI, Microsoft Finds
Microsoft has also published a paper detailing its findings

Microsoft has revealed details of a new vulnerability it discovered in most server-based artificial intelligence (AI) chatbots. The vulnerability, dubbed Whisper Leak, is claimed to let attackers learn about the conversation topics an individual has had with AI platforms such as ChatGPT and Gemini. According to the Redmond-based tech giant, the vulnerability can be exploited via a side-channel attack, which is said to affect all remote large language model (LLM)-based chatbots. Microsoft said it has worked with multiple vendors to mitigate the risk.

Microsoft Finds a Major Vulnerability in AI Chatbots

In a blog post, the tech giant detailed the Whisper Leak vulnerability and how attackers might exploit it. A detailed analysis has also been published as a study on arXiv. Microsoft researchers claim that the side-channel attack allows bad actors to observe a user's network traffic and infer the conversation topics the user has had with these apps and websites. The exploit is said to work even if this data is protected via end-to-end encryption, and it targets both standalone AI chatbots and those embedded into search engines or other apps.

Usually, Transport Layer Security (TLS) encryption protects user data shared with these AI platforms. TLS is a widely used encryption protocol that also secures online banking. During testing, the researchers found that the metadata of the network traffic, or how the messages move across the Internet, remains visible. The exploit does not try to break the encryption; instead, it leverages the metadata that is not hidden.

Microsoft revealed that it tested 28 different LLMs for this vulnerability and found it in 98 percent of them. Essentially, the researchers analysed the packet size and timing of data as a user interacts with a chatbot, then trained an AI tool to distinguish the target topic based on that data rhythm. The researchers found that the AI system was able to decipher the topics without trying to pry open the encryption. "Importantly, this is not a cryptographic vulnerability in TLS itself, but rather exploitation of metadata that TLS inherently reveals about encrypted traffic structure and timing," the study highlighted.

Highlighting the scope of this method, the company claimed that a government agency or Internet service provider (ISP) monitoring traffic to popular AI chatbots could reliably identify users asking questions about topics such as money laundering, political dissent, or other sensitive subjects.

Microsoft said it shared its disclosures with affected companies once it was able to confirm its findings, and that OpenAI, Mistral, and xAI have already deployed protections. "We have engaged in responsible disclosures with affected vendors and are pleased to report successful collaboration in implementing mitigations. Notably, OpenAI, Mistral, Microsoft, and xAI have deployed protections at the time of writing. This industry-wide response demonstrates the commitment to user privacy across the AI ecosystem," the company said. "OpenAI, and later mirrored by Microsoft Azure, implemented an additional field in the streaming responses under key 'obfuscation,' where a random sequence of text of variable length is added to each response. This notably masks the length of each token, and we observed it mitigates the cyberattack effectiveness substantially."

For end users, the tech giant recommends avoiding discussion of highly sensitive topics with AI chatbots over untrusted networks, using VPN services to add another layer of protection, using non-streaming modes of LLMs (or on-device LLMs), and opting for chatbot services that have implemented mitigations.
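As a concrete illustration of the non-streaming recommendation, here is a minimal sketch, assuming the official OpenAI Python SDK, an API key in the environment, and an illustrative model name and prompt. With streaming disabled, the reply is returned as a single payload rather than a sequence of token-sized chunks whose sizes and timing an observer could study.

```python
# Minimal sketch: a non-streaming request returns the whole reply at once,
# avoiding the per-token packet rhythm that Whisper Leak-style analysis uses.
# Assumes the official OpenAI Python SDK and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Summarize TLS in one sentence."}],
    stream=False,         # full response in one payload, no token-by-token stream
)
print(response.choices[0].message.content)
```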
Microsoft researchers have uncovered a new side-channel attack called 'Whisper Leak' that can expose conversation topics with AI chatbots by analyzing encrypted traffic patterns. The vulnerability affects 98% of tested LLMs, prompting rapid mitigation efforts from major AI companies.
Microsoft researchers have unveiled a sophisticated new cyberattack method called "Whisper Leak" that poses significant privacy risks for users of AI chatbots. This side-channel attack can expose the topics users discuss with large language models, even when conversations are protected by end-to-end encryption [1].
The vulnerability affects virtually all server-based AI chatbots, including popular platforms like ChatGPT and Gemini. Microsoft's research team tested 28 different LLMs and found that 98% of them were susceptible to this attack method [2].

Unlike traditional cyberattacks that attempt to break encryption directly, Whisper Leak exploits metadata patterns in network traffic. The attack leverages the incremental nature of how LLMs generate responses, studying the size and timing of encrypted packets exchanged between users and AI systems [1].

Microsoft researchers trained an AI system to distinguish conversation topics based on data rhythm patterns. The attack doesn't decrypt messages but analyzes how data moves across the internet, creating enough metadata visibility to make educated inferences about conversation content. As an attacker collects more samples over time, these patterns become clearer, allowing increasingly accurate predictions about the nature of discussions [2].
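As a rough illustration of that classification step, the sketch below trains a toy classifier on synthetic size-and-timing summaries for two hypothetical topic classes. The data, features, and model are stand-ins chosen for a runnable example; they do not reproduce the datasets or models used in Microsoft's study, and nothing here touches encrypted content.

```python
# Illustrative only: a toy classifier over synthetic size/timing summaries,
# standing in for the "AI system trained on data rhythm patterns" above.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synthetic_trace(topic: int) -> np.ndarray:
    # Pretend replies on topic 1 use slightly larger packets and slower pacing
    # than topic 0; with real traffic, separability comes from real patterns.
    sizes = rng.normal(90 + 15 * topic, 20, size=40)
    gaps_ms = rng.normal(50 + 10 * topic, 12, size=40)
    return np.array([sizes.mean(), sizes.std(), gaps_ms.mean(), gaps_ms.std()])

labels = rng.integers(0, 2, size=2000)
X = np.array([synthetic_trace(int(t)) for t in labels])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print(f"toy topic-inference accuracy: {clf.score(X_te, y_te):.2f}")
```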
.The implications of this vulnerability are far-reaching. Microsoft warned that government agencies or internet service providers monitoring traffic to popular AI chatbots could reliably identify users asking questions about sensitive topics such as money laundering, political dissent, or other controversial subjects
1
.The attack targets both standalone AI chatbots and those embedded into search engines or other applications. Importantly, this represents a fundamental limitation of current security measures, as traditional defenses like antivirus software or firewall protection cannot detect or block side-channel leaks like Whisper Leak
1
Following Microsoft's responsible disclosure, major AI companies moved quickly to implement protective measures. OpenAI, Mistral, and xAI have already deployed mitigations to address the vulnerability [1].

The primary solution involves adding obfuscation techniques to LLM responses. OpenAI, and later Microsoft Azure, implemented an additional field in streaming responses that adds "a random sequence of text of variable length" to each response. This approach masks the length of each token and substantially reduces the effectiveness of the cyberattack [2].
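The sketch below illustrates the padding idea under stated assumptions: the "obfuscation" key comes from the article's description, but the chunk layout, field names, and padding bounds are hypothetical, not the exact wire format OpenAI or Azure use.

```python
# Sketch of the mitigation idea: attach a random-length filler string to each
# streamed chunk so encrypted packet sizes stop mirroring token lengths.
# JSON shape, "delta" field, and padding bounds are illustrative assumptions.
import json
import secrets
import string

def obfuscated_chunk(token_text: str, min_pad: int = 16, max_pad: int = 256) -> bytes:
    pad_len = min_pad + secrets.randbelow(max_pad - min_pad + 1)
    padding = "".join(secrets.choice(string.ascii_letters) for _ in range(pad_len))
    return json.dumps({"delta": token_text, "obfuscation": padding}).encode("utf-8")

for tok in ["Whisper", " Leak", " is", " a", " side", "-channel", " attack", "."]:
    print(len(obfuscated_chunk(tok)))  # chunk sizes vary independently of token length
```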
The discovery comes amid growing concerns about AI security vulnerabilities. Recent research from Cisco AI Defense revealed that several open-weight LLMs remain vulnerable to manipulation, particularly during multi-turn conversations. Some models displayed "a systemic inability to maintain safety guardrails across extended interactions" [1].

These findings add to a troubling pattern of AI security incidents in 2024, including reports of an AI chatbot leaking over 300,000 files containing personally identifiable information and hundreds of LLM servers being left exposed to potential attacks [1].