Microsoft Discovers 'Whisper Leak' Vulnerability Exposing AI Chatbot Conversation Topics

Reviewed by Nidhi Govil


Microsoft researchers have uncovered a new side-channel attack called 'Whisper Leak' that can expose conversation topics with AI chatbots by analyzing encrypted traffic patterns. The vulnerability affects 98% of tested LLMs, prompting rapid mitigation efforts from major AI companies.

The Discovery of Whisper Leak

Microsoft researchers have unveiled a sophisticated new cyberattack method called "Whisper Leak" that poses significant privacy risks for users of AI chatbots. This side-channel attack can expose the topics users discuss with large language models, even when conversations are protected by end-to-end encryption [1].

Source: NDTV Gadgets 360


The vulnerability affects virtually all server-based AI chatbots, including popular platforms like ChatGPT and Gemini. Microsoft's research team tested 28 different LLMs and found that 98% of them were susceptible to this attack method [2].

How the Attack Works

Unlike traditional cyberattacks that attempt to break encryption directly, Whisper Leak exploits metadata patterns in network traffic. The attack leverages the incremental nature of how LLMs generate responses, studying the size and timing of encrypted packets exchanged between users and AI systems [1].

Microsoft researchers trained an AI system to distinguish conversation topics based on data rhythm patterns. The attack doesn't decrypt messages but analyzes how data moves across the internet, creating enough metadata visibility to make educated inferences about conversation content. As researchers collect more samples over time, these patterns become clearer, allowing increasingly accurate predictions about the nature of discussions [2].
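The idea can be illustrated with a toy sketch (not Microsoft's actual pipeline): an eavesdropper never decrypts traffic, but streamed token-by-token responses produce a topic-dependent rhythm of packet sizes that even a trivial classifier can pick up. All names, feature choices, and the sample traces below are hypothetical.

```python
# Toy illustration of a metadata-only side channel: classify encrypted
# streams by the sizes of their packets, never by their contents.
import statistics

def features(packet_sizes):
    """Summarize a stream using only observable metadata."""
    return (
        len(packet_sizes),                 # number of streamed chunks
        statistics.mean(packet_sizes),     # average chunk size
        statistics.pstdev(packet_sizes),   # burstiness of the stream
    )

def nearest_topic(observed, labeled_samples):
    """Toy nearest-centroid classifier over metadata features."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    centroids = {}
    for topic, streams in labeled_samples.items():
        feats = [features(s) for s in streams]
        centroids[topic] = tuple(statistics.mean(c) for c in zip(*feats))
    return min(centroids, key=lambda t: dist(features(observed), centroids[t]))

# Hypothetical labeled packet-size traces, one list per observed session.
samples = {
    "short-factual": [[40, 42, 41], [39, 43, 40]],
    "long-sensitive": [[120, 250, 180, 220, 210], [130, 240, 190, 230, 205]],
}
print(nearest_topic([125, 245, 185, 225, 200], samples))  # → long-sensitive
```

The real attack uses far richer features (inter-packet timings, full size sequences) and trained models, but the principle is the same: the ciphertext stays opaque while its shape leaks the topic.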

Security Implications and Scope

The implications of this vulnerability are far-reaching. Microsoft warned that government agencies or internet service providers monitoring traffic to popular AI chatbots could reliably identify users asking questions about sensitive topics such as money laundering, political dissent, or other controversial subjects [1].

The attack targets both standalone AI chatbots and those embedded in search engines or other applications. Importantly, this represents a fundamental limitation of current security measures: traditional defenses like antivirus software or firewalls cannot detect or block side-channel leaks like Whisper Leak [1].

Industry Response and Mitigation Efforts

Following Microsoft's responsible disclosure, major AI companies moved quickly to implement protective measures. OpenAI, Mistral, and xAI have already deployed mitigations to address the vulnerability [1].

The primary solution involves adding obfuscation to LLM responses. OpenAI and Microsoft Azure implemented an additional field in streaming responses that appends "a random sequence of text of variable length" to each response. This masks the length of each token and substantially reduces the effectiveness of the attack [2].
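A minimal sketch of this padding idea follows. The field name "obfuscation" and the padding bounds are assumptions for illustration, not the actual OpenAI or Azure wire format.

```python
# Sketch of random-padding obfuscation: wrap each streamed token with a
# variable-length filler so on-the-wire packet sizes no longer track
# token lengths. "obfuscation" is a hypothetical field name.
import secrets
import string

def pad_chunk(token_text, max_pad=64):
    """Return a streaming chunk with random-length padding attached."""
    pad_len = secrets.randbelow(max_pad + 1)          # 0..max_pad chars
    filler = "".join(secrets.choice(string.ascii_letters)
                     for _ in range(pad_len))
    return {"content": token_text, "obfuscation": filler}

chunk = pad_chunk("Hello")
print(chunk["content"])           # the real token, unchanged
print(len(chunk["obfuscation"]))  # varies per chunk, hiding token size
```

Because the padding length is drawn fresh for every chunk, an observer watching encrypted packet sizes can no longer infer individual token lengths, which is exactly the signal Whisper Leak depends on.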

Broader AI Security Concerns

The discovery comes amid growing concerns about AI security vulnerabilities. Recent research from Cisco AI Defense revealed that several open-weight LLMs remain vulnerable to manipulation, particularly during multi-turn conversations. Some models displayed "a systemic inability to maintain safety guardrails across extended interactions" [1].

These findings add to a troubling pattern of AI security incidents in 2024, including reports of an AI chatbot leaking over 300,000 files containing personally identifiable information and hundreds of LLM servers being left exposed to potential attacks [1].
