Meta Fixes Critical Security Flaw in AI Chatbot, Preventing Unauthorized Access to Private Conversations

Reviewed by Nidhi Govil


Meta addressed a significant security vulnerability in its AI chatbot that could have exposed users' private prompts and AI-generated responses. The bug was discovered by a security researcher who received a $10,000 bounty for reporting it.

Security Vulnerability Discovery

Meta, the parent company of Facebook, has addressed a critical security flaw in its AI chatbot that could have exposed users' private conversations. The vulnerability was discovered by Sandeep Hodkasia, founder of security testing firm AppSecure, who reported it to Meta on December 26, 2024 [1].

The Nature of the Bug

Source: Dataconomy

The security flaw stemmed from how Meta AI assigned unique identifiers to user prompts and AI-generated responses. When a logged-in user edited a prompt to regenerate text or images, Meta's backend servers assigned a unique number to both the prompt and its corresponding AI-generated response [2].

Hodkasia discovered that by analyzing network traffic in his browser while editing an AI prompt, he could alter this unique number. This manipulation caused Meta's servers to return prompts and AI-generated responses belonging to other users, indicating that the servers were not adequately verifying user authorization [3].
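This class of bug is commonly called an insecure direct object reference (IDOR): the server trusts a client-supplied identifier without checking who is asking. A minimal sketch of the flawed pattern and its fix, using an entirely hypothetical data model (Meta's actual backend is not public), might look like this:

```python
# Hypothetical in-memory store: prompt_id -> (owner_user_id, prompt, response).
# All names and IDs here are illustrative, not Meta's real schema.
PROMPTS = {
    101: ("alice", "Draft my resignation letter", "Dear Manager, ..."),
    102: ("bob", "Summarize this contract", "The contract states ..."),
}

def fetch_prompt_vulnerable(prompt_id, requesting_user):
    """Vulnerable pattern: returns whatever record matches the ID.

    The requesting user is never checked, so anyone who alters the
    ID in their network traffic receives another user's data.
    """
    return PROMPTS.get(prompt_id)

def fetch_prompt_fixed(prompt_id, requesting_user):
    """Fixed pattern: the record is returned only to its owner."""
    record = PROMPTS.get(prompt_id)
    if record is None or record[0] != requesting_user:
        return None  # Deny: missing, or not owned by the caller.
    return record
```

In this sketch, "alice" requesting prompt 102 receives "bob"'s private data from the vulnerable function, while the fixed version refuses the request; the essential change is performing the ownership check server-side on every lookup, rather than trusting the client-supplied identifier.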

Potential Implications

The vulnerability could have had serious privacy implications. Many users share sensitive information with AI chatbots, including business documents, personal information, and even intimate life details. If exposed, this data could be exploited for malicious purposes such as highly customized phishing attacks, identity theft, or even ransomware deployment [3].

Meta's Response

Source: Wccftech

Upon receiving Hodkasia's report, Meta took swift action to address the issue. The company deployed a fix on January 24, 2025, and awarded Hodkasia a $10,000 bug bounty for his responsible disclosure [4].

Meta spokesperson Ryan Daniels confirmed the fix and stated, "We found no evidence of abuse and rewarded the researcher" [1]. The company maintains that there is no evidence of malicious exploitation of the bug prior to its resolution.

Broader Context and Implications

Source: Digit

This incident highlights the ongoing challenges faced by tech giants as they rush to launch and refine AI products. It underscores the importance of robust security measures in AI applications, particularly those handling sensitive user data [5].

The vulnerability discovery comes in the wake of other privacy concerns surrounding Meta's AI chatbot. In a separate incident, some users inadvertently shared what they believed were private conversations publicly through the Meta AI app's discover feed [5].

As AI chatbots become increasingly integrated into various aspects of our digital lives, this incident serves as a reminder of the critical need for stringent security protocols and regular vulnerability assessments in AI-powered applications.

TheOutpost.ai
