7 Sources
[1]
Exclusive: Meta fixes bug that could leak users' AI prompts and generated content
Meta has fixed a security bug that allowed Meta AI chatbot users to access and view the private prompts and AI-generated responses of other users. Sandeep Hodkasia, the founder of security testing firm Appsecure, exclusively told TechCrunch that Meta paid him $10,000 in a bug bounty reward for privately disclosing the bug, which he filed on December 26, 2024. Meta deployed a fix on January 24, 2025, Hodkasia said, and the company found no evidence that the bug had been maliciously exploited. Hodkasia told TechCrunch that he identified the bug after examining how Meta AI allows its logged-in users to edit their AI prompts to regenerate text and images. He discovered that when a user edits their prompt, Meta's back-end servers assign the prompt and its AI-generated response a unique number. By analyzing the network traffic in his browser while editing an AI prompt, Hodkasia found he could change that unique number and Meta's servers would return a prompt and AI-generated response belonging to someone else entirely. The bug meant that Meta's servers were not properly checking that the user requesting the prompt and its response was authorized to see it. Hodkasia said the prompt numbers generated by Meta's servers were "easily guessable," potentially allowing a malicious actor to scrape users' original prompts by rapidly changing prompt numbers using automated tools. When reached by TechCrunch, Meta confirmed it fixed the bug in January; the company "found no evidence of abuse and rewarded the researcher," spokesperson Ryan Daniels said. News of the bug comes at a time when tech giants are scrambling to launch and refine their AI products, despite the many security and privacy risks associated with their use. Meta AI's standalone app, which debuted earlier this year to compete with rival apps like ChatGPT, got off to a rocky start after some users inadvertently shared publicly what they thought were private conversations with the chatbot.
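The pattern Hodkasia describes is a textbook insecure direct object reference (IDOR): the server trusts a client-supplied object ID without checking who owns the object. Meta has not published its actual code or the details of its fix, but a minimal sketch of the vulnerable pattern and the ownership check that closes it could look like the following (the Flask endpoint, data model, and all names here are illustrative assumptions, not Meta's real implementation):

```python
# Minimal IDOR sketch (illustrative only -- not Meta's actual code).
# The first handler looks up a prompt by its numeric ID but never
# verifies that the requester owns it, so any logged-in user who
# changes the ID can read someone else's prompt and response.
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Stand-in for a database of prompts keyed by sequential ID.
PROMPTS = {
    1001: {"owner": "alice", "prompt": "draft my contract", "response": "..."},
    1002: {"owner": "bob", "prompt": "plan my trip", "response": "..."},
}

def current_user() -> str:
    # Placeholder for real session authentication.
    return "alice"

@app.route("/prompts/<int:prompt_id>")
def get_prompt_vulnerable(prompt_id: int):
    record = PROMPTS.get(prompt_id) or abort(404)
    # BUG: no ownership check -- any authenticated user can fetch any ID.
    return jsonify(record)

@app.route("/v2/prompts/<int:prompt_id>")
def get_prompt_fixed(prompt_id: int):
    record = PROMPTS.get(prompt_id) or abort(404)
    # FIX: authorize the object itself, not just the session.
    if record["owner"] != current_user():
        abort(403)
    return jsonify(record)
```

The fixed handler authorizes the requested object rather than merely confirming the requester is logged in, which is precisely the check Meta's servers were reportedly skipping.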
[2]
Meta patches worrying security bug which could have exposed user AI prompts and responses - and pays the bug hunter $10,000
The servers were not checking who had access rights to these identifiers. A bug that could have exposed users' prompts and AI responses on Meta's artificial intelligence platform has been patched. The bug stemmed from the way Meta AI assigned identifiers to both prompts and responses. As it turns out, when a logged-in user edits a previous prompt to get a different response, Meta assigns both the prompt and the response a unique identifier. By changing that number, an attacker could make Meta's servers return someone else's queries and results. The bug was discovered by security researcher and AppSecure founder Sandeep Hodkasia in late December 2024. He reported it to Meta, which deployed a fix on January 24, 2025, and paid out a $10,000 bounty for his troubles. Hodkasia said that the prompt numbers Meta's servers generated were easy to guess, but apparently no threat actor thought of this before it was addressed. This basically means that Meta's servers weren't double-checking whether the user had proper authorization to view the contents. This is clearly problematic in a number of ways, the most obvious being that many people share sensitive information with chatbots these days. Business documents, contracts and reports, and personal information all get uploaded to LLMs every day, and in many cases people are using AI tools as psychotherapists, sharing intimate life details and private revelations. This information can be abused, among other things, in highly customized phishing attacks that could lead to infostealer deployment, identity theft, or even ransomware. For example, if a threat actor knows that a person was prompting the AI for cheap VPN solutions, they could send them an email offering a great, cost-effective product that is nothing more than a backdoor.
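Hodkasia's point that the numbers were easy to guess suggests sequential or near-sequential IDs, which turn a single authorization slip into a mass-scraping risk. One common hardening step, assumed here since Meta has not described its remediation, is to mint identifiers from a cryptographically secure random source so the ID space cannot simply be walked:

```python
# Why sequential IDs make an IDOR worse, and a common mitigation.
# Illustrative sketch; Meta has not described how its fix works.
import secrets

# Sequential IDs: if your prompt is 1002, neighbouring prompts sit at
# 1001 and 1003, so an automated tool can walk the whole ID space.
own_id = 1002
neighbours = [own_id - 1, own_id + 1]
print("trivially enumerable:", neighbours)

# Random IDs: ~128 bits of entropy per identifier makes brute-force
# enumeration infeasible even if an authorization check also fails.
random_id = secrets.token_urlsafe(16)
print("not enumerable:", random_id)
```

Random identifiers are defense in depth rather than a fix on their own: the per-object authorization check still has to exist, since even an unguessable ID can leak through logs, referrers, or shared links.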
[3]
Meta paid a $10,000 bounty for a major AI privacy flaw
Meta addressed a security flaw within its Meta AI chatbot that permitted users to view the private prompts and AI-generated responses of other individuals. Sandeep Hodkasia, founder of AppSecure, disclosed this vulnerability to TechCrunch, confirming Meta paid him a $10,000 bug bounty reward for his private disclosure filed on December 26, 2024. Hodkasia stated Meta deployed a fix on January 24, 2025, adding that no evidence of malicious exploitation of the bug was found. He explained to TechCrunch that he identified the vulnerability by examining Meta AI's mechanism for allowing logged-in users to edit their AI prompts to regenerate text and images. Hodkasia discovered that when a user edited their prompt, Meta's backend servers assigned a unique identification number to the prompt and its corresponding AI-generated response. By analyzing network traffic in his browser while editing an AI prompt, Hodkasia determined he could alter this unique number, causing Meta's servers to return a prompt and AI-generated response belonging to a different user. The bug indicated that Meta's servers were not adequately verifying user authorization to view specific prompts and responses. Hodkasia noted the prompt numbers generated by Meta's servers were "easily guessable," which could have enabled an unauthorized actor to systematically retrieve other users' original prompts by rapidly altering prompt numbers using automated tools. Meta confirmed to TechCrunch that the bug was fixed in January; spokesperson Ryan Daniels said the company "found no evidence of abuse and rewarded the researcher." This disclosure comes as technology companies accelerate the launch and refinement of AI products despite inherent security and privacy concerns. Meta AI's standalone application, introduced earlier this year to compete with rival applications, faced initial issues, including instances where users inadvertently shared publicly what they believed were private conversations with the chatbot.
[4]
Meta Said to Have Fixed a Bug That Could Leak Users' Private AI Chats
Meta AI reportedly had a vulnerability that could be exploited to access other users' private conversations with the chatbot. Exploiting the bug did not require breaking into Meta's servers or manipulating the app's code; it could be triggered by simply analysing network traffic. As per the report, a researcher found the bug late last year and informed the Menlo Park-based social media giant about it. The company then deployed a fix in January and rewarded the researcher for finding the exploit. According to a TechCrunch report, the Meta AI vulnerability was discovered by Sandeep Hodkasia, founder of AppSecure, a security testing firm. The researcher reportedly informed Meta about it in December 2024 and received a bug bounty reward of $10,000 (roughly Rs. 8.5 lakh). Meta spokesperson Ryan Daniels told the publication that the issue was fixed in January and that the company did not find any evidence of the method being used by bad actors. The vulnerability reportedly lay in how Meta AI handled user prompts on its servers. The researcher told the publication that the chatbot assigns a unique ID to every prompt and its AI-generated response whenever a logged-in user edits the prompt to regenerate an image or text. Such edits are very common in everyday use, as most people conversationally try to get a better response or a desired image. Hodkasia reportedly found that he could see his prompt's unique number by analysing the network traffic in his browser while editing an AI prompt. Then, by changing the number, the researcher could access someone else's prompt and its AI response, the report claimed. The researcher said these numbers were "easily guessable" and that finding another legitimate ID did not take much effort. Essentially, the vulnerability lay in how the system handled authorisation of these unique IDs: it did not enforce sufficient checks on who was accessing the data. In the hands of a bad actor, this method could have compromised a large amount of users' private data. Notably, a report last month found that the Meta AI app's discover feed was filled with posts that appeared to be private conversations with the chatbot, including requests for medical and legal advice and even confessions to crimes. Later in June, the company began showing a warning message to dissuade people from unknowingly sharing their conversations.
[5]
Man Finds Major Bug In Meta's AI Platform That Exposed Private Chats - Receives $10,000 Reward For Proving The System Was Not As Secure As Claimed
With AI investments not slowing down any time soon, one might assume the technology is secure and free of vulnerabilities, but that is not the case. Even tech giants are not free from errors, and their systems can be exploited. Such was the case recently, when a man discovered a security vulnerability in Meta's AI platform and was rewarded $10,000 for pointing it out to the company. While it worked out well for the person who found the bug, it serves as a reminder that multi-billion-dollar companies are not immune to flaws either. Meta has recently patched a security bug in its AI chatbot that exposed private user prompts and AI-generated responses to other users, as reported by TechCrunch. Sandeep Hodkasia, the founder of a security testing firm called AppSecure, stumbled upon the flaw and informed the company about the vulnerability last December. Meta did not hold back on rewarding him for the disclosure, giving him about $10,000 through its bug bounty program. The tech giant has confirmed that the bug has been resolved, and it was also quick to point out that while the flaw existed, there was no evidence of it being exploited. Hodkasia explained that he found the gap while examining the way Meta AI lets logged-in users edit prompts to regenerate text and images. This might sound like a simple feature, but given that it handles sensitive user interactions, it is not so simple after all. The issue was that Meta's servers were not fully verifying whether the user requesting a prompt had the authorization to access it. Meta's systems gave each prompt a unique identifier; anyone able to modify that identifier could gain access to another user's prompts and responses without proper authorization. Since the identifiers were predictable, this was a major flaw that could have paved the way for attackers to harvest sensitive information. The discovery of the security bug follows growing scrutiny of Meta's AI practices, specifically after its standalone app came out earlier this year. The app unintentionally exposed private conversations owing to unclear sharing settings; many users were unaware that their interactions were being shared publicly, leading many to question the tech giant's approach to ethical and responsible AI.
[6]
Meta AI Bug Exposed Private Conversations to Other Users
Meta investigated the issue and confirmed that the bug was fixed by January 24, 2025, stating that there was no evidence of exploitation. The flaw highlights that strong cybersecurity practices must go hand in hand with AI development. Meta acknowledged the bug and fixed it promptly, and the company has said it will improve its systems and take steps to ensure such vulnerabilities do not recur. The move shows Meta working to protect user data, especially as its AI offering expands rapidly across multiple regions and platforms, and the response may help restore confidence and trust among users after repeated privacy lapses.
[7]
Using Meta AI? A bug may have exposed your conversations to other users
A bug in Meta AI allowed users to access others' private prompts and responses. If you've been using Meta's AI chatbot to generate text or images, there's an important privacy issue you should be aware of. A bug in the system may have allowed other users to see your private prompts and the responses generated by the AI. Although the bug has now been fixed, it was live for several weeks, raising concerns about the security of your data when using AI tools. Meta told TechCrunch that no misuse was detected, but the vulnerability shows how even trusted platforms can have unexpected lapses. The issue was discovered by Sandeep Hodkasia, the founder of security testing firm AppSecure. Meta paid him $10,000 in a bug bounty reward for privately disclosing the bug on December 26, 2024, according to the report. Meta then rolled out a fix on January 24, 2025. According to Hodkasia, the bug was linked to how Meta AI handles prompt editing. When users tweak a prompt to get different text or image responses, Meta's systems assign a unique number to that specific prompt and response. While monitoring his browser's network traffic, Hodkasia discovered that by simply changing this number, he could view the prompt and AI-generated reply of another user. Meta's servers weren't properly checking whether the person asking to view the content was actually allowed to see it. And since the unique numbers used for prompts were "easily guessable," as Hodkasia described, a determined attacker could have scraped users' original prompts by quickly switching prompt numbers using automated tools. This isn't the first time Meta AI has faced privacy concerns. When the standalone Meta AI app launched earlier this year to rival tools like ChatGPT, some users accidentally shared what they thought were private chats publicly.
Meta addressed a significant security vulnerability in its AI chatbot that could have exposed users' private prompts and AI-generated responses. The bug, discovered by a security researcher, was fixed and resulted in a $10,000 bug bounty reward.
Meta, the tech giant behind Facebook, has recently addressed a critical security flaw in its AI chatbot that could have exposed users' private prompts and AI-generated responses. The vulnerability was discovered by Sandeep Hodkasia, founder of security testing firm AppSecure, who reported the issue to Meta on December 26, 2024 [1].
The security flaw stemmed from the way Meta AI assigned unique identifiers to both prompts and responses when users edited their previous inputs. Hodkasia found that by analyzing network traffic and manipulating these identifiers, he could access prompts and AI-generated responses belonging to other users [2].
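The manipulation described here, changing the identifier observed in network traffic and seeing whether the server still answers, translates directly into a security regression test. Below is a sketch against a hypothetical /prompts/<id> endpoint; the host, path, token, and IDs are assumptions for illustration, not Meta's real API:

```python
# Sketch of a security regression test for the IDOR described above.
# Endpoint, host, and auth scheme are hypothetical, not Meta's real API.
import requests

BASE = "https://api.example.com"
SESSION_TOKEN = "user-a-session-token"  # placeholder credential

def fetch_prompt(prompt_id: int) -> requests.Response:
    return requests.get(
        f"{BASE}/prompts/{prompt_id}",
        headers={"Authorization": f"Bearer {SESSION_TOKEN}"},
        timeout=10,
    )

def test_cannot_read_other_users_prompt(own_id: int = 1002) -> None:
    # Our own prompt should be readable...
    assert fetch_prompt(own_id).status_code == 200
    # ...but adjacent IDs, which likely belong to other users, must not be.
    for other_id in (own_id - 1, own_id + 1):
        assert fetch_prompt(other_id).status_code in (403, 404)

if __name__ == "__main__":
    test_cannot_read_other_users_prompt()
    print("authorization check holds")
```

Run as a single user against identifiers adjacent to their own, a check like this fails loudly if the per-object authorization ever regresses.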
This vulnerability raised significant privacy concerns, as many users share sensitive information with AI chatbots, including business documents, personal information, and even intimate details. Such data, if exposed, could be exploited for various malicious purposes, including highly customized phishing attacks, identity theft, or even ransomware deployment [2].
Upon receiving Hodkasia's report, Meta took swift action to address the issue. The company deployed a fix on January 24, 2025, and awarded Hodkasia a $10,000 bug bounty for his responsible disclosure [3]. Meta spokesperson Ryan Daniels confirmed that the company found no evidence of abuse related to this vulnerability [1].
This incident occurs against the backdrop of rapid AI development and deployment by tech giants. It underscores the ongoing challenges in balancing innovation with security and privacy concerns in the AI sector [1].
The discovery of this bug follows earlier issues with Meta's AI initiatives. In a previous incident, some users of Meta AI's standalone app inadvertently shared what they believed to be private conversations publicly [4]. These events have intensified scrutiny of Meta's AI practices and raised questions about the company's approach to ethical and responsible AI development [5].
This security flaw serves as a reminder that even large tech companies are not immune to vulnerabilities in their AI systems. It highlights the need for continued vigilance, robust security measures, and responsible disclosure programs in the rapidly evolving field of AI technology [5].