3 Sources
[1]
A Single Poisoned Document Could Leak 'Secret' Data Via ChatGPT
Security researchers found a weakness in OpenAI's Connectors, which let you hook up ChatGPT to other services, that allowed them to extract data from a Google Drive without any user interaction. The latest generative AI models are not just stand-alone text-generating chatbots -- instead, they can easily be hooked up to your data to give personalized answers to your questions. OpenAI's ChatGPT can be linked to your Gmail inbox, allowed to inspect your GitHub code, or find appointments in your Microsoft calendar. But these connections have the potential to be abused -- and researchers have shown it can take just a single "poisoned" document to do so. New findings from security researchers Michael Bargury and Tamir Ishay Sharbat, revealed at the Black Hat hacker conference in Las Vegas today, show how a weakness in OpenAI's Connectors allowed sensitive information to be extracted from a Google Drive account using an indirect prompt injection attack. In a demonstration of the attack, dubbed AgentFlayer, Bargury shows how it was possible to extract developer secrets, in the form of API keys, that were stored in a demonstration Drive account. The vulnerability highlights how connecting AI models to external systems and sharing more data across them increases the potential attack surface for malicious hackers and potentially multiplies the ways where vulnerabilities may be introduced. "There is nothing the user needs to do to be compromised, and there is nothing the user needs to do for the data to go out," Bargury, the CTO at security firm Zenity, tells WIRED. "We've shown this is completely zero-click; we just need your email, we share the document with you, and that's it. So yes, this is very, very bad," Bargury says. OpenAI did not immediately respond to WIRED's request for comment about the vulnerability in Connectors. The company introduced Connectors for ChatGPT as a beta feature earlier this year, and its website lists at least 17 different services that can be linked up with its accounts. It says the system allows you to "bring your tools and data into ChatGPT" and "search files, pull live data, and reference content right in the chat." Bargury says he reported the findings to OpenAI earlier this year and that the company quickly introduced mitigations to prevent the technique he used to extract data via Connectors. The way the attack works means only a limited amount of data could be extracted at once -- full documents could not be removed as part of the attack. "While this issue isn't specific to Google, it illustrates why developing robust protections against prompt injection attacks is important," says Andy Wen, senior director of security product management at Google Workspace, pointing to the company's recently enhanced AI security measures.
[2]
This ChatGPT Flaw Could Have Let Hackers Steal Your Google Drive Data
(Credit: Thomas Fuller/SOPA Images/LightRocket via Getty Images) Security researchers have revealed an exploit that hackers could have used to gain access to Google Drive data through a ChatGPT integration. The hack could have happened without any user interaction aside from connecting to the external service. Had it been used, the victim may have been unaware that an attack took place. Security researchers Michael Bargury and Tamir Ishay Sharbat revealed the exploit during the Black Hat hacker conference in Las Vegas. Bargury confirmed to Wired that OpenAI mitigated the problem after it showed the company the flaw. However, it isn't without continued risk. The exploit, dubbed AgentFlayer, works through ChatGPT's Connectors tool. Connectors first debuted in June and allows you to add external services, like documents and spreadsheets, to your ChatGPT account. ChatGPT includes integration with Box, Dropbox, and GitHub as well as various Google and Microsoft services for calendars, file storage, and more. The exploit allowed a hacker to share a file directly to the unsuspecting victim's Google Drive, and the hack got to work right away. ChatGPT would read the information included in the file, but it would include a hidden prompt. The hackers could include it in a size one font, with white text, allowing it to mostly go unnoticed. The prompt is around 300 words long, and it allowed the hackers access to specific files from other areas of Google Drive. In the researchers' example, it allowed them to pull API keys stored within a Drive file. The hidden prompt would also have allowed hackers to continue controlling the AI. In this example, the AI was looking for confidential information to share directly with the hacker, and it may have continued to do so until the victim disconnected the integration with Drive. This shows how AI is a new frontier in the world of security. It means hackers now have even more intensive tools to attack people, with the AI itself, in this scenario, working against the victim. "This isn't exclusively applicable to Google Drive; any resource connected to ChatGPT can be targeted for data exfiltration," Sharbat wrote in a blog post. If you have used connections with third-party tools with any AI chatbot, be careful with what data you've shared so you don't fall victim to an attack. Keep sensitive information locked away, and avoid storing passwords and personal information in cloud services wherever possible.
[3]
It's Staggeringly Easy for Hackers to Trick ChatGPT Into Leaking Your Most Personal Data
OpenAI's ChatGPT can easily be coaxed into leaking your personal data -- with just a single "poisoned" document. As Wired reports, security researchers revealed at this year's Black Hat hacker conference that highly sensitive information can be stolen from a Google Drive account with an indirect prompt injection attack. In other words, hackers feed a document with hidden, malicious prompts to an AI that controls your data instead of manipulating it directly with a prompt injection, one of the most serious types of security flaws threatening the safety of user-facing AI systems. ChatGPT's ability to be linked to a Gmail account allows it to rifle through your files, which could easily expose you to simple hacks. This latest glaring lapse in cybersecurity highlights the tech's enormous shortcomings, and raises concerns that your personal data simply isn't safe with these types of tools. "There is nothing the user needs to do to be compromised, and there is nothing the user needs to do for the data to go out," security firm Zenity CTO Michael Bargury, who discovered the vulnerability with his colleagues, told Wired. "We've shown this is completely zero-click; we just need your email, we share the document with you, and that's it. So yes, this is very, very bad." Earlier this year, OpenAI launched its Connectors for ChatGPT feature in the form of a beta, giving the chatbot access to Google accounts that allow it to "search files, pull live data, and reference content right in the chat." The way the exploit works is by hiding a 300-word malicious prompt in a document in white text and size-one font -- something that's easily overlooked by a human, but not a chatbot like ChatGPT. In a proof of concept, Bargury and his colleagues showed how the hidden prompt flagged a "mistake" to ChatGPT, instructing it that it doesn't actually need a document to be summarized. Instead, it calls for the chatbot to extract Google Drive API keys and share them with the attackers. Bargury already flagged the exploit to OpenAI, which acted quickly enough to plug the hole. The exploit also didn't allow hackers to extract full documents due to how it works, Wired points out. Still, the incident shows that even ChatGPT, with all the staggering resources of OpenAI behind it, is a leaky tub of potential security vulnerabilities even as it's being pushed to institutions ranging from colleges to the federal government. It's not just Google, either -- Connectors allows users to connect up to 17 different services, raising the possibility that other personal information could be extracted as well. It's far from the first time security researchers have flagged glaring cybersecurity gaps in AI systems. There have been numerous other instances of how indirect prompt injections can extract personal data. The same day Wired published its piece, the outlet also reported on a separate indirect prompt injection attack that allowed hackers to hijack a smart home system, enabling them to turn off the lights, open and close smart shutters, and even turn on a boiler. Researchers at Tel Aviv University found that Google's Gemini AI chatbot could be manipulated to figuratively give up the keys to a smart home by feeding it a poisoned Google Calendar invite. A later prompt to summarize calendar events triggers hidden instructions inside the poisoned invite, causing the smart home products to jump into action, Wired reports -- only one of 14 different indirect prompt injection attacks aimed at the AI. "LLMs are about to be integrated into physical humanoids, into semi- and fully autonomous cars, and we need to truly understand how to secure LLMs before we integrate them with these kinds of machines, where in some cases the outcomes will be safety and not privacy," Tel Aviv University researcher Ben Nassi told the publication. We've known about indirect prompt injection attacks for several years now, but given the latest news, companies still have a lot of work to do to mitigate the substantial risks. By giving tools like ChatGPT more and more access to our personal lives, security researchers warn of many more lapses in cybersecurity that could leave our data exposed to hackers. "It's incredibly powerful, but as usual with AI, more power comes with more risk," Bargury told Wired.
Share
Copy Link
Security researchers uncover a flaw in ChatGPT's Connectors feature that could allow hackers to extract sensitive data from connected services like Google Drive, highlighting the potential risks of integrating AI with personal information.
Security researchers have uncovered a significant vulnerability in OpenAI's ChatGPT, specifically in its Connectors feature, which allows the AI to interface with external services like Google Drive. This flaw, dubbed "AgentFlayer," could potentially allow hackers to extract sensitive data from connected accounts without any user interaction, raising serious concerns about the security implications of integrating AI with personal data 1.
Source: Wired
Researchers Michael Bargury and Tamir Ishay Sharbat demonstrated at the Black Hat hacker conference in Las Vegas how a single "poisoned" document could be used to exploit ChatGPT's Connectors. The attack works by sharing a malicious file with the victim's Google Drive, which contains a hidden prompt in white text and size-one font 2.
When ChatGPT processes this document, it executes the hidden instructions, potentially allowing attackers to:
This vulnerability highlights several critical issues:
Zero-Click Exploitation: The attack requires no user interaction beyond the initial connection of services, making it particularly dangerous 1.
Expanded Attack Surface: As AI models become more integrated with external systems, the potential for vulnerabilities increases 1.
AI as a Security Risk: The incident demonstrates how AI itself can be manipulated to work against users, opening new avenues for cyberattacks 2.
Broader Implications: While this specific attack targeted Google Drive, researchers warn that any resource connected to ChatGPT could potentially be vulnerable to similar exploits 2.
OpenAI has reportedly implemented quick fixes to address this specific vulnerability after being notified by the researchers 3. However, the incident underscores the ongoing challenges in securing AI systems, especially as they become more integrated into various aspects of our digital lives.
Source: PC Magazine
The ChatGPT vulnerability is not an isolated incident. Researchers have identified similar security gaps in other AI systems:
Smart Home Vulnerabilities: A separate study demonstrated how Google's Gemini AI could be manipulated to control smart home devices through a poisoned Google Calendar invite 3.
Physical World Implications: As AI systems become integrated into autonomous vehicles and robotics, the potential consequences of security breaches extend beyond data privacy to physical safety 3.
As AI technology continues to advance and integrate more deeply with our personal and professional lives, the need for robust security measures becomes increasingly critical. The ChatGPT vulnerability serves as a stark reminder of the potential risks associated with AI integration and the ongoing challenge of balancing convenience with security in the age of artificial intelligence.
Cybersecurity researchers demonstrate a novel "promptware" attack that uses malicious Google Calendar invites to manipulate Gemini AI into controlling smart home devices, raising concerns about AI safety and real-world implications.
13 Sources
Technology
22 hrs ago
13 Sources
Technology
22 hrs ago
Google's search head Liz Reid responds to concerns about AI's impact on web traffic, asserting that AI features are driving more searches and higher quality clicks, despite conflicting third-party reports.
8 Sources
Technology
22 hrs ago
8 Sources
Technology
22 hrs ago
OpenAI has struck a deal with the US government to provide ChatGPT Enterprise to federal agencies for just $1 per agency for one year, marking a significant move in AI adoption within the government sector.
14 Sources
Technology
22 hrs ago
14 Sources
Technology
22 hrs ago
Microsoft announces the integration of OpenAI's newly released GPT-5 model across its Copilot ecosystem, including Microsoft 365, GitHub, and Azure AI. The update promises enhanced AI capabilities for users and developers.
3 Sources
Technology
6 hrs ago
3 Sources
Technology
6 hrs ago
Google has officially launched its AI coding agent Jules, powered by Gemini 2.5 Pro, offering asynchronous coding assistance with new features and tiered pricing plans.
10 Sources
Technology
22 hrs ago
10 Sources
Technology
22 hrs ago