2 Sources
2 Sources
[1]
Google won't fix new ASCII smuggling attack in Gemini
Google has decided not to fix a new ASCII smuggling attack in Gemini that could be used to trick the AI assistant into providing users with fake information, alter the model's behavior, and silently poison its data. ASCII smuggling is an attack where special characters from the Tags Unicode block are used to introduce payloads that are invisible to users but can still be detected and processed by large-language models (LLMs). It's similar to other attacks that researchers presented recently against Google Gemini, which all exploit a gap between what users see and what machines read, like performing CSS manipulation or exploiting GUI limitations. While LLMs' susceptibility to ASCII smuggling attacks isn't a new discovery, as several researchers have explored this possibility since the advent of generative AI tools, the risk level is now different [1, 2, 3, 4]. Before, chatbots could only be maliciously manipulated by such attacks if the user was tricked into pasting specially crafted prompts. With the rise of agentic AI tools like Gemini, which have widespread access to sensitive user data and can perform tasks autonomously, the threat is more significant. Viktor Markopoulos, a security researcher at FireTail cybersecurity company, has tested ASCII smuggling against several widely used AI tools and found that Gemini (Calendar invites or email), DeepSeek (prompts), and Grok (X posts), are vulnerable to the attack. Claude, ChatGPT, and Microsoft CoPilot proved secure against ASCII smuggling, implementing some form of input sanitization, FireTail found. Regarding Gemini, its integration with Google Workspace poses a high risk, as attackers could use ASCII smuggling to embed hidden text in Calendar invites or emails. Markopoulos found that it's possible to hide instructions on the Calendar invite title, overwrite organizer details (identity spoofing), and smuggle hidden meeting descriptions or links. Regarding the risk from emails, the researcher states that "for users with LLMs connected to their inboxes, a simple email with hidden commands can instruct the LLM to search the inbox for sensitive items or send contact details, turning a standard phishing attempt into an autonomous data extraction tool." LLMs instructed to browse websites can also stumble upon hidden payloads in product descriptions and feed them with malicious URLs to convey to users. The researcher reported the findings to Google on September 18 but the tech giant dismissed the issue as not being a security bug and may only be exploited in the context of social engineering attacks. Even so, Markopoulos showed that the attack can trick Gemini into supplying false information to users. In one example, the researcher passed an invisible instruction that Gemini processed to present a potentially malicious site as the place to get a good quality phone with a discount. Other tech firms, though, have a different perspective on this type of problems. For example, Amazon published detailed security guidance on the topic of Unicode character smuggling. BleepingComputer has contacted Google for more clarification on the bug but we have yet to receive a response.
[2]
Google says it won't fix this potentially concerning Gemini security issue
Gemini's integration with Workspace apps makes it vulnerable to hidden prompt-triggered phishing attacks A recently-detected "ASCII smuggling attack" will not be getting a fix in Google's Gemini artificial intelligence tool, the company has said - saying it is not a security issue but rather a social engineering tactic and as such, the responsibility falls on the end user. This is according to Viktor Markopoulos, a security researcher at FireTail, who demonstrated the risks these attacks pose to Gemini users but was apparently dismissed by the company. ASCII smuggling is a type of attack in which crooks trick victims into prompting their AI tool a malicious command that puts their computers and data at risk. The trick works by "smuggling", or hiding, the prompt in plain sight by, for example, having the AI read text invisible to the human behind the screen. In the early years of AI, this wasn't much of an issue, because the user needed to bring up the AI tool and type (or copy/paste) the prompt themselves. However, a lot has changed since then and many AI tools are now being integrated with other apps and platforms. Gemini, for example, is now integrated with Google Workplace, being able to pull data from Sheets, generate text in Docs, and read and summarize emails. This last point is crucial here. As Markopoulos demonstrated, a threat actor could send a phishing email that, on the surface, looks completely legitimate. However, it also comes with a malicious prompt written in font 0, in white, on a white background, so that the reader doesn't even see it. But when the victim asks Gemini to summarize the email, the tool reads the prompt too, and responds to it. That prompt could be to display a message saying "your computer is compromised, call Google to mitigate the threat immediately," or a similar message, standard to phishing tricks. Even more ominously, the prompt could force different AI agents to exfiltrate sensitive data from the inbox. All it takes is a simple, benign command from the user, to summarize or read the contents of the email. Via BleepingComputer
Share
Share
Copy Link
Google has decided not to address a newly discovered ASCII smuggling attack in its Gemini AI assistant, potentially exposing users to hidden malicious prompts. The vulnerability, particularly concerning due to Gemini's integration with Google Workspace, could lead to data breaches and phishing attacks.
Security researchers have uncovered a new vulnerability in Google's Gemini AI assistant, known as an ASCII smuggling attack. This technique allows attackers to embed hidden text using special Unicode characters, which are invisible to users but can be processed by large language models (LLMs) like Gemini
1
.Source: TechRadar
Despite the potential risks, Google has decided not to address this vulnerability, stating that it is not a security bug but rather a social engineering tactic
2
. This decision has raised concerns among cybersecurity experts, particularly due to Gemini's integration with Google Workspace applications.Viktor Markopoulos, a security researcher at FireTail, demonstrated the attack's potential impact:
1
.1
.1
.Markopoulos tested several AI tools for ASCII smuggling vulnerabilities:
1
The secure AI tools implement some form of input sanitization to prevent such attacks.
The integration of Gemini with Google Workspace significantly increases the risk profile of this vulnerability. Attackers could potentially:
1
In a phishing scenario, a malicious email could contain hidden prompts that, when processed by Gemini, could lead to data exfiltration or display misleading security warnings
2
.While Google maintains that this is not a security issue, other tech companies have taken a different stance. For instance, Amazon has published detailed security guidance on Unicode character smuggling
1
. This discrepancy in approach highlights the ongoing debate about AI security and the responsibilities of AI developers in mitigating potential risks.🟡 harsher_than_expected=🟡### ASCII Smuggling Attack Discovered in Gemini AISecurity researchers have uncovered a new vulnerability in Google's Gemini AI assistant, known as an ASCII smuggling attack. This technique allows attackers to embed hidden text using special Unicode characters, which are invisible to users but can be processed by large language models (LLMs) like Gemini
1
.Source: TechRadar
Despite the potential risks, Google has decided not to address this vulnerability, stating that it is not a security bug but rather a social engineering tactic
2
. This decision has raised concerns among cybersecurity experts, particularly due to Gemini's integration with Google Workspace applications.Related Stories
Viktor Markopoulos, a security researcher at FireTail, demonstrated the attack's potential impact:
1
.1
.1
.Markopoulos tested several AI tools for ASCII smuggling vulnerabilities:
1
The secure AI tools implement some form of input sanitization to prevent such attacks.
The integration of Gemini with Google Workspace significantly increases the risk profile of this vulnerability. Attackers could potentially:
1
In a phishing scenario, a malicious email could contain hidden prompts that, when processed by Gemini, could lead to data exfiltration or display misleading security warnings
2
.While Google maintains that this is not a security issue, other tech companies have taken a different stance. For instance, Amazon has published detailed security guidance on Unicode character smuggling
1
. This discrepancy in approach highlights the ongoing debate about AI security and the responsibilities of AI developers in mitigating potential risks.Summarized by
Navi
[1]