6 Sources
6 Sources
[1]
Google won't fix new ASCII smuggling attack in Gemini
Google has decided not to fix a new ASCII smuggling attack in Gemini that could be used to trick the AI assistant into providing users with fake information, alter the model's behavior, and silently poison its data. ASCII smuggling is an attack where special characters from the Tags Unicode block are used to introduce payloads that are invisible to users but can still be detected and processed by large-language models (LLMs). It's similar to other attacks that researchers presented recently against Google Gemini, which all exploit a gap between what users see and what machines read, like performing CSS manipulation or exploiting GUI limitations. While LLMs' susceptibility to ASCII smuggling attacks isn't a new discovery, as several researchers have explored this possibility since the advent of generative AI tools, the risk level is now different [1, 2, 3, 4]. Before, chatbots could only be maliciously manipulated by such attacks if the user was tricked into pasting specially crafted prompts. With the rise of agentic AI tools like Gemini, which have widespread access to sensitive user data and can perform tasks autonomously, the threat is more significant. Viktor Markopoulos, a security researcher at FireTail cybersecurity company, has tested ASCII smuggling against several widely used AI tools and found that Gemini (Calendar invites or email), DeepSeek (prompts), and Grok (X posts), are vulnerable to the attack. Claude, ChatGPT, and Microsoft CoPilot proved secure against ASCII smuggling, implementing some form of input sanitization, FireTail found. Regarding Gemini, its integration with Google Workspace poses a high risk, as attackers could use ASCII smuggling to embed hidden text in Calendar invites or emails. Markopoulos found that it's possible to hide instructions on the Calendar invite title, overwrite organizer details (identity spoofing), and smuggle hidden meeting descriptions or links. Regarding the risk from emails, the researcher states that "for users with LLMs connected to their inboxes, a simple email with hidden commands can instruct the LLM to search the inbox for sensitive items or send contact details, turning a standard phishing attempt into an autonomous data extraction tool." LLMs instructed to browse websites can also stumble upon hidden payloads in product descriptions and feed them with malicious URLs to convey to users. The researcher reported the findings to Google on September 18 but the tech giant dismissed the issue as not being a security bug and may only be exploited in the context of social engineering attacks. Even so, Markopoulos showed that the attack can trick Gemini into supplying false information to users. In one example, the researcher passed an invisible instruction that Gemini processed to present a potentially malicious site as the place to get a good quality phone with a discount. Other tech firms, though, have a different perspective on this type of problems. For example, Amazon published detailed security guidance on the topic of Unicode character smuggling. BleepingComputer has contacted Google for more clarification on the bug but we have yet to receive a response.
[2]
Google says it won't fix Gemini security flaw that could send your sensitive info to a stranger
There are a few reasons why something like this is problematic. For example, the prompt could tell the AI to search your inbox for sensitive information or send contact details. Considering that Gemini is now integrated with Google Workspace, this issue poses an even higher risk. Markopoulos reportedly reached out to Google with this discovery. He even went as far as to provide a demonstration where he passed on an invisible instruction to Gemini. The AI ended up getting tricked into sharing a malicious site for a good-quality, discounted phone. However, it's reported that Google dismissed the issue as not a security bug, but rather a social engineering tactic. Essentially, the company is saying that the onus falls on the end user.
[3]
Gemini has a known vulnerability, and Google is leaving it alone
Google's Gemini AI is facing scrutiny after a researcher discovered a new exploit called an "ASCII smuggling" attack, and the tech giant has made it clear it doesn't plan to fix it. Cybersecurity researcher Viktor Markopoulos from FireTail first revealed the flaw. It involves hidden characters in text that Gemini reads as commands, even though people cannot see them (via BleepingComputer). These invisible instructions can change how the model behaves, making it do things the user did not intend. The attack is hard to spot because it uses control characters or Unicode symbols that do not show up on screen but are still read by the AI. Markopoulos demonstrated that this vulnerability could be weaponized through simple text-based inputs like calendar invites or emails. For instance, an email that looks entirely harmless could contain hidden instructions, causing Gemini to misinterpret or rewrite data when summarizing or interacting with that text. In his tests, Gemini could be coaxed into changing meeting details or generating misleading outputs, all triggered by characters a human wouldn't even know were there. What's even more alarming is that when the same attack was tested against other major AI systems like OpenAI's ChatGPT, Anthropic's Claude, and Microsoft's Copilot, those models either sanitized or rejected the hidden inputs. Gemini, however, along with Elon Musk's Grok and China's DeepSeek, failed to block them. Google dismisses exploit as social engineering Despite the clear security implications, Google has decided not to address the issue. In a response to FireTail's report, the company classified ASCII smuggling as a "social engineering" problem rather than a technical vulnerability. In simpler terms, Google is suggesting that the issue arises from users being tricked, not from a flaw in the model's design. Google's decision could worry some users. Because Gemini works closely with email, scheduling, and document systems, there is a risk that it might expose confidential information or help spread misinformation in corporate networks. Google maintains that this attack is not a system flaw, but others may see it as a gap in how AI reads text compared to humans. Attackers often exploit this kind of difference. Meanwhile, Google has patched other Gemini-related vulnerabilities this year, including issues in logs, search summaries, and browsing histories known as the "Gemini Trifecta."
[4]
Researcher finds security flaw in Gemini -- but Google says it's not fixing it
AI assistant is vulnerable to ASCII smuggling attacks which can feed users malicious info Although a researcher was able to demonstrate that Google Gemini could be tricked into giving users fake information like leading them to malicious websites, Google has said it doesn't consider this ASCII smuggling attack a true security bug and it has no plans to issue a fix for the flaw. As reported by BleepingComputer, the company dismissed the findings as being more of an issue of social engineering attacks than an actual security vulnerability. Since Gemini is so closely integrated with Google Workspace, this vulnerability is a high risk issue as this attack could be used to embed hidden text in Calendar invites or emails to instruct the AI assistant in unseen Calendar invite tiles, overwrite organizer details or hidden meeting descriptions or links. ASCII smuggling is an attack style that uses special characters from the Tags Unicode block to introduce payloads that are invisible to users, but can still be detected and processed by large language models. - Essentially, this means that it hides letters, numbers or other characters to introduce malicious code to the AI assistant that users can't see. LLMs have been vulnerable to ASCII smuggling attacks - and similar methods - for quite some time, however, the threat is now higher because agentic AI tools, like Gemini, have both widespread access to sensitive user data and can perform autonomous tasks. According to the researchers "If users have LLMs connected to their inboxes, an email with hidden commands can instruct them [the AI] to search the inbox for sensitive items, send contact details and then turn a standard phishing attempt into an autonomous data extraction tool." LLMs that have been told to browse websites could also potentially stumble onto hidden payloads in product descriptions and feed them with malicious URLs to feed back to users. There are other techniques that use similar methods to manipulate the gap between what users see and what machines read including CSS manipulation and GUI limitations. The security researcher involved in this research found that Gemini, like Grok and DeepSeek, is vulnerable to ASCII smuggling attacks while Claude, ChatGPT and Microsoft Copilot are safe from such threats by implementing some form of input sanitization. We reached out to Google for comment about the research and will update this story if and when we hear back.
[5]
Google says it won't fix this potentially concerning Gemini security issue
Gemini's integration with Workspace apps makes it vulnerable to hidden prompt-triggered phishing attacks A recently-detected "ASCII smuggling attack" will not be getting a fix in Google's Gemini artificial intelligence tool, the company has said - saying it is not a security issue but rather a social engineering tactic and as such, the responsibility falls on the end user. This is according to Viktor Markopoulos, a security researcher at FireTail, who demonstrated the risks these attacks pose to Gemini users but was apparently dismissed by the company. ASCII smuggling is a type of attack in which crooks trick victims into prompting their AI tool a malicious command that puts their computers and data at risk. The trick works by "smuggling", or hiding, the prompt in plain sight by, for example, having the AI read text invisible to the human behind the screen. In the early years of AI, this wasn't much of an issue, because the user needed to bring up the AI tool and type (or copy/paste) the prompt themselves. However, a lot has changed since then and many AI tools are now being integrated with other apps and platforms. Gemini, for example, is now integrated with Google Workplace, being able to pull data from Sheets, generate text in Docs, and read and summarize emails. This last point is crucial here. As Markopoulos demonstrated, a threat actor could send a phishing email that, on the surface, looks completely legitimate. However, it also comes with a malicious prompt written in font 0, in white, on a white background, so that the reader doesn't even see it. But when the victim asks Gemini to summarize the email, the tool reads the prompt too, and responds to it. That prompt could be to display a message saying "your computer is compromised, call Google to mitigate the threat immediately," or a similar message, standard to phishing tricks. Even more ominously, the prompt could force different AI agents to exfiltrate sensitive data from the inbox. All it takes is a simple, benign command from the user, to summarize or read the contents of the email. Via BleepingComputer
[6]
Google Won't Fix Gemini Flaw That Lets Hackers Hide Instructions in Your Calendar - Phandroid
Google is refusing to patch a Gemini security flaw that lets attackers manipulate the AI assistant using invisible text. The exploit works through calendar invites or emails, putting anyone using Gemini with Google Workspace at risk. Here's how the attack works. Someone sends you a calendar invite that looks completely normal. Hidden inside are invisible instructions for Gemini. When you ask Gemini to summarize your calendar or read your email, it processes those hidden commands and follows them. You never see the malicious instructions, but Gemini reads and obeys them anyway. Security researcher Viktor Markopoulos from FireTail discovered the Gemini security flaw and contacted Google in September. Google dismissed the concerns, arguing this counts as social engineering rather than a vulnerability they need to fix. Attackers could exploit this to recommend phishing sites when you ask about meeting details or manipulate who appears as the meeting organizer. Since Gemini keeps expanding across Google's services, the problem only gets worse. FireTail tested other AI assistants and found that ChatGPT, Claude, and Microsoft Copilot all block these attacks through input sanitization. Google's Gemini AI lacks this protection, leaving users vulnerable through everyday tools like email and calendar. Amazon even published security guidelines about this exact threat, showing other tech companies take it seriously while Google shifts responsibility to users.
Share
Share
Copy Link
A security researcher has discovered a new ASCII smuggling attack vulnerability in Google's Gemini AI, which could be exploited to manipulate the AI's behavior and potentially expose sensitive user data. Google has dismissed the issue, classifying it as a social engineering problem rather than a security flaw.
Security researcher Viktor Markopoulos from FireTail has uncovered a new vulnerability in Google's Gemini AI, known as an 'ASCII smuggling attack'. This exploit allows attackers to insert hidden commands into text that are invisible to users but can be processed by the AI model
1
.
Source: Android Police
The attack works by using special characters from the Tags Unicode block to introduce payloads that are undetectable to the human eye but can be read and executed by large language models (LLMs) like Gemini
1
. This technique exploits the gap between what users see and what machines can process, similar to other recently discovered attacks involving CSS manipulation and GUI limitations.The integration of Gemini with Google Workspace significantly amplifies the potential risks associated with this vulnerability. Attackers could potentially embed hidden text in Calendar invites or emails, instructing the AI to perform unauthorized actions such as:
2
Markopoulos demonstrated that the attack could trick Gemini into providing false information to users, such as recommending potentially malicious websites
1
.
Source: Bleeping Computer
Despite the potential security implications, Google has decided not to address the issue. The company classified ASCII smuggling as a 'social engineering' problem rather than a technical vulnerability, suggesting that the responsibility lies with the end-user
3
.This stance contrasts with other major AI providers. When tested against similar attacks, OpenAI's ChatGPT, Anthropic's Claude, and Microsoft's Copilot were found to have implemented input sanitization measures, effectively blocking such attempts. However, Elon Musk's Grok and China's DeepSeek were also vulnerable to ASCII smuggling attacks
3
.Related Stories
The discovery of this vulnerability and Google's response have raised concerns within the cybersecurity community. As AI assistants like Gemini gain more access to sensitive user data and perform autonomous tasks, the potential impact of such attacks becomes more significant
4
.
Source: Phandroid
Some experts argue that Google's decision not to address the issue could lead to increased risks of data breaches and the spread of misinformation, particularly in corporate networks where Gemini is integrated with email, scheduling, and document systems
5
.As the AI landscape continues to evolve, the industry may need to reconsider how it approaches security vulnerabilities that blur the line between technical flaws and social engineering tactics. The incident highlights the ongoing challenges in balancing the capabilities of AI assistants with the need for robust security measures to protect user data and maintain trust in these emerging technologies.
Summarized by
Navi
[1]
[2]
[3]
07 Aug 2025•Technology

30 Sept 2025•Technology

30 Jul 2025•Technology
