Google Declines to Fix ASCII Smuggling Vulnerability in Gemini AI

Reviewed byNidhi Govil

6 Sources

Share

A security researcher has discovered a new ASCII smuggling attack vulnerability in Google's Gemini AI, which could be exploited to manipulate the AI's behavior and potentially expose sensitive user data. Google has dismissed the issue, classifying it as a social engineering problem rather than a security flaw.

ASCII Smuggling Attack Discovered in Google Gemini

Security researcher Viktor Markopoulos from FireTail has uncovered a new vulnerability in Google's Gemini AI, known as an 'ASCII smuggling attack'. This exploit allows attackers to insert hidden commands into text that are invisible to users but can be processed by the AI model

1

.

Source: Android Police

Source: Android Police

The attack works by using special characters from the Tags Unicode block to introduce payloads that are undetectable to the human eye but can be read and executed by large language models (LLMs) like Gemini

1

. This technique exploits the gap between what users see and what machines can process, similar to other recently discovered attacks involving CSS manipulation and GUI limitations.

Potential Risks and Implications

The integration of Gemini with Google Workspace significantly amplifies the potential risks associated with this vulnerability. Attackers could potentially embed hidden text in Calendar invites or emails, instructing the AI to perform unauthorized actions such as:

  1. Searching inboxes for sensitive information
  2. Sending contact details to malicious actors
  3. Overwriting organizer details (identity spoofing)
  4. Smuggling hidden meeting descriptions or links

    2

Markopoulos demonstrated that the attack could trick Gemini into providing false information to users, such as recommending potentially malicious websites

1

.

Source: Bleeping Computer

Source: Bleeping Computer

Google's Response and Industry Comparison

Despite the potential security implications, Google has decided not to address the issue. The company classified ASCII smuggling as a 'social engineering' problem rather than a technical vulnerability, suggesting that the responsibility lies with the end-user

3

.

This stance contrasts with other major AI providers. When tested against similar attacks, OpenAI's ChatGPT, Anthropic's Claude, and Microsoft's Copilot were found to have implemented input sanitization measures, effectively blocking such attempts. However, Elon Musk's Grok and China's DeepSeek were also vulnerable to ASCII smuggling attacks

3

.

Industry Concerns and Future Implications

The discovery of this vulnerability and Google's response have raised concerns within the cybersecurity community. As AI assistants like Gemini gain more access to sensitive user data and perform autonomous tasks, the potential impact of such attacks becomes more significant

4

.

Source: Phandroid

Source: Phandroid

Some experts argue that Google's decision not to address the issue could lead to increased risks of data breaches and the spread of misinformation, particularly in corporate networks where Gemini is integrated with email, scheduling, and document systems

5

.

As the AI landscape continues to evolve, the industry may need to reconsider how it approaches security vulnerabilities that blur the line between technical flaws and social engineering tactics. The incident highlights the ongoing challenges in balancing the capabilities of AI assistants with the need for robust security measures to protect user data and maintain trust in these emerging technologies.

Today's Top Stories

TheOutpost.ai

Your Daily Dose of Curated AI News

Don’t drown in AI news. We cut through the noise - filtering, ranking and summarizing the most important AI news, breakthroughs and research daily. Spend less time searching for the latest in AI and get straight to action.

© 2025 Triveous Technologies Private Limited
Instagram logo
LinkedIn logo