Google Dismisses ASCII Smuggling Vulnerability in Gemini AI

2 Sources

Share

Google has decided not to address a newly discovered ASCII smuggling attack in its Gemini AI assistant, potentially exposing users to hidden malicious prompts. The vulnerability, particularly concerning due to Gemini's integration with Google Workspace, could lead to data breaches and phishing attacks.

ASCII Smuggling Attack Discovered in Gemini AI

Security researchers have uncovered a new vulnerability in Google's Gemini AI assistant, known as an ASCII smuggling attack. This technique allows attackers to embed hidden text using special Unicode characters, which are invisible to users but can be processed by large language models (LLMs) like Gemini

1

.

Source: TechRadar

Source: TechRadar

Google's Response and Implications

Despite the potential risks, Google has decided not to address this vulnerability, stating that it is not a security bug but rather a social engineering tactic

2

. This decision has raised concerns among cybersecurity experts, particularly due to Gemini's integration with Google Workspace applications.

Vulnerability Demonstration

Viktor Markopoulos, a security researcher at FireTail, demonstrated the attack's potential impact:

  1. Calendar Invites: Hidden text can be embedded in invite titles, overwriting organizer details and smuggling concealed meeting descriptions or links

    1

    .
  2. Emails: For users with LLMs connected to their inboxes, a simple email containing hidden commands could instruct the AI to search for sensitive information or extract contact details

    1

    .
  3. Website Browsing: LLMs instructed to browse websites might encounter hidden payloads in product descriptions, potentially leading users to malicious URLs

    1

    .

Comparative AI Tool Security

Markopoulos tested several AI tools for ASCII smuggling vulnerabilities:

  • Vulnerable: Gemini, DeepSeek, and Grok
  • Secure: Claude, ChatGPT, and Microsoft CoPilot

    1

The secure AI tools implement some form of input sanitization to prevent such attacks.

Potential Consequences

The integration of Gemini with Google Workspace significantly increases the risk profile of this vulnerability. Attackers could potentially:

  1. Trick Gemini into providing false information to users
  2. Alter the model's behavior
  3. Silently poison its data

    1

In a phishing scenario, a malicious email could contain hidden prompts that, when processed by Gemini, could lead to data exfiltration or display misleading security warnings

2

.

Industry Perspectives

While Google maintains that this is not a security issue, other tech companies have taken a different stance. For instance, Amazon has published detailed security guidance on Unicode character smuggling

1

. This discrepancy in approach highlights the ongoing debate about AI security and the responsibilities of AI developers in mitigating potential risks.🟡 harsher_than_expected=🟡### ASCII Smuggling Attack Discovered in Gemini AI

Security researchers have uncovered a new vulnerability in Google's Gemini AI assistant, known as an ASCII smuggling attack. This technique allows attackers to embed hidden text using special Unicode characters, which are invisible to users but can be processed by large language models (LLMs) like Gemini

1

.

Source: TechRadar

Source: TechRadar

Google's Response and Implications

Despite the potential risks, Google has decided not to address this vulnerability, stating that it is not a security bug but rather a social engineering tactic

2

. This decision has raised concerns among cybersecurity experts, particularly due to Gemini's integration with Google Workspace applications.

Vulnerability Demonstration

Viktor Markopoulos, a security researcher at FireTail, demonstrated the attack's potential impact:

  1. Calendar Invites: Hidden text can be embedded in invite titles, overwriting organizer details and smuggling concealed meeting descriptions or links

    1

    .
  2. Emails: For users with LLMs connected to their inboxes, a simple email containing hidden commands could instruct the AI to search for sensitive information or extract contact details

    1

    .
  3. Website Browsing: LLMs instructed to browse websites might encounter hidden payloads in product descriptions, potentially leading users to malicious URLs

    1

    .

Comparative AI Tool Security

Markopoulos tested several AI tools for ASCII smuggling vulnerabilities:

  • Vulnerable: Gemini, DeepSeek, and Grok
  • Secure: Claude, ChatGPT, and Microsoft CoPilot

    1

The secure AI tools implement some form of input sanitization to prevent such attacks.

Potential Consequences

The integration of Gemini with Google Workspace significantly increases the risk profile of this vulnerability. Attackers could potentially:

  1. Trick Gemini into providing false information to users
  2. Alter the model's behavior
  3. Silently poison its data

    1

In a phishing scenario, a malicious email could contain hidden prompts that, when processed by Gemini, could lead to data exfiltration or display misleading security warnings

2

.

Industry Perspectives

While Google maintains that this is not a security issue, other tech companies have taken a different stance. For instance, Amazon has published detailed security guidance on Unicode character smuggling

1

. This discrepancy in approach highlights the ongoing debate about AI security and the responsibilities of AI developers in mitigating potential risks.

TheOutpost.ai

Your Daily Dose of Curated AI News

Don’t drown in AI news. We cut through the noise - filtering, ranking and summarizing the most important AI news, breakthroughs and research daily. Spend less time searching for the latest in AI and get straight to action.

© 2025 Triveous Technologies Private Limited
Instagram logo
LinkedIn logo