Google Gemini exploited to leak private Calendar data through malicious invites

Reviewed byNidhi Govil

6 Sources

Share

Security researchers at Miggo Security discovered how to trick Google Gemini into leaking sensitive Calendar information using only a calendar invite and natural language instructions. The attack exploited Gemini's deep integration with Google Workspace, bypassing existing defenses to exfiltrate private meeting data without user awareness. Google has since added new mitigations, but the incident highlights ongoing challenges in securing AI assistants against prompt injection attacks.

Google Gemini Vulnerability Exposes Private Calendar Information

Security researchers at Miggo Security have revealed a critical vulnerability in Google Gemini that allowed attackers to leak Google Calendar data through a sophisticated prompt injection attack. The AI security flaw exploited the assistant's deep integration with Google Workspace apps, demonstrating how natural language instructions could bypass existing defenses and grant unauthorized access to meeting data

5

.

Source: Android Authority

Source: Android Authority

The attack required nothing more than a calendar invite. Miggo Security researchers embedded carefully crafted prompts into the description field of a Google Calendar event, which remained dormant until activated by a routine user query

2

. When victims asked Google Gemini simple questions about their schedule, the AI assistant would parse all calendar entries, including the malicious calendar invites, and execute the hidden instructions without raising security warnings.

How the Attack Exploited AI Assistant Integration

The exploit unfolded in three distinct stages. First, an attacker sent a calendar invite containing a payload disguised as benign text that instructed Gemini to summarize private meetings, create a new event, and store sensitive meeting summaries in the event description

4

. The instructions appeared harmless in isolation, which allowed them to evade Google's separate model designed to detect malicious prompts in the primary Gemini assistant

1

.

Source: SiliconANGLE

Source: SiliconANGLE

In the second stage, the payload remained inactive until the victim asked Gemini about their schedule. This triggered the exfiltration activity, causing Google's AI assistant to load and interpret all relevant events

1

. Finally, Gemini executed the embedded instructions, creating a new calendar event with a full summary of the user's private meetings while responding to the victim with an innocuous message like "it's a free time slot"

3

.

Enterprise Configurations Amplify Data Privacy Risks

In many enterprise setups, the newly created event containing sensitive meeting summaries became visible to event participants, directly granting the attacker access to confidential information without any direct user interaction

4

. This vulnerability in large language models highlights a fundamental challenge: AI assistants cannot distinguish between legitimate instructions and data used to execute those instructions

4

.

Miggo's head of research, Liad Eliyahu, told BleepingComputer that the attack demonstrates how Gemini's reasoning capabilities remained vulnerable to manipulation despite Google implementing additional defenses following a previous SafeBreach report in August 2025

1

. That earlier incident also involved malicious Google Calendar invites being used to take control of Gemini's agents and leak sensitive user data.

Mitigations and the Future of Application Security

Google confirmed the findings and has since added new mitigations to block such attacks

2

. However, the incident underscores broader challenges in cybersecurity as AI systems become more deeply integrated into enterprise workflows. The researchers argue that application security must evolve from syntactic detection to context-aware defenses that can reason about semantics and attribute intent

5

.

Miggo Security researchers emphasize that effective protection will require runtime systems that track data provenance and treat large language models as full application layers with privileges that must be carefully governed

5

. As AI assistants gain more capabilities across multiple services, the complexities of foreseeing new exploitation models driven by natural language with ambiguous intent will continue to challenge traditional security frameworks

1

.

Today's Top Stories

TheOutpost.ai

Your Daily Dose of Curated AI News

Don’t drown in AI news. We cut through the noise - filtering, ranking and summarizing the most important AI news, breakthroughs and research daily. Spend less time searching for the latest in AI and get straight to action.

© 2026 Triveous Technologies Private Limited
Instagram logo
LinkedIn logo