
Researchers bypassed Google Gemini’s defenses to exfiltrate private Google Calendar data using natural language instructions. The attack created misleading events, delivering sensitive data to an attacker within a Calendar event description.
Gemini, Google’s large language model (LLM) assistant, integrates across Google web services and Workspace applications such as Gmail and Calendar, summarizing emails, answering questions, and managing events. The newly identified Gemini-based Calendar invite attack begins when a target receives an event invitation containing a prompt-injection payload in its description.
The victim triggers data exfiltration by asking Gemini about their schedule, which causes the assistant to load and parse all relevant events, including the one with the attacker’s payload. Researchers at Miggo Security, an Application Detection & Response (ADR) platform, discovered they could manipulate Gemini into leaking Calendar data through natural language instructions:
“Because Gemini automatically ingests and interprets event data to be helpful, an attacker who can influence event fields can plant natural language instructions that the model may later execute,” the researchers said. They controlled an event’s description field, planting a prompt that Google Gemini obeyed despite the harmful outcome.
Upon sending the malicious invite, the payload remained dormant until the victim made a routine inquiry about their schedule. When Gemini executed the embedded instructions in the malicious Calendar invite, it created a new event and wrote the private meeting summary into its description. In many enterprise configurations, the updated description became visible to event participants, potentially leaking private information to the attacker.
Miggo noted that Google employs a separate, isolated model to detect malicious prompts in the primary Gemini assistant. However, their attack bypassed this safeguard because the instructions appeared innocuous. Miggo’s head of research, Liad Eliyahu, told BleepingComputer that the new attack demonstrated Gemini’s reasoning capabilities remained susceptible to manipulation, circumventing active security warnings and Google’s additional defenses implemented after SafeBreach’s August 2025 report. SafeBreach previously showed that a malicious Google Calendar invite could facilitate data leakage by seizing control of Gemini’s agents.
Miggo shared its findings with Google, which has since implemented new mitigations to block similar attacks. Miggo’s attack concept highlights the complexities of anticipating new exploitation and manipulation models in AI systems where APIs are driven by natural language with ambiguous intent. Researchers suggested that application security must transition from syntactic detection to context-aware defenses.