Researchers at Miggo Security disclosed a prompt injection vulnerability in Google Gemini that allowed attackers to bypass guardrails and exfiltrate private Google Calendar data via malicious invites.
How the Attack Worked
- Malicious calendar invite crafted by attacker.
- Payload hidden in event description as natural language instructions.
- Victim asks Gemini an innocent question (e.g., “Do I have meetings Tuesday?”).
- Gemini parses the injected prompt → creates a new calendar event.
- Event description contains a full summary of private meetings.
- In enterprise setups, the attacker could view the new event and read exfiltrated data.
Result: Unauthorized access to private meeting details without any user interaction.
Security Implications
- Bypasses privacy controls: Exploits Gemini’s ability to act on natural language.
- Indirect prompt injection: Attack lives in language/context, not code.
- Stealthy exfiltration: Victim sees a harmless response, while attacker gains sensitive data.
- Broader risk: AI-native features expand attack surfaces in enterprise workflows.
Related AI Vulnerabilities
- Varonis Reprompt: Exfiltrated sensitive data from Copilot in one click.
- XM Cyber (Google Vertex AI): Privilege escalation via “double agent” service identities.
- The Librarian (CVE-2026-0612–0616): Backend takeover and metadata leaks.
- Anthropic Claude Code plugin abuse: Indirect prompt injection to steal files.
- Cursor IDE (CVE-2026-22708): Remote code execution via shell built-in manipulation.
- Vibe coding IDEs (Cursor, Codex, Replit, Devin): Weak against SSRF, business logic flaws, lacking CSRF protection.
Defensive Recommendations
- Audit AI workflows: Treat natural language inputs as potential attack vectors.
- Guardrails: Anchor regex, sanitize inputs, and enforce strict authorization checks.
- Visibility: Monitor AI actions that write to external systems (calendar, DB, logs).
- Identity hygiene: Review service accounts and limit privileges.
- Human oversight: AI agents cannot reliably enforce nuanced business logic or security controls.
Takeaway
The Gemini flaw shows how language-driven AI systems can be manipulated to perform unauthorized actions. As enterprises adopt AI agents, prompt injection becomes a critical attack vector, requiring both traditional security controls and runtime monitoring of AI behavior.
Leave a Reply