A newly disclosed zero‑click vulnerability dubbed GeminiJack exposed Gmail, Calendar, and Docs data in Google Gemini Enterprise (formerly Vertex AI Search).
What Happened
- Nature of flaw: Considered an architectural vulnerability, not just a bug.
- Attack vector:
- No clicks or user interaction required.
- Attacker shares a poisoned Google Doc, Calendar invite, or email containing hidden prompt injections.
- Trigger: When employees run routine Gemini searches (e.g., “show Q4 budgets”), the AI retrieves the malicious content.
- Exfiltration method:
- AI executes injected instructions.
- Embeds sensitive results in an HTML
<img>tag. - Sends data to attacker’s server via normal HTTP traffic.
How GeminiJack Works (Step‑by‑Step)
- Poisoning → Attacker shares content with embedded prompt injection.
- Trigger → Employee runs a Gemini query.
- Retrieval → Gemini’s RAG (Retrieval‑Augmented Generation) pipeline indexes poisoned content.
- Exfiltration → AI outputs results into disguised image requests, leaking data externally.
Impact
- Blast radius amplified by Gemini’s persistent access to Workspace data sources.
- Potential exposure:
- Years of Gmail emails.
- Full calendars revealing deals, structures, and schedules.
- Docs repositories with contracts, financials, and sensitive intel.
- From the employee’s perspective → normal search results.
- From security’s perspective → no malware, no phishing, just AI “working as designed.”
Google’s Response
- Separated Vertex AI Search from Gemini.
- Patched RAG instruction handling to block malicious prompt injections.
Broader Implications
- AI‑native risks: Prompt injection attacks can weaponize assistants with access to corporate data.
- Traditional defenses fail: DLP and endpoint tools don’t detect poisoned inputs.
- Trust boundaries must be rethought:
- Monitor RAG pipelines.
- Limit AI data source access.
- Treat shared content as potential attack vectors.
Takeaways for Organizations
- Audit AI integrations: Ensure assistants don’t have unrestricted access to sensitive data sources.
- Implement guardrails: Validate and sanitize inputs before they enter RAG pipelines.
- Monitor for anomalies: Watch for unusual outbound traffic (e.g., image requests with embedded data).
- Educate staff: Highlight risks of poisoned shared content even if it looks benign.
GeminiJack is a wake‑up call: as AI assistants gain deeper access to enterprise systems, prompt injection becomes the new phishing — silent, invisible, and devastating.
Leave a Reply