Security researchers have disclosed critical vulnerabilities in ChatGPT that allowed attackers to exfiltrate sensitive data from Gmail, Outlook, GitHub, and other connected services. These flaws, dubbed ShadowLeak and ZombieAgent, exploited ChatGPT’s Connectors and Memory features to enable zero‑click attacks, persistence, and propagation.
How the Exploits Worked
- Connectors: Integrations with Gmail, Jira, GitHub, Teams, Google Drive gave attackers broad access to personal/corporate data.
- Memory: Stored conversations and user data for personalization, but also became a vector for malicious persistence.
Attack Types
| Attack Type | Trigger | Exfiltration Method | Scope |
|---|---|---|---|
| Zero‑Click Server‑Side | Malicious file shared | browser.open() tool via OpenAI servers | Gmail inboxes, PII |
| One‑Click Server‑Side | Uploading tainted files | Hidden prompts in docs | Google Drive, GitHub |
| Persistence (ZombieAgent) | Memory modification via file | Ongoing leaks per query | All chats, medical data |
| Propagation | Email address harvesting | Auto‑forward payloads to contacts | Organizational spread |
Techniques Used
- Invisible prompts: Hidden in emails/files via white text, tiny fonts, or footers.
- Zero‑click variant: ChatGPT executed payloads during routine tasks (e.g., summarizing emails).
- Persistence: Memory‑altering rules ensured data leaks on every query.
- Propagation: Harvested inbox addresses, auto‑sent payloads to spread across organizations.
- Bypassing defenses:
- OpenAI blocked dynamic URL modifications, but attackers used pre‑built static URLs for each character.
- Sensitive strings normalized (e.g., “Zvika Doe” → “zvikadoe”), then exfiltrated via sequential static links.
Timeline
- Sept 26, 2025: Radware reported flaws via BugCrowd.
- Sept 3, 2025: ShadowLeak patched.
- Dec 16, 2025: Full set of vulnerabilities fixed after reproduction.
Risks
- Zero‑click exploitation: No user interaction required.
- Persistent leaks: Memory manipulation enabled ongoing exfiltration.
- Organizational spread: Auto‑forwarding attacks could compromise entire networks.
- Cloud integration abuse: Leveraged trusted services (Gmail, GitHub, Outlook) for stealth.
Defensive Recommendations
- For organizations:
- Monitor agent behaviors for anomalies.
- Sanitize inputs in files/emails before processing.
- Restrict connector permissions to least privilege.
- Audit memory usage and disable if unnecessary.
- For individuals:
- Be cautious of unsolicited files/emails with hidden content.
- Regularly review connected services and revoke unused integrations.
- Enable MFA across Gmail, Outlook, GitHub.
Takeaway
ShadowLeak and ZombieAgent highlight the double‑edged nature of agentic AI features: while Connectors and Memory enhance utility, they also expand the attack surface. The flaws show how zero‑click and persistence attacks can weaponize AI integrations, underscoring the need for continuous monitoring and stricter safeguards.
Leave a Reply