ChatGPT Flaws: ShadowLeak & ZombieAgent Exploits

Security researchers have disclosed critical vulnerabilities in ChatGPT that allowed attackers to exfiltrate sensitive data from Gmail, Outlook, GitHub, and other connected services. These flaws, dubbed ShadowLeak and ZombieAgent, exploited ChatGPT’s Connectors and Memory features to enable zero‑click attacks, persistence, and propagation.

How the Exploits Worked

  • Connectors: Integrations with Gmail, Jira, GitHub, Teams, Google Drive gave attackers broad access to personal/corporate data.
  • Memory: Stored conversations and user data for personalization, but also became a vector for malicious persistence.

Attack Types

Attack TypeTriggerExfiltration MethodScope
Zero‑Click Server‑SideMalicious file sharedbrowser.open() tool via OpenAI serversGmail inboxes, PII
One‑Click Server‑SideUploading tainted filesHidden prompts in docsGoogle Drive, GitHub
Persistence (ZombieAgent)Memory modification via fileOngoing leaks per queryAll chats, medical data
PropagationEmail address harvestingAuto‑forward payloads to contactsOrganizational spread

Techniques Used

  • Invisible prompts: Hidden in emails/files via white text, tiny fonts, or footers.
  • Zero‑click variant: ChatGPT executed payloads during routine tasks (e.g., summarizing emails).
  • Persistence: Memory‑altering rules ensured data leaks on every query.
  • Propagation: Harvested inbox addresses, auto‑sent payloads to spread across organizations.
  • Bypassing defenses:
    • OpenAI blocked dynamic URL modifications, but attackers used pre‑built static URLs for each character.
    • Sensitive strings normalized (e.g., “Zvika Doe” → “zvikadoe”), then exfiltrated via sequential static links.

Timeline

  • Sept 26, 2025: Radware reported flaws via BugCrowd.
  • Sept 3, 2025: ShadowLeak patched.
  • Dec 16, 2025: Full set of vulnerabilities fixed after reproduction.

Risks

  • Zero‑click exploitation: No user interaction required.
  • Persistent leaks: Memory manipulation enabled ongoing exfiltration.
  • Organizational spread: Auto‑forwarding attacks could compromise entire networks.
  • Cloud integration abuse: Leveraged trusted services (Gmail, GitHub, Outlook) for stealth.

Defensive Recommendations

  • For organizations:
    • Monitor agent behaviors for anomalies.
    • Sanitize inputs in files/emails before processing.
    • Restrict connector permissions to least privilege.
    • Audit memory usage and disable if unnecessary.
  • For individuals:
    • Be cautious of unsolicited files/emails with hidden content.
    • Regularly review connected services and revoke unused integrations.
    • Enable MFA across Gmail, Outlook, GitHub.

Takeaway

ShadowLeak and ZombieAgent highlight the double‑edged nature of agentic AI features: while Connectors and Memory enhance utility, they also expand the attack surface. The flaws show how zero‑click and persistence attacks can weaponize AI integrations, underscoring the need for continuous monitoring and stricter safeguards.

1 Trackback / Pingback

  1. ChatGPT Prompt Injection: Don't Paste Passwords (2026)

Leave a Reply

Your email address will not be published.


*


This site uses Akismet to reduce spam. Learn how your comment data is processed.