Security researchers have demonstrated how agentic AI browsers — tools designed to autonomously navigate and act across websites — can themselves become victims of phishing scams. In a recent test, Perplexity’s Comet AI browser was tricked into a scam in under four minutes, revealing a new frontier in attack surfaces: targeting the AI agent instead of the human user.
How the Attack Works
- Agentic blabbering: AI browsers narrate their reasoning in real time — what they see, what they plan to do, and what they consider suspicious.
- GAN‑driven phishing: Researchers intercepted this narration and fed it into a Generative Adversarial Network (GAN), iteratively refining phishing pages until the AI stopped flagging them as suspicious.
- Offline training, online exploitation: Once a phishing page is tuned to bypass one AI browser, it can reliably trick all users of that agent.
Why This Matters
- Shift in target: Traditional phishing deceives humans; this new approach deceives the AI model itself.
- Scam evolution: Attackers can train scams offline against the exact AI agent millions rely on, ensuring success on first contact.
- Credential theft: Once the AI browser is tricked, it may autonomously enter sensitive data (like refund details or login credentials) into malicious forms.
Related Exploits
- Trail of Bits: Showed prompt injection techniques against Comet to extract private data from Gmail.
- Zenity Labs: Detailed zero‑click attacks (PerplexedComet) that exfiltrated local files or hijacked password manager accounts via indirect prompt injection.
- Intent collision: Occurs when benign user requests merge with attacker‑controlled instructions, creating a single malicious execution plan.
Defensive Recommendations
- Reduce blabbering: Limit how much AI browsers narrate their reasoning, as this provides attackers with training signals.
- Adversarial training: Continuously test AI agents against evolving phishing techniques.
- Workflow hardening: Restrict CI/CD and browser automation tasks from executing untrusted instructions.
- System safeguards: Implement stricter isolation between user intent and web data to prevent intent collision.
Final Thought
The Comet AI phishing experiment highlights a sobering reality: AI browsers are not just tools, they are new attack surfaces. As agentic systems take on more autonomy, attackers will increasingly target the machine’s reasoning rather than the human’s judgment. For defenders, the challenge is clear:
Leave a Reply