How a Ransomware Gang Took Down Nevada Government Systems — And How the State Recovered

In August 2025 a coordinated ransomware incident hit the State of Nevada, affecting more than 60 agencies and disrupting websites, phone systems, and citizen-facing services. The public after‑action report the state published is unusually transparent: it lays out the attacker's path, the operational impact, and the recovery choices the state made. That transparency is valuable — it turns a painful event into a practical case study for every IT and security team that wants to harden its defenses and improve incident readiness.

The chain of compromise in plain terms

  • Initial access came from a trojanised admin utility downloaded after an employee clicked a malicious search ad and landed on a fake software site.
  • The trojan installed a persistent backdoor that reconnected on user login and survived initial AV removal.
  • The attacker later installed legitimate remote‑monitoring software for screen capture and keystroke logging, then deployed a custom encrypted tunnel and used RDP to move laterally.
  • Critical targets included the password vault server (26 credentials were taken), backup infrastructure (backups were deleted), and the virtualization management server (security settings were altered).
  • At a chosen time the attacker deployed ransomware across VM hosts, causing a statewide outage detected about 20 minutes after deployment.

Where common controls failed

  • Reliance on internet search results without publisher validation enabled a supply‑chain-style attack: a malicious installer masquerading as a trusted admin tool.
  • Endpoint protection quarantined the initial malware but persistence mechanisms survived and restored remote access.
  • Privileged credential exposure (password vault compromise) and broad management access allowed rapid lateral movement and destructive actions.
  • Backups were accessible and vulnerable to deletion from the network, removing the easiest path to rapid recovery.

What the state did right during response

  • Immediate execution of incident‑response playbooks and rapid mobilization of internal teams limited further damage.
  • The state refused to pay ransom and leaned on internal staff supplemented by external specialists to restore services.
  • A prioritized recovery plan focused on critical services (payroll, public safety communications) and staged restoration of nonessential systems.
  • Forensics and containment were conducted with vendor partners, and evidence was preserved for later analysis.

Costs, trade‑offs, and recovery outcomes

  • Over 50 state employees logged 4,212 overtime hours at a direct wage cost of ~$259k. That internal effort reduced contractor expenditure and likely saved an estimated $478k versus a full external contractor response.
  • External vendor support cost roughly $1.3M across forensics, rebuild and legal work.
  • The state recovered about 90% of the data needed to restore services within 28 days without paying the ransom.
  • The financial and operational burden was substantial, but the outcome demonstrates that a focused internal response — supported by trusted vendors — can be effective.

Key lessons for security teams

  1. Treat search and ad vectors as a supply‑chain risk: validate publishers and prefer official distribution channels or signed installers.
  2. Don’t assume AV removal equals eradication: investigate persistence mechanisms and validate endpoint integrity after any detection.
  3. Protect credentials and vaults: vault servers should be segmented, monitored, and accessible only from hardened jump boxes.
  4. Harden backups: keep immutable or off‑network backups and enforce least privilege for backup access and management.
  5. Prepare playbooks and practice them: tabletop and live drills reduce decision time and error during real incidents.
  6. Emphasize detection of lateral movement: encrypted tunnels, unexpected RDP sessions, and unusual account usage are early red flags.
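The first lesson — prefer signed installers and official channels — can be backed by a simple automated check. As a minimal sketch (the filename and the vendor-published checksum here are stand-ins, not from the report), verifying a downloaded admin tool against a SHA‑256 hash published by the vendor before allowing installation:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 16) -> str:
    """Hash the file in chunks so large installers never load fully into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_installer(path: Path, published_sha256: str) -> bool:
    """Compare the local file's hash against the hash published by the vendor."""
    return sha256_of(path) == published_sha256.strip().lower()

# Demo: a throwaway file standing in for the downloaded installer.
installer = Path("admin-tool-setup.bin")
installer.write_bytes(b"example installer payload")
expected = hashlib.sha256(b"example installer payload").hexdigest()
print(verify_installer(installer, expected))  # True only when hashes match
```

A hash check alone does not prove the publisher is legitimate — a fake site can publish a matching hash for its own trojan — so it belongs alongside code-signing verification and a policy of downloading only from official distribution channels.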

Practical immediate actions you can take

  • Audit and enforce signed, vetted software sources for admin tools; block downloads from obvious impostor domains.
  • Run privileged access reviews and rotate credentials stored in vaults; enforce multi‑person approvals for critical actions.
  • Ensure backups are segmented, tested for recovery, and protected with immutability or air‑gap strategies.
  • Strengthen endpoint telemetry and hunt for persistence artifacts after any AV event.
  • Run an incident‑response tabletop that includes backup restoration, legal & communication steps, and a rollback plan.
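The hunt for unexpected RDP sessions mentioned above can start from something as simple as scanning exported logon events. A minimal sketch, assuming a CSV export of Windows Security events (the field names and sample data are illustrative — event ID 4624 is a logon, and logon type 10 is RemoteInteractive, i.e. RDP):

```python
import csv
import io

# Hypothetical CSV export of Windows Security events; field names are assumptions.
EVENTS_CSV = """event_id,logon_type,account,source_ip
4624,10,svc-backup,10.0.5.99
4624,2,jdoe,10.0.1.12
4624,10,admin,10.0.1.50
"""

# Hosts from which RDP is expected, e.g. a hardened jump box.
ALLOWED_RDP_SOURCES = {"10.0.1.50"}

def suspicious_rdp_logons(csv_text: str, allowed: set[str]) -> list[dict]:
    """Return RDP logons (4624 / type 10) that did not originate from an approved host."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        row for row in reader
        if row["event_id"] == "4624"
        and row["logon_type"] == "10"
        and row["source_ip"] not in allowed
    ]

for event in suspicious_rdp_logons(EVENTS_CSV, ALLOWED_RDP_SOURCES):
    print(f"ALERT: RDP logon by {event['account']} from {event['source_ip']}")
```

In practice this logic would live in a SIEM rule rather than a script, but the principle is the same: an allowlist of management hosts turns "unexpected RDP session" from a vague red flag into a concrete, alertable condition.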

Strategic investments that matter

  • Centralized secrets management and vault hardening with strict access controls and strong logging.
  • Improved telemetry and detection for privileged activity and lateral movement.
  • Immutable backups or isolated recovery targets to reduce the cost and complexity of recovery.
  • Supplier and distribution integrity checks for admin tooling and developer utilities.
  • Staffing and retention strategies to keep internal expertise ready for high‑tempo incident response.
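Backup immutability is only half the story — restores must also be verified routinely. A minimal sketch of a restore-verification step (directory names and the demo data are hypothetical): build a checksum manifest of the source, then confirm a test restore matches it file for file:

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def build_manifest(root: Path) -> dict[str, str]:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }

def verify_restore(source: Path, restored: Path) -> list[str]:
    """Return relative paths that are missing or differ after a test restore."""
    src, dst = build_manifest(source), build_manifest(restored)
    return sorted(k for k in src if dst.get(k) != src[k])

# Demo with throwaway directories standing in for backup source and restore target.
work = Path(tempfile.mkdtemp())
src_dir = work / "source"
src_dir.mkdir()
(src_dir / "payroll.db").write_text("records")
restored_dir = work / "restored"
shutil.copytree(src_dir, restored_dir)
print(verify_restore(src_dir, restored_dir))  # [] means the restore matched
```

Running a check like this on a schedule catches silent backup corruption long before an incident does — which is exactly the gap that deleted, never-tested backups left open in this case.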

Final thoughts

Nevada’s after‑action report is a useful, practical read for any organization operating complex IT services. The incident reinforces a simple truth: prevention matters, but preparation and resilient recovery capacity matter more when prevention fails. Protecting credentials, hardening backup architecture, and validating software sources are non‑glamorous controls that dramatically reduce blast radius. Equally important are practiced playbooks and the will to reject ransom demands — a policy choice that demands readiness and resources, but can preserve public trust and limit long‑term exposure.
