Salt Typhoon Uses Citrix Flaw in Global Cyber-Attack

Cybersecurity researchers have identified a cyber intrusion, linked to the China-based group Salt Typhoon, that exploited a Citrix NetScaler Gateway vulnerability.

The operation, observed by Darktrace, involved advanced methods such as DLL sideloading and zero-day exploits, known techniques the group uses to infiltrate systems while avoiding standard detection measures.

A Persistent Global Threat

Salt Typhoon, also known as Earth Estries, GhostEmperor and UNC2286, has been active since at least 2019.

The group is associated with a series of high-impact cyber campaigns directed at critical sectors, including telecommunications, energy and government systems, across more than 80 countries. While the United States has been a frequent target, recent activity shows a broader reach across Europe, the Middle East and Africa.

Its operations typically exploit vulnerabilities in technologies from vendors such as Citrix, Fortinet and Cisco.

The group has demonstrated long-term persistence in victim networks, using custom malware and advanced evasion techniques to collect sensitive data and, in some cases, disrupt essential services.

European Telecoms Under Fire

In a new advisory published today, Darktrace said it recorded intrusion activity within a European telecommunications organization that matched Salt Typhoon’s known tactics, techniques and procedures (TTPs).

The incident began in July 2025, when attackers exploited a Citrix NetScaler Gateway appliance. From there, they moved laterally to Citrix Virtual Delivery Agent hosts within the organization’s internal network. Infrastructure linked to the SoftEther VPN service was used to obscure the attackers’ origin.

The threat actors deployed a backdoor identified as SNAPPYBEE (also known as Deed RAT) through DLL sideloading, embedding malicious files alongside legitimate executables from antivirus products such as Norton, Bkav and IObit. This approach enabled the attackers to execute malicious code under the guise of trusted software, reducing the likelihood of detection.
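To make the sideloading mechanism concrete, the following minimal sketch shows one way a defender might hunt for staging directories where a DLL has been dropped beside a trusted executable. The executable names are hypothetical placeholders; the advisory names the vendors Norton, Bkav and IObit but does not list specific binaries here, so real hunts should rely on vendor-published indicators.

# Minimal DLL sideloading hunt sketch (Python).
# Assumption: the executable names below are hypothetical placeholders,
# not confirmed indicators from the advisory.
from pathlib import Path

# Hypothetical names of signed AV executables abused as sideloading hosts.
SUSPECT_HOST_EXES = {"nortontool.exe", "bkavservice.exe", "iobitunlocker.exe"}

def find_sideload_candidates(root):
    """Yield (exe, dll) pairs where a DLL sits beside a suspect executable.

    Windows resolves many DLL imports from the application's own directory
    first, so a signed EXE placed next to an attacker-supplied DLL will load
    the attacker's code; that search-order behavior is what sideloading abuses.
    """
    for exe in Path(root).rglob("*.exe"):
        if exe.name.lower() in SUSPECT_HOST_EXES:
            for dll in exe.parent.glob("*.dll"):
                yield exe, dll

if __name__ == "__main__":
    for exe, dll in find_sideload_candidates("C:/Users"):
        print(f"[!] possible sideload pair: {exe} -> {dll}")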

The deployed backdoor established communication with command-and-control (C2) servers using both HTTP and unidentified TCP-based protocols.

HTTP traffic included Internet Explorer User-Agent headers and URI patterns such as “/17ABE7F017ABE7F0”. One of the C2 domains, aar.gandhibludtric[.]com, was previously associated with Salt Typhoon infrastructure.
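As a purely illustrative triage aid: the observed URI appears to be an eight-character hexadecimal token repeated twice (“17ABE7F0” twice). The generalization in the sketch below is an inference from that single sample, not a published detection signature, and the request fields are assumptions.

# Log-triage sketch for the beacon pattern described above (Python).
import re

# Matches a URI consisting of one eight-character uppercase-hex token
# repeated twice. Assumption: this shape is inferred from the single
# observed sample "/17ABE7F017ABE7F0".
URI_PATTERN = re.compile(r"^/([0-9A-F]{8})\1$")
IE_UA_HINT = "MSIE"  # fragment typical of Internet Explorer User-Agents

def is_suspect_request(uri, user_agent):
    """Flag requests matching both the URI shape and an IE-style User-Agent."""
    return bool(URI_PATTERN.match(uri)) and IE_UA_HINT in user_agent

# The URI reported in the advisory triggers the rule:
print(is_suspect_request("/17ABE7F017ABE7F0",
                         "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1)"))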

Broader Implications

Based on overlaps in tactics, infrastructure and malware, researchers assessed the activity as consistent with Salt Typhoon’s previous operations.

The case reflects the group’s continued focus on stealth and persistence through the abuse of legitimate software and layered communication methods.

“As attackers increasingly blend into normal operations, detecting behavioral anomalies becomes essential for identifying subtle deviations and correlating disparate signals,” Darktrace warned.

“This intrusion highlights the importance of proactive defense, where anomaly-based detections, not just signature matching, play a critical role in surfacing early-stage activity.”
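As a toy illustration of that anomaly-first approach (not Darktrace’s actual method), the sketch below flags User-Agent strings that are rare across an environment rather than matching known-bad signatures; the threshold and data layout are assumptions.

# Toy anomaly-based detection sketch (Python): flag rare User-Agents.
# Assumption: the threshold and the (host, user_agent) event layout are
# illustrative choices, not part of any vendor's product.
from collections import Counter

def rare_user_agents(events, min_fraction=0.01):
    """Return User-Agent strings seen in under min_fraction of all requests.

    events: iterable of (source_host, user_agent) pairs. Rarity marks a
    candidate for review; it is not proof of malice on its own.
    """
    counts = Counter(ua for _, ua in events)
    total = sum(counts.values()) or 1
    return {ua for ua, n in counts.items() if n / total < min_fraction}

# Example: one odd IE string among many modern browser requests stands out.
sample = [("host1", "Mozilla/5.0 Chrome")] * 200 \
       + [("host2", "Mozilla/4.0 (compatible; MSIE 7.0)")]
print(rare_user_agents(sample))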
