Critical RCE flaw in Microsoft WSUS actively exploited. CISA warns: imminent risk


Redazione RHC: 25 October 2025 09:35

The United States Cybersecurity and Infrastructure Security Agency (CISA) has issued a global alert, addressed to all organizations worldwide, warning of active exploitation of a critical remote code execution (RCE) flaw in Microsoft's Windows Server Update Services (WSUS).

The vulnerability, tracked as CVE-2025-59287, carries a CVSS score of 9.8 and allows unauthenticated attackers to execute arbitrary code within a network, threatening the entire IT infrastructure.

The flaw, which stems from unsafe deserialization within WSUS, was only partially addressed by Microsoft's October Patch Tuesday release; an out-of-band update issued on October 23, 2025, was needed because the initial fix proved insufficient.

Microsoft and CISA are urging organizations to act immediately. First, identify susceptible servers by scanning for hosts with the WSUS role enabled and ports 8530/8531 open. Apply the October 23 out-of-band patch without delay, then reboot to ensure the mitigation takes full effect; delaying this leaves networks exposed to unauthenticated RCE.
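
As a starting point for the discovery step, the minimal sketch below (Python, with hypothetical host names) checks an inventory of candidate servers for the default WSUS ports 8530 and 8531. An open port is only an indicator that the WSUS role may be enabled; confirm findings and patch levels on the servers themselves.

```python
# Minimal sketch: check candidate hosts for the default WSUS ports
# (8530 HTTP / 8531 HTTPS). The host names are placeholders; supply your
# own inventory. An open port alone does not prove the WSUS role is
# enabled, so verify on the server itself before drawing conclusions.
import socket

WSUS_PORTS = (8530, 8531)
hosts = ["wsus01.example.local", "wsus02.example.local"]  # hypothetical inventory

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in hosts:
    exposed = [p for p in WSUS_PORTS if port_open(host, p)]
    if exposed:
        print(f"{host}: WSUS ports open {exposed} - verify patch level urgently")
    else:
        print(f"{host}: no WSUS ports reachable")
```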

For organizations that cannot apply the patch immediately, workarounds include disabling the WSUS role or blocking inbound traffic to ports 8530/8531 on the host firewall; these measures should not be reversed until the update is installed.
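
As an illustration of the firewall workaround, the hedged sketch below adds inbound block rules for the WSUS ports on the local Windows host using the built-in netsh tool, invoked from Python purely for illustration. Run it from an elevated (Administrator) prompt; the rule names are placeholders, and the rules should be deleted only once the out-of-band update is installed.

```python
# Hedged sketch: block inbound TCP 8530/8531 on the local Windows host
# firewall until the October 23 out-of-band patch can be applied.
# Requires an elevated prompt. Rule names are arbitrary placeholders.
# To undo later: netsh advfirewall firewall delete rule name=<rule name>
import subprocess

for port in (8530, 8531):
    subprocess.run(
        [
            "netsh", "advfirewall", "firewall", "add", "rule",
            f"name=Block_WSUS_{port}_CVE-2025-59287",
            "dir=in", "action=block", "protocol=TCP", f"localport={port}",
        ],
        check=True,
    )
    print(f"Inbound TCP {port} blocked on the host firewall")
```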

A few days earlier, HawkTrace researcher Batuhan Er had released a proof-of-concept (PoC) exploit, which accelerated malicious activity by giving attackers a ready means of targeting WSUS servers running under the SYSTEM account.

Dutch security firm Eye Security identified the first attempts to exploit the vulnerability at 06:55 UTC on October 24, 2025, using a Base64-encoded .NET payload.

The payload was designed to evade logging by executing commands carried in a custom request header named "aaaa." According to security firms, the threat is escalating rapidly, with real-world attacks reported as early as October 24, 2025.
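
As a hedged detection aid, the sketch below searches captured HTTP request data for the "aaaa" header reported in these early attempts. It assumes you have an export that includes full request headers, for example from a reverse proxy or packet capture (default IIS logs do not record custom headers); the file name is a placeholder.

```python
# Hedged detection sketch: look for the custom "aaaa" request header
# reported in early CVE-2025-59287 exploitation attempts. Assumes a text
# export of full HTTP request headers; the path below is a placeholder.
import re
from pathlib import Path

SUSPICIOUS_HEADER = re.compile(r"^aaaa\s*:", re.IGNORECASE | re.MULTILINE)

capture_file = Path("wsus_http_headers.txt")  # hypothetical header export
text = capture_file.read_text(errors="replace")

if SUSPICIOUS_HEADER.search(text):
    print("Possible CVE-2025-59287 exploitation attempt: 'aaaa' request header found")
else:
    print("No 'aaaa' request header found in the captured traffic")
```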

CISA has also added CVE-2025-59287 to its Known Exploited Vulnerabilities (KEV) catalog, requiring federal agencies to remediate it by November 14, 2025. The tight deadline reflects the flaw's low attack complexity: exploitation requires neither user interaction nor authentication.

Organizations using WSUS for centralized update management are exposed to significant risk, as a successful breach could allow attackers to spread malicious updates to all connected devices.

The vulnerability stems from a legacy serialization mechanism in the GetCookie() endpoint: encrypted AuthorizationCookie objects are decrypted with AES-128-CBC and then deserialized with BinaryFormatter without type validation, opening the door to complete system takeover.
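
The dangerous pattern here is decrypting attacker-controlled bytes and handing them directly to a polymorphic deserializer. WSUS itself is .NET code using BinaryFormatter; the sketch below is only a Python analogy of the same antipattern, with pickle standing in for BinaryFormatter and an allow-list variant shown for contrast. It is not the actual WSUS implementation.

```python
# Analogy only: the vulnerable pattern expressed in Python terms.
# Deserializing decrypted but attacker-controlled bytes without restricting
# the allowed types lets the payload run code during deserialization.
import io
import pickle

def unsafe_get_cookie(decrypted_blob: bytes):
    # Equivalent antipattern: no restriction on what the blob may instantiate.
    return pickle.loads(decrypted_blob)  # arbitrary code can run here

class SafeUnpickler(pickle.Unpickler):
    """Allow-list only the types a cookie is expected to contain."""
    ALLOWED = {("builtins", "dict"), ("builtins", "str"), ("builtins", "int")}

    def find_class(self, module, name):
        if (module, name) not in self.ALLOWED:
            raise pickle.UnpicklingError(f"forbidden type {module}.{name}")
        return super().find_class(module, name)

def safer_get_cookie(decrypted_blob: bytes):
    # Type-restricted deserialization: unknown classes are rejected outright.
    return SafeUnpickler(io.BytesIO(decrypted_blob)).load()
```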

Redazione
The editorial team of Red Hot Cyber consists of a group of individuals and anonymous sources who actively collaborate to provide early information and news on cybersecurity and computing in general.

