Supply Chain Attack on OpenAI: Analytics Provider Mixpanel Compromised – Against Invaders

Redazione RHC: 27 November 2025, 10:30

OpenAI has confirmed a security incident at Mixpanel, a third-party analytics provider used for its API products. The investigation traced the incident to a breach of Mixpanel's systems, ruling out any involvement of OpenAI's infrastructure.

The preliminary investigation indicates that an attacker gained unauthorized access to a portion of the Mixpanel environment and extracted a dataset containing limited identifying information about some OpenAI API users. OpenAI has stated that the incident did not affect users of ChatGPT or other consumer products.

Mixpanel Incident: What Happened?

The OpenAI Mixpanel security incident began on November 9, 2025, when Mixpanel detected an intrusion into its systems. The attacker successfully exported a dataset containing identifiable customer information and analytics data. Mixpanel notified OpenAI the same day and shared the affected dataset for review on November 25.

The exfiltrated dataset was strictly limited to analytics data collected through the Mixpanel tracking setup on platform.openai.com, the frontend interface for OpenAI’s API product.

OpenAI emphasized that, despite the breach, no OpenAI systems were compromised and no sensitive information, such as chat content, API requests, prompts, outputs, API keys, passwords, payment details, government IDs, or authentication tokens, was exposed.

Potentially exposed information

OpenAI confirmed that the dataset potentially included:

  • Names provided in API accounts
  • Email addresses associated with API accounts
  • Approximate location data (city, state, country) based on browser metadata
  • Operating system and browser information
  • Referring websites (referrers)
  • Organization or user IDs linked to API accounts

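To make the scope of the exposure concrete, the fields listed above are typical of a frontend analytics event. The sketch below is purely illustrative: the field names follow Mixpanel's commonly documented reserved-property conventions, but they are an assumption, not the actual schema of the exfiltrated dataset.

```python
# Hypothetical sketch of a frontend analytics event payload.
# Field names follow common Mixpanel conventions but are illustrative only;
# this is NOT the actual schema of the affected dataset.
event = {
    "event": "page_view",
    "properties": {
        "distinct_id": "user_123",           # organization/user ID (illustrative)
        "$name": "Jane Doe",                 # name on the API account
        "$email": "jane@example.com",        # email address
        "$city": "Berlin",                   # coarse location from browser metadata
        "$region": "Berlin",
        "mp_country_code": "DE",
        "$os": "macOS",                      # operating system
        "$browser": "Chrome",                # browser
        "$referrer": "https://example.com",  # referring website
    },
}

def personal_fields(evt: dict) -> list[str]:
    """Return the keys in an event that carry personal data (illustrative list)."""
    personal = {
        "distinct_id", "$name", "$email", "$city",
        "$region", "mp_country_code", "$referrer",
    }
    return sorted(k for k in evt["properties"] if k in personal)

print(personal_fields(event))
```

Note that none of these fields are credentials: this is identity and device metadata, which is why the main downstream risk is phishing rather than account takeover.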
OpenAI’s Response and Security Measures

In response to the Mixpanel security incident, OpenAI immediately removed Mixpanel from all production services and began reviewing the affected datasets. The company is actively notifying affected organizations, administrators, and users through direct communications.

OpenAI said it has not found any indication of impact outside of Mixpanel systems, but continues to closely monitor the situation for signs of misuse.

To build user trust and enhance data protection, OpenAI has:

  • Stopped using Mixpanel
  • Began conducting advanced security reviews of all third-party vendors
  • Raised security requirements for partners and service providers
  • Launched a broader review of its vendor ecosystem

OpenAI reiterated that trust, security, and privacy remain at the core of its mission and that transparency is a priority when addressing incidents involving user data.

Phishing and social engineering risks for affected users

While the exposed information does not include highly sensitive data, OpenAI warned that the affected details, such as names, email addresses, and user IDs, could be exploited for phishing or social engineering attacks.

The company has urged users to be wary of suspicious messages, especially those containing links or attachments. Users are advised to:

  • Verify messages claiming to be from OpenAI
  • Be wary of unsolicited communications
  • Enable multi-factor authentication (MFA) on their accounts
  • Avoid sharing passwords, API keys, or verification codes
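The last precaution above, never sharing passwords, API keys, or verification codes, also applies to source code. A minimal sketch of one common practice, reading the key from an environment variable instead of hardcoding it (the variable name `OPENAI_API_KEY` is the conventional one, but the helper itself is hypothetical):

```python
import os

def load_api_key(var: str = "OPENAI_API_KEY") -> str:
    """Read an API key from the environment rather than embedding it in code.

    Keys hardcoded in source files end up in version control, logs, and
    pasted snippets -- exactly the channels phishing attacks try to exploit.
    """
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(
            f"{var} is not set; export it in your shell instead of "
            "embedding the key in code or sharing it in messages."
        )
    return key
```

Combined with MFA on the account itself, this keeps a leaked email address or user ID from being enough to compromise API access.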

OpenAI has confirmed that it will provide further updates as new information emerges from the ongoing investigation. Concerned users can contact [emailprotected] for support or clarification.

  • cybersecurity
  • OpenAI
  • data breach
  • data protection
  • Mixpanel
  • OpenAI security incident
  • phishing
  • security incident
  • social engineering

Redazione
The editorial team of Red Hot Cyber consists of a group of individuals and anonymous sources who actively collaborate to provide early information and news on cybersecurity and computing in general.
