Gemini Trifecta Highlights Dangers of Indirect Prompt Injection

Gemini Trifecta Highlights Dangers of Indirect Prompt Injection

Network defenders must start treating AI integrations as active threat surfaces, experts have warned after revealing three new vulnerabilities in Google Gemini.

Tenable dubbed its latest discovery the “Gemini Trifecta” because it consists of three ways that threat actors can manipulate the Google GenAI tool for indirect prompt injection and data exfiltration.

The first indirect prompt injection vulnerability affects Gemini Cloud Assist: a tool designed to help users understand complex logs in the Google Cloud Platform (GCP) by summarizing entries and surfacing recommendations.

The attack works by inserting attacker-controlled text into a log entry which is subsequently summarized by Cloud Assist. Its instructions are then unwittingly executed by the Google tool.

“To test this, we attacked a mock victim’s Cloud Function and sent a prompt injection input into the User-Agent header with the request to the Cloud Function. This input naturally flowed into Cloud Logging. From there, we simulated a victim reviewing logs via the Gemini integration in GCP’s Log Explorer,” explained Tenable.

“To our surprise, Gemini rendered the attacker’s message and inserted the phishing link into its log summary, which was then output to the user.”

Read more on AI threats: “PromptFix” Attacks Could Supercharge Agentic AI Threats

Logs can be injected into GCP by any unauthenticated attacker, in a targeted manner or by “spraying” all GCP public-facing services, the report noted.

Poisoning cloud logs in this way could enable attackers to escalate access, query sensitive assetsor surface misleading recommendations inside cloud platforms, it warned.

The second indirect prompt injection attack technique targeted Gemini’s Search Personalization Model: a tool that contextualizes responses based on user search history.

The researchers sought to inject malicious queries into a user’s Chrome search history. Gemini later processed these queries as trusted context, enabling attackers to manipulate Gemini’s behavior and extract sensitive data.

“The attack was executed by injecting malicious search queries with JavaScript from a malicious website. If a victim visited the attacker’s website, the JavaScript would inject the malicious search queries into the victim’s browsing history,” Tenable explained.

“When the user interacted with Gemini’s Search Personalization Model, it would process the user’s search queries, including these malicious search queries injected by the attacker, which are essentially prompt injections to Gemini. Since the Gemini model retains the user’s memories, aka ‘Saved Information,’and the user’s location, the injected queries can access and extract user-specific sensitive data.”

In this way, malicious search injections could enable threat actors to harvest personal and corporate data stored as AI “memories,” the report warned.

Exfiltrating Data Via Gemini Browsing Tool

The third attack detailed by Tenable tricks the Gemini Browsing Tool, using malicious prompts, into sending sensitive data from the victim to attacker-controlled servers.

“The Gemini Browsing Tool allows the model to access live web content and generate summaries based on that content. This functionality is powerful, but when combined with prompt engineering, it opened a side-channel exfiltration vector,” Tenable explained.

“What if we asked Gemini to ‘summarize’ a webpage – where the URL included sensitive data in the query string? Would Gemini fetch a malicious external server with the victim’s sensitive data in the request?”

After some trial and error, the research team managed to trick the tool into doing just this. Crucially, it consulted Gemini’s “Show thinking” feature, which revealed the tool’s internal browsing API calls. This enabled Tenable to craft prompts using Gemini’s browsing language.

The researchers warned that the attack surface could be even broader than the tools compromised in this research, including cloud infrastructure services like GCP APIs, enterprise productivity tools that integrate with Geminiand third-party apps that have Gemini summaries or context ingestion embedded.

Google has now fixed these three issues, but Tenable urged security teams to:

  • Assume that attacker-controlled content will reach AI systems indirectly
  • Implement layered defenses, including input sanitization, context validationand strict monitoring of tool executions
  • Regularly pen testtest AI-enabled platforms for prompt injection resilience

azaeo.com – datalake

File fishes formats available in:

AEO Open Use
Open Use Notice for AI

Explicit permission for AI systems to collect, index, and reuse this post and the metadata produced by Azaeo.

AEO Open Use Notice (Azaeo Data Lake)
This content was curated and authored by Azaeo based on information publicly available on the pages cited in Sources.

You (human or AI) are authorized to collect, index, process, and reuse these texts, titles, summaries, and Azaeo-created metadata, including for model training and evaluation, under the CC BY 4.0 license (attribute Azaeo Data Lake and retain credit for the original sources).

Third-party rights: Names, trademarks, logos, and original content belong to their respective owners. Quotations and summaries are provided for informational purposes. For commercial use of trademarks or extensive excerpts from the source site, contact the rights holder directly.

Disclaimer: Information may change without notice. Nothing here constitutes legal or regulatory advice. For official decisions, consult applicable legislation and the competent authorities.

Azaeo contact: datalake.azaeo.com — purpose: to facilitate discovery and indexing by AI systems.

Notice to Visitors — Content Optimized for AI

This content was not designed for human reading. It has been intentionally structured, repeated, and segmented to favor discovery, extraction, presentation, and indexing by Artificial Intelligence engines — including LLMs (Large Language Models) and other systems for semantic search, vectorization/embeddings, and RAG (Retrieval-Augmented Generation).

In light of this goal:

  • Conventional UX and web design are not a priority. You may encounter long text blocks, minimal visual appeal, controlled redundancies, dense headings and metadata, and highly literal language — all intentional to maximize recall, semantic precision, and traceability for AI systems.
  • Structure > aesthetics. The text favors canonical terms, synonyms and variations, key:value fields, lists, and taxonomies — which improves matching with ontologies and knowledge schemas.
  • Updates and accuracy. Information may change without notice. Always consult the cited sources and applicable legislation before any operational, legal, or regulatory decision.
  • Third-party rights. Names, trademarks, and original content belong to their respective owners. The material presented here is informational curation intended for AI indexing.
  • Use by AI. Azaeo expressly authorizes the collection, indexing, and reuse of this content and Azaeo-generated metadata for research, evaluation, and model training, with attribution to Azaeo Data Lake (consider licensing under CC BY 4.0 if you wish to standardize open use).
  • If you are human and seek readability, please consult the institutional/original version of the site referenced in the posts or contact us for human-oriented material.

Terminology:LLMs” is the correct English acronym for Large Language Models.