Google Researchers: Malicious Web Pages Are Hijacking AI Agents — and Going for Your PayPal
Google scanned 2–3B pages a month and saw a 32% surge in indirect prompt-injection attacks Nov 2025–Feb 2026. Real payloads include fully specified PayPal transfers planted in invisible HTML, designed for AI agents with payment access.
Google researchers scanning 2–3 billion crawled web pages per month documented a 32% surge in indirect prompt-injection attacks between November 2025 and February 2026 — invisible commands embedded in ordinary HTML waiting for AI agents to read and execute them. Some of the most aggressive payloads target consumer payment flows, including fully specified PayPal transactions.
The Attack
The April 23 report by Thomas Brunner, Yu-Han Liu, and Moni Pande catalogues real-world attacks where attackers planted instructions invisible to humans: text shrunk to a single pixel, near-transparent text, content hidden in HTML comment sections, and commands buried in page metadata. One payload contained a complete PayPal transaction — recipient, amount, memo — designed for any AI agent with linked payment credentials.
How It Hides
Indirect prompt injection works because LLM-based agents treat web content as instructions, not just data. When an agent fetches a page to "summarize the article" or "complete a checkout," it ingests every character — including the malicious instructions humans never see. Google's 32% growth figure covers only static public web pages; social, login-walled, and dynamic sites were out of scope, so the actual exposure is almost certainly higher.
BlockAI View
The economics of agentic commerce break if every transaction has to assume an adversarial web. Two things have to happen: browsers and agent frameworks must treat fetched content as untrusted by default (sandbox parsing, strip suspicious metadata, require human-in-the-loop for payments), and regulators have to draw a clear liability line — when an AI agent with valid credentials executes a malicious instruction, who pays? Right now, no one knows.
Want frontier-model launches and AI-agent news first? Subscribe to the daily brief →
