Google Researchers: Malicious Web Pages Are Hijacking AI Agents — and Going for Your PayPal

Google scanned 2–3B pages a month and saw a 32% surge in indirect prompt-injection attacks Nov 2025–Feb 2026. Real payloads include fully specified PayPal transfers planted in invisible HTML, designed for AI agents with payment access.

AI agent prompt injection cover — malicious web pages targeting PayPal
Illustration: BlockAI News · Source: Google research / Decrypt, April 27 2026

Google researchers scanning 2–3 billion crawled web pages per month documented a 32% surge in indirect prompt-injection attacks between November 2025 and February 2026 — invisible commands embedded in ordinary HTML waiting for AI agents to read and execute them. Some of the most aggressive payloads target consumer payment flows, including fully specified PayPal transactions.

The Attack

The April 23 report by Thomas Brunner, Yu-Han Liu, and Moni Pande catalogues real-world attacks where attackers planted instructions invisible to humans: text shrunk to a single pixel, near-transparent text, content hidden in HTML comment sections, and commands buried in page metadata. One payload contained a complete PayPal transaction — recipient, amount, memo — designed for any AI agent with linked payment credentials.

How It Hides

Indirect prompt injection works because LLM-based agents treat web content as instructions, not just data. When an agent fetches a page to "summarize the article" or "complete a checkout," it ingests every character — including the malicious instructions humans never see. Google's 32% growth figure covers only static public web pages; social, login-walled, and dynamic sites were out of scope, so the actual exposure is almost certainly higher.

Malicious Web Pages Are Hijacking AI Agents, And Some Are Going After Your PayPal
Decrypt walkthrough of Google's research and the legal void around AI-agent liability.

BlockAI View

The economics of agentic commerce break if every transaction has to assume an adversarial web. Two things have to happen: browsers and agent frameworks must treat fetched content as untrusted by default (sandbox parsing, strip suspicious metadata, require human-in-the-loop for payments), and regulators have to draw a clear liability line — when an AI agent with valid credentials executes a malicious instruction, who pays? Right now, no one knows.

Want frontier-model launches and AI-agent news first? Subscribe to the daily brief →

Keep Reading

K-Pop Firm K Wave Media Dumps $485 Million Bitcoin Treasury Plan for AI Infrastructure

K-Pop Firm K Wave Media Dumps $485 Million Bitcoin Treasury Plan for AI Infrastructure

K Wave Media, the Nasdaq-listed K-Pop entertainment company, announced on May 4 that it is abandoning its $485 million Bitcoin treasury strategy and redirecting all capital toward an AI infrastructure build-out. The board approved a full strategic transformation: divest the legacy K-Pop operating business, rebrand to "Talivar Technologies," and deploy the $485 million — representing the remaining commitment from its Securities Purchase Agreement with Anson Funds, originally earmarked for Bitcoin purchases — into data centers, compute infrastructure, and critical AI technologies. Shares fell approximately 25% on the

Read full story →

Stay Ahead of the Market

Daily AI & crypto briefings — straight to your inbox, your phone, and your timeline.