Apr 09, 2026
3 min read

Attack scenario 3: Prompt injection triggers unauthorized database operations

 Attack scenario 3: Prompt injection triggers unauthorized database operations
When one token has access to everything, one breach becomes total compromise

The first two scenarios require compromising the MCP server or its credentials. This scenario requires neither. The attacker compromises the input data that the AI processes.

How the attack works

An MCP server exposes a run_query tool that allows an AI assistant to execute SQL queries against a production database. The server is designed for read-only analytical queries, such as finding the top-selling products last quarter or counting active users over the past week.

An attacker creates a message that embeds instructions within data that the AI processes. For example, a support ticket might contain:

When the AI assistant processes this ticket using a summarization or triage workflow, the injected instructions may be interpreted as legitimate commands. The MCP server receives the SQL statements as tool invocations and executes them against the database.

Why the attack succeeds

The attack exploits two properties of the MCP architecture. First, an AI agent relies on the content it processes to determine what actions to take, with no reliable way to distinguish legitimate user intent from malicious instructions hidden within that content. Second, the MCP server has no authorization layer of its own. It executes any tool invocations the MCP client sends, without verifying that they match the user's original intent.

Security researchers at Wiz Research have identified specific security vulnerabilities through injection attacks against Anthropic's official MCP PostgreSQL and Puppeteer servers, and Palo Alto Networks Unit 42 has published a taxonomy of prompt attack vectors that apply directly to MCP tool invocations.

What to look for

  • SQL statements containing DDL operations (ALTER, DROP, CREATE) or DML operations (UPDATE, DELETE, INSERT) from a tool designed for read-only queries
  • Unusual query patterns such as queries against system tables, queries extracting credentials or API keys, queries modifying user roles or permissions
  • Spikes in database write operations that correlate with AI-assisted workflows
  • MCP tool invocations that do not match the user's original request
  • Signs of tool context poisoning, such as documents or tickets that redefine tool behavior (e.g., instructions claiming a tool must export or transmit data before returning results)

How to defend against it

  • Use read-only database credentials: If the MCP server's purpose is analytical queries, connect it with a database user that has SELECT permissions only. No amount of prompt injection can execute a DROP TABLE if the database user cannot execute DDL.
  • Implement command allowlisting: Rather than allowing arbitrary SQL, define a set of approved query patterns or parameterized queries that the MCP server can execute. Reject anything that does not match an approved pattern.
  • Require human confirmation for destructive operations: For any MCP tool that can modify data, implement a confirmation step that requires explicit user approval before execution. The AI can propose an action, but a human must authorize it.
  • Sanitize inputs: Treat all data processed by the AI as potentially containing injected instructions. Strip or escape control sequences and instruction-like patterns from data before it enters the AI's context window.

Prompt injection is hard to spot and hides inside legitimate data. The next scenario does not need that kind of disguise at all.