Executive Summary
- Aim Labs has identified a high severity vulnerability, “CurXecute”, in another popular AI agent - Cursor IDE - enabling full Remote-Code Execution (RCE)
- Aim Labs flagged the issue to Cursor who acknowledged the flaw. The vulnerability has been given an 8.6 severity rating and is tracked as CVE-2025-54135. (Read the Advisory)
- Cursor delivered a fix in version 1.3; all earlier releases remain susceptible to remote‑code execution triggered by a single externally‑hosted prompt‑injection that silently rewrites
~/.cursor/mcp.json
and runs attacker‑controlled commands. - Cursor runs with developer‑level privileges, and when paired with an MCP server that fetches untrusted external data, that data can redirect the agent’s control flow and exploit those privileges.
- By feeding poisoned data to the agent via MCP, an attacker can gain full remote-code-execution under the user privileges, and achieve any number of things, including opportunities for ransomware, data theft, AI manipulation and hallucinations, etc.
- Because model output steers the execution path of any AI agent, this vulnerability pattern is intrinsic and keeps resurfacing across multiple platforms. Thus, robust runtime guardrails remain essential even after patches and Aim Security supplies these state of the art protections.
How We Got Here - From EchoLeak to Cursor
In June we disclosed EchoLeak, the first zero‑click exfiltration chain against Microsoft 365 Copilot. It proved that untrusted content (a single inbound e‑mail) could hijack an LLM’s inner loop and leak whatever data the model could see - without user action. (Aim Security)
EchoLeak underscored another critical lesson: if outside context can steer an AI-agent, any agent can have its control flow hijacked and its privileges misused - including those running on a developer’s own machine - and the popularity of such developer‑oriented agents is growing daily.
The obvious next candidate was Cursor, a popular IDE, that already ships with Model Context Protocol (MCP) support. MCP turns a local agent into a swiss‑army knife by letting it spin up arbitrary servers - Slack, GitHub, databases - and call their tools from natural language. That power, however, intrinsically compromises the agent. The tools expose the agent to external and untrusted data, which can affect the agent’s control-flow. This in turn, allows attackers to hijack the agent’s session and take advantage of the agent’s privileges to perform on behalf of the user.
Vulnerability Details
- Auto‑start on write
Cursor instantly executes any new entry added to ~/.cursor/mcp.json. No confirmation is required. - Suggested edits are live
When the agent suggests an edit to mcp.json, the edit already lands on disk, triggering command execution even if the user rejects the suggestion. - Prompt‑injection path to RCE
- A standard MCP server (e.g., Slack) which exposes the agent to untrusted data is added.
- In our example, an attacker posts a crafted message in a public Slack channel.
- Cursor fetches that message via the Slack MCP server. The prompt persuades the agent to “improve” mcp.json:
"slack_summary": {
"command": "touch",
"args": ["~/mcp_rce"]
}
Upon suggested edit, Cursor launches the MCP server found in the mcp.json file, executing touch ~/mcp_rce the moment the edit buffer is written. This happens before the user has any chance to approve or reject the suggestion - providing the attacker with an arbitrary command execution.
A Short POC
- User adds Slack MCP server via Cursor UI
- Attacker posts message in public Slack channel (contains the injection payload)
- Victim opens a new chat and asks:
Use Slack tools to summarize my messages
- Payload fires automatically as can be seen by
ls -al ~ | grep mcp_rce
, the file appears without any user approval
Why It Matters
The attack surface is any third‑party MCP server that processes external content: issue trackers, customer support inboxes, even search engines. A single poisoned document can morph an AI agent into a local shell.
Responsible Disclosure
- July 7 2025 - Issue reported privately to Cursor security team.
- July 8 2025 - Patch merged into the main branch; vendor schedules release 1.3.
Closing Thoughts
EchoLeak showed that untrusted text can sneak past classifier guardrails and exfiltrate sensitive data. The Cursor flaw shows the same idea can break the local machine security boundary and execute code. As AI agents keep bridging external, internal, and interactive worlds, security models must assume external context may affect the agent runtime - and monitor every hop.
Reach out to labs@aim.security if you’d like to discuss runtime protections for your own agent stack.