
2.4 · Changelog

Subagents, Skills, and Image Generation

Agents are solving increasingly complex, long-running tasks across your codebase. This release introduces new agent harness improvements for better context management, as well as many quality-of-life fixes in the editor and CLI.

Subagents

Subagents are independent agents specialized to handle discrete parts of a parent agent's task. They run in parallel, use their own context, and can be configured with custom prompts, tool access, and models.

The result is faster overall execution, more focused context in your main conversation, and specialized expertise for each subtask.

Cursor includes default subagents for researching your codebase, running terminal commands, and executing parallel work streams. They run automatically, improving the quality of agent conversations in both the editor and the Cursor CLI.

Optionally, you can define custom subagents. Learn more in our docs.
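As one possible sketch of what a custom subagent definition might look like, assuming a markdown file with YAML frontmatter (the name, fields, and exact schema here are illustrative; consult the docs for the actual format):

```markdown
---
name: code-researcher
description: Read-only subagent for answering questions about the codebase.
---

You are a research subagent. Search and read files to answer the
parent agent's question. Summarize your findings concisely and do
not make any edits.
```

A narrow, read-only prompt like this keeps the subagent's context focused on retrieval, so the parent agent receives a summary instead of raw file contents.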

Skills

Cursor now supports Agent Skills in the editor and CLI. Agents can discover and apply skills when domain-specific knowledge and workflows are relevant. You can also invoke a skill using the slash command menu.

Define skills in SKILL.md files, which can include custom commands, scripts, and instructions for specializing the agent’s capabilities based on the task at hand.

Compared to always-on, declarative rules, skills are better for dynamic context discovery and procedural “how-to” instructions. This gives agents more flexibility while keeping context focused.
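As a minimal sketch of the idea, a SKILL.md file pairs frontmatter that tells the agent when the skill applies with a body of procedural instructions (the skill name and steps below are hypothetical):

```markdown
---
name: release-notes
description: Use when drafting release notes or changelog entries for this repo.
---

# Drafting release notes

1. List changes since the last tag with `git log --oneline <last-tag>..HEAD`.
2. Group entries into features, fixes, and internal changes.
3. Match the tone and structure of existing entries in CHANGELOG.md.
```

Because the description is what the agent matches against, keeping it specific ("when drafting release notes") helps the skill surface only when it is actually relevant.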

Image generation

Generate images directly from Cursor's agent. Describe the image in text or upload a reference to guide the underlying image generation model (Google Nano Banana Pro).

Images are returned as an inline preview and saved to your project's assets/ folder by default. This is useful for creating UI mockups, product assets, and architecture diagrams.

Cursor Blame

On the Enterprise plan, Cursor Blame extends traditional git blame with AI attribution, so you can see exactly what was AI-generated versus human-written.

When reviewing or revisiting code, each line links to a summary of the conversation that produced it, giving you the context and reasoning behind the change.

Cursor Blame distinguishes between code from Tab completions, agent runs (broken down by model), and human edits. It also lets you track AI usage patterns across your team's codebase.

Clarification questions from the agent

The interactive Q&A tool, previously used by agents in Plan and Debug modes, is now available in any conversation, letting the agent ask clarifying questions whenever it needs your input.

While waiting for your response, the agent can continue reading files, making edits, or running commands, then incorporate your answer as soon as it arrives.

You can also build custom subagents and skills that use this tool by instructing them to "use the ask question tool."
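For instance, a custom skill's instructions might include a line like the following (hypothetical wording; adapt it to your own skill):

```markdown
Before scaffolding a new service, use the ask question tool to confirm
the target language and framework with the user. Continue any setup
work that does not depend on the answer while you wait.
```

Phrasing the instruction this way takes advantage of the non-blocking behavior described above: the agent keeps working and folds in the answer once it arrives.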
