Cursor Release Notes
Last updated: Feb 19, 2026
- Feb 18, 2026
- Date parsed from source: Feb 18, 2026
- First seen by Releasebot: Feb 19, 2026
CLI Improvements and Mermaid ASCII Diagrams
The new CLI release adds cloud handoff for plans, inline Mermaid ASCII diagram rendering, and quality-of-life upgrades. Plan mode now shows a persistent decision menu with cloud or local build options and keyboard shortcuts; diagrams render in the terminal with a toggle to view the source.
This release introduces the ability to hand off plans from the CLI to the cloud, inline rendering of Mermaid diagrams as ASCII, and many quality-of-life improvements.
Plan mode improvements in CLI
When a plan is generated, the CLI now shows a persistent decision menu. You can choose to build in the cloud or build locally to execute the plan.
Typing /plan takes you back to your current plan and its action menu. We've also added keyboard shortcuts in the prompt bar so you can use arrow keys to navigate options, Enter to execute the selected option, and Shift+Enter as a shortcut for "Build in cloud."
Mermaid ASCII diagrams in CLI
Mermaid code blocks now render inline as ASCII diagrams in your CLI conversation. Flowcharts, sequence diagrams, state machines, class diagrams, and ER diagrams can all be displayed directly in the terminal.
Ctrl+O switches between the rendered diagram and the original Mermaid source, so you can see both representations.
Other improvements
We've also made lots of improvements to the CLI focused on tooling, quality of life, and reliability.
- General (6)
- Tools (5)
- UX (14)
- Reliability and Bug Fixes (17)
- Feb 17, 2026
- Date parsed from source: Feb 17, 2026
- First seen by Releasebot: Feb 18, 2026
Extend Cursor with plugins
Cursor unveils a new plugin system that lets agents connect to external tools, extend functionality, and orchestrate the full product lifecycle from design to payments. Prebuilt plugins from Amplitude, AWS, Figma, Stripe, and more ship via the Marketplace, with private team marketplaces coming soon.
Plugins for the full development lifecycle
Cursor now supports plugins, allowing agents to connect to external tools and learn new knowledge. Plugins bundle capabilities like MCP servers, skills, subagents, rules, and hooks that extend agents with custom functionality.
We’re starting with a highly curated set from partners such as Amplitude, AWS, Figma, Linear, and Stripe. These plugins span the product development lifecycle, allowing Cursor to deploy services, implement payments, run advanced testing, and more.
You can discover and install prebuilt plugins on the Cursor Marketplace or create your own and share them with the community.
Plugins allow Cursor to more effectively use the tools your team already relies on. This lets you orchestrate the entire product development lifecycle in the same place you generate code.
Plan and design
Access issues, projects, and documents with the Linear plugin.
Translate designs into code with the Figma plugin.
Subscriptions and payments
Build payment integrations with the Stripe plugin.
The Stripe plugin lets Cursor understand how Stripe integrations should be built. It created the products, prices, and payment link using Stripe's APIs, then shipped a working app almost immediately. It's a much faster way to build and test Stripe integrations.
Gus Nguyen
Senior Software Engineer, Stripe
Services and infrastructure
Deploy and manage infrastructure in Cursor with the AWS, Cloudflare, and Vercel plugins.
We're working to make AWS the most agent-friendly cloud. Cursor plugins represent a new way to package and distribute AWS capabilities—combining skills, MCP servers, and deployment workflows to help our customers ship faster on AWS.
James Greenfield
VP, AWS Platform Experience
Data and analytics
Query production data and surface insights with the Databricks, Snowflake, Amplitude, and Hex plugins.
With the Amplitude plugin, Cursor can pull in rich behavioral context, analyze growth dashboards, synthesize customer feedback, and turn those insights into concrete recommendations. I can even have Cursor draft a PR immediately.
Frank Lee
Principal Product Manager, Amplitude
Build and share your own plugins
You can also build your own plugins and share them on the Cursor Marketplace. A plugin combines one or more primitives that agents use to execute tasks:
- Skills: domain-specific prompts and code that agents can discover and run
- Subagents: specialized agents that allow Cursor to complete tasks in parallel
- MCP servers: services that connect Cursor to external tools or data sources
- Hooks: custom scripts that let you observe and control agent behavior
- Rules: system-level instructions to uphold coding standards and preferences
We’re accepting plugin submissions and look forward to seeing what the community builds. We’ve also published some Cursor-built plugins. Check out the Cursor Team Kit to install our favorite internal workflows for CI, code review, and testing.
Coming soon
We're working on private team marketplaces so organizations can share plugins internally with central governance and security controls.
Learn more about plugins in our docs.
- Feb 17, 2026
- Date parsed from source: Feb 17, 2026
- First seen by Releasebot: Feb 18, 2026
2.5
Cursor launches a plugin marketplace with an installer, expanding capabilities across design, databases, payments and analytics. It adds granular sandbox network and filesystem controls for secure, policy-driven execution. Subagents now run asynchronously and can spawn subagents, boosting performance for big tasks.
This release introduces plugins for extending Cursor, improvements to core agent capabilities like subagents, and fine-grained network controls for sandboxed commands.
Plugins package skills, subagents, MCP servers, hooks, and rules into a single install. The Cursor Marketplace lets you discover and install plugins to extend Cursor with pre-built capabilities.
Our initial partners include Amplitude, AWS, Figma, Linear, Stripe, and more. These plugins cover workflows across design, databases, payments, analytics, and deployment.
Browse plugins at cursor.com/marketplace or install directly in the editor with /add-plugin.
Read more in our announcement.
Sandbox network access controls
The sandbox now supports granular network access controls, as well as controls for access to directories and files on your local filesystem. Define exactly which domains the agent is allowed to reach while running sandboxed commands:
- User config only: restricted to domains in your sandbox.json
- User config with defaults: restricted to your allowlist plus Cursor's built-in defaults
- Allow all: unrestricted network access within the sandbox
Admins on the Enterprise plan can enforce network allowlists and denylists from the admin dashboard, ensuring organization-wide egress policies apply to all agent sandbox sessions.
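As a rough illustration of how a domain allowlist of this kind could be evaluated, here is a minimal Python sketch. The release notes mention sandbox.json and the three modes but not a schema, so the field names (network_mode, allowed_domains) and the default domain list below are hypothetical.

```python
import fnmatch
import json

# Hypothetical built-in defaults; the real list shipped by Cursor is not documented here.
CURSOR_DEFAULT_DOMAINS = ["*.cursor.com", "registry.npmjs.org", "pypi.org"]

def load_policy(path="sandbox.json"):
    """Load a sandbox policy; field names here are assumptions, not Cursor's schema."""
    with open(path) as f:
        return json.load(f)

def is_domain_allowed(domain: str, policy: dict) -> bool:
    """Decide whether a sandboxed command may reach `domain` under the chosen mode."""
    mode = policy.get("network_mode", "user_config_with_defaults")
    allowlist = list(policy.get("allowed_domains", []))

    if mode == "allow_all":                  # unrestricted network access within the sandbox
        return True
    if mode == "user_config_with_defaults":  # user allowlist plus built-in defaults
        allowlist += CURSOR_DEFAULT_DOMAINS
    # "user_config_only" falls through with just the user's allowlist

    return any(fnmatch.fnmatch(domain, pattern) for pattern in allowlist)

# Example policy: {"network_mode": "user_config_only", "allowed_domains": ["api.github.com"]}
```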
Async subagents
Previously, all subagents ran synchronously, blocking the parent agent until they completed. Subagents can now run asynchronously, allowing the parent to continue working while subagents run in the background.
Subagents can also spawn their own subagents, creating a tree of coordinated work. This allows Cursor to take on bigger tasks like multi-file features, large refactors, and challenging bugs.
We've also made some performance improvements to subagents since our last release. They now run with lower latency, better streaming feedback, and more responsive parallel execution.
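The pattern can be pictured with a small asyncio sketch. This is only a conceptual model of parent and child agents as coroutines, not Cursor's harness; the task names and helper functions are made up.

```python
import asyncio

async def run_subagent(task: str, depth: int = 0) -> str:
    """A toy subagent: it may spawn subagents of its own, forming a tree of coordinated work."""
    if depth < 1 and task == "large refactor":
        # Spawn children asynchronously; the parent keeps working while they run.
        children = [
            asyncio.create_task(run_subagent("update call sites", depth + 1)),
            asyncio.create_task(run_subagent("migrate tests", depth + 1)),
        ]
        parent_work = do_own_edits(task)           # parent continues in the foreground
        child_results = await asyncio.gather(*children)
        return f"{parent_work}; merged {len(child_results)} subagent results"
    await asyncio.sleep(0.1)                        # stand-in for real tool calls
    return f"done: {task}"

def do_own_edits(task: str) -> str:
    return f"edited files for {task}"

print(asyncio.run(run_subagent("large refactor")))
```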
- Feb 12, 2026
- Date parsed from source: Feb 12, 2026
- First seen by Releasebot: Feb 14, 2026
Long-running Agents in Research Preview
Long-running agents
Cursor can now work autonomously over longer horizons to complete larger, more complex tasks. Long-running agents plan first and finish more difficult work without human intervention.
In research preview and internal testing, long-running agents completed work that was previously too hard for regular agents. This led to larger, more complete PRs with fewer obvious follow-ups.
Cursor's long-running agent is now available at cursor.com/agents for Ultra, Teams, and Enterprise plans.
Read more in our announcement.
- Feb 12, 2026
- Date parsed from source: Feb 12, 2026
- First seen by Releasebot: Feb 13, 2026
Expanding our long-running agents research preview
Cursor unveils long-running agents preview now available for all Ultra, Teams, and Enterprise users at cursor.com/agents. Built to tackle long-horizon tasks with a custom harness, these agents complete larger PRs and multi-hour runs, advancing toward self-driving codebases.
Cursor's long-running agents research preview is now available at cursor.com/agents for all Ultra, Teams, and Enterprise users.
The long-running agent is the result of our research on agents working autonomously on more ambitious projects, including the work we shared last month on how Cursor built a web browser.
During that experiment, we saw frontier models fail in predictable ways on long-horizon tasks. We addressed these limitations by creating a custom harness that enables agents to take on more difficult work and see it through to completion.
We released a version of this harness last week as part of a research preview. The results show that long-running agents produced substantially larger PRs with merge rates comparable to other agents.
[Chart: long-running agents produced substantially larger PRs with comparable merge rates]
Talking with participants in our research preview, we heard that long-running agents successfully completed a range of tasks that were previously out of reach for agents. A few example runs from the research preview include:
- Building an all-new chat platform integrated with an existing open-source tool (runtime: 36 hours)
- Implementing a mobile app based on an existing web app (runtime: 30 hours)
- Refactoring an authentication and RBAC system (runtime: 25 hours)
Making models more capable
Successfully completing difficult tasks requires frontier intelligence and the right harness. By working with every frontier model and building a custom harness for each, we are in a unique position to build the best scaffolding that leverages the strengths of different models. We found a couple of general principles that help us achieve better performance.
Planning before execution
When iterating directly with a model, tight prompt-response loops let you monitor the agent and nudge it back on course when needed. When the agent goes off and works on a larger task autonomously, a slightly wrong assumption can turn into a completely incorrect solution by the end.
Long-running agents in Cursor propose a plan and wait for approval instead of immediately jumping into execution, recognizing that upfront alignment reduces the need for follow-ups.
Following through on tasks
Frontier models can write great code, but often forget the big picture of their task, lose track of what they're doing, or stop at partial completion.
Long-running agents use a plan and multiple different agents checking each other's work in order to follow through on larger, more complex tasks.
Findings to date
Initial participants in the research preview used long-running agents to implement large features, refactor complex systems, fix challenging bugs, overhaul performance, and create high-coverage tests.
"I shipped two architecture overhauls. It's an incredible tool for "I don't know if this is possible but I'm curious to see" type work. I can run five in parallel, for everything from creating Mac window managers to stuffing CEF into Tauri."
Theo Browne
CEO, T3 Chat
Agents commonly ran for more than a day, producing PRs that merged with minimal follow-up work. Users could step away, focus on other work, close their laptop, and come back to working solutions.
"I planned for this project to take an entire quarter to accomplish. With Cursor long-running agents, that timeline compressed to just a couple days. And I could do two or three additional projects. I can kick-off a 52-hour task that I don't have to babysit and come back to a big PR with 151k lines of code."
Zack Jackson
Infra Architect, Rspack
Compared to synchronous agents, long-running agents were more thorough in their approach and wrote code that was more production-ready.
"The magical part of the new harness is allowing the same model to make something production-ready. I tested the same bug-fix prompt locally vs. with a long-running agent, both with Codex 5.3. The local agent fixed it fairly quickly, but the long-running one went further to find edge cases, fix similar occurrences, and create high-coverage tests."
Tejas Haveri
CTO, DevAccel-Labs
Using long-running agents at Cursor
For the last month, we've been testing the limits of long-running agents internally. We’ve used them for experiments to see how far we could push them, as well as for production work on Cursor itself. Here are a few tasks we gave long-running agents that we have since merged.
- Video renderer optimization
We asked an agent to optimize a video renderer whose performance was bottlenecking deployment. It completed a full migration to Rust and implemented custom kernels, reproducing identical visual output by working purely from the original logic.
- Policy-driven network access for sandboxed code
We needed JSON-driven network policy controls and a local HTTP proxy for sandboxed processes. The proxy needed to be correct across protocols, enforce policy consistently, and fail safely without allowing blocked traffic. The long-running agent created a ten-thousand-line PR that had very few issues when we ran a large test suite against it. Follow-up work consisted mainly of changes we didn't specify in our initial request.
- Sudo support in Cursor CLI
Some tasks break CLI agents the moment they hit sudo, especially tasks related to system administration or ops. We asked a long-running agent to implement secure sudo password prompting, which required stitching together multiple subsystems and reasoning about Unix auth flows. It produced a working implementation that Cursor CLI now uses.
Toward self-driving codebases
Long-running agents in Cursor are an early milestone on the path toward self-driving codebases, where agents can handle more work with less human intervention. It's now possible to delegate larger tasks and come back hours or days later to working solutions.
We are working on improving collaboration across long-running agents so they can break up bigger projects into parallel work streams and take on even more ambitious projects with less human intervention.
We’re also working to develop new tools to handle the volume of code now being generated. As the cost of code generation continues to fall, we'll need new approaches to deploying that code to production safely.
Try long-running agents today at cursor.com/agents.
- Feb 11, 2026
- Date parsed from source: Feb 11, 2026
- First seen by Releasebot: Feb 12, 2026
Increased usage for agents
Cursor expands usage with two pools, Auto + Composer and API, and raises limits for Auto and Composer 1.5 across all individual plans, with a limited-time 6x boost. A new usage-visibility page helps track both pools and API credits; limits still reset each billing cycle.
From autocomplete to agents
We've raised limits for Auto and Composer 1.5 for all individual plans.
There are now two usage pools:
- Auto + Composer: We include significantly more usage for Auto and Composer 1.5.
- API: We charge the API price of the model (unchanged with this update).
Composer 1.5 now has 3x the usage limit of Composer 1. For a limited time (through February 16), we're increasing that to 6x.
Over the past few months, we've seen a major shift toward coding with agents.
Developers are asking Cursor to make ambitious changes across their entire codebase. We want Cursor to support daily agentic coding, and we recognize that developers have different priorities.
Some are comfortable always paying for the newest models, while others are looking for the right balance between speed, intelligence, and affordability. By introducing two usage pools and increasing limits for Auto + Composer, we're supporting both approaches.
A new model for agentic coding
Training our own models like Composer 1.5 allows us to include significantly more usage in a sustainable way.
We have found Composer 1.5 to be a highly capable model, scoring above Sonnet 4.5 on Terminal-Bench 2.0,¹ but below the best frontier models.
[Composer 1.5 benchmark results on Terminal-Bench 2.0]
We expect to continue finding ways to offer increasingly intelligent and cost-effective models, alongside the latest frontier models.
Improved usage visibility
To further improve usage visibility, we’ve added a new page in the editor where you can monitor your limits with two different usage pools:
- Auto + Composer: We include significantly more usage when Auto or Composer 1.5 is selected.
- API: Individual plans include at least $20 of usage each month (more on higher tiers) with the option to pay for additional usage as needed. These usage limits have not changed with this update.
[Usage settings page showing the two usage pools]
Try our latest model
We encourage you to try Composer 1.5 with our updated usage limits.
Both usage pools reset with your monthly usage cycle. These limits are now live for all individual plans (Pro, Pro Plus, and Ultra).
1. Terminal-Bench 2.0 is an agent evaluation benchmark for terminal use maintained by the Laude Institute. Anthropic model scores use the Claude Code harness and OpenAI model scores use the Simple Codex harness. Our Cursor score was computed using the official Harbor evaluation framework (the designated harness for Terminal-Bench 2.0) with default benchmark settings. We ran 2 iterations per model-agent pair and report the average. More details on the benchmark can be found at the official Terminal-Bench website. For other models besides Composer 1.5, we took the max score between the official leaderboard score and the score recorded running in our infrastructure.
- Feb 9, 2026
- Date parsed from source: Feb 9, 2026
- First seen by Releasebot: Feb 10, 2026
Introducing Composer 1.5
Composer 1.5 delivers faster, smarter coding and adds self-summarization for long-running tasks. It scales reinforcement learning 20x, improves performance on real-world coding benchmarks, and balances speed with deeper thinking on hard problems while staying interactive.
Composer 1.5
A few months ago, we released our first agentic coding model, Composer 1. Since then, we've made significant improvements to the model’s coding ability.
Our new release, Composer 1.5, strikes a strong balance between speed and intelligence for daily use. Composer 1.5 was built by scaling reinforcement learning 20x further on the same pretrained model. The compute used in our post-training of Composer 1.5 even surpasses the amount used to pretrain the base model.
We see continued improvements on coding ability as we scale. Measured by our internal benchmark of real-world coding problems, we find that the model quickly surpasses Composer 1 and continues to climb in performance. The improvements are most significant on challenging tasks.
Composer 1.5 is a thinking model. In the process of responding to queries, the model generates thinking tokens to reason about the user’s codebase and plan next steps. We find that these thinking stages are critical to the model’s intelligence. At the same time, we wanted to keep Composer 1.5 fast and interactive for day-to-day use. To achieve a balance, the model is trained to respond quickly on easy problems with minimal thinking, while on hard problems it will think until it has found a satisfying answer.
To handle longer running tasks, Composer 1.5 has the ability to self-summarize. This allows the model to continue exploring for a solution even when it runs out of available context. We train self-summarization into Composer 1.5 as part of RL by asking it to produce a useful summary when context runs out in training. This may trigger several times recursively on hard examples. We find that self-summarization allows the model to maintain its original accuracy as context length varies.
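Conceptually, the inference-time behavior resembles an agent loop that compacts its own history as it nears the context limit. The sketch below is only an external approximation; count_tokens and generate are placeholder stand-ins, and in Composer 1.5 this behavior is trained into the model rather than orchestrated by wrapper code like this.

```python
CONTEXT_LIMIT = 128_000  # hypothetical token budget

def count_tokens(messages: list[str]) -> int:
    # Stand-in for a real tokenizer.
    return sum(len(m.split()) for m in messages)

def generate(messages: list[str], instruction: str | None = None) -> str:
    # Stand-in for a model call; returns a placeholder string.
    return f"[model output over {len(messages)} messages, instruction={instruction!r}]"

def agent_step(history: list[str], task: str) -> list[str]:
    """One turn of a long-running task with self-summarization."""
    if count_tokens(history) > 0.9 * CONTEXT_LIMIT:
        # Compact the run so far and continue from the summary instead of the
        # full history. On hard tasks this can trigger several times per run.
        summary = generate(history, instruction="Summarize progress and open questions.")
        history = [task, summary]
    history.append(generate(history))
    return history
```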
Composer 1.5 is a significantly stronger model than Composer 1 and we recommend it for interactive use. Its training demonstrates that RL for coding can be continually scaled with predictable intelligence improvements.
Learn more about Composer 1.5 pricing here.
- Jan 27, 2026
- Date parsed from source: Jan 27, 2026
- First seen by Releasebot: Jan 28, 2026
Securely indexing large codebases
Cursor speeds up semantic search with teammate index reuse and Merkle-tree delta sync, cutting time-to-first-query from hours to seconds on large repos. New onboarding uses a simhash to copy a close existing index, then syncs in the background for near-instant results.
Semantic search
Semantic search is one of the biggest drivers of agent performance. In our recent evaluation, it improved response accuracy by 12.5% on average, produced code changes that were more likely to be retained in codebases, and raised overall request satisfaction.
To power semantic search, Cursor builds a searchable index of your codebase when you open a project. For small projects, this happens almost instantly. But large repositories with tens of thousands of files can take hours to process if indexed naively, and semantic search isn't available until at least 80% of that work is finished.
We looked for ways to speed up indexing based on the simple observation that most teams work from near-identical copies of the same codebase. In fact, clones of the same codebase average 92% similarity across users within an organization.
This means that rather than rebuilding every index from scratch when someone joins or switches machines, we can securely reuse a teammate's existing index. This cuts time-to-first-query from hours to seconds on the largest repos.
Building the first index
Cursor builds its first view of a codebase using a Merkle tree, which lets it detect exactly which files and directories have changed without reprocessing everything. The Merkle tree features a cryptographic hash of every file, along with hashes of each folder that are based on the hashes of its children.
Small client-side edits change only the hashes of the edited file itself and the hashes of the parent directories up to the root of the codebase. Cursor compares those hashes to the server's version to see exactly where the two Merkle trees diverge. Entries whose hashes differ get synced. Entries that match are skipped. Any entry missing on the client is deleted from the server, and any entry missing on the server is added. The sync process never modifies files on the client side.
The Merkle tree approach significantly reduces the amount of data that needs to be transferred on each sync. In a workspace with fifty thousand files, just the filenames and SHA-256 hashes add up to roughly 3.2 MB. Without the tree, you would move that data on every update. With the tree, Cursor walks only the branches where hashes differ.
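A minimal sketch of the approach, assuming a simple node layout and SHA-256 throughout (Cursor's actual tree format isn't documented here):

```python
import hashlib
from pathlib import Path

def build_tree(path: Path) -> dict:
    """Build a Merkle node: files hash their bytes, directories hash their children's hashes."""
    if path.is_file():
        return {"hash": hashlib.sha256(path.read_bytes()).hexdigest(), "children": None}
    children = {p.name: build_tree(p) for p in sorted(path.iterdir())}
    combined = "".join(child["hash"] for child in children.values()).encode()
    return {"hash": hashlib.sha256(combined).hexdigest(), "children": children}

def changed_paths(client: dict, server: dict | None, prefix: str = ".") -> list[str]:
    """Walk only the branches whose hashes differ; matching subtrees are skipped entirely."""
    if server is not None and client["hash"] == server["hash"]:
        return []                                  # identical subtree: nothing to sync
    if client["children"] is None:
        return [prefix]                            # a file whose content diverges or is new
    server_children = (server or {}).get("children") or {}
    out = []
    for name, child in client["children"].items():
        out += changed_paths(child, server_children.get(name), f"{prefix}/{name}")
    return out
```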
When a file changes, Cursor splits it into syntactic chunks. These chunks are converted into the embeddings that enable semantic search. Creating embeddings is the expensive step, which is why Cursor does it asynchronously in the background.
Most edits leave most chunks unchanged. Cursor caches embeddings by chunk content. Unchanged chunks hit the cache, and agent responses stay fast without paying that cost again at inference time. The resulting index is fast to update and light to maintain.
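In sketch form, the cache is keyed by a hash of the chunk's content, so re-embedding only happens for chunks that actually changed. The chunking and embedding calls below are stand-ins, not Cursor's pipeline:

```python
import hashlib

embedding_cache: dict[str, list[float]] = {}

def embed(chunk: str) -> list[float]:
    # Stand-in for the expensive embedding model call.
    return [float(b) for b in hashlib.sha256(chunk.encode()).digest()[:4]]

def embed_chunks(chunks: list[str]) -> list[list[float]]:
    """Only chunks whose content hash is new pay the embedding cost."""
    out = []
    for chunk in chunks:
        key = hashlib.sha256(chunk.encode()).hexdigest()
        if key not in embedding_cache:
            embedding_cache[key] = embed(chunk)   # cache miss: compute once
        out.append(embedding_cache[key])          # cache hit: reuse
    return out
```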
Finding the best index to reuse
The indexing pipeline above uploads every file when a codebase is new to Cursor. New users inside an organization don't need to go through that entire process though.
When a new user joins, the client computes the Merkle tree for a new codebase and derives a value called a similarity hash (simhash) from that tree. This is a single value that acts as a summary of the file content hashes in the codebase.
The client uploads the simhash to the server. The server then searches a vector database of the current simhashes for all other indexes in the same team (or belonging to the same user) as the client. For each candidate returned, we check whether its similarity to the client's simhash exceeds a threshold. If it does, we use that index as the initial index for the new codebase.
This copy happens in the background. In the meantime, the client is allowed to make new semantic searches against the original index being copied, resulting in a very quick time-to-first-query for the client.
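A simplified sketch of that lookup is below. How the simhash is actually derived from the Merkle tree, and the exact similarity threshold, aren't specified, so both are assumptions here:

```python
def simhash(content_hashes: list[str], bits: int = 64) -> int:
    """Collapse a codebase's file-content hashes into one similarity-preserving fingerprint."""
    counts = [0] * bits
    for h in content_hashes:
        value = int(h[:16], 16)                 # take 64 bits of each file hash
        for i in range(bits):
            counts[i] += 1 if (value >> i) & 1 else -1
    return sum(1 << i for i, c in enumerate(counts) if c > 0)

def similarity(a: int, b: int, bits: int = 64) -> float:
    """Fraction of matching bits between two simhashes."""
    return 1 - bin(a ^ b).count("1") / bits

def pick_reusable_index(new_hash: int, team_indexes: dict[str, int], threshold: float = 0.9):
    """Return the teammate index whose simhash is most similar, if it clears the threshold."""
    best = max(team_indexes, key=lambda name: similarity(new_hash, team_indexes[name]), default=None)
    if best is not None and similarity(new_hash, team_indexes[best]) >= threshold:
        return best
    return None
```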
But this only works if two constraints hold. Results need to reflect the user's local codebase, even when it differs from the copied index. And the client can never see results for code it doesn't already have.
Proving access
To guarantee that files won't leak across copies of the codebase, we reuse the cryptographic properties of the Merkle tree.
Each node in the tree is a cryptographic hash of the content beneath it. You can only compute that hash if you have the file. When a workspace starts from a copied index, the client uploads its full Merkle tree along with the similarity hash. This associates a hash with each encrypted path in the codebase.
The server stores this tree as a set of content proofs. During search, the server filters results by checking those hashes against the client's tree. If the client can't prove it has a file, the result is dropped.
This allows the client to query immediately and see results only for code it shares with the copied index. The background sync reconciles the remaining differences. Once the client and server Merkle tree roots match, the server deletes the content proofs and future queries run against the fully synced index.
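In sketch form, the server-side filter only returns results whose content hash the client has already proven it holds (the proof format here is an assumption; the real system works over encrypted paths):

```python
def filter_results(results: list[dict], content_proofs: dict[str, str]) -> list[dict]:
    """Drop any search result the client can't prove it has a copy of.

    `content_proofs` maps a path to the content hash the client uploaded with its
    Merkle tree; a result is returned only when the copied index's hash matches it.
    """
    visible = []
    for result in results:
        proof = content_proofs.get(result["path"])
        if proof is not None and proof == result["content_hash"]:
            visible.append(result)
    return visible
```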
Faster onboarding
Reusing teammate indexes improves setup time for repos of all sizes. The effect compounds with the size of the repo:
- For the median repo, time-to-first-query drops from 7.87 seconds to 525 milliseconds
- At the 90th percentile it falls from 2.82 minutes to 1.87 seconds
- At the 99th percentile it falls from 4.03 hours to 21 seconds
These changes remove a major source of repeated work and let Cursor understand even very large codebases in seconds, not hours.
- Jan 26, 2026
- Date parsed from source: Jan 26, 2026
- First seen by Releasebot: Jan 27, 2026
Dropbox uses Cursor to index over 550,000 files and build an AI-native SDLC
Dropbox scales AI coding with Cursor across its vast monorepo, boosting velocity as over 90% of engineers now use AI tools weekly. The rollout drives faster PR throughput and shorter cycle times, with AI embedded across writing, reviewing, testing, and migrations.
Organic adoption
By 2024, Dropbox engineers had already started experimenting with Cursor. Initially they shared what they were learning through informal channels, such as Slack conversations and quick internal write-ups. Ali Dasdan, Dropbox's CTO, noticed the activity and nurtured it by creating a group of AI champions. He helped them amplify the way they were working while removing barriers to adoption.
Speeding up deployment meant removing every point of friction. Signing up for these tools needed to feel like a single click.
Ali Dasdan
CTO, Dropbox
The effect was immediate. As access became easier, more engineers tried the tools, shared what they were learning, and adoption accelerated on its own.
Indexing the monorepo
Once interest in Cursor moved beyond individual experiments, the next question was whether it could handle Dropbox's entire monorepo.
Every Cursor deployment begins the same way: by indexing the codebase. Cursor scans each file that isn't ignored, breaks the code into structured chunks, and generates embeddings that capture how those pieces relate to one another. The result is a semantic index that the models use when generating or editing code.
At Dropbox's scale, this step was critical. Indexing gave Cursor the context it needed to follow the codebase's structure and generate changes that fit naturally within it. It also made the codebase more accessible to Dropbox engineers themselves. Through Cursor, they gained a clearer map of how different parts of the codebase fit together, empowering leaders and letting new hires ramp faster.
People can actually understand the existing codebase really well and far, far faster.
Ali Dasdan
CTO, Dropbox
A measurable impact on velocity
More than 90 percent of Dropbox engineers now use AI tools weekly, with Cursor as a primary driver of that activity.
The effects have been almost immediate. Dropbox measures engineering performance through an internal framework that emphasizes speed, effectiveness, and quality. Since adopting Cursor, PR throughput and cycle time have moved into the upper tier of industry benchmarks.
Engineers feel the change in their day-to-day work. Cursor appears in nearly every step of development, from writing and reviewing code to testing, documentation, and migrations. They can move through the codebase faster, reinforcing the principle that set Dropbox's Cursor adoption in motion: speed is everything.
We are reexamining and redesigning every part of how we build software in the context of AI.
Ali Dasdan
CTO, Dropbox
If you're interested in embedding AI in every phase of the software development lifecycle, please reach out to our team to get started with a Cursor trial.
- Jan 22, 2026
- Date parsed from source: Jan 22, 2026
- First seen by Releasebot: Jan 23, 2026
- Modified by Releasebot: Jan 23, 2026
2.4
Cursor unveils agent harness boosts, smarter subagents, and extensible skills for the editor and CLI. New image generation, AI attribution with Cursor Blame on Enterprise, and interactive clarification questions enable faster, focused code tasks.
Overview
Agents are solving increasingly complex, long-running tasks across your codebase. This release introduces new agent harness improvements for better context management, as well as many quality-of-life fixes in the editor and CLI.
Subagents
Subagents are independent agents specialized to handle discrete parts of a parent agent's task. They run in parallel, use their own context, and can be configured with custom prompts, tool access, and models.
The result is faster overall execution, more focused context in your main conversation, and specialized expertise for each subtask.
Cursor includes default subagents for researching your codebase, running terminal commands, and executing parallel work streams. These will automatically start improving the quality of your agent conversations in the editor and the Cursor CLI. Optionally, you can define custom subagents. Learn more in our docs.
Skills
Cursor now supports Agent Skills in the editor and CLI. Agents can discover and apply skills when domain-specific knowledge and workflows are relevant. You can also invoke a skill using the slash command menu.
Define skills in SKILL.md files, which can include custom commands, scripts, and instructions for specializing the agent's capabilities based on the task at hand.
Compared to always-on, declarative rules, skills are better for dynamic context discovery and procedural "how-to" instructions. This gives agents more flexibility while keeping context focused.
Image generation
Generate images directly from Cursor's agent. Describe the image in text or upload a reference to guide the underlying image generation model (Google Nano Banana Pro).
Images are returned as an inline preview and saved to your project's assets/ folder by default. This is useful for creating UI mockups, product assets, and visualizing architecture diagrams.
Cursor Blame
On the Enterprise plan, Cursor Blame extends traditional git blame with AI attribution, so you can see exactly what was AI-generated versus human-written.
When reviewing or revisiting code, each line links to a summary of the conversation that produced it, giving you the context and reasoning behind the change.
Cursor Blame distinguishes between code from Tab completions, agent runs (broken down by model), and human edits. It also lets you track AI usage patterns across your team's codebase.
Clarification questions from the agent
The interactive Q&A tool used by agents in Plan and Debug mode now lets agents ask clarifying questions in any conversation.
While waiting for your response, the agent can continue reading files, making edits, or running commands, then incorporate your answer as soon as it arrives.
You can also build custom subagents and skills that use this tool by instructing them to "use the ask question tool."