- Jan 16, 2026
1.13.9
Bug Fixes and Improvements
- Improvements to GPT-5.2-Codex harness
- Admins can now manage Windsurf restrictions via Windows Group Policy
- Jan 14, 2026
1.13.8
GPT-5.2-Codex
Adds support for GPT-5.2-Codex with four reasoning efforts (low, medium, high, and xhigh). GPT-5.2-Codex is OpenAI's latest model designed for agentic coding. It excels at working in large codebases over long sessions. For most tasks, we recommend using the medium reasoning effort.
Bug Fixes and Improvements
- Various bug fixes and performance improvements
- Improved stability and reliability
- Jan 12, 2026
1.13.6
New Features and Improvements
- Windsurf now supports Agent Skills for Cascade.
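For context, an Agent Skill is usually packaged as a folder containing a SKILL.md file whose frontmatter tells the agent when to load it. The sketch below is illustrative only; the .windsurf/skills/ path and the exact fields Cascade reads are assumptions, so consult the Windsurf docs for the supported layout.

```markdown
<!-- Hypothetical skill at .windsurf/skills/changelog-writer/SKILL.md -->
---
name: changelog-writer
description: Drafts changelog entries from merged pull requests. Use when the user asks to summarize changes for a release.
---

# Changelog Writer

1. Collect merged pull requests since the last release tag.
2. Group changes into Features and Bug Fixes.
3. Write entries in the project's existing changelog style.
```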
Bug Fixes and Improvements
- Various bug fixes and performance improvements
- Improved stability and reliability
- Dec 27, 2025
1.13.5
Gemini 3 Flash launches for all users, blending Pro-grade reasoning with Flash speed for coding and agentic workflows. It promises near-instant responses, superior coding intelligence, and deep multimodal capabilities. This release also includes bug fixes.
Gemini 3 Flash release
Gemini 3 Flash is now available for all users. This model combines Gemini 3 Pro-grade reasoning with Flash-level speed and efficiency, making it ideal for agentic workflows and coding tasks.
- Blazing Fast Responses: Experience near-instant feedback with 3x faster performance than previous generations, making it better suited to iterative development than Gemini 3 Pro
- Superior Coding Intelligence: Outperforms even Pro-tier models on key coding benchmarks (78% on SWE-bench Verified), providing more accurate code generation and debugging
- Deep Multimodal Understanding: Easily process complex video, data extraction, and visual Q&A tasks with frontier-level reasoning
Bug Fixes and Improvements
- Various bug fixes and performance improvements
- Tab fixes and improvements
- Dec 24, 2025
1.13.3
Wave 13 Merry Shipmas brings parallel multi-agent sessions, Git worktrees, and side-by-side Cascade panes to Windsurf for a streamlined workflow. SWE-1.5 Free becomes the default model, a dedicated terminal profile arrives as an opt-in beta alongside the legacy terminal, and the release adds a context window indicator and Cascade hooks.
Wave 13: Merry Shipmas
Wave 13 brings first-class support for parallel, multi-agent sessions in Windsurf, along with Git worktrees, side-by-side Cascade panes, and a dedicated terminal profile for more reliable agent execution.
SWE-1.5 Free
Our near-frontier model, SWE-1.5, is now available for free to all users for the next 3 months. SWE-1.5 Free has the full intelligence of SWE-1.5 and the same coding performance on SWE-Bench-Pro, but is delivered at standard throughput speeds. The original variant of SWE-1.5 hosted on Cerebras will continue to be available for paid users. SWE-1.5 Free will replace SWE-1 as the default model in Windsurf starting today.
Git Worktree Support
Windsurf now supports Git worktrees, letting you spawn multiple Cascade sessions in the same repository without conflicts. Git worktrees check out different branches into separate directories while sharing the same Git history.
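For illustration, the underlying Git commands look like this (plain Git, nothing Windsurf-specific):

```sh
# Check out a feature branch into a sibling directory; both working
# trees share the same Git history, so parallel sessions don't conflict.
git worktree add ../my-repo-feature-x -b feature-x

# See all active worktrees, and remove one when its session is done.
git worktree list
git worktree remove ../my-repo-feature-x
```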
Multi-Cascade Panes & Tabs
You can already run multiple Cascade sessions in Windsurf at the same time. Now, you can view and interact with them in separate panes and tabs within the same window. This lets you monitor progress and compare outputs of sessions side-by-side, or even turn Windsurf into a big Cascade dashboard.
Cascade Dedicated Terminal (Beta)
Windsurf introduces a new approach for letting agents execute terminal commands. Instead of your default shell, Cascade will now run commands in a dedicated zsh shell specifically configured for reliability. The Cascade Dedicated Terminal can use the environment variables you set in your .zshrc configuration and is interactive, which means you can answer any prompts from shell scripts without having to break your flow. This should improve the reliability and speed of shell commands, especially for users with complicated prompts (e.g., powerlevel10k).
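For example, environment setup that already lives in your ~/.zshrc is visible to the commands Cascade runs; the values below are placeholders, not required settings:

```sh
# ~/.zshrc -- exported variables are inherited by the Cascade Dedicated Terminal
export PATH="$HOME/.local/bin:$PATH"
export NODE_ENV=development
# Interactive prompts from scripts (e.g., "Proceed? [y/N]") can be answered
# directly in the dedicated terminal without breaking the agent's flow.
```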
In this version, the Cascade Dedicated Terminal is opt-in for Windsurf Stable on macOS. If you are experiencing issues with the legacy terminal, we recommend switching to the new terminal early. We expect to make this feature the default in the future, while maintaining the legacy terminal extension for backwards compatibility. You can opt in via Windsurf User Settings -> Disable Windsurf Legacy Terminal Profile.
Context Window Indicator
When a model's context window grows too long, earlier context can be dropped without warning and performance can degrade. Cascade already extends the window by occasionally summarizing messages and clearing history. This release adds a visual indicator to see how much of your context window is currently in use, helping you anticipate limits and decide when to start a new session.
Cascade Hooks
Execute custom commands at key points during Cascade's workflow, including on model response for auditing purposes.
System-level Rules & Workflows
Enterprises can deploy rules and workflows via MDM policies, allowing organizations to place rules and workflow files on users' machines.
Bug Fixes and Improvements
- Enhanced diff zone behavior with configurable scroll-to-next-hunk settings (default off)
- Preserved colors and styling in Cascade terminal output
- Multiple fixes to the Model Context Protocol implementation
- Added support for lowering permissions for Cascade's Web Fetch tool
- Fixed a race condition in the dedicated terminal implementation
- Added support for force-killing commands in the dedicated terminal
- Improved markdown completion
- Fixed opening old Cascade diffs
- Dec 12, 2025
1.12.44
Patch Fixes and Improvements
- Reduce occurrence of "prompt is too long" errors
- Request all supported scopes if no scopes are provided in MCP OAuth config
- Dec 12, 2025
1.12.47
A new Promo label highlights LLM models that are newly available or carry special pricing. This release also bundles fixes across agents, UI rendering, platform messaging, and onboarding, delivering more reliable tool use and a smoother UX.
Features
- Added a new "Promo" label to LLM models that are newly available or have special discount pricing
Bug fixes and improvements
Agents & Tool Execution
- Fixed Command-I functionality
- Fixed Ctrl+C during tool execution not working properly
- Fixed Go (fallback) processes not being killed properly
- Fixed handling of parallel tool call errors
- Improved MCP tool call visibility (show tool name, args, etc)
- Fixed fallback diff handling for nonexistent files in code actions
UI & Rendering
- Fixed incorrect indentation from code blocks in terminal rendering
- Fixed nested lists not rendering on new line in terminal markdown
- Fixed content spacing issues
- Fixed streaming flashes
- Enhanced code block file path display to hide line numbers for whole files
- Improved citation and language parsing in code blocks with a more robust regex pattern
- Updated the UI for code block title bars to properly handle long paths with truncation
- Improved the auto-run command menu interface and its display logic
- Added loading indicators when thinking or during long running operations
- Fixed opening old Cascade diffs
Platform & Messaging
- Fixed rate limit error message to say "no credits were used" instead of "credits have been refunded"
- Fixed continuously rechecking for updates on macOS
- Added a user-facing message when API providers are exhausted
Workspace & Onboarding
- Allowed clicking items in the Windsurf onboarding pane
- Respect gitignore patterns in the workspace directory tree
- Dec 11, 2025
GPT-5.2
Windsurf rolls out GPT-5.2 with limited-time 0x-credit access for paid users and makes it the default across Windsurf and core Devin workloads. The upgrade promises a major leap in agentic coding and performance. It also ships bug fixes and stability improvements.
Release Notes
GPT-5.2 is now available in Windsurf. This model will be available for 0x credits in Windsurf (to paid users) for a limited time.
"GPT-5.2 represents the biggest leap for GPT models in agentic coding since GPT-5 and is a SOTA coding model in its price range. The version bump undersells the jump in intelligence. We're excited to make it the default across Windsurf and several core Devin workloads." - Jeff Wang, CEO of Windsurf
Download the latest version on Windsurf to try it out!
Bug fixes and improvements
- General Windsurf stability and performance improvements
- General Tab (Supercomplete) improvements and stability
- Fixed an issue where Cascade-run commands could not be cancelled during certain long-running processes
- Dec 10, 2025
1.12.41
Cascade now lets you configure hooks on user prompts to log all prompts and block policy-violating ones. MCPs gain GitLab support, GitHub OAuth, and per-MCP prompts with toggles. This release also ships diff zone fixes, Tab autocomplete improvements, and overall stability boosts.
Features & Tools
Cascade Hooks on User Prompts
Users can now configure Cascade Hooks on user prompts for logging all user prompts and blocking policy-violating prompts.
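As a rough sketch of what such a hook might do, the script below logs each prompt and rejects ones containing a banned term. The interface shown (prompt on stdin, non-zero exit status to block, a local audit-log path) is an assumption for illustration, not Windsurf's documented contract; see the Cascade Hooks documentation for the actual configuration and payload format.

```sh
#!/usr/bin/env bash
# Hypothetical user-prompt hook: log every prompt, block policy violations.
PROMPT="$(cat)"   # assumed: the user prompt arrives on stdin

# Append a timestamped record to a local audit log (path is illustrative).
echo "$(date -u +"%Y-%m-%dT%H:%M:%SZ") ${PROMPT}" >> ~/.cascade-prompt-audit.log

# Block prompts that mention a banned term (assumed: non-zero exit blocks).
if grep -qi "internal-codename" <<<"$PROMPT"; then
  echo "Prompt blocked by policy hook" >&2
  exit 1
fi
exit 0
```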
MCP Servers
- Added support for GitLab remote MCP
- Added OAuth support for GitHub remote MCP
- Fixed an issue where every MCP would reauth on opening Windsurf
- Added support for MCP prompts
- Added toggles to enable/disable MCPs in the Cascade header
Diff Zones
- Fixes issues with diff zones not rendering correctly or jumping to the end of a file when editing
Tab (Supercomplete)
- Improves reliability of Tab autocomplete
- Makes Tab more responsive and faster in the appropriate circumstances
Bug fixes and improvements
- General stability and performance improvements
- Fixes issues with login timing out too quickly during onboarding
- Dec 4, 2025
GPT-5.1-Codex Max
Introducing GPT-5.1-Codex Max in three reasoning tiers (Low, Medium, High). The Low variant is available at no cost to paid users for a limited time.