Developer Platform Release Notes

Last updated: Feb 20, 2026

  • Feb 17, 2026
    • Date parsed from source:
      Feb 17, 2026
    • First seen by Releasebot:
      Feb 20, 2026

    Developer Platform by Cloudflare

    Containers - Docker-in-Docker support added to Containers and Sandboxes

    Sandboxes and Containers now support running Docker for "Docker-in-Docker" setups. This is particularly useful when your end users or agents want to run a full sandboxed development environment.

    This allows you to:

    • Develop containerized applications with your Sandbox
    • Run isolated test environments for images
    • Build container images as part of CI/CD workflows
    • Deploy arbitrary images supplied at runtime within a container

    For Sandbox SDK users, see the Docker-in-Docker guide for instructions on combining Docker with the Sandbox SDK. For general Containers usage, see the Containers FAQ.
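    As a sketch of the CI/CD use case, a Worker could drive a docker build inside a sandbox. buildImage is a hypothetical helper; sandbox.exec() resolving to an object with exitCode, stdout, and stderr follows the Sandbox SDK's command-execution API, but see the Docker-in-Docker guide for the exact setup.

```javascript
// Hypothetical sketch: run a Docker build inside a sandbox.
// Assumes sandbox.exec() resolves to { exitCode, stdout, stderr },
// as in the Sandbox SDK's command-execution API.
async function buildImage(sandbox, tag) {
  const result = await sandbox.exec(`docker build -t ${tag} .`);
  if (result.exitCode !== 0) {
    throw new Error(`docker build failed: ${result.stderr}`);
  }
  return result.stdout;
}
```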

  • Feb 17, 2026
    • Date parsed from source:
      Feb 17, 2026
    • First seen by Releasebot:
      Feb 17, 2026
    • Modified by Releasebot:
      Feb 20, 2026

    Developer Platform by Cloudflare

    Agents, Workers - Agents SDK v0.5.0: Protocol message control, retry utilities, data parts, and @cloudflare/ai-chat v0.1.0

    This Agents SDK release adds built‑in retry utilities with per‑task and class defaults, plus per‑connection protocol message controls. It ships @cloudflare/ai-chat v0.1.0 with data parts, tool approval persistence, and incremental persistence, along with notable fixes.

    Retry utilities

    A new this.retry() method lets you retry any async operation with exponential backoff and jitter. You can pass an optional shouldRetry predicate to bail early on non-retryable errors.

    JavaScript

    class MyAgent extends Agent {
      async onRequest(request) {
        const data = await this.retry(() => callUnreliableService(), {
          maxAttempts: 4,
          shouldRetry: (err) => !(err instanceof PermanentError),
        });
        return Response.json(data);
      }
    }
    

    TypeScript

    class MyAgent extends Agent {
      async onRequest(request: Request) {
        const data = await this.retry(() => callUnreliableService(), {
          maxAttempts: 4,
          shouldRetry: (err) => !(err instanceof PermanentError),
        });
        return Response.json(data);
      }
    }
    

    Retry options are also available per-task on queue(), schedule(), scheduleEvery(), and addMcpServer():

    JavaScript

    // Per-task retry configuration, persisted in SQLite alongside the task
    await this.schedule(
      Date.now() + 60_000,
      "sendReport",
      { userId: "abc" },
      {
        retry: { maxAttempts: 5 },
      },
    );
    // Class-level retry defaults
    class MyAgent extends Agent {
      static options = {
        retry: { maxAttempts: 3 },
      };
    }
    

    TypeScript

    // Per-task retry configuration, persisted in SQLite alongside the task
    await this.schedule(Date.now() + 60_000, "sendReport", { userId: "abc" }, {
      retry: { maxAttempts: 5 },
    });
    // Class-level retry defaults
    class MyAgent extends Agent {
      static options = {
        retry: { maxAttempts: 3 },
      };
    }
    

    Retry options are validated eagerly at enqueue/schedule time, and invalid values throw immediately. Internal retries have also been added for workflow operations (terminateWorkflow, pauseWorkflow, and others) with Durable Object-aware error detection.

    Per-connection protocol message control

    Agents automatically send JSON text frames (identity, state, MCP server lists) to every WebSocket connection. You can now suppress these per-connection for clients that cannot handle them — binary-only devices, MQTT clients, or lightweight embedded systems.

    JavaScript

    class MyAgent extends Agent {
      shouldSendProtocolMessages(connection, ctx) {
        // Suppress protocol messages for MQTT clients
        const subprotocol = ctx.request.headers.get("Sec-WebSocket-Protocol");
        return subprotocol !== "mqtt";
      }
    }
    

    TypeScript

    class MyAgent extends Agent {
      shouldSendProtocolMessages(connection: Connection, ctx: ConnectionContext) {
        // Suppress protocol messages for MQTT clients
        const subprotocol = ctx.request.headers.get("Sec-WebSocket-Protocol");
        return subprotocol !== "mqtt";
      }
    }
    

    Connections with protocol messages disabled still fully participate in RPC and regular messaging. Use isConnectionProtocolEnabled(connection) to check a connection's status at any time. The flag persists across Durable Object hibernation.
    See Protocol messages for full documentation.

    @cloudflare/ai-chat v0.1.0

    The first stable release of @cloudflare/ai-chat ships alongside this release with a major refactor of AIChatAgent internals — new ResumableStream class, WebSocket ChatTransport, and simplified SSE parsing — with zero breaking changes. Existing code using AIChatAgent and useAgentChat works as-is.

    Key new features:

    • Data parts — Attach typed JSON blobs (data-*) to messages alongside text. Supports reconciliation (type+id updates in-place), append, and transient parts (ephemeral via onData callback). See Data parts.
    • Tool approval persistence — The needsApproval approval UI now survives page refresh and DO hibernation. The streaming message is persisted to SQLite when a tool enters approval-requested state.
    • maxPersistedMessages — Cap SQLite message storage with automatic oldest-message deletion.
    • body option on useAgentChat — Send custom data with every request (static or dynamic).
    • Incremental persistence — Hash-based cache to skip redundant SQL writes.
    • Row size guard — Automatic two-pass compaction when messages approach the SQLite 2 MB limit.
    • autoContinueAfterToolResult defaults to true — Client-side tool results and tool approvals now automatically trigger a server continuation, matching server-executed tool behavior. Set autoContinueAfterToolResult: false in useAgentChat to restore the previous behavior.

    Notable bug fixes:

    • Resolved stream resumption race conditions
    • Resolved an issue where setMessages functional updater sent empty arrays
    • Resolved an issue where client tool schemas were lost after DO hibernation
    • Resolved InvalidPromptError after tool approval (approval.id was dropped)
    • Resolved an issue where message metadata was not propagated on broadcast/resume paths
    • Resolved an issue where clearAll() did not clear in-memory chunk buffers
    • Resolved an issue where reasoning-delta silently dropped data when reasoning-start was missed during stream resumption

    Synchronous queue and schedule getters

    • getQueue(), getQueues(), getSchedule(), dequeue(), dequeueAll(), and dequeueAllByCallback() were unnecessarily async despite only performing synchronous SQL operations. They now return values directly instead of wrapping them in Promises. This is backward compatible — existing code using await on these methods will continue to work.
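    The backward compatibility follows from how await works: awaiting a non-Promise value simply resolves to that value. A minimal illustration (the queue contents here are made up):

```javascript
// Awaiting a plain value resolves to that value, which is why callers
// that still `await` the now-synchronous getters keep working.
function getQueueSync() {
  return [{ id: "task-1" }]; // direct return: no Promise wrapper
}

// Old call sites like this continue to behave identically:
async function oldStyleCaller() {
  const queue = await getQueueSync(); // awaiting a non-Promise is a no-op
  return queue[0].id;
}
```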

    Other improvements

    • Fix TypeScript "excessively deep" error — A depth counter on CanSerialize and IsSerializableParam types bails out to true after 10 levels of recursion, preventing the "Type instantiation is excessively deep" error with deeply nested types like AI SDK CoreMessage[].
    • POST SSE keepalive — The POST SSE handler now sends event: ping every 30 seconds to keep the connection alive, matching the existing GET SSE handler behavior. This prevents POST response streams from being silently dropped by proxies during long-running tool calls.
    • Widened peer dependency ranges — Peer dependency ranges across packages have been widened to prevent cascading major bumps during 0.x minor releases. @cloudflare/ai-chat and @cloudflare/codemode are now marked as optional peer dependencies.

    Upgrade
    To update to the latest version:

    npm i agents@latest @cloudflare/ai-chat@latest
    

  • Feb 15, 2026
    • Date parsed from source:
      Feb 15, 2026
    • First seen by Releasebot:
      Feb 20, 2026

    Developer Platform by Cloudflare

    Workers - New Best Practices guide for Workers

    Cloudflare releases a new Workers Best Practices guide to help developers build fast, secure, and observable Workers. The guide covers compatibility, bindings, streaming, observability, and secure token generation to optimize performance and reliability.

    Workers Best Practices

    A new Workers Best Practices guide provides opinionated recommendations for building fast, reliable, observable, and secure Workers. The guide draws on production patterns, Cloudflare internal usage, and best practices observed from developers building on Workers.

    Key guidance includes:

    • Keep your compatibility date current and enable nodejs_compat — Ensure you have access to the latest runtime features and Node.js built-in modules.
    wrangler.jsonc
    {
      "name": "my-worker",
      "main": "src/index.ts",
      // Set this to today's date
      "compatibility_date": "2026-02-20",
      "compatibility_flags": ["nodejs_compat"],
    }
    
    wrangler.toml
    name = "my-worker"
    main = "src/index.ts"
    # Set this to today's date
    compatibility_date = "2026-02-20"
    compatibility_flags = [ "nodejs_compat" ]
    
    • Generate binding types with wrangler types — Never hand-write your Env interface. Let Wrangler generate it from your actual configuration to catch mismatches at compile time.

    • Stream request and response bodies — Avoid buffering large payloads in memory. Use TransformStream and pipeTo to stay within the 128 MB memory limit and improve time-to-first-byte.
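    A minimal sketch of the streaming pattern (the uppercasing transform is purely illustrative; in a real Worker the chunks would be Uint8Arrays from a request or response body):

```javascript
// Pipe a body through a TransformStream instead of buffering it.
// Web Streams are global in Workers and in Node 18+.
function uppercaseStream(body) {
  return body.pipeThrough(
    new TransformStream({
      transform(chunk, controller) {
        // Each chunk is forwarded as soon as it arrives: no full-body buffer
        controller.enqueue(chunk.toUpperCase());
      },
    }),
  );
}

// Illustrative source; in a Worker this would be request.body or response.body
const source = new ReadableStream({
  start(controller) {
    controller.enqueue("hello ");
    controller.enqueue("world");
    controller.close();
  },
});
```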

    • Use bindings, not REST APIs — Bindings to KV, R2, D1, Queues, and other Cloudflare services are direct, in-process references with no network hop and no authentication overhead.

    • Use Queues and Workflows for background work — Move long-running or retriable tasks out of the critical request path. Use Queues for simple fan-out and buffering, and Workflows for multi-step durable processes.

    • Enable Workers Logs and Traces — Configure observability before deploying to production so you have data when you need to debug.

    • Avoid global mutable state — Workers reuse isolates across requests. Storing request-scoped data in module-level variables causes cross-request data leaks.

    • Always await or waitUntil your Promises — Floating promises cause silent bugs and dropped work.
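    A sketch of the waitUntil pattern (logRequest is a hypothetical background task; ctx.waitUntil() is the standard Workers API for extending the request lifetime past the response):

```javascript
// Track background work with ctx.waitUntil instead of letting it float.
const worker = {
  async fetch(request, env, ctx) {
    // The Worker stays alive until this promise settles, and rejections
    // surface in logs instead of disappearing silently.
    ctx.waitUntil(logRequest(request.url));
    return new Response("ok");
  },
};

// Hypothetical background task, e.g. posting to an analytics endpoint
async function logRequest(url) {
  return url;
}
```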

    • Use Web Crypto for secure token generation — Never use Math.random() for security-sensitive operations.

    To learn more, refer to Workers Best Practices.

  • Feb 13, 2026
    • Date parsed from source:
      Feb 13, 2026
    • First seen by Releasebot:
      Feb 14, 2026

    Developer Platform by Cloudflare

    Workers, Agents, Workers AI - Introducing GLM-4.7-Flash on Workers AI, @cloudflare/tanstack-ai, and workers-ai-provider v3.1.1

    Cloudflare unveils GLM-4.7-Flash on Workers AI with edge-ready agents, new TanStack AI adapters, and workers-ai-provider v3.1.1. A multilingual 131k-context model enables multi-turn tool calls and seamless integration with TanStack AI and the Vercel AI SDK for edge apps.

    GLM-4.7-Flash on Workers AI

    We're excited to announce GLM-4.7-Flash on Workers AI, a fast and efficient text generation model optimized for multilingual dialogue and instruction-following tasks, along with the brand-new @cloudflare/tanstack-ai package and workers-ai-provider v3.1.1.
    You can now run AI agents entirely on Cloudflare. With GLM-4.7-Flash's multi-turn tool calling support, plus full compatibility with TanStack AI and the Vercel AI SDK, you have everything you need to build agentic applications that run completely at the edge.

    GLM-4.7-Flash — Multilingual Text Generation Model

    @cf/zai-org/glm-4.7-flash is a multilingual model with a 131,072 token context window, making it ideal for long-form content generation, complex reasoning tasks, and multilingual applications.

    Key Features and Use Cases:

    • Multi-turn Tool Calling for Agents: Build AI agents that can call functions and tools across multiple conversation turns
    • Multilingual Support: Built to handle content generation in multiple languages effectively
    • Large Context Window: 131,072 tokens for long-form writing, complex reasoning, and processing long documents
    • Fast Inference: Optimized for low-latency responses in chatbots and virtual assistants
    • Instruction Following: Excellent at following complex instructions for code generation and structured tasks

    Use GLM-4.7-Flash through the Workers AI binding (env.AI.run()), the REST API at /run or /v1/chat/completions, AI Gateway, or via workers-ai-provider for the Vercel AI SDK.
    Pricing is available on the model page or pricing page.
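    A minimal sketch of the binding path (the prompt and the askGlm helper are illustrative; the model ID and the env.AI.run() call shape are as announced for Workers AI):

```javascript
// Illustrative helper calling GLM-4.7-Flash through the Workers AI binding.
async function askGlm(env, prompt) {
  return env.AI.run("@cf/zai-org/glm-4.7-flash", {
    messages: [{ role: "user", content: prompt }],
  });
}
```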

    @cloudflare/tanstack-ai v0.1.1 — TanStack AI adapters for Workers AI and AI Gateway

    We've released @cloudflare/tanstack-ai, a new package that brings Workers AI and AI Gateway support to TanStack AI. This provides a framework-agnostic alternative for developers who prefer TanStack's approach to building AI applications.
    Workers AI adapters support four configuration modes — plain binding (env.AI), plain REST, AI Gateway binding (env.AI.gateway(id)), and AI Gateway REST — across all capabilities:

    • Chat (createWorkersAiChat) — Streaming chat completions with tool calling, structured output, and reasoning text streaming.
    • Image generation (createWorkersAiImage) — Text-to-image models.
    • Transcription (createWorkersAiTranscription) — Speech-to-text.
    • Text-to-speech (createWorkersAiTts) — Audio generation.
    • Summarization (createWorkersAiSummarize) — Text summarization.

    AI Gateway adapters route requests from third-party providers — OpenAI, Anthropic, Gemini, Grok, and OpenRouter — through Cloudflare AI Gateway for caching, rate limiting, and unified billing.

    To get started:
    npm install @cloudflare/tanstack-ai @tanstack/ai

    workers-ai-provider v3.1.1 — transcription, speech, reranking, and reliability

    The Workers AI provider for the Vercel AI SDK now supports three new capabilities beyond chat and image generation:

    • Transcription (provider.transcription(model)) — Speech-to-text with automatic handling of model-specific input formats across binding and REST paths.
    • Text-to-speech (provider.speech(model)) — Audio generation with support for voice and speed options.
    • Reranking (provider.reranking(model)) — Document reranking for RAG pipelines and search result ordering.

    import { createWorkersAI } from "workers-ai-provider";
    import {
      experimental_transcribe,
      experimental_generateSpeech,
      rerank,
    } from "ai";
    
    const workersai = createWorkersAI({ binding: env.AI });
    
    const transcript = await experimental_transcribe({
      model: workersai.transcription("@cf/openai/whisper-large-v3-turbo"),
      audio: audioData,
      mediaType: "audio/wav",
    });
    
    const speech = await experimental_generateSpeech({
      model: workersai.speech("@cf/deepgram/aura-1"),
      text: "Hello world",
      voice: "asteria",
    });
    
    const ranked = await rerank({
      model: workersai.reranking("@cf/baai/bge-reranker-base"),
      query: "What is machine learning?",
      documents: ["ML is a branch of AI.", "The weather is sunny."],
    });
    

    This release also includes a comprehensive reliability overhaul (v3.0.5):

    • Fixed streaming — Responses now stream token-by-token instead of buffering all chunks, using a proper TransformStream pipeline with backpressure.
    • Fixed tool calling — Resolved issues with tool call ID sanitization, conversation history preservation, and a heuristic that silently fell back to non-streaming mode when tools were defined.
    • Premature stream termination detection — Streams that end unexpectedly now report finishReason: "error" instead of silently reporting "stop".
    • AI Search support — Added createAISearch as the canonical export (renamed from AutoRAG). createAutoRAG still works with a deprecation warning.

    To upgrade:
    npm install workers-ai-provider@latest ai

    Resources

    • @cloudflare/tanstack-ai on npm
    • workers-ai-provider on npm
    • GitHub repository
  • Feb 13, 2026
    • Date parsed from source:
      Feb 13, 2026
    • First seen by Releasebot:
      Feb 14, 2026

    Developer Platform by Cloudflare

    Workers VPC - Origin CA certificate support for Workers VPC

    Workers VPC now accepts Cloudflare Origin CA certificates for HTTPS to private services, expanding trusted TLS options beyond public CAs. Encrypt traffic between the tunnel and origin without provisioning public certificates, enabling secure private connections.

    Workers VPC now supports Cloudflare Origin CA certificates when connecting to your private services over HTTPS. Previously, Workers VPC only trusted certificates issued by publicly trusted certificate authorities (for example, Let's Encrypt, DigiCert).

    With this change, you can use free Cloudflare Origin CA certificates on your origin servers within private networks and connect to them from Workers VPC using the https scheme. This is useful for encrypting traffic between the tunnel and your service without needing to provision certificates from a public CA.

    For more information, refer to Supported TLS certificates.

  • Feb 11, 2026
    • Date parsed from source:
      Feb 11, 2026
    • First seen by Releasebot:
      Feb 11, 2026

    Developer Platform by Cloudflare

    Workers - Workers are no longer limited to 1000 subrequests

    Cloudflare Workers now remove the old 1000 subrequest cap, enabling more fetch calls per request. Paid plans can raise subrequests up to 10 million via Wrangler config, while free plans stay capped at 50 external and 1000 Cloudflare service subrequests. You can also set lower limits.

    Subrequest limits

    Workers no longer have a limit of 1000 subrequests per invocation, allowing you to make more fetch() calls or requests to Cloudflare services on every incoming request. This is especially important for long-running Workers requests, such as open WebSocket connections on Durable Objects or long-running Workflows, which could often exceed this limit and error.

    By default, Workers on paid plans are now limited to 10,000 subrequests per invocation, but this
    limit can be increased up to 10 million by setting the new subrequests limit in your Wrangler configuration file.

    wrangler.jsonc

    {
      "limits": {
        "subrequests": 50000,
      },
    }
    

    wrangler.toml

    [limits]
    subrequests = 50_000
    

    Workers on the free plan remain limited to 50 external subrequests and 1000 subrequests to Cloudflare services per invocation.

    To protect against runaway code or unexpected costs, you can also set a lower limit for both subrequests and CPU usage.

    wrangler.jsonc

    {
      "limits": {
        "subrequests": 10,
        "cpu_ms": 1000,
      },
    }
    

    wrangler.toml

    [limits]
    subrequests = 10
    cpu_ms = 1_000
    

    For more information, refer to the Wrangler configuration documentation for limits and subrequest limits.

  • Feb 11, 2026
    • Date parsed from source:
      Feb 11, 2026
    • First seen by Releasebot:
      Feb 11, 2026

    Developer Platform by Cloudflare

    Workers - Improved React Server Components support in the Cloudflare Vite plugin

    Cloudflare's Vite plugin now integrates with the official @vitejs/plugin-rsc for React Server Components, adding a childEnvironments option to run multiple environments inside one Worker. This enables the parent to import modules from a child environment for a typical RSC setup.

    The Cloudflare Vite plugin integration

    The Cloudflare Vite plugin now integrates seamlessly with @vitejs/plugin-rsc, the official Vite plugin for React Server Components.

    A childEnvironments option has been added to the plugin config to enable using multiple environments within a single Worker.

    The parent environment can then import modules from a child environment in order to access a separate module graph.

    For a typical RSC use case, the plugin might be configured as in the following example:

    export default defineConfig({
      plugins: [
        cloudflare({
          viteEnvironment: {
            name: "rsc",
            childEnvironments: ["ssr"],
          },
        }),
      ],
    });
    

    @vitejs/plugin-rsc provides the lower-level functionality that frameworks, such as React Router, build upon.

    The GitHub repository includes a basic Cloudflare example.

  • Feb 9, 2026
    • Date parsed from source:
      Feb 9, 2026
    • First seen by Releasebot:
      Feb 10, 2026

    Developer Platform by Cloudflare

    Agents, Workers - Agents SDK v0.4.0: Readonly connections, MCP security improvements, x402 v2 migration, and custom MCP OAuth providers

    Agents SDK introduces readonly connections with new hooks to lock spectator views and protect state. It also enables custom MCP OAuth providers, ships MCP SDK 1.26.0 with security guards, and migrates x402 to v2 with network and config updates.

    The latest release of the Agents SDK brings readonly connections, MCP protocol and security improvements, x402 payment protocol v2 migration, and the ability to customize OAuth for MCP server connections.

    Readonly connections

    Agents can now restrict WebSocket clients to read-only access, preventing them from modifying agent state. This is useful for dashboards, spectator views, or any scenario where clients should observe but not mutate.

    New hooks: shouldConnectionBeReadonly, setConnectionReadonly, isConnectionReadonly. Readonly connections block both client-side setState() and mutating @callable() methods, and the readonly flag survives hibernation.

    JavaScript

    class MyAgent extends Agent {
      shouldConnectionBeReadonly(connection) {
        // Make spectators readonly
        return connection.url.includes("spectator");
      }
    }
    

    TypeScript

    class MyAgent extends Agent {
      shouldConnectionBeReadonly(connection: Connection) {
        // Make spectators readonly
        return connection.url.includes("spectator");
      }
    }
    

    Custom MCP OAuth providers

    The new createMcpOAuthProvider method on the Agent class allows subclasses to override the default OAuth provider used when connecting to MCP servers. This enables custom authentication strategies such as pre-registered client credentials or mTLS, beyond the built-in dynamic client registration.

    JavaScript

    class MyAgent extends Agent {
      createMcpOAuthProvider(callbackUrl) {
        return new MyCustomOAuthProvider(this.ctx.storage, this.name, callbackUrl);
      }
    }
    

    TypeScript

    class MyAgent extends Agent {
      createMcpOAuthProvider(callbackUrl: string): AgentMcpOAuthProvider {
        return new MyCustomOAuthProvider(this.ctx.storage, this.name, callbackUrl);
      }
    }
    

    MCP SDK upgrade to 1.26.0

    Upgraded the MCP SDK to 1.26.0 to prevent cross-client response leakage. Stateless MCP servers should now create a new McpServer instance per request instead of sharing a single instance. This version of the MCP SDK adds a guard that prevents connecting to a Server instance that has already been connected to a transport, so developers who declare their McpServer instance as a global variable will need to modify their code.

    MCP OAuth callback URL security fix

    Added callbackPath option to addMcpServer to prevent instance name leakage in MCP OAuth callback URLs. When sendIdentityOnConnect is false, callbackPath is now required — the default callback URL would expose the instance name, undermining the security intent. Also fixes callback request detection to match via the state parameter instead of a loose /callback URL substring check, enabling custom callback paths.

    Deprecate onStateUpdate in favor of onStateChanged

    onStateChanged is a drop-in rename of onStateUpdate (same signature, same behavior). onStateUpdate still works but emits a one-time console warning per class. validateStateChange rejections now propagate a CF_AGENT_STATE_ERROR message back to the client.

    x402 v2 migration

    Migrated the x402 MCP payment integration from the legacy x402 package to @x402/core and @x402/evm v2.

    Breaking changes for x402 users:

    • Peer dependencies changed: replace x402 with @x402/core and @x402/evm
    • PaymentRequirements type now uses v2 fields (e.g. amount instead of maxAmountRequired)
    • X402ClientConfig.account type changed from viem.Account to ClientEvmSigner (structurally compatible with privateKeyToAccount())
    • Run npm uninstall x402, then npm install @x402/core @x402/evm

    Network identifiers now accept both legacy names and CAIP-2 format:

    • Legacy name (auto-converted): { "network": "base-sepolia" }
    • CAIP-2 format (preferred): { "network": "eip155:84532" }

    Other x402 changes:

    • X402ClientConfig.network is now optional — the client auto-selects from available payment requirements
    • Server-side lazy initialization: facilitator connection is deferred until the first paid tool invocation
    • Payment tokens support both v2 (PAYMENT-SIGNATURE) and v1 (X-PAYMENT) HTTP headers
    • Added normalizeNetwork export for converting legacy network names to CAIP-2 format
    • Re-exports PaymentRequirements, PaymentRequired, Network, FacilitatorConfig, and ClientEvmSigner from agents/x402
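    As an illustration of what the network normalization does (the mapping table below is our own minimal stand-in, not the SDK's; the base-sepolia and eip155:84532 pairing comes from the format examples above):

```javascript
// Illustrative stand-in for what normalizeNetwork does: map a legacy
// network name to its CAIP-2 identifier. The real export lives in
// agents/x402; this table covers only the example pair from this note.
const LEGACY_TO_CAIP2 = { "base-sepolia": "eip155:84532" };

function toCaip2(network) {
  // CAIP-2 identifiers already contain a namespace separator
  if (network.includes(":")) return network;
  return LEGACY_TO_CAIP2[network] ?? network;
}
```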

    Other improvements

    • Fix useAgent and AgentClient crashing when using basePath routing
    • CORS handling delegated to partyserver's native support (simpler, more reliable)
    • Client-side onStateUpdateError callback for handling rejected state updates

    Upgrade

    To update to the latest version:

    npm i agents@latest
    
  • Feb 9, 2026
    • Date parsed from source:
      Feb 9, 2026
    • First seen by Releasebot:
      Feb 10, 2026

    Developer Platform by Cloudflare

    Agents - Interactive browser terminals in Sandboxes

    Sandbox now adds PTY passthrough for browser terminals via WebSocket, enabling shells in browser UIs. Each session can host multiple isolated terminals with separate working directories and environments. A new xterm.js addon provides automatic reconnection, buffered replay, and resize forwarding. Upgrade to the latest version to enable these terminal capabilities.

    The Sandbox SDK PTY passthrough

    The Sandbox SDK now supports PTY (pseudo-terminal) passthrough, enabling browser-based terminal UIs to connect to sandbox shells via WebSocket.

    sandbox.terminal(request)
    

    The new terminal() method proxies a WebSocket upgrade to the container's PTY endpoint, with output buffering for replay on reconnect.

    JavaScript

    // Worker: proxy WebSocket to container terminal
    return sandbox.terminal(request, { cols: 80, rows: 24 });
    

    TypeScript

    // Worker: proxy WebSocket to container terminal
    return sandbox.terminal(request, { cols: 80, rows: 24 });
    

    Multiple terminals per sandbox

    Each session can have its own terminal with an isolated working directory and environment, so users can run separate shells side-by-side in the same container.

    JavaScript

    // Multiple isolated terminals in the same sandbox
    const dev = await sandbox.getSession("dev");
    return dev.terminal(request);
    

    TypeScript

    // Multiple isolated terminals in the same sandbox
    const dev = await sandbox.getSession("dev");
    return dev.terminal(request);
    

    xterm.js addon

    The new @cloudflare/sandbox/xterm export provides a SandboxAddon for xterm.js with automatic reconnection (exponential backoff + jitter), buffered output replay, and resize forwarding.

    JavaScript
    import { SandboxAddon } from "@cloudflare/sandbox/xterm";
    const addon = new SandboxAddon({
      getWebSocketUrl: ({ sandboxId, origin }) =>
        `${origin}/ws/terminal?id=${sandboxId}`,
      onStateChange: (state, error) => updateUI(state),
    });
    terminal.loadAddon(addon);
    addon.connect({ sandboxId: "my-sandbox" });
    
    TypeScript
    import { SandboxAddon } from "@cloudflare/sandbox/xterm";
    const addon = new SandboxAddon({
      getWebSocketUrl: ({ sandboxId, origin }) =>
        `${origin}/ws/terminal?id=${sandboxId}`,
      onStateChange: (state, error) => updateUI(state),
    });
    terminal.loadAddon(addon);
    addon.connect({ sandboxId: "my-sandbox" });
    

    Upgrade

    To update to the latest version:

    npm i @cloudflare/sandbox@latest
    
  • Feb 9, 2026
    • Date parsed from source:
      Feb 9, 2026
    • First seen by Releasebot:
      Feb 10, 2026

    Developer Platform by Cloudflare

    AI Search - AI Search now with more granular controls over indexing

    AI Search now supports reindexing individual files without a full rescan and lets you crawl a single sitemap URL to limit indexing. Reindex items from Overview > Indexed Items and tailor crawls via Settings > Parsing options > Specific sitemaps for targeted updates.

    Reindex individual files without a full sync

    Updated a file or need to retry one that errored? When you know exactly which file changed, you can now reindex it directly instead of rescanning your entire data source.

    Go to Overview > Indexed Items and select the sync icon next to any file to reindex it immediately.

    Crawl only the sitemap you need

    By default, AI Search crawls all sitemaps listed in your robots.txt, up to the maximum files per index limit. If your site has multiple sitemaps but you only want to index a specific set, you can now specify a single sitemap URL to limit what the crawler visits.

    For example, if your robots.txt lists both blog-sitemap.xml and docs-sitemap.xml, you can specify just https://example.com/docs-sitemap.xml to index only your documentation.

    Configure your selection anytime in Settings > Parsing options > Specific sitemaps, then trigger a sync to apply the changes.

    Learn more about indexing controls and website crawling configuration.

