Last updated: Nov 6, 2025

  • Nov 5, 2025
    • Parsed from source:
      Nov 5, 2025
    • Detected by Releasebot:
      Nov 6, 2025

    Storage by Cloudflare

    D1 can restrict data localization with jurisdictions

    D1 Jurisdiction Setting

    You can now set a jurisdiction when creating a D1 database to guarantee where your database runs and stores data. Jurisdictions can help you comply with data localization regulations such as GDPR. Supported jurisdictions include eu and fedramp.

    A jurisdiction can only be set at database creation time via Wrangler, the REST API, or the UI, and cannot be added or updated after the database already exists.
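
    For example, creating a database pinned to the EU jurisdiction with Wrangler might look like the following sketch (the --jurisdiction flag name is an assumption here; see the D1 data location documentation for the authoritative syntax):

      # Hypothetical sketch: create a D1 database restricted to the EU jurisdiction
      npx wrangler d1 create my-eu-database --jurisdiction eu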

    To learn more, visit D1's data location documentation.

  • Oct 31, 2025
    • Parsed from source:
      Oct 31, 2025
    • Detected by Releasebot:
      Nov 3, 2025

    Storage by Cloudflare

    Workers WebSocket message size limit increased from 1 MiB to 32 MiB

    WebSocket message size limit for Workers

    Workers, including those using Durable Objects and Browser Rendering, may now process WebSocket messages up to 32 MiB in size. Previously, this limit was 1 MiB.

    This change allows Workers to handle use cases requiring large message sizes, such as processing Chrome Devtools Protocol messages.
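
    As a minimal sketch, a Worker that accepts a WebSocket and echoes messages back can now handle payloads up to 32 MiB per message without any special handling:

      // Minimal Worker that upgrades to a WebSocket and echoes messages.
      // Each message may now be up to 32 MiB (previously 1 MiB).
      export default {
        async fetch(request: Request): Promise<Response> {
          if (request.headers.get("Upgrade") !== "websocket") {
            return new Response("Expected a WebSocket upgrade", { status: 426 });
          }
          const [client, server] = Object.values(new WebSocketPair());
          server.accept();
          server.addEventListener("message", (event) => {
            server.send(event.data); // echo the (possibly large) payload back
          });
          return new Response(null, { status: 101, webSocket: client });
        },
      };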

    For more information, please see the Durable Objects startup limits.

  • Oct 16, 2025
    • Parsed from source:
      Oct 16, 2025
    • Detected by Releasebot:
      Oct 28, 2025

    Storage by Cloudflare

    View and edit Durable Object data in UI with Data Studio (Beta)

    Cloudflare launches Data Studio for Durable Objects, a UI editor for viewing and writing storage from the dashboard. SQLite-backed Durable Objects gain easier data access for prototyping and debugging. Access is admin-only; queries are audited and billed normally.


    You can now view and write to each Durable Object's storage using a UI editor on the Cloudflare dashboard. Only Durable Objects using SQLite storage can use Data Studio.

    Data Studio

    Data Studio unlocks easier access to your Durable Objects data, from prototyping application data models to debugging production storage usage. Before, querying your Durable Objects data required deploying a Worker.

    To access a Durable Object, you can provide an object's unique name or ID generated by Cloudflare. Data Studio requires you to have at least the Workers Platform Admin role, and all queries are captured with audit logging for your security and compliance needs. Queries executed by Data Studio send requests to your remote, deployed objects and incur normal usage billing.
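
    For example, once you open an object in Data Studio, an ordinary SQLite query such as the following lists the tables in that object's storage:

      -- List the tables in the selected Durable Object's SQLite storage
      SELECT name FROM sqlite_master WHERE type = 'table';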

    To learn more, visit the Data Studio documentation. If you have feedback or suggestions for the new Data Studio, please share your experience on Discord ↗

  • Oct 6, 2025
    • Parsed from source:
      Oct 6, 2025
    • Detected by Releasebot:
      Nov 3, 2025

    Storage by Cloudflare

    R2 Data Catalog table-level compaction

    Enable compaction for Apache Iceberg tables

    You can now enable compaction for individual Apache Iceberg ↗ tables in R2 Data Catalog, giving you fine-grained control over different workloads.

    This allows you to:

    • Apply different target file sizes per table
    • Disable compaction for specific tables
    • Optimize based on table-specific access patterns
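
    As a hypothetical sketch (the subcommand and flag names here are assumptions; see the R2 Data Catalog documentation for the exact syntax), configuring compaction for a single table with Wrangler might look like:

      # Hypothetical: enable compaction for one table with a custom target
      # file size in MiB (command shape and flags are assumptions)
      npx wrangler r2 bucket catalog compaction enable my-bucket \
        --table my_namespace.my_table --target-size 256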

    Learn more at Manage catalogs.

  • Sep 25, 2025
    • Parsed from source:
      Sep 25, 2025
    • Detected by Releasebot:
      Nov 3, 2025
    • Modified by Releasebot:
      Nov 6, 2025

    Storage by Cloudflare

    R2 Data Catalog now supports compaction

    New automatic compaction for Apache Iceberg tables in R2 Data Catalog boosts query performance by merging small files. Enable it in R2 bucket settings or via Wrangler; start with manage catalogs and review best practices for compaction.

    Automatic compaction for Apache Iceberg tables in R2 Data Catalog

    You can now enable automatic compaction for Apache Iceberg ↗ tables in R2 Data Catalog to improve query performance.

    Compaction is the process of taking a group of small files and combining them into fewer, larger files. This is an important maintenance operation, as it helps keep query performance consistent by reducing the number of files that need to be scanned.

    To enable automatic compaction in R2 Data Catalog, find it under R2 Data Catalog in your R2 bucket settings in the dashboard.

    Or with Wrangler, run:
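      # Sketch of the Wrangler invocation (the subcommand is an assumption
      # based on Wrangler's r2 bucket catalog commands; see the R2 Data
      # Catalog documentation for the authoritative syntax)
      npx wrangler r2 bucket catalog compaction enable <BUCKET_NAME>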

    To get started with compaction, check out manage catalogs. For best practices and limitations, refer to about compaction.

  • Sep 11, 2025
    • Parsed from source:
      Sep 11, 2025
    • Detected by Releasebot:
      Nov 3, 2025

    Storage by Cloudflare

    D1 automatically retries read-only queries

    D1 now auto-retries read-only queries, rolling back safely if a retry slips through and writes. Response metadata exposes total_attempts, and retry success rates vary by error type. Detection heuristics are expected to improve over time.

    D1 read-only query retries

    D1 now detects read-only queries and automatically retries them up to two times when they fail with retryable errors. You can access the number of execution attempts via the total_attempts property in the returned response metadata.
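
    For example, in a Worker you can read the attempt count off the query result's metadata (the DB binding and table name are illustrative):

      // Run a read-only query against a D1 binding and log how many
      // execution attempts were made, including automatic retries.
      const result = await env.DB.prepare("SELECT id, name FROM users LIMIT 10").all();
      console.log(`rows: ${result.results.length}, attempts: ${result.meta.total_attempts}`);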

    At the moment, only read-only queries are retried, that is, queries containing only the following SQLite keywords: SELECT, EXPLAIN, WITH. Queries containing any SQLite keyword ↗ that leads to database writes are not retried.

    The retry success ratio among read-only retryable errors ranges from 5% all the way up to 95%, depending on the underlying error (such as a network error or another internal error) and its duration.

    The retry success ratio among all retryable errors is lower, indicating that there are write queries that could be retried. Therefore, we recommend that D1 users continue applying retries in their own code for queries that are not read-only but are idempotent according to the business logic of the application.

    D1 ensures that any retry attempt does not cause database writes, making the automatic retries safe from side-effects, even if a query causing changes slips through the read-only detection. D1 achieves this by checking for modifications after every query execution, and if any write occurred due to a retry attempt, the query is rolled back.

    The read-only query detection heuristics are simple for now, and there is room for improvement to capture more cases of queries that can be retried, so this is just the beginning.

  • Aug 26, 2025
    • Parsed from source:
      Aug 26, 2025
    • Detected by Releasebot:
      Nov 3, 2025

    Storage by Cloudflare

    List all vectors in a Vectorize index with the new list-vectors operation

    New feature: list-vectors operation

    You can now list all vector identifiers in a Vectorize index using the new list-vectors operation. This enables bulk operations, auditing, and data migration workflows through paginated requests that maintain snapshot consistency.

    The operation is available via Wrangler CLI and REST API. Refer to the list-vectors best practices guide for detailed usage guidance.
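
    As a sketch (the flag names are assumptions; check wrangler vectorize --help for the exact options), paging through vector identifiers from the CLI looks roughly like:

      # Hypothetical flags: list vector IDs in the index "my-index"
      npx wrangler vectorize list-vectors my-index --count 100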

  • Aug 22, 2025
    • Parsed from source:
      Aug 22, 2025
    • Detected by Releasebot:
      Nov 3, 2025

    Storage by Cloudflare

    Workers KV completes hybrid storage provider rollout for improved performance, fault-tolerance

    Workers KV rolls out global performance upgrades delivering dramatic read latency reductions. A new metadata layer and hybrid storage architecture boost redundancy and speed across Europe, Asia, the Middle East, and Africa. p95 latency dropped from ~150ms to ~50ms and p99 from ~350ms to ~250ms.

    Performance improvements in Workers KV

    Workers KV has completed rolling out performance improvements across all KV namespaces, providing a significant latency reduction on read operations for all KV users. This is due to architectural changes to KV's underlying storage infrastructure, which introduce a new metadata layer and substantially improve redundancy.

    The new hybrid architecture delivers substantial latency reductions throughout the Europe, Asia, Middle East, and Africa regions. Over the past two weeks, we have observed the following:

    • p95 latency: Reduced from ~150ms to ~50ms (67% decrease)
    • p99 latency: Reduced from ~350ms to ~250ms (29% decrease)
  • Aug 21, 2025
    • Parsed from source:
      Aug 21, 2025
    • Detected by Releasebot:
      Nov 3, 2025
    • Modified by Releasebot:
      Nov 6, 2025

    Storage by Cloudflare

    New getByName() API to access Durable Objects

    Durable Objects

    You can now create a client (a Durable Object stub) to a Durable Object with the new getByName method, removing the need to convert Durable Object names to IDs and then create a stub.

    Each Durable Object has a globally-unique name, which allows you to send requests to a specific object from anywhere in the world. Thus, a Durable Object can be used to coordinate between multiple clients who need to work together. You can have billions of Durable Objects, providing isolation between application tenants.
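
    A minimal sketch in a Worker (the MY_DURABLE_OBJECT binding name is illustrative):

      // Before (two steps): convert the name to an ID, then create a stub
      // const id = env.MY_DURABLE_OBJECT.idFromName("user-123");
      // const stub = env.MY_DURABLE_OBJECT.get(id);

      // Now (one step): get a stub directly from the object's name
      const stub = env.MY_DURABLE_OBJECT.getByName("user-123");
      const response = await stub.fetch("https://do/profile");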

    To learn more, visit the Durable Objects API Documentation or the getting started guide.

  • Aug 19, 2025
    • Parsed from source:
      Aug 19, 2025
    • Detected by Releasebot:
      Nov 3, 2025
    • Modified by Releasebot:
      Nov 6, 2025

    Storage by Cloudflare

    Subscribe to events from Cloudflare services with Queues

    Cloudflare introduces event subscriptions to publish account events to queues, enabling custom workflows via Workers, Wrangler, or HTTP. Subscribe to events from services like R2, Workers KV, AI, and Builds to trigger actions across your account.

    Event subscriptions

    You can now subscribe to events from other Cloudflare services (for example, Workers KV, Workers AI, or Workers Builds) and consume those events via Queues, allowing you to build custom workflows, integrations, and logic in response to account activity.

    Event subscriptions allow you to receive messages when events occur across your Cloudflare account. Cloudflare products can publish structured events to a queue, which you can then consume with Workers or pull via HTTP from anywhere.

    To create a subscription, use the dashboard or Wrangler:
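      # Sketch: subscribe a queue to R2 bucket-created events (the flag
      # names and event identifiers are assumptions; refer to the Queues
      # documentation for the exact syntax)
      npx wrangler queues subscription create my-queue --source r2 --events bucket.created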

    An event is a structured record of something happening in your Cloudflare account – like a Workers AI batch request being queued, a Worker build completing, or an R2 bucket being created. Events follow a consistent structure:
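      {
        "source": { "service": "r2" },
        "type": "cf.r2.bucket.created",
        "payload": { "name": "my-bucket" }
      }

    (The field names above are illustrative rather than the exact schema; refer to the documentation linked below.)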

    Current event sources include R2, Workers KV, Workers AI, Workers Builds, Vectorize, Super Slurper, and Workflows. More sources and events are on the way.

    For more information on event subscriptions, available events, and how to get started, refer to our documentation.

