Storage Release Notes

Last updated: Dec 19, 2025

  • Dec 18, 2025
    • Parsed from source:
      Dec 18, 2025
    • Detected by Releasebot:
      Dec 19, 2025

    Storage by Cloudflare

    R2 Data Catalog now supports automatic snapshot expiration

    R2 Data Catalog adds automatic Iceberg snapshot expiration to keep tables fast and cheap. Old snapshots are pruned by age and count, boosting performance and reducing storage, with guards to keep recent history. This pairs with automatic compaction for low-maintenance upkeep.

    R2 Data Catalog now supports automatic snapshot expiration for Apache Iceberg tables

    In Apache Iceberg, a snapshot is metadata that represents the state of a table at a given point in time. Every mutation creates a new snapshot, which enables powerful features like time travel queries and rollback capabilities, but snapshots accumulate over time.

    Without regular cleanup, these accumulated snapshots can lead to:

    • Metadata overhead
    • Slower table operations
    • Increased storage costs

    Snapshot expiration in R2 Data Catalog automatically removes old table snapshots based on your configured retention policy, improving performance and reducing storage costs.

    Snapshot expiration uses two parameters to determine which snapshots to remove:

    • --older-than-days: age threshold in days
    • --retain-last: minimum snapshot count to retain

    Both conditions must be met before a snapshot is expired, ensuring you always retain recent snapshots even if they exceed the age threshold.
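
    The combined rule can be illustrated with a short TypeScript sketch. This mirrors the behavior described above; it is not R2 Data Catalog's actual implementation, and all names are hypothetical.

      // Expire a snapshot only when BOTH conditions hold: it is older than
      // the age threshold AND it is not among the newest `retainLast`.
      interface Snapshot {
        id: string;
        committedAtMs: number; // epoch millis when the snapshot was committed
      }

      function selectSnapshotsToExpire(
        snapshots: Snapshot[],
        olderThanDays: number, // corresponds to --older-than-days
        retainLast: number, // corresponds to --retain-last
        nowMs: number = Date.now()
      ): Snapshot[] {
        const cutoffMs = nowMs - olderThanDays * 24 * 60 * 60 * 1000;
        const newestFirst = [...snapshots].sort(
          (a, b) => b.committedAtMs - a.committedAtMs
        );
        // Keep the newest `retainLast` snapshots unconditionally, then
        // expire whatever remains that is older than the age cutoff.
        return newestFirst.slice(retainLast).filter((s) => s.committedAtMs < cutoffMs);
      }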

    This feature complements automatic compaction, which optimizes query performance by combining small data files into larger ones. Together, these automatic maintenance operations keep your Iceberg tables performant and cost-efficient without manual intervention.

    To learn more about snapshot expiration and how to configure it, visit our table maintenance documentation or see how to manage catalogs.

  • Dec 15, 2025
    • Parsed from source:
      Dec 15, 2025
    • Detected by Releasebot:
      Dec 16, 2025

    Storage by Cloudflare

    New Best Practices guide for Durable Objects

    The new Rules of Durable Objects guide debuts with best practices for design, storage, concurrency, and anti-patterns, plus a refreshed testing guide using Vitest pool workers. Learn to use one Durable Object per logical unit, SQLite storage with RPC, and Hibernatable WebSockets to cut costs.

    Rules of Durable Objects guide

    A new Rules of Durable Objects guide is now available, providing opinionated best practices for building effective Durable Objects applications. This guide covers design patterns, storage strategies, concurrency, and common anti-patterns to avoid.

    Key guidance includes:

    • Design around your "atom" of coordination — Create one Durable Object per logical unit (chat room, game session, user) instead of a global singleton that becomes a bottleneck.
    • Use SQLite storage with RPC methods — SQLite-backed Durable Objects with typed RPC methods provide the best developer experience and performance (see the sketch after this list).
    • Understand input and output gates — Learn how Cloudflare's runtime prevents data races by default, how write coalescing works, and when to use blockConcurrencyWhile().
    • Leverage Hibernatable WebSockets — Reduce costs for real-time applications by allowing Durable Objects to sleep while maintaining WebSocket connections.
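
    As a minimal illustrative sketch of the first two rules (class, binding, and table names are hypothetical, not taken from the guide), a SQLite-backed Durable Object per chat room with a typed RPC method might look like this:

      import { DurableObject } from "cloudflare:workers";

      // One Durable Object per logical unit: each chat room gets its own object.
      export class ChatRoom extends DurableObject {
        constructor(ctx: DurableObjectState, env: unknown) {
          super(ctx, env);
          // SQLite-backed storage: sql.exec() is synchronous, so the schema
          // can be set up when the object starts.
          ctx.storage.sql.exec(
            "CREATE TABLE IF NOT EXISTS messages (user TEXT, text TEXT)"
          );
        }

        // Typed RPC method, callable directly on the stub from a Worker:
        //   const stub = env.CHAT_ROOM.get(env.CHAT_ROOM.idFromName(room));
        //   await stub.addMessage("alice", "hi");
        async addMessage(user: string, text: string): Promise<void> {
          this.ctx.storage.sql.exec(
            "INSERT INTO messages (user, text) VALUES (?, ?)",
            user,
            text
          );
        }
      }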

    The testing documentation has also been updated with modern patterns using @cloudflare/vitest-pool-workers, including examples for testing SQLite storage, alarms, and direct instance access.
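
    As an illustrative sketch (reusing the hypothetical ChatRoom object above; the CHAT_ROOM binding is assumed to be declared in the test project's Wrangler config), a test with direct instance access might look like this:

      import { env, runInDurableObject } from "cloudflare:test";
      import { expect, it } from "vitest";

      it("stores a message", async () => {
        const id = env.CHAT_ROOM.idFromName("test-room");
        const stub = env.CHAT_ROOM.get(id);
        await runInDurableObject(stub, async (instance, state) => {
          // Call the RPC method on the real instance, then inspect its
          // SQLite storage directly.
          await instance.addMessage("alice", "hi");
          const rows = state.storage.sql.exec("SELECT * FROM messages").toArray();
          expect(rows.length).toBe(1);
        });
      });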

  • Dec 4, 2025
    • Parsed from source:
      Dec 4, 2025
    • Detected by Releasebot:
      Dec 5, 2025

    Storage by Cloudflare

    Connect to remote databases during local development with wrangler dev

    You can now connect directly to remote databases and databases requiring TLS with wrangler dev. This lets you run your Worker code locally while connecting to remote databases, without needing to use wrangler dev --remote.

    The localConnectionString field and CLOUDFLARE_HYPERDRIVE_LOCAL_CONNECTION_STRING_ environment variable can be used to configure the connection string used by wrangler dev.
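
    For example, in wrangler.jsonc (the binding name, Hyperdrive ID, and connection credentials are placeholders):

      {
        "hyperdrive": [
          {
            "binding": "HYPERDRIVE",
            "id": "<your-hyperdrive-id>",
            "localConnectionString": "postgres://user:password@example-db-host:5432/mydb"
          }
        ]
      }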

    Learn more about local development with Hyperdrive.

  • December 2025
    • No date parsed from source.
    • Detected by Releasebot:
      Dec 11, 2025

    Storage by Cloudflare

    Billing for SQLite Storage

    Storage billing for SQLite-backed Durable Objects starts January 2026, with a target date of January 7. View usage on the Durable Objects page and prune data to reduce costs before billing kicks in. Free plan users won’t be charged; paid plan users pay per SQLite storage pricing.

    Storage billing for SQLite-backed Durable Objects will be enabled in January 2026, with a target date of January 7, 2026 (no earlier).

    To view your SQLite storage usage, go to the Durable Objects page in the Cloudflare dashboard.

    If you do not want to incur costs, reduce your SQLite storage usage ahead of the January 7 target, for example by optimizing queries or deleting unnecessary stored data. Only usage on and after the billing target date will incur charges.

    Developers on the Workers Paid plan whose Durable Objects SQLite storage usage exceeds the included limits will incur charges according to the SQLite storage pricing announced in September 2024 with the public beta. Developers on the Workers Free plan will not be charged.

    Compute billing for SQLite-backed Durable Objects has been enabled since the initial public beta. SQLite-backed Durable Objects currently incur charges for requests and duration, and no changes are being made to compute billing.

    For more information about SQLite storage pricing and limits, refer to the Durable Objects pricing documentation.

  • Nov 21, 2025
    • Parsed from source:
      Nov 21, 2025
    • Detected by Releasebot:
      Nov 26, 2025

    Storage by Cloudflare

    Mount R2 buckets in Containers

    Containers now support mounting R2 buckets as FUSE volumes, letting apps use standard file system operations on datasets, models, configs, and large assets without bloating images. Install tigrisfs, s3fs, or gcsfuse in your image and mount at startup for seamless access.

    Containers now support mounting R2 buckets as FUSE (Filesystem in Userspace) volumes

    Containers now support mounting R2 buckets as FUSE (Filesystem in Userspace) volumes, allowing applications to interact with R2 using standard filesystem operations.

    Common use cases include:

    • Bootstrapping containers with datasets, models, or dependencies for sandboxes and agent environments
    • Persisting user configuration or application state without managing downloads
    • Accessing large static files without bloating container images or downloading at startup

    FUSE adapters like tigrisfs, s3fs, and gcsfuse can be installed in your container image and configured to mount buckets at startup.
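
    As an illustrative sketch (the bucket name, mount point, credentials file, and account ID are placeholders), mounting an R2 bucket with s3fs against R2's S3-compatible endpoint at container startup might look like this:

      # Credentials file contains <ACCESS_KEY_ID>:<SECRET_ACCESS_KEY>.
      s3fs my-bucket /mnt/r2 \
        -o passwd_file=/etc/passwd-s3fs \
        -o url=https://<ACCOUNT_ID>.r2.cloudflarestorage.com \
        -o use_path_request_style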

    See the Mount R2 buckets with FUSE example for a complete guide to mounting R2 buckets or other S3-compatible storage buckets within your containers.

  • Nov 5, 2025
    • Parsed from source:
      Nov 5, 2025
    • Detected by Releasebot:
      Nov 6, 2025

    Storage by Cloudflare

    D1 can restrict data location with jurisdictions

    D1 Jurisdiction Setting

    You can now set a jurisdiction when creating a D1 database to guarantee where your database runs and stores data. Jurisdictions can help you comply with data localization regulations such as GDPR. Supported jurisdictions include eu and fedramp.

    A jurisdiction can only be set at database creation time, via Wrangler, the REST API, or the dashboard UI, and cannot be added or changed after the database exists.
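
    As a sketch, creating a database with a jurisdiction via Wrangler might look like the following; the flag name here is an assumption, so check D1's documentation for the exact syntax:

      # Hypothetical sketch; the --jurisdiction flag name is an assumption.
      npx wrangler d1 create my-database --jurisdiction eu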

    To learn more, visit D1's data location documentation.

  • Oct 31, 2025
    • Parsed from source:
      Oct 31, 2025
    • Detected by Releasebot:
      Nov 3, 2025

    Storage by Cloudflare

    Workers WebSocket message size limit increased from 1 MiB to 32 MiB

    WebSocket message size limit for Workers

    Workers, including those using Durable Objects and Browser Rendering, may now process WebSocket messages up to 32 MiB in size. Previously, this limit was 1 MiB.

    This change allows Workers to handle use cases requiring large message sizes, such as processing Chrome DevTools Protocol messages.

    For more information, please see the Durable Objects startup limits.

  • Oct 16, 2025
    • Parsed from source:
      Oct 16, 2025
    • Detected by Releasebot:
      Oct 28, 2025

    Storage by Cloudflare

    View and edit Durable Object data in UI with Data Studio (Beta)

    Cloudflare launches Data Studio for Durable Objects with a UI editor to view and write storage from the dashboard. SQLite-backed Durable Objects gain easier data access for prototyping and debugging. Admin-only access; queries are audited and billed normally.


    You can now view and write to each Durable Object's storage using a UI editor on the Cloudflare dashboard. Data Studio is available only for Durable Objects using SQLite storage.


    Data Studio unlocks easier data access for Durable Objects, from prototyping application data models to debugging production storage usage. Previously, querying your Durable Objects data required deploying a Worker.

    To access a Durable Object, you can provide an object's unique name or ID generated by Cloudflare. Data Studio requires you to have at least the Workers Platform Admin role, and all queries are captured with audit logging for your security and compliance needs. Queries executed by Data Studio send requests to your remote, deployed objects and incur normal usage billing.

    To learn more, visit the Data Studio documentation. If you have feedback or suggestions for the new Data Studio, please share your experience on Discord ↗.

  • Oct 6, 2025
    • Parsed from source:
      Oct 6, 2025
    • Detected by Releasebot:
      Nov 3, 2025

    Storage by Cloudflare

    R2 Data Catalog table-level compaction

    Enable compaction for Apache Iceberg tables

    You can now enable compaction for individual Apache Iceberg ↗ tables in R2 Data Catalog, giving you fine-grained control over different workloads.

    This allows you to:

    • Apply different target file sizes per table
    • Disable compaction for specific tables
    • Optimize based on table-specific access patterns

    Learn more at Manage catalogs.

  • Sep 25, 2025
    • Parsed from source:
      Sep 25, 2025
    • Detected by Releasebot:
      Nov 3, 2025
    • Modified by Releasebot:
      Dec 20, 2025

    Storage by Cloudflare

    R2 Data Catalog now supports compaction

    R2 Data Catalog adds automatic compaction for Apache Iceberg tables to boost query performance by merging small files. Enable it in bucket settings or via Wrangler, with guidance in manage catalogs and compaction docs.

    You can now enable automatic compaction for Apache Iceberg ↗ tables in R2 Data Catalog to improve query performance.

    Compaction is the process of combining a group of small files into fewer larger files. This is an important maintenance operation: it helps keep query performance consistent by reducing the number of files that need to be scanned.

    To enable automatic compaction, go to R2 Data Catalog in your R2 bucket settings in the dashboard.

    Or, with Wrangler, run a command along these lines (the exact subcommand and arguments here are assumptions; refer to manage catalogs for the current syntax):
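
      # Sketch only: this subcommand and its arguments are assumptions;
      # see the manage catalogs documentation for the exact syntax.
      npx wrangler r2 bucket catalog compaction enable <BUCKET_NAME>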

    To get started with compaction, check out manage catalogs. For best practices and limitations, refer to about compaction.

