- Jan 9, 2026
- Parsed from source:Jan 9, 2026
- Detected by Releasebot:Jan 10, 2026
Get notified when your Workers builds succeed or fail
Cloudflare Workers adds event-driven build notifications via Event Subscriptions. Publish build events to a Queue and route alerts to Slack, Discord, email, or any webhook. Deploy the template to your account for real-time build status, errors, and metadata.
Notifications for Workers Builds
You can now receive notifications when your Workers builds start, succeed, fail, or are cancelled using Event Subscriptions.
Workers Builds publishes events to a Queue. Your Worker can read those messages and then send notifications wherever you need — Slack, Discord, email, or any webhook endpoint.
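As a sketch of the consumer side, a queue handler can turn each build event into a chat payload before forwarding it to a webhook. The event shape below (`status`, `branch`, `commit`, `errorMessage`) is illustrative only, not the exact schema Workers Builds publishes; check the Event Subscriptions documentation for the real fields.

```typescript
// Illustrative build-event shape; the real schema comes from Event Subscriptions.
interface BuildEvent {
  status: "started" | "succeeded" | "failed" | "cancelled";
  branch: string;
  commit: string;
  errorMessage?: string;
}

// Turn one build event into a minimal Slack-style message payload.
function formatBuildMessage(event: BuildEvent): { text: string } {
  const base = `Build ${event.status} on ${event.branch} (${event.commit.slice(0, 7)})`;
  return {
    text: event.errorMessage ? `${base}\n> ${event.errorMessage}` : base,
  };
}

// In a Worker, a queue() handler would map each consumed message through
// formatBuildMessage and POST the payload to the webhook URL (omitted here).
```

The template linked below handles the real event schema and delivery; this sketch only shows the shape of the transformation step.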
You can deploy this Worker ↗ to your own Cloudflare account to send build notifications to Slack.
The template includes:
- Build status with Preview/Live URLs for successful deployments
- Inline error messages for failed builds
- Branch, commit hash, and author name
Slack notifications showing build events
For setup instructions, refer to the template README ↗ or the Event Subscriptions documentation.
- Dec 18, 2025
- Parsed from source:Dec 18, 2025
- Detected by Releasebot:Dec 19, 2025
- Modified by Releasebot:Jan 9, 2026
R2 Data Catalog now supports automatic snapshot expiration
R2 Data Catalog adds automatic snapshot expiration for Apache Iceberg tables to trim old metadata, speed up queries, and cut storage costs. Configure retention with --older-than-days and --retain-last to keep recent snapshots safe. This pairs with automatic compaction for fully managed maintenance.
R2 Data Catalog now supports automatic snapshot expiration for Apache Iceberg tables.
In Apache Iceberg, a snapshot is metadata that represents the state of a table at a given point in time. Every mutation creates a new snapshot. Snapshots enable powerful features like time travel queries and rollback, but they accumulate over time.
Without regular cleanup, these accumulated snapshots can lead to:
- Metadata overhead
- Slower table operations
- Increased storage costs
Snapshot expiration in R2 Data Catalog automatically removes old table snapshots based on your configured retention policy, improving performance and reducing storage costs.
Enable catalog-level snapshot expiration
Expire snapshots older than 7 days while always retaining at least 10 recent snapshots:

```sh
npx wrangler r2 bucket catalog snapshot-expiration enable my-bucket \
  --older-than-days 7 \
  --retain-last 10
```

Snapshot expiration uses two parameters to determine which snapshots to remove:
- --older-than-days: age threshold in days
- --retain-last: minimum snapshot count to retain
Both conditions must be met before a snapshot is expired, ensuring you always retain recent snapshots even if they exceed the age threshold.
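The two-condition rule can be modeled as a pure function: a snapshot is eligible for expiration only if it is older than the age threshold and falls outside the most recent retain-last snapshots. This is a conceptual sketch of the policy, not the catalog's internal implementation.

```typescript
interface Snapshot {
  id: string;
  timestampMs: number;
}

// Return the IDs of snapshots eligible for expiration: older than the age
// threshold AND not among the `retainLast` most recent snapshots.
function expirableSnapshots(
  snapshots: Snapshot[],
  olderThanDays: number,
  retainLast: number,
  nowMs: number,
): string[] {
  const cutoff = nowMs - olderThanDays * 24 * 60 * 60 * 1000;
  const newestFirst = [...snapshots].sort((a, b) => b.timestampMs - a.timestampMs);
  return newestFirst
    .slice(retainLast) // the newest `retainLast` snapshots are always kept
    .filter((s) => s.timestampMs < cutoff) // the rest expire only once old enough
    .map((s) => s.id);
}
```

For example, with a 7-day threshold and retain-last of 3, a 10-day-old snapshot survives if it is still among the three newest.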
This feature complements automatic compaction, which optimizes query performance by combining small data files into larger ones. Together, these automatic maintenance operations keep your Iceberg tables performant and cost-efficient without manual intervention.
To learn more about snapshot expiration and how to configure it, visit our table maintenance documentation or see how to manage catalogs.
- Dec 15, 2025
- Parsed from source:Dec 15, 2025
- Detected by Releasebot:Dec 16, 2025
New Best Practices guide for Durable Objects
A new Rules of Durable Objects guide debuts with best practices for design, storage, concurrency, and anti-patterns, plus a refreshed testing guide using Vitest pool workers. Learn to use one Durable Object per logical unit, SQLite storage with RPC, and Hibernatable WebSockets to cut costs.
Rules of Durable Objects guide
A new Rules of Durable Objects guide is now available, providing opinionated best practices for building effective Durable Objects applications. This guide covers design patterns, storage strategies, concurrency, and common anti-patterns to avoid.
Key guidance includes:
- Design around your "atom" of coordination — Create one Durable Object per logical unit (chat room, game session, user) instead of a global singleton that becomes a bottleneck.
- Use SQLite storage with RPC methods — SQLite-backed Durable Objects with typed RPC methods provide the best developer experience and performance.
- Understand input and output gates — Learn how Cloudflare's runtime prevents data races by default, how write coalescing works, and when to use blockConcurrencyWhile().
- Leverage Hibernatable WebSockets — Reduce costs for real-time applications by allowing Durable Objects to sleep while maintaining WebSocket connections.
The testing documentation has also been updated with modern patterns using @cloudflare/vitest-pool-workers, including examples for testing SQLite storage, alarms, and direct instance access.
- Dec 4, 2025
- Parsed from source:Dec 4, 2025
- Detected by Releasebot:Dec 5, 2025
- Modified by Releasebot:Jan 9, 2026
Connect to remote databases during local development with wrangler dev
Wrangler dev now lets you connect to remote and TLS-requiring databases during local development, so you can run Worker code against live data without the --remote flag. Configure the connection with localConnectionString or the CLOUDFLARE_HYPERDRIVE_LOCAL_CONNECTION_STRING_<BINDING_NAME> environment variable. This improves Hyperdrive local development.
Local development with Hyperdrive
You can now connect directly to remote databases and databases requiring TLS with wrangler dev. This lets you run your Worker code locally while connecting to remote databases, without needing to use wrangler dev --remote.
The localConnectionString field and the CLOUDFLARE_HYPERDRIVE_LOCAL_CONNECTION_STRING_<BINDING_NAME> environment variable can be used to configure the connection string used by wrangler dev.
```jsonc
{
  "hyperdrive": [
    {
      "binding": "HYPERDRIVE",
      "id": "your-hyperdrive-id",
      "localConnectionString": "postgres://user:password@hostname:5432/database?sslmode=require"
    }
  ]
}
```

Learn more about local development with Hyperdrive.
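To illustrate how the two configuration methods relate, here is a small resolver sketch. It assumes, as the Hyperdrive documentation describes, that the environment variable takes priority over the config file's localConnectionString when both are set; treat that precedence as an assumption to verify against the docs.

```typescript
// Resolve which local connection string `wrangler dev` would use for a
// Hyperdrive binding. Assumption: the CLOUDFLARE_HYPERDRIVE_LOCAL_CONNECTION_STRING_<BINDING_NAME>
// environment variable overrides the config field when both are present.
function resolveLocalConnectionString(
  bindingName: string,
  env: Record<string, string | undefined>,
  configValue?: string,
): string | undefined {
  const envVar = env[`CLOUDFLARE_HYPERDRIVE_LOCAL_CONNECTION_STRING_${bindingName}`];
  return envVar ?? configValue;
}
```

This makes it easy to keep a shared localConnectionString in version control while individual developers point at their own databases via the environment variable.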
- December 2025
- No date parsed from source.
- Detected by Releasebot:Dec 11, 2025
Billing for SQLite Storage
Storage billing for SQLite-backed Durable Objects starts January 2026, with a target date of January 7. View usage on the Durable Objects page and prune data to reduce costs before billing kicks in. Free plan users won’t be charged; paid plan users pay per SQLite storage pricing.
Storage billing for SQLite-backed Durable Objects will be enabled in January 2026, with a target date of January 7, 2026 (no earlier).
To view your SQLite storage usage, go to the Durable Objects page in the Cloudflare dashboard.
If you do not want to incur costs, please take action, such as optimizing queries or deleting unnecessary stored data, to reduce your SQLite storage usage ahead of the January 7 target. Only usage on and after the billing target date will incur charges.
Developers on the Workers Paid plan whose Durable Objects SQLite storage usage exceeds the included limits will incur charges according to the SQLite storage pricing announced in September 2024 with the public beta. Developers on the Workers Free plan will not be charged.
Compute billing for SQLite-backed Durable Objects has been enabled since the initial public beta. SQLite-backed Durable Objects currently incur charges for requests and duration, and no changes are being made to compute billing.
For more information about SQLite storage pricing and limits, refer to the Durable Objects pricing documentation.
- Nov 21, 2025
- Parsed from source:Nov 21, 2025
- Detected by Releasebot:Nov 26, 2025
- Modified by Releasebot:Jan 9, 2026
Mount R2 buckets in Containers
Containers gain R2 bucket mounting via FUSE for seamless filesystem access. Use cases cover bootstrapping datasets, persisting config, and serving large files without bloated images, with tigrisfs as a ready option and startup scripts included.
Containers now support mounting R2 buckets as FUSE (Filesystem in Userspace) volumes
Containers now support mounting R2 buckets as FUSE (Filesystem in Userspace) volumes, allowing applications to interact with R2 using standard filesystem operations.
Common use cases include:
- Bootstrapping containers with datasets, models, or dependencies for sandboxes and agent environments
- Persisting user configuration or application state without managing downloads
- Accessing large static files without bloating container images or downloading at startup
FUSE adapters like tigrisfs, s3fs, and gcsfuse can be installed in your container image and configured to mount buckets at startup.
```dockerfile
FROM alpine:3.20

# Install FUSE and dependencies
RUN apk update && \
    apk add --no-cache ca-certificates fuse curl bash

# Install tigrisfs
RUN ARCH=$(uname -m) && \
    if [ "$ARCH" = "x86_64" ]; then ARCH="amd64"; fi && \
    if [ "$ARCH" = "aarch64" ]; then ARCH="arm64"; fi && \
    VERSION=$(curl -s https://api.github.com/repos/tigrisdata/tigrisfs/releases/latest | grep -o '"tag_name": "[^"]*' | cut -d '"' -f4) && \
    curl -L "https://github.com/tigrisdata/tigrisfs/releases/download/${VERSION}/tigrisfs_${VERSION#v}_linux_${ARCH}.tar.gz" -o /tmp/tigrisfs.tar.gz && \
    tar -xzf /tmp/tigrisfs.tar.gz -C /usr/local/bin/ && \
    rm /tmp/tigrisfs.tar.gz && \
    chmod +x /usr/local/bin/tigrisfs

# Create startup script that mounts bucket
RUN printf '#!/bin/sh\nset -e\nmkdir -p /mnt/r2\nR2_ENDPOINT="https://${R2_ACCOUNT_ID}.r2.cloudflarestorage.com"\n/usr/local/bin/tigrisfs --endpoint "${R2_ENDPOINT}" -f "${BUCKET_NAME}" /mnt/r2 &\nsleep 3\nls -lah /mnt/r2\n' > /startup.sh && chmod +x /startup.sh

CMD ["/startup.sh"]
```

See the Mount R2 buckets with FUSE example for a complete guide on mounting R2 buckets or other S3-compatible storage buckets within your containers.
- Nov 5, 2025
- Parsed from source:Nov 5, 2025
- Detected by Releasebot:Nov 6, 2025
- Modified by Releasebot:Jan 9, 2026
D1 can restrict data localization with jurisdictions
A new D1 feature lets you set a jurisdiction at database creation to guarantee where data is stored and help with GDPR compliance. Supported options include eu and fedramp; the setting can only be applied at creation time via wrangler, the REST API, or the UI.
D1 Jurisdiction
You can now set a jurisdiction when creating a D1 database to guarantee where your database runs and stores data. Jurisdictions can help you comply with data localization regulations such as GDPR. Supported jurisdictions include eu and fedramp.
A jurisdiction can only be set at database creation time via wrangler, the REST API, or the UI, and cannot be added or updated after the database exists.
```sh
npx wrangler@latest d1 create db-with-jurisdiction --jurisdiction eu
```

```sh
curl -X POST "https://api.cloudflare.com/client/v4/accounts/<account_id>/d1/database" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"name": "db-with-jurisdiction", "jurisdiction": "eu"}'
```

To learn more, visit D1's data location documentation.
- Oct 31, 2025
- Parsed from source:Oct 31, 2025
- Detected by Releasebot:Nov 3, 2025
Workers WebSocket message size limit increased from 1 MiB to 32 MiB
WebSocket message size limit for Workers
Workers, including those using Durable Objects and Browser Rendering, may now process WebSocket messages up to 32 MiB in size. Previously, this limit was 1 MiB.
This change allows Workers to handle use cases requiring large message sizes, such as processing Chrome Devtools Protocol messages.
For more information, refer to the Durable Objects limits documentation.
- Oct 16, 2025
- Parsed from source:Oct 16, 2025
- Detected by Releasebot:Oct 28, 2025
View and edit Durable Object data in UI with Data Studio (Beta)
Cloudflare launches Data Studio for Durable Objects with a UI editor to view and write storage from the dashboard. SQLite-backed Durable Objects gain easier data access for prototyping and debugging. Admin-only access; queries are audited and billed normally.
You can now view and write to each Durable Object's storage using a UI editor on the Cloudflare dashboard. Only Durable Objects using SQLite storage can use Data Studio.
Data Studio
Data Studio makes it easier to work with Durable Objects data, from prototyping application data models to debugging production storage usage. Previously, querying your Durable Objects data required deploying a Worker.
To access a Durable Object, provide the object's unique name or its Cloudflare-generated ID. Data Studio requires at least the Workers Platform Admin role, and all queries are captured with audit logging for your security and compliance needs. Queries executed by Data Studio send requests to your remote, deployed objects and incur normal usage billing.
To learn more, visit the Data Studio documentation. If you have feedback or suggestions for the new Data Studio, please share your experience on Discord ↗
- Oct 6, 2025
- Parsed from source:Oct 6, 2025
- Detected by Releasebot:Nov 3, 2025
- Modified by Releasebot:Jan 9, 2026
R2 Data Catalog table-level compaction
You can now enable compaction for individual Apache Iceberg ↗ tables in R2 Data Catalog, giving you fine-grained control over different workloads.
Enable compaction for a specific table (no token required):

```sh
npx wrangler r2 bucket catalog compaction enable <BUCKET> <NAMESPACE> <TABLE> --target-size 256
```

This allows you to:
- Apply different target file sizes per table
- Disable compaction for specific tables
- Optimize based on table-specific access patterns
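Conceptually, compaction combines small data files into files near the target size. The greedy bin-packing sketch below only illustrates that idea; R2 Data Catalog's actual compaction planning is internal to the service, and the function name and shape here are hypothetical.

```typescript
// Greedy sketch: group file sizes (in MiB) into bins whose totals stay at or
// under targetSizeMiB, modeling how small files might be combined into larger
// ones. Not the catalog's real algorithm.
function planCompactionBins(fileSizesMiB: number[], targetSizeMiB: number): number[][] {
  const bins: number[][] = [];
  let current: number[] = [];
  let currentTotal = 0;
  for (const size of fileSizesMiB) {
    // Start a new bin when adding this file would overshoot the target.
    if (current.length > 0 && currentTotal + size > targetSizeMiB) {
      bins.push(current);
      current = [];
      currentTotal = 0;
    }
    current.push(size);
    currentTotal += size;
  }
  if (current.length > 0) bins.push(current);
  return bins;
}
```

A smaller per-table target size yields more, smaller output files; a larger one favors fewer, bigger files, which is the trade-off the per-table setting exposes.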
Learn more at Manage catalogs.