Databricks Release Notes
Last updated: Feb 7, 2026
- Feb 6, 2026
- Date parsed from source: Feb 6, 2026
- First seen by Releasebot: Feb 7, 2026
Deploy Databricks apps from Git repositories (Beta)
You can now deploy Databricks apps directly from Git repositories without uploading files to the workspace. Configure a repository for your app and deploy from any branch, tag, or commit. See Deploy from a Git repository.
- Feb 6, 2026
- Date parsed from source: Feb 6, 2026
- First seen by Releasebot: Feb 7, 2026
Query tags for SQL warehouses (Public Preview)
You can now apply custom key-value tags to SQL workloads on Databricks SQL warehouses for grouping, filtering, and cost attribution. Query tags appear in the system.query.history table and on the Query History page of the Databricks UI, allowing you to attribute warehouse costs by business context and identify sources of long-running queries. Tags can be set using session configuration parameters, the SET QUERY_TAGS SQL statement, or through connectors including dbt, Power BI, Tableau, Python, Node.js, Go, JDBC, and ODBC. See Query tags.
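As a minimal sketch of the workflow described above — tagging a session with the `SET QUERY_TAGS` statement and then filtering the query history system table by tag. The tag value format and the `query_tags` column name are assumptions; see Query tags for the authoritative syntax:

```sql
-- Tag subsequent statements in this session (key:value format is assumed)
SET QUERY_TAGS = 'team:finance,dashboard:daily_revenue';

-- Later, attribute warehouse usage by business context via query history
-- (the query_tags column name here is an assumption)
SELECT statement_text, total_duration_ms
FROM system.query.history
WHERE query_tags LIKE '%team:finance%';
```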
- Feb 5, 2026
- Date parsed from source: Feb 5, 2026
- First seen by Releasebot: Feb 7, 2026
Google Ads connector (Beta)
The managed Google Ads connector in Lakeflow Connect allows you to ingest data from Google Ads into Databricks. See Google Ads connector.
- Feb 5, 2026
- Date parsed from source: Feb 5, 2026
- First seen by Releasebot: Feb 7, 2026
Decrypt query history system table fields (Public Preview)
For workspaces enabled for customer-managed keys (CMK) for managed services, the system catalog's data encryption settings can be configured to decrypt the statement_text and error_message fields in the query history system table. See Reading encrypted fields.
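Once decryption is configured, the two fields read like any other column. A minimal sketch, using the table and field names documented above:

```sql
-- statement_text and error_message are the CMK-protected fields;
-- they return plaintext only after the system catalog's data
-- encryption settings have been configured for decryption.
SELECT statement_text, error_message
FROM system.query.history
WHERE error_message IS NOT NULL;
```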
- Feb 5, 2026
- Date parsed from source: Feb 5, 2026
- First seen by Releasebot: Feb 5, 2026
Applying filters, masks, tags, and comments to pipeline-created datasets is now GA
Using CREATE, ALTER, or the Lakeflow UI to modify ETL and ingestion pipelines (Lakeflow Spark Declarative Pipelines and Lakeflow Connect) is now GA. You can modify pipelines to apply row filters, column masks, table and column tags, column comments, and (for materialized views only) table comments.
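As an illustrative sketch, applying a column mask, table tags, and a table comment to pipeline-created datasets might look like the following. The table, mask function, and tag names are hypothetical, and exact clause support varies by object type; see the ALTER STREAMING TABLE and ALTER MATERIALIZED VIEW references for authoritative syntax:

```sql
-- Hypothetical pipeline-created streaming table and mask function
ALTER STREAMING TABLE main.sales.orders
  ALTER COLUMN customer_email SET MASK main.sales.mask_email;

-- Table-level tags for governance and cost attribution
ALTER STREAMING TABLE main.sales.orders
  SET TAGS ('department' = 'sales', 'pii' = 'true');

-- Table comments are supported on materialized views
COMMENT ON MATERIALIZED VIEW main.sales.daily_revenue IS
  'Daily revenue aggregated from orders';
```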
See ALTER STREAMING TABLE and ALTER MATERIALIZED VIEW. For general information about using ALTER with Lakeflow Spark Declarative Pipelines, see Use ALTER statements with pipeline datasets.
- Feb 5, 2026
- Date parsed from source: Feb 5, 2026
- First seen by Releasebot: Feb 5, 2026
Anthropic Claude Opus 4.6 now available as a Databricks-hosted model
Mosaic AI Model Serving now supports Anthropic Claude Opus 4.6 as a Databricks-hosted model.
To access this model, use:
- Foundation Model APIs pay-per-token (see Query reasoning models and Query vision models)
- Batch inference workloads using AI Functions
- Feb 5, 2026
- Date parsed from source: Feb 5, 2026
- First seen by Releasebot: Feb 5, 2026
Default SQL warehouse settings (General Availability)
Workspace administrators can set a default SQL warehouse that is automatically selected in SQL authoring surfaces, including the SQL editor, AI/BI dashboards, AI/BI Genie, Alerts, and Catalog Explorer. Individual users can also override the workspace default by setting their own user-level default warehouse. See Set a default SQL warehouse for the workspace and Set a user-level default warehouse.
- Feb 5, 2026
- Date parsed from source: Feb 5, 2026
- First seen by Releasebot: Feb 5, 2026
View warehouse activity details (Beta)
You can now view detailed annotations on the Running clusters chart in the SQL warehouse monitoring UI to understand why warehouses remain active. The Activity details toggle displays color-coded bars that show query activity, fetching queries, open sessions, and idle states. Hover over bars to see metadata, or click on fetching activity to filter the query history table. See Monitor a SQL warehouse.
- Feb 4, 2026
- Date parsed from source: Feb 4, 2026
- First seen by Releasebot: Feb 5, 2026
Connect Databricks Assistant to MCP servers
You can now connect Databricks Assistant in agent mode to external tools and data sources through the Model Context Protocol (MCP). The Assistant can use any MCP servers that have been added to your workspace and that you have permission to use.
See Connect Databricks Assistant to MCP servers.
- Feb 3, 2026
- Date parsed from source: Feb 3, 2026
- First seen by Releasebot: Feb 4, 2026
Select tables and create pivot tables in Google Sheets
You can now directly select Databricks tables from the catalog explorer and import data as pivot tables in Google Sheets using the Databricks Connector. See Connect to Databricks from Google Sheets.