- Aug 5, 2024
- Parsed from source:Aug 5, 2024
- Detected by Releasebot:Dec 9, 2025
Drizzle Kit v0.23.2 release
Aug 5, 2024
- Fixed a bug in PostgreSQL push and introspect where, when a schemaFilter object was passed, enums were detected even in schemas that were not defined in the schemaFilter.
- Fixed the drizzle-kit up command to work as expected, starting from the sequences release.
- Aug 5, 2024
DrizzleORM v0.32.2 release
- Fix AWS Data API type hints bugs in RQB
- Fix set transactions in MySQL bug
- Add forwarding dependencies within useLiveQuery, fixes #2651
- Export additional types from SQLite package, like AnySQLiteUpdate
- Jul 23, 2024
DrizzleORM v0.32.1 release
- Fix typings for indexes and allow creating indexes on 3+ columns mixing columns and expressions
- Added support for “limit 0” in all dialects - closes #2011
- Make inArray and notInArray accept empty list, closes #1295
- Fix typo in lt typedoc
- Fix wrong example in README.md
- Jul 10, 2024
DrizzleORM v0.32.0 release
Drizzle delivers across ORM and drizzle-kit with MySQL returning IDs, PostgreSQL sequences and identity columns, generated columns for multiple dialects, and enhanced migrations. A new push --force and flexible migration prefixes simplify risky changes and tool integration.
Release notes for [email protected] and [email protected]
It’s not mandatory to upgrade both packages, but to use the new features in both queries and migrations, you will need to upgrade both.
New Features
🎉 MySQL $returningId() function
MySQL itself doesn’t have native support for RETURNING after INSERT. There is only one way to do it for primary keys with autoincrement (or serial) types, where you can access the insertId and affectedRows fields. We’ve prepared an automatic way for you to handle such cases with Drizzle, so you receive all inserted IDs as separate objects. Also, with Drizzle you can specify a primary key with the $default function that will generate custom primary keys at runtime. We will also return those generated keys for you in the $returningId() call.
If there are no primary keys, the type will be {}[] for such queries.
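For the autoincrement case, the mechanics can be sketched in plain TypeScript: MySQL reports only the insertId of the first inserted row plus affectedRows, and the remaining IDs follow sequentially. The helper below is a hypothetical illustration of that mapping, not Drizzle’s actual implementation:

```typescript
// Hypothetical sketch: derive per-row IDs of a batch insert from MySQL's
// insertId (ID of the first inserted row) and affectedRows (row count).
// Not Drizzle's internal code; shown only to illustrate the mechanics.
function collectInsertedIds(insertId: number, affectedRows: number): { id: number }[] {
  const ids: { id: number }[] = [];
  for (let i = 0; i < affectedRows; i++) {
    ids.push({ id: insertId + i });
  }
  return ids;
}
```

With insertId = 10 and affectedRows = 3, this yields [{ id: 10 }, { id: 11 }, { id: 12 }], matching the shape of the per-row objects $returningId() resolves to.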
🎉 PostgreSQL Sequences
You can now specify sequences in Postgres within any schema you need and define all the available properties.

Example

```typescript
import { pgSchema, pgSequence } from "drizzle-orm/pg-core";

// No params specified
export const customSequence = pgSequence("name");

// Sequence with params
export const customSequence = pgSequence("name", {
  startWith: 100,
  maxValue: 10000,
  minValue: 100,
  cycle: true,
  cache: 10,
  increment: 2,
});

// Sequence in custom schema
export const customSchema = pgSchema('custom_schema');
export const customSequence = customSchema.sequence("name");
```

🎉 PostgreSQL Identity Columns
As mentioned, the serial type in Postgres is outdated and should be deprecated. Ideally, you should not use it. Identity columns are the recommended way to specify sequences in your schema, which is why we are introducing the identity columns feature.

Example
```typescript
import { pgTable, integer, text } from 'drizzle-orm/pg-core';

export const ingredients = pgTable("ingredients", {
  id: integer("id").primaryKey().generatedAlwaysAsIdentity({ startWith: 1000 }),
  name: text("name").notNull(),
  description: text("description"),
});
```

You can specify all properties available for sequences in the .generatedAlwaysAsIdentity() function. Additionally, you can specify custom names for these sequences.
PostgreSQL docs reference.
🎉 PostgreSQL Generated Columns
You can now specify generated columns on any column supported by PostgreSQL to use with generated columns.

Example with a generated column for tsvector:
Note: we will add the tsVector column type before the latest release.
```typescript
import { SQL, sql } from "drizzle-orm";
import { customType, index, integer, pgTable, text } from "drizzle-orm/pg-core";

const tsVector = customType<{ data: string }>({
  dataType() {
    return "tsvector";
  },
});

export const test = pgTable("test", {
  id: integer("id").primaryKey().generatedAlwaysAsIdentity(),
  content: text("content"),
  contentSearch: tsVector("content_search", { dimensions: 3 }).generatedAlwaysAs(
    (): SQL => sql`to_tsvector('english', ${test.content})`,
  ),
}, (t) => ({
  idx: index("idx_content_search").using("gin", t.contentSearch),
}));
```

In case you don’t need to reference any columns from your table, you can use just the sql template or a string.
```typescript
export const users = pgTable("users", {
  id: integer("id"),
  name: text("name"),
  generatedName: text("gen_name").generatedAlwaysAs(sql`hello world!`),
  generatedName1: text("gen_name1").generatedAlwaysAs("hello world!"),
});
```

🎉 MySQL Generated Columns
You can now specify generated columns on any column supported by MySQL to use with generated columns.

You can specify both stored and virtual options; for more info, check the MySQL docs.

MySQL also has a few limitations for such columns’ usage, which are described here.
Drizzle Kit will also have limitations for the push command:

- You can’t change the generated constraint expression and type using push. Drizzle Kit will ignore this change. To make it work, you would need to drop the column, push, and then add a column with the new expression. This was done because of the complex mapping from the database side: the schema expression is modified on the database side and, on introspection, we get a different string. We can’t be sure whether you changed this expression or whether the database changed and formatted it. Since these are generated columns and push is mostly used for prototyping on a local database, it should be fast to drop and recreate them, and because these columns are generated, all their data will be restored.
- generate should have no limitations.
Example
```typescript
export const users = mysqlTable("users", {
  id: int("id"),
  id2: int("id2"),
  name: text("name"),
  generatedName: text("gen_name").generatedAlwaysAs(
    (): SQL => sql`${schema2.users.name} || 'hello'`,
    { mode: "stored" },
  ),
  generatedName1: text("gen_name1").generatedAlwaysAs(
    (): SQL => sql`${schema2.users.name} || 'hello'`,
    { mode: "virtual" },
  ),
});
```

In case you don’t need to reference any columns from your table, you can use just the sql template or a string in .generatedAlwaysAs().
🎉 SQLite Generated Columns
You can now specify generated columns on any column supported by SQLite to use with generated columns.

You can specify both stored and virtual options; for more info, check the SQLite docs.

SQLite also has a few limitations for such columns’ usage, which are described here.
Drizzle Kit will also have limitations for the push and generate commands:

- You can’t change the generated constraint expression with the stored type in an existing table. You would need to delete the table and create it again. This is due to SQLite limitations for such actions. We will handle this case in future releases (it will involve creating a new table with data migration).
- You can’t add a stored generated expression to an existing column for the same reason as above. However, you can add a virtual expression to an existing column.
- You can’t change a stored generated expression in an existing column for the same reason as above. However, you can change a virtual expression.
- You can’t change the generated constraint type from virtual to stored for the same reason as above. However, you can change from stored to virtual.
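For completeness alongside the PostgreSQL and MySQL examples above, here is a minimal sketch of SQLite generated columns. The table and column names are illustrative only, not taken from the release notes:

```typescript
import { SQL, sql } from "drizzle-orm";
import { integer, sqliteTable, text } from "drizzle-orm/sqlite-core";

// Illustrative table; adapt names to your own schema.
export const users = sqliteTable("users", {
  id: integer("id"),
  name: text("name"),
  // stored: computed and persisted on write
  storedName: text("stored_name").generatedAlwaysAs(
    (): SQL => sql`${users.name} || '!'`,
    { mode: "stored" },
  ),
  // virtual: computed on read
  virtualName: text("virtual_name").generatedAlwaysAs(
    (): SQL => sql`${users.name} || '!'`,
    { mode: "virtual" },
  ),
});
```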
New Drizzle Kit features
🎉 Migrations support for all the new ORM features
PostgreSQL sequences, identity columns, and generated columns for all dialects

🎉 New flag --force for drizzle-kit push

You can auto-accept all data-loss statements when using the push command. It’s only available as a CLI parameter. Make sure you only use it if you are fine with running data-loss statements on your database.

🎉 New migrations flag prefix
You can now customize migration file prefixes to make the format suitable for your migration tools:

- index is the default type and will result in 0001_name.sql file names;
- supabase and timestamp are equal and will result in 20240627123900_name.sql file names;
- unix will result in unix seconds prefixes 1719481298_name.sql file names;
- none will omit the prefix completely;
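As an illustration, these prefix formats could be produced by a helper like the one below. This is a hypothetical sketch written for clarity, not Drizzle Kit’s actual code:

```typescript
// Hypothetical sketch of the migration file names each prefix mode produces.
type Prefix = 'index' | 'timestamp' | 'supabase' | 'unix' | 'none';

function migrationFileName(prefix: Prefix, index: number, date: Date, name: string): string {
  const pad = (n: number, w = 2) => String(n).padStart(w, '0');
  if (prefix === 'index') return `${pad(index, 4)}_${name}.sql`;
  if (prefix === 'timestamp' || prefix === 'supabase') {
    // e.g. 20240627123900_name.sql
    return (
      `${date.getUTCFullYear()}${pad(date.getUTCMonth() + 1)}${pad(date.getUTCDate())}` +
      `${pad(date.getUTCHours())}${pad(date.getUTCMinutes())}${pad(date.getUTCSeconds())}` +
      `_${name}.sql`
    );
  }
  if (prefix === 'unix') return `${Math.floor(date.getTime() / 1000)}_${name}.sql`;
  return `${name}.sql`; // 'none'
}
```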
Example: Supabase migrations format
```typescript
import { defineConfig } from "drizzle-kit";

export default defineConfig({
  dialect: "postgresql",
  migrations: {
    prefix: 'supabase'
  }
});
```
- Jul 8, 2024
DrizzleORM v0.31.4 release
Jul 8, 2024
- Mark prisma clients package as optional - thanks @Cherry
- Jul 8, 2024
DrizzleORM v0.31.3 release
Jul 8, 2024
Bug fixes
- 🛠️ Fixed RQB behavior for tables with same names in different schemas
- 🛠️ Fixed [BUG]: Mismatched type hints when using RDS Data API - #2097
New Prisma-Drizzle extension
```typescript
import { PrismaClient } from '@prisma/client';
import { drizzle } from 'drizzle-orm/prisma/pg';
import { User } from './drizzle';

const prisma = new PrismaClient().$extends(drizzle());
const users = await prisma.$drizzle.select().from(User);
```

For more info, check the docs: /docs/prisma
- Jun 7, 2024
DrizzleORM v0.31.2 release
Added support for TiDB Cloud Serverless driver
```typescript
import { connect } from '@tidbcloud/serverless';
import { drizzle } from 'drizzle-orm/tidb-serverless';

const client = connect({ url: '...' });
const db = drizzle(client);
await db.select().from(...);
```

- Jun 4, 2024
DrizzleORM v0.31.1 release
Drizzle ORM adds Expo SQLite live queries with a native useLiveQuery hook in v0.31.1, auto re-running queries on DB changes for both SQL-like and Drizzle queries. API stays compatible and returns data, error, updatedAt for streamlined React use.
New Features
Live Queries 🎉
For a full explanation of Drizzle + Expo, see the discussions.
As of v0.31.1 Drizzle ORM now has native support for Expo SQLite Live Queries! We’ve implemented a native useLiveQuery React Hook which observes necessary database changes and automatically re-runs database queries. It works with both SQL-like and Drizzle Queries:
```typescript
import { useLiveQuery, drizzle } from 'drizzle-orm/expo-sqlite';
import { openDatabaseSync } from 'expo-sqlite';
import { users } from './schema';
import { Text } from 'react-native';

const expo = openDatabaseSync('db.db', { enableChangeListener: true }); // <-- enable change listeners
const db = drizzle(expo);

const App = () => {
  // Re-renders automatically when data changes
  const { data } = useLiveQuery(db.select().from(users));
  // const { data, error, updatedAt } = useLiveQuery(db.query.users.findFirst());
  // const { data, error, updatedAt } = useLiveQuery(db.query.users.findMany());
  return <Text>{JSON.stringify(data)}</Text>;
};

export default App;
```

We’ve intentionally not changed the API of the ORM itself to stay with the conventional React Hook API, so we have useLiveQuery(databaseQuery) as opposed to db.select().from(users).useLive() or db.query.users.useFindMany().
We’ve also decided to provide data, error, and updatedAt fields as the result of the hook for concise, explicit error handling, following the practices of React Query and Electric SQL.
- May 31, 2024
DrizzleORM v0.31.0 release
DrizzleORM 0.31.0 adds pg_vector index support, new PostgreSQL types (point, line, geometry), and PostGIS basics with a refreshed indexes API. Drizzle Kit upgrade integration and extension filtering improve push/generate workflows alongside notable fixes.
May 31, 2024
Breaking changes
Note: [email protected] can be used with [email protected] or higher. The same applies to Drizzle Kit. If you run a Drizzle Kit command, it will check and prompt you for an upgrade (if needed). You can check for Drizzle Kit updates below.
PostgreSQL indexes API was changed
The previous Drizzle+PostgreSQL indexes API was incorrect and was not aligned with the PostgreSQL documentation. The good thing is that it was not used in queries, and drizzle-kit didn’t support all properties for indexes. This means we can now change the API to the correct one and provide full support for it in drizzle-kit
Previous API
- No way to define SQL expressions inside .on.
- .using and .on in our case are the same thing, so the API is incorrect here.
- .asc(), .desc(), .nullsFirst(), and .nullsLast() should be specified for each column or expression on indexes, but not on an index itself.
Current API
```typescript
// First example, with `.on()`
index('name')
  .on(table.column1.asc(), table.column2.nullsFirst(), ...)
  // or: .onOnly(table.column1.desc().nullsLast(), table.column2, ...)
  .concurrently()
  .where(sql``)
  .with({ fillfactor: '70' })

// Second example, with `.using()`
index('name')
  .using('btree', table.column1.asc(), sql`lower(${table.column2})`, table.column1.op('text_ops'))
  .where(sql``)
  .with({ fillfactor: '70' })
```

New Features
🎉 “pg_vector” extension support
There is no specific code to create an extension inside the Drizzle schema. We assume that if you are using vector types, indexes, and queries, you have a PostgreSQL database with the pg_vector extension installed.
You can now specify indexes for pg_vector and utilize pg_vector functions for querying, ordering, etc.
Let’s take a few examples of pg_vector indexes from the pg_vector docs and translate them to Drizzle.

L2 distance, inner product, and cosine distance:

```typescript
// CREATE INDEX ON items USING hnsw (embedding vector_l2_ops);
// CREATE INDEX ON items USING hnsw (embedding vector_ip_ops);
// CREATE INDEX ON items USING hnsw (embedding vector_cosine_ops);

const table = pgTable('items', {
  embedding: vector('embedding', { dimensions: 3 })
}, (table) => ({
  l2: index('l2_index').using('hnsw', table.embedding.op('vector_l2_ops')),
  ip: index('ip_index').using('hnsw', table.embedding.op('vector_ip_ops')),
  cosine: index('cosine_index').using('hnsw', table.embedding.op('vector_cosine_ops'))
}))
```

L1 distance, Hamming distance, and Jaccard distance (added in pg_vector 0.7.0):

```typescript
// CREATE INDEX ON items USING hnsw (embedding vector_l1_ops);
// CREATE INDEX ON items USING hnsw (embedding bit_hamming_ops);
// CREATE INDEX ON items USING hnsw (embedding bit_jaccard_ops);

const table = pgTable('table', {
  embedding: vector('embedding', { dimensions: 3 })
}, (table) => ({
  l1: index('l1_index').using('hnsw', table.embedding.op('vector_l1_ops')),
  hamming: index('hamming_index').using('hnsw', table.embedding.op('bit_hamming_ops')),
  bit: index('bit_jaccard_index').using('hnsw', table.embedding.op('bit_jaccard_ops'))
}))
```

For queries, you can use predefined functions for vectors or create custom ones using the SQL template operator.
You can also use the following helpers:

```typescript
import { l2Distance, l1Distance, innerProduct, cosineDistance, hammingDistance, jaccardDistance } from 'drizzle-orm'

l2Distance(table.column, [3, 1, 2]) // table.column <-> '[3, 1, 2]'
l1Distance(table.column, [3, 1, 2]) // table.column <+> '[3, 1, 2]'
innerProduct(table.column, [3, 1, 2]) // table.column <#> '[3, 1, 2]'
cosineDistance(table.column, [3, 1, 2]) // table.column <=> '[3, 1, 2]'
hammingDistance(table.column, '101') // table.column <~> '101'
jaccardDistance(table.column, '101') // table.column <%> '101'
```

If pg_vector has some other functions you want to use, you can replicate the implementation from an existing one. Here is how it can be done:
```typescript
export function l2Distance(
  column: SQLWrapper | AnyColumn,
  value: number[] | string[] | TypedQueryBuilder<any> | string,
): SQL {
  if (is(value, TypedQueryBuilder) || typeof value === 'string') {
    return sql`${column} <-> ${value}`;
  }
  return sql`${column} <-> ${JSON.stringify(value)}`;
}
```

Name it as you wish and change the operator. This example allows for a numbers array, strings array, string, or even a select query. Feel free to create any other type you want, or even contribute and submit a PR.
Examples
Let’s take a few examples of pg_vector queries from the pg_vector docs and translate them to Drizzle:

```typescript
import { l2Distance } from 'drizzle-orm';

// SELECT * FROM items ORDER BY embedding <-> '[3,1,2]' LIMIT 5;
db.select().from(items).orderBy(l2Distance(items.embedding, [3, 1, 2]))

// SELECT embedding <-> '[3,1,2]' AS distance FROM items;
db.select({ distance: l2Distance(items.embedding, [3, 1, 2]) })

// SELECT * FROM items ORDER BY embedding <-> (SELECT embedding FROM items WHERE id = 1) LIMIT 5;
const subquery = db.select({ embedding: items.embedding }).from(items).where(eq(items.id, 1));
db.select().from(items).orderBy(l2Distance(items.embedding, subquery)).limit(5)

// SELECT (embedding <#> '[3,1,2]') * -1 AS inner_product FROM items;
db.select({ innerProduct: sql`(${maxInnerProduct(items.embedding, [3, 1, 2])}) * -1` }).from(items)

// and more!
```

🎉 New PostgreSQL types: point, line
You can now use point and line from PostgreSQL Geometric Types
Type point has 2 modes for mappings from the database: tuple and xy.

- tuple will be accepted for insert and mapped on select to a tuple. So, the database Point(1,2) will be typed as [1,2] with drizzle.
- xy will be accepted for insert and mapped on select to an object with x, y coordinates. So, the database Point(1,2) will be typed as { x: 1, y: 2 } with drizzle
Type line has 2 modes for mappings from the database: tuple and abc.
- tuple will be accepted for insert and mapped on select to a tuple. So, the database Line{1,2,3} will be typed as [1,2,3] with drizzle.
- abc will be accepted for insert and mapped on select to an object with a, b, and c constants from the equation Ax + By + C = 0. So, the database Line{1,2,3} will be typed as { a: 1, b: 2, c: 3 } with drizzle.
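Putting the two types together, here is a minimal schema sketch assuming the point and line column builders exported from drizzle-orm/pg-core in this release; the table and column names are illustrative only:

```typescript
import { line, pgTable, point } from 'drizzle-orm/pg-core';

// Illustrative table showing both mapping modes for each type.
export const shapes = pgTable('shapes', {
  // Point(1,2) <-> [1, 2]
  pointTuple: point('point_tuple', { mode: 'tuple' }),
  // Point(1,2) <-> { x: 1, y: 2 }
  pointXy: point('point_xy', { mode: 'xy' }),
  // {1,2,3} <-> [1, 2, 3]
  lineTuple: line('line_tuple', { mode: 'tuple' }),
  // {1,2,3} <-> { a: 1, b: 2, c: 3 }
  lineAbc: line('line_abc', { mode: 'abc' }),
});
```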
🎉 Basic “postgis” extension support
There is no specific code to create an extension inside the Drizzle schema. We assume that if you are using postgis types, indexes, and queries, you have a PostgreSQL database with the postgis extension installed.
geometry type from the postgis extension:

```typescript
const items = pgTable('items', {
  geo: geometry('geo', { type: 'point' }),
  geoObj: geometry('geo_obj', { type: 'point', mode: 'xy' }),
  geoSrid: geometry('geo_options', { type: 'point', mode: 'xy', srid: 4000 }),
});
```

mode

Type geometry has 2 modes for mappings from the database: tuple and xy.
- tuple will be accepted for insert and mapped on select to a tuple. So, the database geometry will be typed as [1,2] with drizzle.
- xy will be accepted for insert and mapped on select to an object with x, y coordinates. So, the database geometry will be typed as { x: 1, y: 2 } with drizzle
The current release has a predefined type: point, which is the geometry(Point) type in the PostgreSQL PostGIS extension. You can specify any string there if you want to use some other type
Drizzle Kit updates: [email protected]
Release notes here are partially duplicated from [email protected]
New Features
🎉 Support for new types
Drizzle Kit can now handle:

- point and line from PostgreSQL
- vector from the PostgreSQL pg_vector extension
- geometry from the PostgreSQL PostGIS extension
🎉 New param in drizzle.config - extensionsFilters
The PostGIS extension creates a few internal tables in the public schema. This means that if you have a database with the PostGIS extension and use push or introspect, all those tables will be included in diff operations. In this case, you would need to specify tablesFilter, find all tables created by the extension, and list them in this parameter.
We have addressed this issue so that you won’t need to take all these steps. Simply specify extensionsFilters with the name of the extension used, and Drizzle will skip all the necessary tables.
Currently, we only support the postgis option, but we plan to add more extensions if they create tables in the public schema.
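A minimal drizzle.config sketch using the new parameter, assuming the same defineConfig shape used elsewhere in these notes:

```typescript
import { defineConfig } from 'drizzle-kit';

export default defineConfig({
  dialect: 'postgresql',
  // Skip tables created by the PostGIS extension
  // (geography_columns, geometry_columns, spatial_ref_sys)
  extensionsFilters: ['postgis'],
});
```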
The postgis option will skip the geography_columns, geometry_columns, and spatial_ref_sys tables.

Improvements
- Update zod schemas for database credentials and write tests to all the positive/negative cases
- Support the full set of SSL params in kit config, with types provided from the node:tls connection
- Normalized SQLite urls for the libsql and better-sqlite3 drivers. Those drivers have different file path patterns, and Drizzle Kit will accept both and create the proper file path format for each
- Updated MySQL and SQLite index-as-expression behavior. In this release, MySQL and SQLite will properly map expressions into the SQL query. Expressions won’t be escaped as strings, but columns will be
Bug Fixes
- [BUG]: multiple constraints not added (only the first one is generated) - #2341
- Drizzle Studio: Error: Connection terminated unexpectedly - #435
- Unable to run sqlite migrations local - #432
- error: unknown option ‘--config’ - #423
How push and generate work for indexes
Limitations
You should specify a name for your index manually if you have an index on at least one expression
Example:

```typescript
index().on(table.id, table.email) // will work well and name will be autogenerated
index('my_name').on(table.id, table.email) // will work well

// but
index().on(sql`lower(${table.email})`) // error
index('my_name').on(sql`lower(${table.email})`) // will work well
```

Push won’t generate statements if these fields (listed below) were changed in an existing index:
- expressions inside .on() and .using()
- .where() statements
- operator classes .op() on columns
If you are using push workflows and want to change these fields in the index, you would need to:
- Comment out the index
- Push
- Uncomment the index and change those fields
- Push again
For the generate command, drizzle-kit will be triggered by any changes in the index for any property in the new drizzle indexes API, so there are no limitations here.
- May 1, 2024
DrizzleORM v0.30.10 release
May 1, 2024
New Features
🎉 .if() function added to all WHERE expressions
Select all posts with views greater than 100
```typescript
async function someFunction(views = 0) {
  await db.select().from(posts).where(gt(posts.views, views).if(views > 100));
}
```

Bug Fixes
- Fixed internal mappings for sessions.all, .values, .execute functions in AWS DataAPI
Read the get started guide for AWS DataAPI in the documentation.