Schema Migrations
How schema changes are detected, applied, and backfilled.
Schema migrations happen automatically when codegen pushes an updated schema.json to the server. The server diffs the new schema against the existing database and applies changes. This page documents how that process works end-to-end.
Overview
- The developer changes schema.ts and codegen produces an updated schema.json
- The CLI sends POST /api/schema to the server
- The server computes the schema hash; if it matches the stored hash, nothing happens
- Otherwise the server diffs the new schema against the database, creates tables, adds columns, and runs backfills
- The schema hash is stored (unless breaking changes were detected, in which case the next push retries)
Schema hash
Before doing any work, the server computes a deterministic hash of the schema definition and compares it to the stored hash in _valet_meta. If they match, the entire migration is skipped.
The hash includes:
- Table names (sorted for stability)
- Field names, types, and optional flags
- Sync mode and filter configuration
- Backfill definitions
This means pushing the same schema twice is a no-op.
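Concretely, a deterministic hash can be computed by canonicalizing the schema (sorting tables and fields by name) before hashing. The sketch below assumes a simplified schema shape; the real schema.json format, field names, and hash algorithm are not specified on this page:

```typescript
import { createHash } from "node:crypto";

// Hypothetical shapes; the real schema.json format may differ.
interface FieldDef { type: string; optional: boolean }
interface TableDef {
  fields: Record<string, FieldDef>;
  syncMode: "all" | "filtered" | "none";
  filter?: string;
  backfills?: string[]; // column names with backfill handlers
}

// Deterministic hash: tables, fields, and backfills are sorted by name,
// so key order in the source object never changes the result.
function schemaHash(tables: Record<string, TableDef>): string {
  const canonical = Object.keys(tables).sort().map((name) => {
    const t = tables[name];
    return {
      name,
      fields: Object.keys(t.fields).sort().map((f) => ({
        name: f,
        type: t.fields[f].type,
        optional: t.fields[f].optional,
      })),
      syncMode: t.syncMode,
      filter: t.filter ?? null,
      backfills: [...(t.backfills ?? [])].sort(),
    };
  });
  return createHash("sha256").update(JSON.stringify(canonical)).digest("hex");
}
```

Sorting before serialization is what makes the comparison reliable: two pushes of the same logical schema always produce the same digest, regardless of declaration order.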
Table creation
New tables (not yet in the database) are created with all columns:
```sql
CREATE TABLE IF NOT EXISTS "todos" (
  "_id" TEXT PRIMARY KEY NOT NULL,
  "_version" INTEGER NOT NULL DEFAULT 0,
  "title" TEXT,
  "completed" INTEGER,
  "priority" INTEGER
)
```

Every table gets _id (primary key) and _version (optimistic-locking counter) automatically.
If the table's sync mode is not "none", change tracking triggers are created:
```sql
-- Increment global version and record the change
CREATE TRIGGER "todos_valet_insert" AFTER INSERT ON "todos"
BEGIN
  UPDATE _valet_version SET version = version + 1 WHERE id = 1;
  INSERT INTO _valet_changes (db_version, "table", row_id, operation)
  VALUES ((SELECT version FROM _valet_version WHERE id = 1), 'todos', NEW._id, 'insert');
END;
-- Similar triggers for UPDATE and DELETE
```

Column migration
For existing tables, the server diffs the schema's columns against the database.
ALTER TABLE flow
Triggers must be dropped before the ALTER TABLE (a SQLite limitation with triggers that reference the table) and recreated once the missing columns have been added via ALTER TABLE ... ADD COLUMN.
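The drop-triggers / add-columns / recreate-triggers ordering can be sketched as a pure planning function (the function name is an assumption; the trigger names mirror the examples above):

```typescript
type ColumnTypes = Record<string, string>; // column name -> SQLite type

// Plan the ordered SQL statements for adding missing columns to a table.
function planColumnMigration(
  table: string,
  existing: ColumnTypes,
  desired: ColumnTypes,
): string[] {
  const missing = Object.keys(desired).filter((c) => !(c in existing));
  if (missing.length === 0) return []; // nothing to do

  const stmts: string[] = [];
  // 1. Drop the change-tracking triggers before altering the table.
  for (const op of ["insert", "update", "delete"]) {
    stmts.push(`DROP TRIGGER IF EXISTS "${table}_valet_${op}"`);
  }
  // 2. Add each missing column.
  for (const col of missing) {
    stmts.push(`ALTER TABLE "${table}" ADD COLUMN "${col}" ${desired[col]}`);
  }
  // 3. Recreate the triggers (CREATE TRIGGER statements elided here).
  stmts.push(`-- recreate ${table} triggers here`);
  return stmts;
}
```

Keeping the planner pure (statements in, statements out) makes the ordering easy to test independently of a database connection.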
Type mismatches
If a column exists but with a different type (e.g. priority was INTEGER but schema says TEXT), this is reported as a breaking change. The migration is not applied and the schema hash is not updated, so the next push will retry.
SQLite does not support ALTER TABLE ALTER COLUMN, so type changes require manual intervention (create new column, migrate data, drop old column).
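For the priority example, the manual migration might look like the following statement sequence (illustrative only; ALTER TABLE DROP COLUMN and RENAME COLUMN require reasonably recent SQLite, 3.35+ and 3.25+ respectively — on older versions the table must be rebuilt instead):

```typescript
// Illustrative statements for changing todos.priority from INTEGER to TEXT.
const typeChangeSteps: string[] = [
  `ALTER TABLE "todos" ADD COLUMN "priority_new" TEXT`,            // 1. new column
  `UPDATE "todos" SET "priority_new" = CAST("priority" AS TEXT)`,  // 2. migrate data
  `ALTER TABLE "todos" DROP COLUMN "priority"`,                    // 3. drop old column
  `ALTER TABLE "todos" RENAME COLUMN "priority_new" TO "priority"`,// 4. take over the name
];
```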
Removed columns
If the database has a column that's not in the schema, a warning is emitted but the column is not dropped. This is intentional — dropping columns is destructive and SQLite doesn't support ALTER TABLE DROP COLUMN in older versions.
Backfills
Backfills transform existing data when a new column is added or a column's meaning changes. They are defined in the schema:

```typescript
defineTable({
  title: v.string(),
  priority: v.number(),
}).backfill('priority', (row) => row.priority ?? 5)
```

Execution flow
Note: Backfills only run on rows where the target column is NULL. This means:
- Backfills are additive-only: they populate missing values but never overwrite existing ones
- If a column already has a value (even if incorrect), the backfill will skip that row
- This behavior ensures backfills are safe to retry and won't corrupt manually-set data
- To re-run a backfill on rows with existing values, you must manually set those values to NULL first
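The NULL-only rule means execution reduces to a filtered update. A minimal in-memory sketch (the real engine runs against SQLite; the function shape here is an assumption):

```typescript
type Row = Record<string, unknown>;

// Apply a backfill handler to every row where `column` is missing/NULL.
// Returns the number of rows that were updated.
function runBackfill(
  rows: Row[],
  column: string,
  handler: (row: Row) => unknown,
): number {
  let updated = 0;
  for (const row of rows) {
    // Only NULL (missing) values are populated; existing values are kept.
    if (row[column] === null || row[column] === undefined) {
      row[column] = handler(row);
      updated++;
    }
  }
  return updated;
}
```

Re-running the same backfill only touches rows that are still NULL, which is why retries are safe.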
Idempotency
Backfills are tracked in _valet_meta using the key backfill:{table}:{column}. Once a backfill has run, it will never run again even if the same schema is pushed. This means backfill code should be deterministic — it runs exactly once per column.
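The tracking logic amounts to a guarded write around the meta key (the Map below is a stand-in for _valet_meta; the function name is an assumption):

```typescript
// Run a backfill at most once, keyed by backfill:{table}:{column}.
// Returns true if the backfill executed, false if it was already done.
function maybeRunBackfill(
  meta: Map<string, string>,          // stand-in for the _valet_meta table
  table: string,
  column: string,
  run: () => void,
): boolean {
  const key = `backfill:${table}:${column}`;
  if (meta.has(key)) return false;    // already ran; never run again
  run();                              // if this throws, the key is never written
  meta.set(key, new Date().toISOString());
  return true;
}
```

Because the key is written only after run() returns, a handler that throws leaves the key absent — which is exactly the retry behavior described under Error handling below.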
Error handling
If a backfill handler throws an error for a particular row, the entire backfill fails and the error is reported in the SchemaDiff. The _valet_meta key is not written, so the backfill will be retried on the next schema push.
SchemaDiff
apply_schema() returns a SchemaDiff describing what happened:
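The exact fields of SchemaDiff are not reproduced here; a plausible TypeScript shape, inferred from the behaviors described on this page (all field names are assumptions):

```typescript
interface SchemaDiff {
  tablesCreated: string[];
  columnsAdded: Record<string, string[]>; // table -> newly added columns
  breakingChanges: string[];              // e.g. column type mismatches
  warnings: string[];                     // e.g. removed-column notices
  backfillsRun: string[];                 // e.g. "todos.priority"
  errors: string[];                       // e.g. failed backfill handlers
}
```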
The diff is returned as JSON in the HTTP response to POST /api/schema, so the CLI can display migration results to the developer.
Client-side schema handling
The client's SyncEngine also tracks schema changes. When the client initializes or reconnects:
- initializeTables() creates local SQLite tables matching the schema
- Missing columns are added via ALTER TABLE ADD COLUMN
- A schema hash is stored locally to detect changes
- Client-side backfills are compiled from the schema and executed against local data
This ensures the local database schema stays in sync with the server, even after app updates.
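Putting the client-side steps together, startup handling might look like this sketch; apart from initializeTables(), the method names and the Schema shape are assumptions:

```typescript
interface Schema { hash: string }

interface SyncEngine {
  getStoredSchemaHash(): string | null;
  initializeTables(schema: Schema): void;   // CREATE TABLE IF NOT EXISTS ...
  addMissingColumns(schema: Schema): void;  // ALTER TABLE ... ADD COLUMN
  runLocalBackfills(schema: Schema): void;  // compiled from the schema
  storeSchemaHash(hash: string): void;
}

// Returns true when a local migration ran, false when the hashes matched.
function syncLocalSchema(engine: SyncEngine, schema: Schema): boolean {
  if (engine.getStoredSchemaHash() === schema.hash) return false; // no-op
  engine.initializeTables(schema);
  engine.addMissingColumns(schema);
  engine.runLocalBackfills(schema);
  engine.storeSchemaHash(schema.hash); // only after the steps succeeded
  return true;
}
```

As on the server, storing the hash last means a failed step leaves the hash unset, so the migration is retried on the next initialization.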