Process management, registry, and proxy implementation.
The orchestrator is a single long-running Rust process that spawns, monitors, and proxies traffic to per-project valet-server instances. It is split across four files in valet-orchestrator/src/.
valet-orchestrator/src/
├── main.rs — CLI args, initialization, background task spawning
├── registry.rs — SQLite-backed project + deploy key store
├── process_manager.rs — Process lifecycle, health checks, port allocation
└── proxy.rs — HTTP/WS reverse proxy, route dispatch, dashboard
Entry point. Parses CLI args (via clap), wires up the other three modules, and starts background tasks.
Args (clap)
├── --port (u16, default 5555, env VALET_PORT)
├── --host (String, default 127.0.0.1, env VALET_HOST)
├── --server-binary (Option<String>, auto-detected if not set)
├── --idle-timeout (u64, default 300, env VALET_IDLE_TIMEOUT)
├── --reap-interval (u64, default 30)
├── --show-logs (bool, default false)
├── --db-path (String, default valet-orchestrator.db)
├── --data-dir (Option<String>, env VALET_DATA_DIR)
├── --open (bool, default false)
├── --seed-project (Vec<String>, also reads VALET_SEED_PROJECT env)
├── --health-check-interval (u64, default 15)
├── --health-failure-threshold (u32, default 3)
└── --max-restart-attempts (u32, default 5)
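A minimal sketch of how this surface might be declared with clap's derive API. Field names and the exact shape of the real Args struct in main.rs are assumptions, only a subset of flags is shown, and the env attributes require clap's env feature:

```rust
use clap::Parser;

/// Illustrative subset of the CLI surface above; not the real struct.
#[derive(Parser, Debug)]
struct Args {
    /// Port the orchestrator listens on
    #[arg(long, default_value_t = 5555, env = "VALET_PORT")]
    port: u16,

    /// Bind address
    #[arg(long, default_value = "127.0.0.1", env = "VALET_HOST")]
    host: String,

    /// Path to the valet-server binary; auto-detected when omitted
    #[arg(long)]
    server_binary: Option<String>,

    /// Seconds a zero-connection process may idle before being reaped
    #[arg(long, default_value_t = 300, env = "VALET_IDLE_TIMEOUT")]
    idle_timeout: u64,

    /// Disable auth and use an in-memory registry
    #[arg(long)]
    open: bool,

    /// Projects to pre-register (repeatable)
    #[arg(long, env = "VALET_SEED_PROJECT")]
    seed_project: Vec<String>,
}

fn main() {
    let args = Args::parse();
    println!("listening on {}:{}", args.host, args.port);
}
```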
Startup sequence:
1. Parse args + env vars
2. Open or create ProjectRegistry
└── SQLite file, or in-memory if --open
3. Seed projects from --seed-project / VALET_SEED_PROJECT
4. Create ProcessManager with config
5. Spawn background tasks (tokio::spawn)
├── spawn_idle_reaper — reap_idle() on interval, kills 0-conn idle processes
├── spawn_health_checker — check_all_health() on interval, restarts on threshold
└── spawn_history_sampler — records connection counts for sparkline data
6. Start HTTP server via run_proxy()
All background tasks and proxy handlers share ProcessManager through Arc<Mutex<ProcessManager>>.
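Step 5's tasks are plain tokio::spawn loops over that shared handle. A sketch of the idle reaper, assuming a tokio::sync::Mutex and an async reap_idle; the real spawn helpers live in main.rs:

```rust
use std::sync::Arc;
use std::time::Duration;
use tokio::sync::Mutex;

// Sketch of spawn_idle_reaper; `reap_idle` being async is an assumption.
fn spawn_idle_reaper(pm: Arc<Mutex<ProcessManager>>, interval: Duration) {
    tokio::spawn(async move {
        let mut ticker = tokio::time::interval(interval);
        loop {
            ticker.tick().await;
            // Hold the lock only for the duration of the sweep.
            let reaped = pm.lock().await.reap_idle().await;
            for id in reaped {
                eprintln!("reaped idle project {id}");
            }
        }
    });
}
```

spawn_health_checker and spawn_history_sampler follow the same shape: tick on an interval, lock the manager, do one pass, release.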
Persistent store of registered projects and their deploy keys.
ProjectRegistry
├── conn: Mutex<rusqlite::Connection>
│
├── SQLite table
│   └── projects(id TEXT PK, deploy_key TEXT NOT NULL, created_at TEXT)
│
├── open(path) / in_memory() — create or open registry
├── create_project(id) → deploy_key (256-bit random hex)
├── seed_project(id, key) — idempotent insert with known key
├── validate_deploy_key(id, key) → bool
└── project_exists(id) → bool
RegistryError
├── AlreadyExists(String)
├── NotFound(String)
├── Db(rusqlite::Error)
└── LockPoisoned
Deploy keys are 256-bit random values encoded as hex. All queries use parameterized statements.
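A sketch of key generation and a parameterized insert; the rand crate and the exact SQL (notably the created_at expression) are assumptions:

```rust
use rand::RngCore;
use rusqlite::{params, Connection};

/// Generate a 256-bit deploy key encoded as 64 hex characters.
fn generate_deploy_key() -> String {
    let mut bytes = [0u8; 32];
    rand::thread_rng().fill_bytes(&mut bytes);
    bytes.iter().map(|b| format!("{b:02x}")).collect()
}

/// Insert a project row; values are bound parameters, never string-formatted.
fn insert_project(conn: &Connection, id: &str, key: &str) -> rusqlite::Result<()> {
    conn.execute(
        "INSERT INTO projects (id, deploy_key, created_at) VALUES (?1, ?2, datetime('now'))",
        params![id, key],
    )?;
    Ok(())
}
```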
Manages the lifecycle of valet-server child processes: spawning, health checking, connection tracking, idle cleanup, and config replay.
ProcessManager
├── config: ProcessManagerConfig
├── processes: HashMap<String, ManagedProcess>
├── allocated_ports: HashSet<u16>
└── log_buffer: Arc<Mutex<LogBuffer>>
ProcessManagerConfig
├── server_binary: PathBuf
├── port_range: 6001..6999
├── idle_timeout: Duration
├── startup_timeout: Duration
├── health_check_interval / timeout: Duration
├── health_failure_threshold: u32
├── max_restart_attempts: u32
└── data_dir: Option<PathBuf>
ManagedProcess
├── project_id: String
├── status: ProcessStatus — Starting | Warm | Unhealthy | Stopping
├── port: u16
├── child: tokio::process::Child
├── last_activity: Instant
├── connection_count: u32
├── health_failures: u32
├── restart_count: u32
└── connection_history: VecDeque<u32> — ring buffer for sparklines
get_or_spawn(project_id) → port
├── If Warm: return existing port, touch activity
└── If not running: spawn(project_id)
    ├── allocate_port() — find unused port in 6001-6999
    ├── tokio::process::Command::new(...) — start valet-server child process
    ├── wait_for_ready(port) — poll /ws until the WS handshake succeeds
    ├── replay_config(id, port) — POST schema.json + functions.json from data_dir
    └── set status = Warm
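Condensed into Rust, the flow above is roughly the following sketch; the anyhow error type and spawn() returning the port are assumptions:

```rust
impl ProcessManager {
    /// Sketch of the warm fast path vs. cold spawn described above.
    async fn get_or_spawn(&mut self, project_id: &str) -> anyhow::Result<u16> {
        if let Some(managed) = self.processes.get_mut(project_id) {
            if managed.status == ProcessStatus::Warm {
                // Warm: reuse the running child and refresh its idle clock.
                managed.last_activity = std::time::Instant::now();
                return Ok(managed.port);
            }
        }
        // Cold: allocate a port, start valet-server, wait for /ws to answer,
        // replay schema.json + functions.json, then mark the process Warm.
        self.spawn(project_id).await
    }
}
```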
kill(project_id)
├── Send SIGTERM
├── Wait with timeout
└── SIGKILL fallback
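A sketch of the SIGTERM, wait, SIGKILL escalation, assuming the nix crate for signalling and a 5-second grace period; the helper name terminate_child is hypothetical:

```rust
use std::time::Duration;
use nix::sys::signal::{kill, Signal};
use nix::unistd::Pid;

/// Ask a child to exit gracefully, then force-kill it if it ignores us.
async fn terminate_child(child: &mut tokio::process::Child) -> std::io::Result<()> {
    if let Some(pid) = child.id() {
        // SIGTERM gives valet-server a chance to shut down cleanly.
        let _ = kill(Pid::from_raw(pid as i32), Signal::SIGTERM);
    }
    match tokio::time::timeout(Duration::from_secs(5), child.wait()).await {
        Ok(status) => status.map(|_| ()),
        Err(_elapsed) => {
            // Still alive after the grace period: escalate to SIGKILL.
            child.kill().await?;
            Ok(())
        }
    }
}
```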
restart(project_id)
└── kill() then spawn()
check_all_health() — called by spawn_health_checker
├── For each Warm process:
│   ├── health_check(port) — attempt WS connection to /ws
│   ├── On failure: increment health_failures
│   └── If health_failures >= threshold: restart()
│       └── If restart_count >= max_restart_attempts: set Unhealthy
└── Returns list of restarted projects
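A sketch of a single probe, assuming tokio-tungstenite for the handshake attempt:

```rust
use std::time::Duration;

/// A process is considered healthy if a WebSocket handshake against its
/// /ws endpoint completes within the configured timeout.
async fn health_check(port: u16, timeout: Duration) -> bool {
    let url = format!("ws://127.0.0.1:{port}/ws");
    match tokio::time::timeout(timeout, tokio_tungstenite::connect_async(url.as_str())).await {
        Ok(Ok((_stream, _response))) => true, // handshake completed
        _ => false,                           // connect error or timed out
    }
}
```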
reap_idle() — called by spawn_idle_reaper
├── For each Warm process:
│   └── If connection_count == 0 && idle > idle_timeout: kill()
└── Returns list of reaped projects
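A sketch of the sweep itself, following the field names above; kill()'s exact signature and error handling are assumptions:

```rust
impl ProcessManager {
    /// Kill Warm processes that have had zero connections for longer than
    /// the configured idle timeout; return the project ids that were reaped.
    async fn reap_idle(&mut self) -> Vec<String> {
        let now = std::time::Instant::now();
        let idle: Vec<String> = self
            .processes
            .values()
            .filter(|p| {
                p.status == ProcessStatus::Warm
                    && p.connection_count == 0
                    && now.duration_since(p.last_activity) > self.config.idle_timeout
            })
            .map(|p| p.project_id.clone())
            .collect();
        for id in &idle {
            // kill() handles SIGTERM, the grace period, and the SIGKILL fallback.
            let _ = self.kill(id).await;
        }
        idle
    }
}
```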
LogBuffer
├── entries: VecDeque<LogEntry> — ring buffer, evicts oldest
├── max_entries: usize
├── push(project_id, line)
└── since(id, project?) → Vec<LogEntry>
LogEntry { id: u64, timestamp: u64, project_id: String, line: String }
Wrapped in SharedLogBuffer (with RwLock) for async access from proxy handlers.
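The eviction logic is a bounded VecDeque. A sketch of push, assuming a next_id counter mints the monotonically increasing ids that /logs?since=<id> resumes from:

```rust
use std::collections::VecDeque;

struct LogEntry { id: u64, timestamp: u64, project_id: String, line: String }

struct LogBuffer {
    entries: VecDeque<LogEntry>,
    max_entries: usize,
    next_id: u64, // assumption: some counter must exist to mint LogEntry ids
}

impl LogBuffer {
    /// Append a line, evicting the oldest entry once the buffer is full.
    fn push(&mut self, project_id: String, line: String) {
        if self.entries.len() == self.max_entries {
            self.entries.pop_front();
        }
        let id = self.next_id;
        self.next_id += 1;
        let timestamp = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
            .map(|d| d.as_secs())
            .unwrap_or(0);
        self.entries.push_back(LogEntry { id, timestamp, project_id, line });
    }
}
```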
HTTP/WebSocket reverse proxy. Routes incoming requests to the right handler based on path.
Incoming HTTP request
├── parse_route(path) → Route
├── CORS preflight? → 204 with headers
├── Auth check (unless --open mode)
│   └── extract_bearer_token() → registry.validate_deploy_key()
│
├── Route::ProjectWs(id) — /projects/<id>/ws
│   └── handle_ws_upgrade()
│       ├── get_or_spawn(id) → port
│       ├── Upgrade to WebSocket
│       ├── Connect to child at localhost:<port>/ws
│       ├── relay_websockets() — bidirectional message forwarding
│       └── increment/decrement connection count on open/close
│
├── Route::ProjectHttp(id, path) — /projects/<id>/api/*
│   └── handle_http_proxy()
│       ├── get_or_spawn(id) → port
│       └── Forward request to localhost:<port>/api/<path>
│
├── Route::Status — /status
│   └── ProcessManager.status() → JSON
│
├── Route::Logs — /logs?since=<id>&project=<name>
│   └── LogBuffer.since() → JSON
│
├── Route::CreateProject — POST /projects
│   ├── Requires admin_key (unless --open)
│   └── ProjectRegistry.create_project() → deploy_key
│
├── Route::Dashboard — / (serve embedded HTML)
├── Route::DashboardAsset(file) — /styles.css, /app.js (serve embedded static assets)
└── Route::NotFound — 404
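Route dispatch is plain path matching. A sketch of how parse_route might map paths onto the variants above; matching details such as HTTP method checks, trailing slashes, and query handling are assumptions:

```rust
enum Route {
    ProjectWs(String),
    ProjectHttp(String, String),
    Status,
    Logs,
    CreateProject,
    Dashboard,
    DashboardAsset(String),
    NotFound,
}

/// Sketch only: the real parse_route may differ in details.
fn parse_route(path: &str) -> Route {
    let parts: Vec<&str> = path.trim_matches('/').split('/').collect();
    match parts.as_slice() {
        [""] => Route::Dashboard,
        ["status"] => Route::Status,
        ["logs"] => Route::Logs, // ?since= and ?project= come from the query string
        ["projects"] => Route::CreateProject, // only meaningful for POST
        ["projects", id, "ws"] => Route::ProjectWs(id.to_string()),
        ["projects", id, "api", rest @ ..] => {
            Route::ProjectHttp(id.to_string(), rest.join("/"))
        }
        [asset] if *asset == "styles.css" || *asset == "app.js" => {
            Route::DashboardAsset(asset.to_string())
        }
        _ => Route::NotFound,
    }
}
```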
relay_websockets(client_ws, upstream_ws)
├── Bidirectional message forwarding (futures::select)
├── Ping every 30s, timeout after 10s without pong
├── 3 retries with 500ms delay for upstream connection
└── Closes when either side disconnects
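Stripped of the ping/pong and retry handling, the relay reduces to two streams raced against each other. A sketch using tokio::select! in place of futures::select for brevity, with assumed stream/sink bounds:

```rust
use futures::{Sink, SinkExt, Stream, StreamExt};
use tokio_tungstenite::tungstenite::{Error, Message};

/// Forward frames in both directions until either side closes or errors.
/// The real relay_websockets also sends pings every 30s, times out after
/// 10s without a pong, and retries the upstream connection.
async fn relay<C, U>(client: &mut C, upstream: &mut U)
where
    C: Stream<Item = Result<Message, Error>> + Sink<Message, Error = Error> + Unpin,
    U: Stream<Item = Result<Message, Error>> + Sink<Message, Error = Error> + Unpin,
{
    loop {
        tokio::select! {
            msg = client.next() => match msg {
                // Client → upstream; stop if the upstream sink rejects the frame.
                Some(Ok(m)) => if upstream.send(m).await.is_err() { break },
                _ => break, // client closed or errored
            },
            msg = upstream.next() => match msg {
                // Upstream → client.
                Some(Ok(m)) => if client.send(m).await.is_err() { break },
                _ => break, // upstream closed or errored
            },
        }
    }
}
```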
ProxyConfig
└── open_mode: bool — skip all auth checks
All concurrent access flows through Arc wrappers:
Arc<Mutex<ProcessManager>> — proxy handlers + all 3 background tasks
Arc<Mutex<LogBuffer>> — proxy handlers + child process stdout readers
Arc<ProjectRegistry> — proxy handlers (internal Mutex on SQLite connection)
Inputs (from outside):
- WebSocket connections from Valet clients (proxied to child processes)
- HTTP requests from clients and dashboard
- CLI arguments and environment variables
Outputs (to outside):
- Spawned valet-server child processes on localhost ports 6001-6999
- HTTP POST of schema.json and functions.json to child processes on startup/restart
- Proxied HTTP responses and WebSocket frames back to clients
- JSON status and log endpoints for the dashboard