product2026

AI-first DAW Workstation

Web-native productivity workstation for streaming generative music and stems

A browser-based workstation built for AI-first music: Web Audio and AudioWorklet with a lock-free float32 ring buffer, Web Worker inference, and a ModelProvider facade for autoregressive completion, text-to-music, and text-to-stem streaming. The goal is the same "tab complete" feel as coding assistants, not batch export.

The Challenge

Real-time audio UIs need predictable scheduling, and ML inference must never block the audio thread. Foundation models may run on-device, via API, or on a GPU server; the host application should stay the same regardless.
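The "never block the audio thread" constraint is what the lock-free float32 ring buffer solves: the inference worker produces PCM, the AudioWorklet consumes it, and neither side ever waits on a lock. A minimal single-producer/single-consumer sketch (class and method names here are illustrative, not the project's actual API) keeps the read/write indices in a SharedArrayBuffer and updates them with Atomics:

```typescript
// Hypothetical SPSC float32 ring buffer sketch. The worker (producer) pushes
// PCM chunks; the AudioWorklet (consumer) pops fixed-size frames. Indices live
// in shared memory, so the audio thread never blocks -- on underrun it simply
// zero-pads and moves on.
class Float32RingBuffer {
  private readonly data: Float32Array;
  private readonly indices: Int32Array; // [0] = read index, [1] = write index

  constructor(capacity: number, sab?: SharedArrayBuffer) {
    const buf = sab ?? new SharedArrayBuffer(capacity * 4 + 8);
    this.indices = new Int32Array(buf, 0, 2);
    this.data = new Float32Array(buf, 8, capacity);
  }

  /** Producer side: returns the number of frames actually written. */
  push(samples: Float32Array): number {
    const cap = this.data.length;
    const r = Atomics.load(this.indices, 0);
    const w = Atomics.load(this.indices, 1);
    const free = cap - 1 - ((w - r + cap) % cap); // one slot kept empty
    const n = Math.min(free, samples.length);
    for (let i = 0; i < n; i++) this.data[(w + i) % cap] = samples[i];
    Atomics.store(this.indices, 1, (w + n) % cap);
    return n;
  }

  /** Consumer side: fills `out`, zero-padding on underrun; returns frames read. */
  pop(out: Float32Array): number {
    const cap = this.data.length;
    const r = Atomics.load(this.indices, 0);
    const w = Atomics.load(this.indices, 1);
    const avail = (w - r + cap) % cap;
    const n = Math.min(avail, out.length);
    for (let i = 0; i < n; i++) out[i] = this.data[(r + i) % cap];
    out.fill(0, n);
    Atomics.store(this.indices, 0, (r + n) % cap);
    return n;
  }
}
```

In a real AudioWorkletProcessor, `pop` would be called once per 128-frame render quantum inside `process()`, with the SharedArrayBuffer handed over via the processor options.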

The Approach

Solid + Vite for a responsive shell; session and completion-job state in pure TypeScript with tests; mock foundation models first, then swap adapters for HTTP or edge runtimes. Multi-stem generation stays behind a streaming contract so architectures (parallel heads, shared decoders, etc.) can evolve without rewriting the DAW.
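The "mock first, swap adapters later" approach hinges on every provider implementing one streaming contract. A sketch of what that facade could look like (the names `ModelProvider`, `StemChunk`, and `MockProvider` are assumptions for illustration, not the project's actual identifiers):

```typescript
// Hypothetical ModelProvider streaming contract. A mock adapter and a real
// HTTP/edge adapter implement the same interface, so the DAW host code never
// has to change when the model architecture evolves.
interface StemChunk {
  stem: string;      // e.g. "drums", "bass"
  pcm: Float32Array; // mono PCM samples for this chunk
  done: boolean;     // true on the final chunk of a stem
}

interface ModelProvider {
  /** Stream stems for a text prompt; stops early if the signal aborts. */
  textToStems(prompt: string, signal?: AbortSignal): AsyncGenerator<StemChunk>;
}

class MockProvider implements ModelProvider {
  async *textToStems(prompt: string, signal?: AbortSignal): AsyncGenerator<StemChunk> {
    for (const stem of ["drums", "bass"]) {
      for (let i = 0; i < 2; i++) {
        if (signal?.aborted) return;
        // Emit silent placeholder chunks; a real adapter decodes model output.
        yield { stem, pcm: new Float32Array(128), done: i === 1 };
      }
    }
  }
}
```

Because the contract is an async generator of chunks, whether the backend uses parallel heads or a shared decoder is invisible to the host: both just yield `StemChunk`s.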

Key Highlights

Streaming inference into the graph

Inference runs off the audio thread; chunks feed the ring buffer and stem-level UI previews with cancel/commit semantics.

Foundation-model ready

Mock AR completion, text-to-music, and text-to-stems streams document the integration surface for real checkpoints or APIs.

Test-driven core

Unit tests on timebase, ring buffer, and completion state machines; CI runs typecheck, lint, test, and production build.
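The timebase math under test is the kind of pure function that makes TDD cheap here: tempo-dependent conversion between musical time and sample frames, with no audio context required. A sketch (the function name is illustrative):

```typescript
// Hypothetical timebase helper: convert beats to sample frames at a given
// tempo. Pure and deterministic, so it is trivially unit-testable in Vitest
// without instantiating any Web Audio objects.
function beatsToFrames(beats: number, bpm: number, sampleRate: number): number {
  const secondsPerBeat = 60 / bpm;
  return Math.round(beats * secondsPerBeat * sampleRate);
}
```

A Vitest case would assert, for example, that one bar (4 beats) at 120 BPM and 48 kHz lands on exactly 96 000 frames.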

Impact

  • Transport + ring-buffer pipeline from worker PCM into AudioWorklet
  • TDD gates on ring buffer, transport math, and completion jobs
  • Extensible ModelProvider for API vs fine-tuned models

Technologies

TypeScript · Solid · Vite · Web Audio API · AudioWorklet · Web Workers · Vitest

Metrics

Web Audio + Worklet · Worker streaming · Mock ModelProvider
