Lisa
Voice / assistant stack — public edge on lisa.abhay.ai
Queue API: /gpu-bridge/v1/ (the GPU worker polls here; all secrets stay on the server).
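A sketch of the outbound poll loop the gpu_bridge worker runs; the job routes (jobs/next, jobs/:id/result) and the WORKER_TOKEN variable are illustrative, only the /gpu-bridge/v1/ prefix comes from this page.

```ts
// Hypothetical outbound poll loop for the gpu_bridge worker.
// Only the /gpu-bridge/v1/ prefix is real; route names and payload shape are assumed.
const BASE = "https://lisa.abhay.ai/gpu-bridge/v1";
const headers = { Authorization: `Bearer ${process.env.WORKER_TOKEN}` };

async function pollOnce(): Promise<void> {
  const res = await fetch(`${BASE}/jobs/next`, { headers });
  if (res.status === 204) return;            // nothing queued
  if (!res.ok) throw new Error(`queue error: ${res.status}`);
  const job = await res.json();              // assumed shape: { id, kind, input, ... }

  const output = await runOnLocalGpu(job);   // LAN-side inference, never exposed publicly
  await fetch(`${BASE}/jobs/${job.id}/result`, {
    method: "POST",
    headers: { ...headers, "Content-Type": "application/json" },
    body: JSON.stringify(output),
  });
}

async function runOnLocalGpu(job: { id: string }): Promise<unknown> {
  return { id: job.id, ok: true };           // placeholder for the actual GPU work
}

// Poll forever with a small delay; outbound-only, so no inbound ports on the GPU box.
(async () => {
  for (;;) {
    await pollOnce().catch(console.error);
    await new Promise((r) => setTimeout(r, 2000));
  }
})();
```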
Discovery: /.well-known/lisa-config.json and /gpu-bridge/v1/public-config.
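Both discovery documents are plain unauthenticated GETs; a sketch, with the response shape left loose since it isn't documented here:

```ts
// Read the public discovery documents from the public edge.
// The fields are whatever the server publishes (not specified on this page).
const HOST = "https://lisa.abhay.ai";

async function loadDiscovery(): Promise<Record<string, unknown>> {
  const [wellKnown, publicConfig] = await Promise.all([
    fetch(`${HOST}/.well-known/lisa-config.json`).then((r) => r.json()),
    fetch(`${HOST}/gpu-bridge/v1/public-config`).then((r) => r.json()),
  ]);
  return { ...wellKnown, ...publicConfig };
}
```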
LLM (Ollama, OpenAI-compatible): same host, path prefix /llm/ (e.g.
/llm/v1/chat/completions). The actual Ollama process lives wherever
/etc/nginx/snippets/lisa-ollama-upstream.conf points; swap in a different GPU
machine or an external URL there without changing the paths listed here.
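Since the prefix is OpenAI-compatible, any OpenAI-style client works by pointing its base URL at /llm/ on this host; a direct call looks roughly like this (the model tag is an assumption, use whatever the Ollama instance actually serves):

```ts
// OpenAI-compatible chat call routed through the public edge; nginx forwards
// /llm/ to whichever upstream lisa-ollama-upstream.conf currently points at.
async function chat(prompt: string): Promise<string | undefined> {
  const res = await fetch("https://lisa.abhay.ai/llm/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3",                              // assumed model tag
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices?.[0]?.message?.content;       // standard OpenAI response shape
}
```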
Assistant Console (static shell)
— HTML/JS only. A policy-aligned setup keeps GPU work on the LAN, reached
only through the outbound gpu_bridge worker; the browser should talk to this
host (or to VPN-private APIs), not to Pipecat or to tunnels on your home machine.
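In practice that means the console only uses relative, same-origin URLs, so no LAN or tunnel hostname ever appears in the shell's JS; a sketch with a hypothetical submit route:

```ts
// Static-shell API helper: same-origin only, so the browser never reaches
// toward Pipecat or a home-machine tunnel. The route below is illustrative.
async function submitRequest(payload: unknown): Promise<unknown> {
  const res = await fetch("/gpu-bridge/v1/requests", {   // hypothetical route
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`request failed: ${res.status}`);
  return res.json();
}
```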
End-to-end voice/WebSocket through the public tier is not wired yet (ADR-001: outbound GPU worker + public orchestration).