- Add warmup() to LlmProvider interface for eager subprocess startup
- ManagedVllmProvider.warmup() starts vLLM in background on project load
- ProviderRegistry.warmupAll() triggers warmup() on all managed
providers (see the provider sketch below)
- NamedProvider proxies warmup() to inner provider
- paginate stage generates LLM-powered descriptive page titles when a
provider is available, caches them by content hash, and falls back to
a generic "Page N" otherwise (see the title sketch below)
- project-mcp-endpoint calls warmupAll() on router creation so vLLM
is loading while the session initializes
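
A minimal sketch of how the warmup pieces could fit together. Only
LlmProvider, warmup(), ManagedVllmProvider, NamedProvider,
ProviderRegistry, and warmupAll() are from this change; the spawn
command, fields, and complete() signature are illustrative assumptions.

```ts
import { spawn, type ChildProcess } from "node:child_process";

interface LlmProvider {
  complete(prompt: string): Promise<string>;
  /** Eagerly start any backing subprocess; a no-op where nothing is managed. */
  warmup(): Promise<void>;
}

class ManagedVllmProvider implements LlmProvider {
  private proc?: ChildProcess;

  async warmup(): Promise<void> {
    if (this.proc) return;                      // already started
    // Assumed launch command; real flags come from provider config.
    this.proc = spawn("vllm", ["serve", "some-model"], { stdio: "ignore" });
  }

  async complete(prompt: string): Promise<string> {
    await this.warmup();                        // lazy path still works
    return "";                                  // ...call the vLLM endpoint...
  }
}

/** Exposes an inner provider under a name; warmup() is proxied through. */
class NamedProvider implements LlmProvider {
  constructor(readonly name: string, private inner: LlmProvider) {}
  complete(prompt: string) { return this.inner.complete(prompt); }
  warmup() { return this.inner.warmup(); }
}

class ProviderRegistry {
  private providers: LlmProvider[] = [];
  register(p: LlmProvider) { this.providers.push(p); }

  /** Fire-and-forget warmup; failures must not block project load. */
  warmupAll(): void {
    for (const p of this.providers) {
      p.warmup().catch(() => { /* log and continue */ });
    }
  }
}
```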
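And a sketch of the title path, assuming a hypothetical pageTitle()
helper: the content-hash cache and the "Page N" fallback are from the
change; the prompt, cache shape, and signature are assumptions.

```ts
import { createHash } from "node:crypto";

const titleCache = new Map<string, string>();   // keyed by content hash

async function pageTitle(
  pageContent: string,
  pageNumber: number,
  // Structural stand-in for LlmProvider; undefined when no provider exists.
  llm?: { complete(prompt: string): Promise<string> },
): Promise<string> {
  const key = createHash("sha256").update(pageContent).digest("hex");
  const cached = titleCache.get(key);
  if (cached) return cached;

  if (!llm) return `Page ${pageNumber}`;        // no provider: generic title

  try {
    const title = (await llm.complete(
      `Give a short, descriptive title for this page:\n\n${pageContent}`,
    )).trim();
    titleCache.set(key, title);
    return title;
  } catch {
    return `Page ${pageNumber}`;                // LLM failure: generic title
  }
}
```
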
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Two RBAC bugs fixed:
- GET /api/v1/servers/:cuid now resolves CUID→name before the RBAC
check, so name-scoped bindings match correctly (see the route sketch
below)
- List endpoints now filter responses via a preSerialization hook
using getAllowedScope(), so name-scoped users only see their own
resources (see the hook sketch below)
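
A sketch of the first fix, assuming a Fastify route handler and
hypothetical findServerByCuid/rbac helpers; only the route path and the
resolve-CUID-before-RBAC ordering are from the change.

```ts
import Fastify from "fastify";

type Server = { cuid: string; name: string };
declare function findServerByCuid(cuid: string): Promise<Server | undefined>;
declare const rbac: {
  allows(req: unknown, action: string, resourceName: string): boolean;
};

const app = Fastify();

app.get("/api/v1/servers/:cuid", async (req, reply) => {
  const { cuid } = req.params as { cuid: string };

  // Resolve CUID → name first, so the RBAC check sees the name that
  // name-scoped bindings were written against.
  const server = await findServerByCuid(cuid);
  if (!server) return reply.code(404).send({ error: "not found" });

  if (!rbac.allows(req, "read", server.name)) {
    return reply.code(403).send({ error: "forbidden" });
  }
  return server;
});
```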
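And a sketch of the list filtering, again on Fastify; preSerialization
and getAllowedScope() are from the change, while the scope shape
("all" vs. a set of names) is an assumption.

```ts
import Fastify from "fastify";

// Assumed return shape: "all" for unscoped users, otherwise the set of
// resource names the caller is bound to.
declare function getAllowedScope(req: unknown): "all" | Set<string>;

const app = Fastify();

app.addHook("preSerialization", async (req, _reply, payload) => {
  const scope = getAllowedScope(req);
  if (scope === "all" || !Array.isArray(payload)) return payload;
  // Drop anything outside the caller's scope before it is serialized.
  return payload.filter((item: { name: string }) => scope.has(item.name));
});
```
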
Also adds a fulldeploy.sh orchestrator script.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>