feat: virtual LLMs v1 (registration skeleton) #63
Summary
v1 of the virtual-LLM feature. A user's local provider (e.g. `vllm-local`) can publish itself into mcpd's `Llm` registry as a `kind=virtual` row. Inference is relayed through the publishing mcplocal's SSE control channel — mcpd never holds the local URL or API key. When the publisher disappears, the row goes `inactive` after 90 s; after 4 h of inactivity it's auto-deleted.
This is the registration skeleton. Wake-on-demand (v2), virtual agents (v3), LB pool by model (v4), and task queue (v5) come as their own PRs — see `docs/virtual-llms.md` for the staged roadmap.
Stages
How to use it (after merge + deploy)
```fish
# In ~/.mcpctl/config.json, opt the provider in:
#   { "name": "vllm-local", "type": "openai", "model": "...", "publish": true }
systemctl --user restart mcplocal
mcpctl get llm
# NAME            KIND     STATUS  TYPE    MODEL                         TIER  ID
# qwen3-thinking  public   active  openai  qwen3-thinking                fast  ...
# vllm-local      virtual  active  openai  Qwen/Qwen2.5-7B-Instruct-AWQ  fast  ...
mcpctl chat-llm vllm-local
```
Test plan
🤖 Generated with Claude Code
End-to-end backend wiring. After this stage, an mcplocal client can register a provider, hold the SSE channel open, heartbeat, and have its inference requests fanned through the relay — all without touching the agent layer or the public-LLM path.

Routes (new file: `routes/virtual-llms.ts`):
- `POST /api/v1/llms/_provider-register` → returns `{ providerSessionId, llms[] }`
- `GET /api/v1/llms/_provider-stream` → SSE channel keyed by the `x-mcpctl-provider-session` header. Emits `event: hello` on open, `event: task` on inference fan-out, and `: ping` every 20 s for proxies.
- `POST /api/v1/llms/_provider-heartbeat` → bumps `lastHeartbeatAt`
- `POST /api/v1/llms/_provider-task/:id/result` → mcplocal pushes the result back; the body is one of `{ error: 'msg' }`, `{ chunk: { data, done? } }`, or `{ status, body }`

LlmService:
- `LlmView` gains `kind`/`status`/`lastHeartbeatAt`/`inactiveSince` so route handlers and the upcoming `mcpctl get llm` columns can branch on kind without re-fetching the row.

`llm-infer.ts`:
- Detects `llm.kind === 'virtual'` and delegates to `VirtualLlmService.enqueueInferTask`. Streaming and non-streaming are both supported; on 503 (publisher offline) the existing audit hook still fires with the right status code.
- Adds optional `virtualLlms: VirtualLlmService` to `LlmInferDeps`; absence in test fixtures returns a 500 with a clear "server misconfiguration" message rather than silently falling through to the public path against an empty URL.

`main.ts`:
- Constructs `VirtualLlmService(llmRepo)`.
- Passes it to `registerLlmInferRoutes`.
- Calls `registerVirtualLlmRoutes(app, virtualLlmService)`.
- Starts a 60-s GC ticker after `app.listen`; clears it on graceful shutdown alongside the existing reconcile timer.

Tests: 11 new virtual-LLM route assertions (validation paths, service plumbing for register/heartbeat/task-result) plus 3 new infer-route assertions (kind=virtual non-streaming relay, the 503 path, and the 500 when the `virtualLlms` dep is missing). mcpd suite: 833/833 (was 819, +14). Typecheck clean.
The full SSE handshake is exercised by the smoke test in Stage 6; under `app.inject` the keep-alive blocks until close, so unit-level SSE testing isn't worth the complexity here.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

The mcplocal counterpart to mcpd's `VirtualLlmService`. After this stage, flipping `publish: true` on a provider in `~/.mcpctl/config.json` makes the provider show up in `mcpctl get llm` with kind=virtual the next time mcplocal restarts; running an inference against it relays through this client back to the local `LlmProvider`.

Config:
- `LlmProviderFileEntry` gains an optional `publish: boolean` (default false, so existing setups don't change).

Registrar (new file: `providers/registrar.ts`):
- `start()`: if any provider is opted in, POSTs to `/api/v1/llms/_provider-register` with the publishable set, persists the returned `providerSessionId` to `~/.mcpctl/provider-session` for sticky reconnects, then opens the SSE control channel and starts a 30-s heartbeat ticker.
- The SSE listener parses event/data lines from `text/event-stream` frames. `task` frames trigger `handleInferTask`: convert the OpenAI body to `CompletionOptions`, call `provider.complete()`, and POST the result back as either `{ status, body }` (non-streaming) or two chunk POSTs (streaming: one delta plus a `[DONE]` marker).
- Disconnect → exponential-backoff reconnect from 5 s up to 60 s. On successful reconnect, the persisted sessionId revives the same Llm rows in mcpd (mcpd flips them back to active on heartbeat).
- `stop()` destroys the SSE socket and clears the timer; it is handed off cleanly from main.ts's existing shutdown handler.

Wired into mcplocal's main.ts via `maybeStartVirtualLlmRegistrar`:
- Filters opted-in providers and looks up their `LlmProvider` instances in the registry.
- Reads `~/.mcpctl/credentials` for `mcpdUrl` + bearer; absence is a best-effort skip (logs a warning, returns null) — never a boot blocker.
The v1 caveat is documented in the file header: `LlmProvider` returns a finalized `CompletionResult`, not a token stream, so streaming requests get a single delta chunk plus `[DONE]`. Real per-token streaming is a v2 concern.

Tests: 5 new in `tests/registrar.test.ts` using a tiny in-process HTTP server. Covered: no-op when nothing is opted in, register POST plus sticky sessionId persistence, sticky reconnect from disk, the heartbeat ticker firing at the configured interval, and a register HTTP error surfacing. Workspace suite: 2043/2043 across 152 files (was 2006/149, +5 new tests + the new file gets discovered).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>