Final stage of v1.

Smoke (mcplocal/tests/smoke/virtual-llm.smoke.test.ts):
- Spins an in-process LlmProvider that returns canned content.
- Runs the registrar against the live mcpd in fulldeploy.
- Asserts: the row appears with kind=virtual / status=active, infer through /api/v1/llms/<name>/infer comes back through the SSE relay with the provider's content + finish_reason, and a 503 appears immediately after registrar.stop() (publisher offline).
- Timeout / cleanup paths are idempotent so re-runs against the same cluster don't litter rows.

The 90-s heartbeat-stale flip and 4-h GC are unit-tested — too slow for smoke.

Docs:
- New docs/virtual-llms.md: when to use this vs creating a regular Llm row, how to opt in via publish: true, the lifecycle table, the inference-relay sequence, the v1 streaming caveat, the v2-v5 roadmap, and the full /api/v1/llms/_provider-* surface.
- agents.md cross-links virtual-llms.md alongside personalities/chat.
- README's Agents section gains a "Virtual LLMs" subsection.

Workspace suite: 2043/2043 (smoke files run separately). v1 closes.

Stage roadmap (each its own future PR): v2 wake-on-demand · v3 virtual agents · v4 LB pool · v5 task queue

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
# Virtual LLMs
A virtual LLM is an `Llm` row in mcpd that's registered by an mcplocal
client rather than created by hand with `mcpctl create llm`. Inference for
a virtual LLM is relayed back through the publishing mcplocal's SSE control
channel — mcpd never needs to know the local URL or hold its API key.
When the publishing mcplocal goes away (or the user shuts down their
laptop) the row decays: `active` → `inactive` after 90 s without a
heartbeat, then deleted after 4 h of inactivity. A reconnecting mcplocal
adopts the same row using a sticky `providerSessionId` it persisted at
first publish.
## When to use this
- Local model on a developer laptop that you want everyone on the team to be able to chat with via `mcpctl chat-llm <name>`. The model doesn't need to be reachable from mcpd's k8s pods — only the user's mcplocal does (which is already the case because mcplocal pulls projects from mcpd over HTTPS).
- Hibernating models that wake on demand (v2 — see "Roadmap").
- Pool of identical models distributed across user laptops, eligible for load balancing (v4).
If your model is reachable from mcpd's k8s pods over LAN/VPN, you don't
need a virtual LLM — just `mcpctl create llm <name> --type openai --url …`
and you're done.
## Publishing a local provider
mcplocal's local config (`~/.mcpctl/config.json`) gains a `publish: true`
opt-in per provider:
```json
{
  "llm": {
    "providers": [
      {
        "name": "vllm-local",
        "type": "openai",
        "model": "Qwen/Qwen2.5-7B-Instruct-AWQ",
        "url": "http://127.0.0.1:8000/v1",
        "tier": "fast",
        "publish": true
      }
    ]
  }
}
```
Restart mcplocal:

```
systemctl --user restart mcplocal
```
The registrar:

- Reads `~/.mcpctl/credentials` for the `mcpdUrl` + bearer token.
- POSTs to `/api/v1/llms/_provider-register` with the publishable set.
- Persists the returned `providerSessionId` to `~/.mcpctl/provider-session` so the next restart adopts the same mcpd row.
- Opens the SSE channel at `/api/v1/llms/_provider-stream`.
- Heartbeats every 30 s.
- Listens for `event: task` frames and runs them against the local `LlmProvider`.
If `~/.mcpctl/credentials` doesn't exist (e.g. you haven't run
`mcpctl auth login`), the registrar logs a warning and skips —
publishing is a best-effort feature, not a boot blocker.
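
For orientation, here is a minimal TypeScript sketch of that sequence. The file shapes (an `mcpdUrl` plus bearer `token` in `~/.mcpctl/credentials`) and the helper structure are assumptions for illustration, not mcplocal's actual code:

```ts
import { readFile, writeFile } from "node:fs/promises";
import { homedir } from "node:os";
import { join } from "node:path";

const dir = join(homedir(), ".mcpctl");

async function startRegistrar(): Promise<void> {
  // Best-effort: missing credentials means "skip publishing", not "block boot".
  const creds = await readFile(join(dir, "credentials"), "utf8")
    .then(JSON.parse)
    .catch(() => null);
  if (!creds) {
    console.warn("registrar: ~/.mcpctl/credentials missing, skipping publish");
    return;
  }

  // Collect the providers that opted in with publish: true.
  const config = JSON.parse(await readFile(join(dir, "config.json"), "utf8"));
  const publishable = (config.llm?.providers ?? []).filter((p: any) => p.publish);
  if (publishable.length === 0) return;

  // Register with mcpd and persist the sticky session id for the next restart.
  const res = await fetch(`${creds.mcpdUrl}/api/v1/llms/_provider-register`, {
    method: "POST",
    headers: {
      authorization: `Bearer ${creds.token}`,
      "content-type": "application/json",
    },
    body: JSON.stringify({ providers: publishable }),
  });
  const { providerSessionId } = await res.json();
  await writeFile(join(dir, "provider-session"), providerSessionId, "utf8");

  // Next: open the SSE channel at /api/v1/llms/_provider-stream and
  // heartbeat every 30 s (omitted here).
}
```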
## Verifying
```
$ mcpctl get llm
NAME             KIND      STATUS   TYPE     MODEL                          TIER   KEY                            ID
qwen3-thinking   public    active   openai   qwen3-thinking                 fast   secret://litellm-key/API_KEY   cmofx8y7u…
vllm-local       virtual   active   openai   Qwen/Qwen2.5-7B-Instruct-AWQ   fast   -                              cmoxz12ab…

$ mcpctl chat-llm vllm-local
─────────────────────────────────────────────────────────
 LLM: vllm-local   openai → Qwen/Qwen2.5-7B-Instruct-AWQ
 Kind: virtual     Status: active
─────────────────────────────────────────────────────────
> hello?
Hi! …
```
You can also chat with public LLMs the same way:
```
$ mcpctl chat-llm qwen3-thinking
```
The CLI doesn't care about `kind` — mcpd's `/api/v1/llms/<name>/infer`
route branches on it server-side.
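
A rough sketch of that server-side branch (names like `inferDirectly` and `relayThroughProviderSession` are placeholders, not mcpd's real functions):

```ts
// Placeholder types/functions; only the branching logic mirrors the docs.
type LlmRow = {
  name: string;
  kind: "public" | "virtual";
  status: "active" | "inactive" | "hibernating";
  providerSessionId?: string;
};

async function handleInfer(row: LlmRow, request: unknown): Promise<Response> {
  if (row.kind !== "virtual") {
    // Regular rows: mcpd calls the configured URL with the stored key.
    return inferDirectly(row, request);
  }
  if (row.status !== "active" || !row.providerSessionId) {
    // Publisher offline (or the row has decayed): fail fast.
    return new Response("publisher offline", { status: 503 });
  }
  // Virtual rows: push a task frame onto the publisher's SSE channel and
  // wait for the result to be POSTed back.
  return relayThroughProviderSession(row.providerSessionId, request);
}

declare function inferDirectly(row: LlmRow, request: unknown): Promise<Response>;
declare function relayThroughProviderSession(sessionId: string, request: unknown): Promise<Response>;
```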
## Lifecycle in detail
| State | What it means |
|---|---|
| `active` | Heartbeat received within the last 90 s and the SSE channel is open. |
| `inactive` | Either the SSE closed or the heartbeat watchdog tripped. Inference returns 503. |
| `hibernating` | Reserved for v2 (wake-on-demand). v1 never writes this state. |
Two timers on mcpd run the GC sweep:

- 90 s without a heartbeat → flip `active` → `inactive`.
- 4 h in `inactive` → delete the row entirely.

A reconnecting mcplocal with the same `providerSessionId` revives every
inactive row it owns; it only orphans rows that fell past the 4-h cutoff.
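
A minimal sketch of such a sweep, using the thresholds above; the Prisma-style `db.llm` helpers are an assumption, not mcpd's actual data layer:

```ts
// Illustrative GC sweep only; `db.llm` and its query API are assumed.
const HEARTBEAT_STALE_MS = 90_000;            // active -> inactive after 90 s
const INACTIVE_DELETE_MS = 4 * 60 * 60_000;   // inactive -> deleted after 4 h

declare const db: {
  llm: {
    updateMany(args: object): Promise<unknown>;
    deleteMany(args: object): Promise<unknown>;
  };
};

async function gcSweep(now: number = Date.now()): Promise<void> {
  // 1. Flip virtual rows whose publisher stopped heartbeating.
  await db.llm.updateMany({
    where: {
      kind: "virtual",
      status: "active",
      lastHeartbeatAt: { lt: new Date(now - HEARTBEAT_STALE_MS) },
    },
    data: { status: "inactive", inactiveSince: new Date(now) },
  });

  // 2. Delete virtual rows that sat inactive past the 4 h cutoff.
  await db.llm.deleteMany({
    where: {
      kind: "virtual",
      status: "inactive",
      inactiveSince: { lt: new Date(now - INACTIVE_DELETE_MS) },
    },
  });
}
```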
## Inference relay

When mcpd receives `POST /api/v1/llms/<virtual>/infer`:

- Look up the row, see `kind=virtual` + `status=active`.
- Find the open SSE session for that `providerSessionId`. Missing session → 503.
- Push a `{ kind: "infer", taskId, llmName, request, streaming }` task frame onto the SSE.
- mcplocal pulls, calls `LlmProvider.complete(...)`, and POSTs the result back to `/api/v1/llms/_provider-task/<taskId>/result`:
  - non-streaming: `{ status: 200, body: <chat.completion> }`
  - streaming: per-chunk `{ chunk: { data, done? } }`
  - failure: `{ error: "..." }`
- mcpd forwards the result/chunks out to the original caller.
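
For illustration, here is a sketch of the mcplocal side of that exchange; the frame and result shapes follow the list above, while the handler wiring and the `complete()` argument shape are assumed:

```ts
// Illustrative task handler; the frame/result shapes mirror the relay
// description above, the surrounding wiring is hypothetical.
interface InferTaskFrame {
  kind: "infer";
  taskId: string;
  llmName: string;
  request: unknown;   // OpenAI-style chat.completion request
  streaming: boolean;
}

type TaskResultBody =
  | { status: number; body: unknown }            // non-streaming completion
  | { chunk: { data: unknown; done?: boolean } } // one streamed chunk
  | { error: string };                           // failure

async function onTaskFrame(frame: InferTaskFrame, mcpdUrl: string, token: string) {
  let result: TaskResultBody;
  try {
    // In v1, complete(...) returns a finalized result (argument shape assumed).
    const completion = await llmProvider.complete(frame.llmName, frame.request);
    result = { status: 200, body: completion };
  } catch (err) {
    result = { error: err instanceof Error ? err.message : String(err) };
  }

  await fetch(`${mcpdUrl}/api/v1/llms/_provider-task/${frame.taskId}/result`, {
    method: "POST",
    headers: { authorization: `Bearer ${token}`, "content-type": "application/json" },
    body: JSON.stringify(result),
  });
}

declare const llmProvider: { complete(name: string, request: unknown): Promise<unknown> };
```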
**v1 caveat — streaming granularity:** `LlmProvider.complete()` returns
a finalized `CompletionResult`, not a token stream. Streaming requests
therefore arrive at the caller as a single delta + `[DONE]`. Real
per-token streaming is a v2 concern.
## Roadmap (later stages)
- v2 — Wake-on-demand: Secret-stored "wake recipe" so mcpd can ask mcplocal to start a hibernating backend before sending inference.
- v3 — Virtual agents: mcplocal publishes its local agent configs (model + system prompt + sampling defaults) into mcpd's `Agent` table.
- v4 — LB pool by model: agents can target a model name instead of a specific Llm; mcpd picks the healthiest pool member per request.
- v5 — Task queue: persisted requests for hibernating/saturated pools. Workers pull tasks for their model when they come online.
## API surface (v1)

```
POST   /api/v1/llms/_provider-register         → returns { providerSessionId, llms[] }
GET    /api/v1/llms/_provider-stream           → SSE channel; requires x-mcpctl-provider-session header
POST   /api/v1/llms/_provider-heartbeat        → { providerSessionId }
POST   /api/v1/llms/_provider-task/:id/result  → one of:
                                                   { error: "msg" }
                                                   { chunk: { data, done? } }
                                                   { status, body }
GET    /api/v1/llms                            → list (now includes kind, status, lastHeartbeatAt, inactiveSince)
POST   /api/v1/llms/<virtual>/infer            → routes through the SSE relay
DELETE /api/v1/llms/<virtual>                  → delete unconditionally (also runs GC's job)
```
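
To tie the provider surface together, here is a hedged sketch of the stream and heartbeat calls from the publisher's side; the response handling is illustrative, not mcplocal's actual implementation:

```ts
// Illustrative only: open the SSE channel and keep the row alive with
// the 30 s heartbeat described above.
async function openProviderChannel(mcpdUrl: string, token: string, providerSessionId: string) {
  // SSE channel: mcpd pushes `event: task` frames over this response body.
  const stream = await fetch(`${mcpdUrl}/api/v1/llms/_provider-stream`, {
    headers: {
      authorization: `Bearer ${token}`,
      "x-mcpctl-provider-session": providerSessionId,
      accept: "text/event-stream",
    },
  });

  // Heartbeat every 30 s so the row stays `active` (90 s watchdog on mcpd).
  const heartbeat = setInterval(() => {
    void fetch(`${mcpdUrl}/api/v1/llms/_provider-heartbeat`, {
      method: "POST",
      headers: { authorization: `Bearer ${token}`, "content-type": "application/json" },
      body: JSON.stringify({ providerSessionId }),
    });
  }, 30_000);

  return { stream, stop: () => clearInterval(heartbeat) };
}
```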
RBAC piggybacks on `view`/`edit`/`create:llms` — no new resource. Publishing
a virtual LLM is morally a `create:llms` operation.