Some checks failed
CI/CD / lint (pull_request) Successful in 55s
CI/CD / test (pull_request) Successful in 1m8s
CI/CD / typecheck (pull_request) Successful in 2m43s
CI/CD / smoke (pull_request) Failing after 1m44s
CI/CD / build (pull_request) Successful in 5m28s
CI/CD / publish (pull_request) Has been skipped
Closes v2 (wake-on-demand). Same shape as v1's stage 6: smoke
exercises the live-cluster path, docs lose the "v2 reserved" caveat
and gain a full wake-recipe section.
Smoke (virtual-llm.smoke.test.ts):
- New "wake-on-demand" describe block runs alongside the v1 tests.
- Spins up a tiny in-process HTTP "wake controller"; the published
  provider's isAvailable() returns false until the wake POST flips
  the bool. Asserts:
1. Provider publishes as kind=virtual / status=hibernating.
2. First inference triggers the wake recipe, the recipe POSTs
to the controller, the provider becomes available, mcpd
relays the inference, and the row settles to active.
- Cleans up the row + wake server in afterAll.
Docs (docs/virtual-llms.md):
- Lifecycle table updates the `hibernating` description from
"reserved for v2" to the actual v2 semantics.
- New "Wake-on-demand (v2)" section: configuration shapes for both
recipe types (http + command), the wake-then-infer flow diagram,
concurrent-infer dedup, failure semantics.
- Roadmap drops v2; v3-v5 still listed.
Workspace: 2050/2050 (smoke runs separately; the new SSE-based wake
test runs only against a live cluster, not under `pnpm test:run`).
v2 closes. v3 = virtual agents, v4 = LB pool by model, v5 = queue.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
237 lines
8.9 KiB
Markdown
# Virtual LLMs

A **virtual LLM** is an `Llm` row in mcpd that's *registered by an mcplocal
client* rather than created by hand with `mcpctl create llm`. Inference for
a virtual LLM is relayed back through the publishing mcplocal's SSE control
channel — **mcpd never needs to know the local URL or hold its API key**.

When the publishing mcplocal goes away (or the user shuts down their
laptop) the row decays: `active → inactive` after 90 s without a
heartbeat, then deleted after 4 h of inactivity. A reconnecting mcplocal
adopts the same row using a sticky `providerSessionId` it persisted at
first publish.
## When to use this

- **Local model on a developer laptop** that you want everyone on the
  team to be able to chat with via `mcpctl chat-llm <name>`. The model
  doesn't need to be reachable from mcpd's k8s pods — only the user's
  mcplocal does (which is already the case because mcplocal pulls
  projects from mcpd over HTTPS).
- **Hibernating models** that wake on demand (v2 — see "Wake-on-demand (v2)" below).
- **Pool of identical models** distributed across user laptops, eligible
  for load balancing (v4).

If your model is reachable from mcpd's k8s pods over LAN/VPN, you don't
need a virtual LLM — just `mcpctl create llm <name> --type openai --url …`
and you're done.
## Publishing a local provider

mcplocal's local config (`~/.mcpctl/config.json`) gains a `publish: true`
opt-in per provider:

```json
{
  "llm": {
    "providers": [
      {
        "name": "vllm-local",
        "type": "openai",
        "model": "Qwen/Qwen2.5-7B-Instruct-AWQ",
        "url": "http://127.0.0.1:8000/v1",
        "tier": "fast",
        "publish": true
      }
    ]
  }
}
```

Restart mcplocal:

```fish
systemctl --user restart mcplocal
```
The registrar:

1. Reads `~/.mcpctl/credentials` for `mcpdUrl` + bearer token.
2. POSTs to `/api/v1/llms/_provider-register` with the publishable set.
3. Persists the returned `providerSessionId` to
   `~/.mcpctl/provider-session` so the next restart adopts the same
   mcpd row.
4. Opens the SSE channel at `/api/v1/llms/_provider-stream`.
5. Heartbeats every 30 s.
6. Listens for `event: task` frames and runs them against the local
   `LlmProvider`.

If `~/.mcpctl/credentials` doesn't exist (e.g. you haven't run
`mcpctl auth login`), the registrar logs a warning and skips —
publishing is a best-effort feature, not a boot blocker.
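For orientation, here is a rough TypeScript sketch of that startup flow. The endpoint paths, the heartbeat body, and the 30 s cadence come from this doc's API surface; the helper names (`loadCredentials`, `readSessionFile`, `writeSessionFile`, `openTaskStream`) and the register request body shape are illustrative assumptions, not the real mcplocal code.

```ts
type Creds = { mcpdUrl: string; token: string };
type ProviderConfig = { name: string; type: string; model: string; url: string; tier?: string; publish: true };

// Hypothetical helpers: names and signatures are assumptions.
declare function loadCredentials(): Promise<Creds | null>;        // ~/.mcpctl/credentials
declare function readSessionFile(): Promise<string | undefined>;  // ~/.mcpctl/provider-session
declare function writeSessionFile(id: string): Promise<void>;
declare function openTaskStream(creds: Creds, sessionId: string): void; // SSE + `event: task` handling

export async function startRegistrar(publishable: ProviderConfig[]): Promise<void> {
  const creds = await loadCredentials();
  if (!creds) {
    // Best-effort: no credentials means no publishing, never a boot blocker.
    console.warn("no mcpd credentials; skipping provider publish");
    return;
  }

  // 1-2. Register (or re-adopt) the publishable providers. Body shape assumed.
  const res = await fetch(`${creds.mcpdUrl}/api/v1/llms/_provider-register`, {
    method: "POST",
    headers: { authorization: `Bearer ${creds.token}`, "content-type": "application/json" },
    body: JSON.stringify({ providerSessionId: await readSessionFile(), llms: publishable }),
  });
  const { providerSessionId } = (await res.json()) as { providerSessionId: string };

  // 3. Persist the sticky session id so the next restart adopts the same row.
  await writeSessionFile(providerSessionId);

  // 4 + 6. Open the SSE channel and run incoming tasks against the local LlmProvider.
  openTaskStream(creds, providerSessionId);

  // 5. Heartbeat every 30 s; missed beats just let the row decay on mcpd's side.
  setInterval(() => {
    void fetch(`${creds.mcpdUrl}/api/v1/llms/_provider-heartbeat`, {
      method: "POST",
      headers: { authorization: `Bearer ${creds.token}`, "content-type": "application/json" },
      body: JSON.stringify({ providerSessionId }),
    }).catch(() => {});
  }, 30_000);
}
```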
## Verifying

```fish
$ mcpctl get llm
NAME             KIND      STATUS   TYPE     MODEL                          TIER   KEY                            ID
qwen3-thinking   public    active   openai   qwen3-thinking                 fast   secret://litellm-key/API_KEY   cmofx8y7u…
vllm-local       virtual   active   openai   Qwen/Qwen2.5-7B-Instruct-AWQ   fast   -                              cmoxz12ab…

$ mcpctl chat-llm vllm-local
─────────────────────────────────────────────────────────
 LLM:  vllm-local   openai → Qwen/Qwen2.5-7B-Instruct-AWQ
 Kind: virtual      Status: active
─────────────────────────────────────────────────────────
> hello?
Hi! …
```

You can also chat with public LLMs the same way:

```fish
$ mcpctl chat-llm qwen3-thinking
```

The CLI doesn't care about `kind` — mcpd's `/api/v1/llms/<name>/infer`
route branches on it server-side.
## Lifecycle in detail

| State          | What it means                                                                                              |
|----------------|------------------------------------------------------------------------------------------------------------|
| `active`       | Heartbeat received within the last 90 s and the SSE channel is open.                                        |
| `inactive`     | Either the SSE closed or the heartbeat watchdog tripped. Inference returns 503.                             |
| `hibernating`  | Publisher is online but the backend is asleep; the next inference triggers a `wake` task before relaying.   |

Two timers on mcpd run the GC sweep:

- **90 s** without a heartbeat → flip `active` → `inactive`.
- **4 h** in `inactive` → delete the row entirely.

A reconnecting mcplocal with the same `providerSessionId` revives every
inactive row it owns; it only orphans rows that fell past the 4-h cutoff.
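The sweep itself is plain bookkeeping. A minimal sketch, assuming a store interface like the one below: the field names `status`, `lastHeartbeatAt`, and `inactiveSince` come from the list API further down, while the store calls and the cadence at which `gcSweep` is scheduled are assumptions.

```ts
const HEARTBEAT_TIMEOUT_MS = 90_000;              // active → inactive
const INACTIVE_RETENTION_MS = 4 * 60 * 60_000;    // inactive → deleted

type VirtualLlmRow = {
  id: string;
  status: "active" | "inactive" | "hibernating";
  lastHeartbeatAt: Date;
  inactiveSince: Date | null;
};

// Hypothetical persistence layer; not mcpd's real store API.
declare const store: {
  listVirtualLlms(filter: { status: VirtualLlmRow["status"] }): Promise<VirtualLlmRow[]>;
  updateLlm(id: string, patch: Partial<VirtualLlmRow>): Promise<void>;
  deleteLlm(id: string): Promise<void>;
};

export async function gcSweep(now: number = Date.now()): Promise<void> {
  // Timer 1: flip rows whose last heartbeat is older than 90 s.
  for (const llm of await store.listVirtualLlms({ status: "active" })) {
    if (now - llm.lastHeartbeatAt.getTime() > HEARTBEAT_TIMEOUT_MS) {
      await store.updateLlm(llm.id, { status: "inactive", inactiveSince: new Date(now) });
    }
  }

  // Timer 2: delete rows that have been inactive for more than 4 h.
  for (const llm of await store.listVirtualLlms({ status: "inactive" })) {
    if (llm.inactiveSince && now - llm.inactiveSince.getTime() > INACTIVE_RETENTION_MS) {
      await store.deleteLlm(llm.id);
    }
  }
}
```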
## Inference relay

When mcpd receives `POST /api/v1/llms/<virtual>/infer`:

1. Look up the row, see `kind=virtual` + `status=active`.
2. Find the open SSE session for that `providerSessionId`. Missing
   session → 503.
3. Push a `{ kind: "infer", taskId, llmName, request, streaming }`
   task frame onto the SSE.
4. mcplocal pulls, calls `LlmProvider.complete(...)`, and POSTs the
   result back to `/api/v1/llms/_provider-task/<taskId>/result`:
   - non-streaming: `{ status: 200, body: <chat.completion> }`
   - streaming: per-chunk `{ chunk: { data, done? } }`
   - failure: `{ error: "..." }`
5. mcpd forwards the result/chunks out to the original caller.
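On mcpd's side, the relay amounts to pushing a task frame onto the right SSE session and waiting for the provider's POST-back. A minimal sketch of the non-streaming path, assuming in-memory registries; `sseSessions`, `pendingTasks`, and the handler names are invented for illustration, while the frame shape and the result route are the documented ones.

```ts
import { randomUUID } from "node:crypto";

type InferTask = { kind: "infer"; taskId: string; llmName: string; request: unknown; streaming: boolean };
type TaskResult = { status: number; body: unknown } | { error: string };

// Assumed in-memory registries, not the real mcpd internals.
declare const sseSessions: Map<string, { push(frame: InferTask): void }>; // providerSessionId → open SSE
const pendingTasks = new Map<string, (result: TaskResult) => void>();

// Steps 2-3: find the session and push the task frame. Step 5 resolves when
// mcplocal POSTs to /api/v1/llms/_provider-task/<taskId>/result.
export function relayInfer(providerSessionId: string, llmName: string, request: unknown): Promise<unknown> {
  const session = sseSessions.get(providerSessionId);
  if (!session) throw Object.assign(new Error("provider session not connected"), { status: 503 });

  const taskId = randomUUID();
  const result = new Promise((resolve) => pendingTasks.set(taskId, resolve));
  session.push({ kind: "infer", taskId, llmName, request, streaming: false });
  return result; // forwarded to the original caller once the result lands
}

// Handler behind POST /api/v1/llms/_provider-task/:id/result (non-streaming case).
export function onTaskResult(taskId: string, body: TaskResult): void {
  pendingTasks.get(taskId)?.(body);
  pendingTasks.delete(taskId);
}
```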
**v1 caveat — streaming granularity**: `LlmProvider.complete()` returns
a finalized `CompletionResult`, not a token stream. Streaming requests
therefore arrive at the caller as a single delta + `[DONE]`. Real
per-token streaming is deferred to a later stage.
## Wake-on-demand (v2)

A provider whose backend hibernates (a vLLM instance that suspends
when idle, an Ollama daemon that exits when nothing's connected, …)
can declare a **wake recipe** in mcplocal config. When that provider's
`isAvailable()` returns false at registrar startup, the row is
published as `status=hibernating`. The next inference request that
hits the row triggers the recipe and waits for the backend to come up
before relaying.

Two recipe types:

```jsonc
// HTTP — POST to a "wake controller" that starts the backend out of band.
{
  "name": "vllm-local",
  "type": "openai",
  "model": "...",
  "publish": true,
  "wake": {
    "type": "http",
    "url": "http://10.0.0.50:9090/wake/vllm",
    "method": "POST",
    "headers": { "Authorization": "Bearer ..." },
    "maxWaitSeconds": 60
  }
}
```

```jsonc
// command — spawn a local process (systemd, wakeonlan, custom script).
{
  "name": "vllm-local",
  "type": "openai",
  "model": "...",
  "publish": true,
  "wake": {
    "type": "command",
    "command": "/usr/local/bin/start-vllm",
    "args": ["--profile", "qwen3"],
    "maxWaitSeconds": 120
  }
}
```
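If you model this config in code, the two recipe shapes fall out as a discriminated union on `type`. A sketch of what such a type could look like; it simply mirrors the examples above and is not the published schema.

```ts
// Illustrative only: field names copied from the jsonc examples above.
type WakeRecipe =
  | {
      type: "http";
      url: string;
      method?: string;                    // e.g. "POST"
      headers?: Record<string, string>;
      maxWaitSeconds?: number;
    }
  | {
      type: "command";
      command: string;
      args?: string[];
      maxWaitSeconds?: number;
    };

interface PublishedProviderConfig {
  name: string;
  type: string;                           // e.g. "openai"
  model: string;
  url?: string;
  publish: boolean;
  wake?: WakeRecipe;                      // only meaningful when publish is true
}
```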
How a request flows when the row is `hibernating`:

```
client   → mcpd      POST /api/v1/llms/<name>/infer
mcpd:      status === hibernating → push wake task on SSE
mcplocal:  receive wake task → run recipe → poll isAvailable()
           → heartbeat each tick → POST { ok: true } back
mcpd:      flip row → active, push the original infer task
mcplocal:  run inference → POST result back
mcpd     → client    (forwards the inference result)
```
Concurrent infers for the same hibernating Llm share a single wake
task — only the first request triggers the recipe; later ones await
the same in-flight wake promise. After the wake settles, every queued
infer dispatches in order.

If the recipe fails (HTTP non-2xx, command exits non-zero, or the
provider doesn't come up within `maxWaitSeconds`), every queued infer
is rejected with a clear error and the row stays `hibernating` —
the next request gets a fresh wake attempt.
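The dedup described above is a single-flight pattern: one in-flight wake promise per LLM, shared by every waiter, and cleared when it settles so a failure doesn't poison later attempts. A hedged sketch, with `runRecipe` and `waitUntilAvailable` as placeholder helpers; the real code may split this across mcpd and mcplocal differently.

```ts
// Hypothetical helpers: trigger the HTTP/command recipe, then poll readiness.
declare function runRecipe(llmName: string): Promise<void>;
declare function waitUntilAvailable(llmName: string, maxWaitSeconds: number): Promise<void>;

const inflightWakes = new Map<string, Promise<void>>();

export function wakeOnce(llmName: string, maxWaitSeconds: number): Promise<void> {
  // Later callers await the same promise; only the first triggers the recipe.
  let wake = inflightWakes.get(llmName);
  if (!wake) {
    wake = (async () => {
      await runRecipe(llmName);
      await waitUntilAvailable(llmName, maxWaitSeconds);
    })().finally(() => {
      // Success or failure, clear the slot so the next request gets a fresh attempt.
      inflightWakes.delete(llmName);
    });
    inflightWakes.set(llmName, wake);
  }
  return wake;
}

// Usage: every queued infer awaits wakeOnce(name, maxWaitSeconds) before dispatch.
// If the wake rejects, all queued infers reject with that error and the row stays hibernating.
```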
## Roadmap (later stages)

- **v3 — Virtual agents**: mcplocal publishes its local agent configs
  (model + system prompt + sampling defaults) into mcpd's `Agent` table.
- **v4 — LB pool by model**: agents can target a model name instead of
  a specific Llm; mcpd picks the healthiest pool member per request.
- **v5 — Task queue**: persisted requests for hibernating/saturated
  pools. Workers pull tasks for their model when they come online.
## API surface (v1)

```
POST   /api/v1/llms/_provider-register    → returns { providerSessionId, llms[] }
GET    /api/v1/llms/_provider-stream      → SSE channel; requires x-mcpctl-provider-session header
POST   /api/v1/llms/_provider-heartbeat   → { providerSessionId }
POST   /api/v1/llms/_provider-task/:id/result
       → one of:
         { error: "msg" }
         { chunk: { data, done? } }
         { status, body }

GET    /api/v1/llms                       → list (now includes kind, status, lastHeartbeatAt, inactiveSince)
POST   /api/v1/llms/<virtual>/infer       → routes through the SSE relay
DELETE /api/v1/llms/<virtual>             → delete unconditionally (also runs GC's job)
```
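For quick manual poking, the list route is the easiest one to hit by hand. A hedged example with curl, assuming `$MCPD` is your mcpd base URL and `$TOKEN` a bearer token that carries `view:llms`:

```fish
# Assumptions: $MCPD is your mcpd base URL, $TOKEN a bearer token with view:llms.
curl -s -H "Authorization: Bearer $TOKEN" "$MCPD/api/v1/llms" | jq .
```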
RBAC piggybacks on `view/edit/create:llms` — no new resource. Publishing
a virtual LLM is morally a `create:llms` operation.

## See also

- [agents.md](./agents.md) — what an Agent is and how it pins to an LLM.
- [chat.md](./chat.md) — `mcpctl chat <agent>` (full agent flow).
- The CLI: `mcpctl chat-llm <name>` (this doc) is the stateless
  counterpart for raw LLM chat.