feat: virtual-LLM smoke test + docs (v1 Stage 6)
Some checks failed
CI/CD / typecheck (pull_request) Successful in 53s
CI/CD / test (pull_request) Successful in 1m8s
CI/CD / lint (pull_request) Successful in 2m6s
CI/CD / smoke (pull_request) Failing after 1m39s
CI/CD / build (pull_request) Successful in 2m11s
CI/CD / publish (pull_request) Has been skipped
Final stage of v1.

Smoke (mcplocal/tests/smoke/virtual-llm.smoke.test.ts):
- Spins up an in-process LlmProvider that returns canned content.
- Runs the registrar against the live mcpd in fulldeploy.
- Asserts: the row appears with kind=virtual / status=active, inference through
  /api/v1/llms/<name>/infer comes back through the SSE relay with the provider's
  content + finish_reason, and a 503 appears immediately after registrar.stop()
  (publisher offline).
- Timeout and cleanup paths are idempotent, so re-runs against the same cluster
  don't litter rows.

The 90-s heartbeat-stale flip and 4-h GC are unit-tested — too slow for smoke.

Docs:
- New docs/virtual-llms.md: when to use this vs creating a regular Llm row, how
  to opt in via publish: true, the lifecycle table, the inference-relay sequence,
  the v1 streaming caveat, the v2-v5 roadmap, and the full
  /api/v1/llms/_provider-* surface.
- agents.md cross-links virtual-llms.md alongside personalities/chat.
- README's Agents section gains a "Virtual LLMs" subsection.

Workspace suite: 2043/2043 (smoke files run separately).

v1 closes. Stage roadmap (each its own future PR):
v2 wake-on-demand · v3 virtual agents · v4 LB pool · v5 task queue

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@@ -201,4 +201,8 @@ mcpctl chat reviewer
- [personalities.md](./personalities.md) — named overlays of prompts on
  top of an agent. Same agent, different prompt bundles, picked per-turn
  via `--personality <name>` or `agent.defaultPersonality`.
- [virtual-llms.md](./virtual-llms.md) — local LLMs (e.g. `vllm-local`)
  publishing themselves into `mcpctl get llm` so anyone can chat with
  them via `mcpctl chat-llm <name>`. Inference is relayed through the
  publishing mcplocal — mcpd never holds the local URL or key.
- [chat.md](./chat.md) — `mcpctl chat` flow and LiteLLM-style flags.
docs/virtual-llms.md · 171 lines · Normal file
@@ -0,0 +1,171 @@
# Virtual LLMs

A **virtual LLM** is an `Llm` row in mcpd that's *registered by an mcplocal
client* rather than created by hand with `mcpctl create llm`. Inference for
a virtual LLM is relayed back through the publishing mcplocal's SSE control
channel — **mcpd never needs to know the local URL or hold its API key**.

When the publishing mcplocal goes away (or the user shuts down their
laptop) the row decays: `active → inactive` after 90 s without a
heartbeat, then deleted after 4 h of inactivity. A reconnecting mcplocal
adopts the same row using a sticky `providerSessionId` it persisted at
first publish.

## When to use this

- **Local model on a developer laptop** that you want everyone on the
  team to be able to chat with via `mcpctl chat-llm <name>`. The model
  doesn't need to be reachable from mcpd's k8s pods — only the user's
  mcplocal does (which is already the case because mcplocal pulls
  projects from mcpd over HTTPS).
- **Hibernating models** that wake on demand (v2 — see "Roadmap").
- **Pool of identical models** distributed across user laptops, eligible
  for load balancing (v4).

If your model is reachable from mcpd's k8s pods over LAN/VPN, you don't
need a virtual LLM — just `mcpctl create llm <name> --type openai --url …`
and you're done.

## Publishing a local provider

mcplocal's local config (`~/.mcpctl/config.json`) gains a `publish: true`
opt-in per provider:

```json
{
  "llm": {
    "providers": [
      {
        "name": "vllm-local",
        "type": "openai",
        "model": "Qwen/Qwen2.5-7B-Instruct-AWQ",
        "url": "http://127.0.0.1:8000/v1",
        "tier": "fast",
        "publish": true
      }
    ]
  }
}
```

Restart mcplocal:

```fish
systemctl --user restart mcplocal
```

The registrar (sketched below):

1. Reads `~/.mcpctl/credentials` for `mcpdUrl` + bearer token.
2. POSTs to `/api/v1/llms/_provider-register` with the publishable set.
3. Persists the returned `providerSessionId` to
   `~/.mcpctl/provider-session` so the next restart adopts the same
   mcpd row.
4. Opens the SSE channel at `/api/v1/llms/_provider-stream`.
5. Heartbeats every 30 s.
6. Listens for `event: task` frames and runs them against the local
   `LlmProvider`.

If `~/.mcpctl/credentials` doesn't exist (e.g. you haven't run
`mcpctl auth login`), the registrar logs a warning and skips —
publishing is a best-effort feature, not a boot blocker.
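
A minimal TypeScript sketch of steps 1-5 above. The credentials field names
(`mcpdUrl`, `token`), the register payload shape, and the elided
`openProviderStream` helper are assumptions for illustration, not the real
mcplocal internals:

```ts
// Illustrative registrar loop; field names, payload shapes, and helpers are assumptions.
import { readFile, writeFile } from "node:fs/promises";
import { homedir } from "node:os";

async function startRegistrar(publishable: object[]) {
  // Step 1: credentials written by `mcpctl auth login`
  // (a real registrar logs a warning and skips publishing if this file is missing).
  const creds = JSON.parse(
    await readFile(`${homedir()}/.mcpctl/credentials`, "utf8"),
  ); // { mcpdUrl, token } assumed
  const headers = {
    authorization: `Bearer ${creds.token}`,
    "content-type": "application/json",
  };

  // Step 2: register the publishable providers; mcpd returns a sticky session id.
  const res = await fetch(`${creds.mcpdUrl}/api/v1/llms/_provider-register`, {
    method: "POST",
    headers,
    body: JSON.stringify({ llms: publishable }), // payload shape assumed
  });
  const { providerSessionId } = await res.json() as { providerSessionId: string };

  // Step 3: persist the session id so the next restart adopts the same mcpd row.
  await writeFile(`${homedir()}/.mcpctl/provider-session`, providerSessionId);

  // Step 4: open the SSE control channel; task frames are handled there (helper elided).
  // openProviderStream(`${creds.mcpdUrl}/api/v1/llms/_provider-stream`, providerSessionId);

  // Step 5: heartbeat every 30 s so the row stays `active`.
  setInterval(() => {
    void fetch(`${creds.mcpdUrl}/api/v1/llms/_provider-heartbeat`, {
      method: "POST",
      headers,
      body: JSON.stringify({ providerSessionId }),
    }).catch(() => { /* best-effort: the watchdog flips the row if we go quiet */ });
  }, 30_000);
}
```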

## Verifying

```fish
$ mcpctl get llm
NAME            KIND     STATUS  TYPE    MODEL                         TIER  KEY                           ID
qwen3-thinking  public   active  openai  qwen3-thinking                fast  secret://litellm-key/API_KEY  cmofx8y7u…
vllm-local      virtual  active  openai  Qwen/Qwen2.5-7B-Instruct-AWQ  fast  -                             cmoxz12ab…

$ mcpctl chat-llm vllm-local
─────────────────────────────────────────────────────────
  LLM:  vllm-local  openai → Qwen/Qwen2.5-7B-Instruct-AWQ
  Kind: virtual     Status: active
─────────────────────────────────────────────────────────
> hello?
Hi! …
```

You can also chat with public LLMs the same way:

```fish
$ mcpctl chat-llm qwen3-thinking
```

The CLI doesn't care about `kind` — mcpd's `/api/v1/llms/<name>/infer`
route branches on it server-side.
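
As a rough illustration of that branch — the row shape, helper signatures, and
response texts below are invented for the example, not mcpd's actual source:

```ts
// Hypothetical sketch of the server-side branch on `kind`; names are illustrative only.
type LlmRow = {
  name: string;
  kind: "public" | "virtual";
  status: "active" | "inactive" | "hibernating";
};

async function handleInfer(
  llm: LlmRow,
  request: unknown,
  relayToProvider: (llm: LlmRow, req: unknown) => Promise<Response>, // SSE-relay path for virtual rows
  callUpstream: (llm: LlmRow, req: unknown) => Promise<Response>,    // direct call for regular rows
): Promise<Response> {
  if (llm.kind === "virtual") {
    // Virtual rows only serve inference while the publishing mcplocal is connected.
    if (llm.status !== "active") {
      return new Response("publisher offline", { status: 503 });
    }
    return relayToProvider(llm, request);
  }
  // Regular rows: mcpd talks to the configured URL itself.
  return callUpstream(llm, request);
}
```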

## Lifecycle in detail

| State          | What it means                                                                    |
|----------------|----------------------------------------------------------------------------------|
| `active`       | Heartbeat received within the last 90 s and the SSE channel is open.             |
| `inactive`     | Either the SSE closed or the heartbeat watchdog tripped. Inference returns 503.  |
| `hibernating`  | Reserved for v2 (wake-on-demand). v1 never writes this state.                    |

Two timers on mcpd run the GC sweep:

- **90 s** without a heartbeat → flip `active` → `inactive`.
- **4 h** in `inactive` → delete the row entirely.

A reconnecting mcplocal with the same `providerSessionId` revives every
inactive row it owns; it only orphans rows that fell past the 4-h cutoff.
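
A minimal sketch of that sweep, assuming the row fields that the list endpoint
exposes (`status`, `lastHeartbeatAt`, `inactiveSince`); the row type and the
`deleteRow` helper are illustrative, not mcpd's actual job:

```ts
// Illustrative GC sweep over virtual LLM rows; not the real mcpd implementation.
const HEARTBEAT_STALE_MS = 90_000;           // 90 s without a heartbeat → inactive
const INACTIVE_DELETE_MS = 4 * 60 * 60_000;  // 4 h inactive → delete the row

interface VirtualLlmRow {
  id: string;
  status: "active" | "inactive" | "hibernating";
  lastHeartbeatAt: Date | null;
  inactiveSince: Date | null;
}

async function gcSweep(rows: VirtualLlmRow[], now = new Date()) {
  for (const row of rows) {
    // Flip rows whose heartbeat went stale.
    if (row.status === "active" &&
        row.lastHeartbeatAt &&
        now.getTime() - row.lastHeartbeatAt.getTime() > HEARTBEAT_STALE_MS) {
      row.status = "inactive";
      row.inactiveSince = now;
    }
    // Delete rows that have sat inactive past the cutoff.
    if (row.status === "inactive" &&
        row.inactiveSince &&
        now.getTime() - row.inactiveSince.getTime() > INACTIVE_DELETE_MS) {
      await deleteRow(row.id);
    }
  }
}

async function deleteRow(id: string) { /* DB delete, elided */ }
```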

## Inference relay

When mcpd receives `POST /api/v1/llms/<virtual>/infer`:

1. Look up the row, see `kind=virtual` + `status=active`.
2. Find the open SSE session for that `providerSessionId`. Missing
   session → 503.
3. Push a `{ kind: "infer", taskId, llmName, request, streaming }`
   task frame onto the SSE.
4. mcplocal pulls, calls `LlmProvider.complete(...)`, and POSTs the
   result back to `/api/v1/llms/_provider-task/<taskId>/result`:
   - non-streaming: `{ status: 200, body: <chat.completion> }`
   - streaming: per-chunk `{ chunk: { data, done? } }`
   - failure: `{ error: "..." }`
5. mcpd forwards the result/chunks out to the original caller (the
   mcplocal side of steps 4-5 is sketched below).

**v1 caveat — streaming granularity**: `LlmProvider.complete()` returns
a finalized `CompletionResult`, not a token stream. Streaming requests
therefore arrive at the caller as a single delta + `[DONE]`. Real
per-token streaming is a v2 concern.
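
To make steps 4-5 and the caveat concrete, a hedged sketch of the mcplocal side.
The task frame and result bodies mirror this doc; the `LocalProvider` interface,
its result shape, and the handler signature are assumptions for illustration:

```ts
// Illustrative handler for an `event: task` frame on the provider SSE channel.
interface InferTask {
  kind: "infer";
  taskId: string;
  llmName: string;
  request: unknown;
  streaming: boolean;
}

interface LocalProvider {
  // Hypothetical shape: a finalized completion plus a pre-rendered SSE delta.
  complete(request: unknown): Promise<{ body: unknown; sseDelta: string }>;
}

async function handleTask(
  task: InferTask,
  provider: LocalProvider,
  mcpdUrl: string,
  headers: Record<string, string>,
) {
  const resultUrl = `${mcpdUrl}/api/v1/llms/_provider-task/${task.taskId}/result`;
  const post = (body: unknown) =>
    fetch(resultUrl, { method: "POST", headers, body: JSON.stringify(body) });

  try {
    // v1: complete() returns a finalized result, never a token stream.
    const completion = await provider.complete(task.request);

    if (task.streaming) {
      // So a "streaming" request degenerates to one delta followed by the terminator.
      await post({ chunk: { data: completion.sseDelta } });
      await post({ chunk: { data: "[DONE]", done: true } });
    } else {
      await post({ status: 200, body: completion.body });
    }
  } catch (err) {
    await post({ error: String(err) });
  }
}
```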

## Roadmap (later stages)

- **v2 — Wake-on-demand**: Secret-stored "wake recipe" so mcpd can ask
  mcplocal to start a hibernating backend before sending inference.
- **v3 — Virtual agents**: mcplocal publishes its local agent configs
  (model + system prompt + sampling defaults) into mcpd's `Agent` table.
- **v4 — LB pool by model**: agents can target a model name instead of
  a specific Llm; mcpd picks the healthiest pool member per request.
- **v5 — Task queue**: persisted requests for hibernating/saturated
  pools. Workers pull tasks for their model when they come online.

## API surface (v1)

```
POST   /api/v1/llms/_provider-register        → returns { providerSessionId, llms[] }
GET    /api/v1/llms/_provider-stream          → SSE channel; requires the x-mcpctl-provider-session header
POST   /api/v1/llms/_provider-heartbeat       → { providerSessionId }
POST   /api/v1/llms/_provider-task/:id/result → one of:
                                                  { error: "msg" }
                                                  { chunk: { data, done? } }
                                                  { status, body }

GET    /api/v1/llms                           → list (now includes kind, status, lastHeartbeatAt, inactiveSince)
POST   /api/v1/llms/<virtual>/infer           → routes through the SSE relay
DELETE /api/v1/llms/<virtual>                 → deletes unconditionally (also does the GC's job)
```
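
For reference, the three result bodies map onto a small discriminated union; the
TypeScript below is just a transcription of the list above, with illustrative
type names rather than the actual source types:

```ts
// Accepted bodies for POST /api/v1/llms/_provider-task/:id/result (names illustrative).
type ProviderTaskResult =
  | { error: string }                            // task failed on the provider side
  | { chunk: { data: string; done?: boolean } }  // one streaming chunk; `done` marks the last
  | { status: number; body: unknown };           // finalized non-streaming response
```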

RBAC piggybacks on `view/edit/create:llms` — no new resource. Publishing
a virtual LLM is morally a `create:llms` operation.

## See also

- [agents.md](./agents.md) — what an Agent is and how it pins to an LLM.
- [chat.md](./chat.md) — `mcpctl chat <agent>` (full agent flow).
- The CLI: `mcpctl chat-llm <name>` (this doc) is the stateless
  counterpart for raw LLM chat.