feat: virtual-LLM v2 smoke + docs (v2 Stage 3)
Some checks failed
CI/CD / lint (pull_request) Successful in 55s
CI/CD / test (pull_request) Successful in 1m8s
CI/CD / typecheck (pull_request) Successful in 2m43s
CI/CD / smoke (pull_request) Failing after 1m44s
CI/CD / build (pull_request) Successful in 5m28s
CI/CD / publish (pull_request) Has been skipped
Closes v2 (wake-on-demand). Same shape as v1's stage 6: smoke
exercises the live-cluster path, docs lose the "v2 reserved" caveat
and gain a full wake-recipe section.
Smoke (virtual-llm.smoke.test.ts):
- New "wake-on-demand" describe block runs alongside the v1 tests.
- Spins up a tiny in-process HTTP "wake controller" (sketched below); the
  published provider's isAvailable() returns false until the wake POST
  flips the flag. Asserts:
1. Provider publishes as kind=virtual / status=hibernating.
2. First inference triggers the wake recipe, the recipe POSTs
to the controller, the provider becomes available, mcpd
relays the inference, and the row settles to active.
- Cleans up the row + wake server in afterAll.
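For orientation, a minimal sketch of what such an in-process wake controller can look like (illustrative only, not the test's actual code; the `/wake/vllm` route, the port handling, and the `awake` flag are assumptions):

```ts
import { createServer } from "node:http";

// One bit of shared state: the wake POST flips it, the published provider's
// isAvailable() reads it.
let awake = false;

const wakeController = createServer((req, res) => {
  if (req.method === "POST" && req.url === "/wake/vllm") {
    awake = true; // the wake recipe's POST lands here
    res.writeHead(200, { "content-type": "application/json" });
    res.end(JSON.stringify({ ok: true }));
    return;
  }
  res.writeHead(404);
  res.end();
});

wakeController.listen(0); // ephemeral port, wired into the recipe's URL

// The published provider's availability check simply mirrors the flag.
const isAvailable = async (): Promise<boolean> => awake;
```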
Docs (docs/virtual-llms.md):
- Lifecycle table updates the `hibernating` description from
"reserved for v2" to the actual v2 semantics.
- New "Wake-on-demand (v2)" section: configuration shapes for both
recipe types (http + command), the wake-then-infer flow diagram,
concurrent-infer dedup, failure semantics.
- Roadmap drops v2; v3-v5 still listed.
Workspace: 2050/2050 (smoke runs separately; the new SSE-based wake
test runs only against a live cluster, not under `pnpm test:run`).
v2 closes. v3 = virtual agents, v4 = LB pool by model, v5 = queue.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@@ -97,11 +97,11 @@ route branches on it server-side.

## Lifecycle in detail

| State | What it means |
|----------------|-----------------------------------------------------------------------|
| `active` | Heartbeat received within the last 90 s and the SSE channel is open. |
| State | What it means |
|----------------|---------------------------------------------------------------------------------|
| `active` | Heartbeat received within the last 90 s and the SSE channel is open. |
| `inactive` | Either the SSE closed or the heartbeat watchdog tripped. Inference returns 503. |
| `hibernating` | Reserved for v2 (wake-on-demand). v1 never writes this state. |
| `hibernating` | Publisher is online but the backend is asleep; the next inference triggers a `wake` task before relaying. |

Two timers on mcpd run the GC sweep:
@@ -132,10 +132,75 @@ a finalized `CompletionResult`, not a token stream. Streaming requests
therefore arrive at the caller as a single delta + `[DONE]`. Real
per-token streaming is a v2 concern.

## Wake-on-demand (v2)

A provider whose backend hibernates (a vLLM instance that suspends
when idle, an Ollama daemon that exits when nothing's connected, …)
can declare a **wake recipe** in mcplocal config. When that provider's
`isAvailable()` returns false at registrar startup, the row is
published as `status=hibernating`. The next inference request that
hits the row triggers the recipe and waits for the backend to come up
before relaying.

Two recipe types:

```jsonc
// HTTP — POST to a "wake controller" that starts the backend out of band.
{
  "name": "vllm-local",
  "type": "openai",
  "model": "...",
  "publish": true,
  "wake": {
    "type": "http",
    "url": "http://10.0.0.50:9090/wake/vllm",
    "method": "POST",
    "headers": { "Authorization": "Bearer ..." },
    "maxWaitSeconds": 60
  }
}
```

```jsonc
// command — spawn a local process (systemd, wakeonlan, custom script).
{
  "name": "vllm-local",
  "type": "openai",
  "model": "...",
  "publish": true,
  "wake": {
    "type": "command",
    "command": "/usr/local/bin/start-vllm",
    "args": ["--profile", "qwen3"],
    "maxWaitSeconds": 120
  }
}
```
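The configs above describe the recipe shapes but not the executor; a rough sketch of how either type could be driven is below. The `runWakeRecipe` helper, its signature, and the one-second polling interval are assumptions for illustration, not mcplocal's actual implementation; only the config field names come from the examples above.

```ts
import { spawn } from "node:child_process";

// Assumed recipe shape, mirroring the "wake" config fields above.
type WakeRecipe =
  | { type: "http"; url: string; method?: string; headers?: Record<string, string>; maxWaitSeconds: number }
  | { type: "command"; command: string; args?: string[]; maxWaitSeconds: number };

// Hypothetical helper: fire the trigger, then poll isAvailable() until the
// backend answers or maxWaitSeconds elapses.
async function runWakeRecipe(
  recipe: WakeRecipe,
  isAvailable: () => Promise<boolean>,
): Promise<void> {
  if (recipe.type === "http") {
    const res = await fetch(recipe.url, {
      method: recipe.method ?? "POST",
      headers: recipe.headers,
    });
    if (!res.ok) throw new Error(`wake recipe returned HTTP ${res.status}`);
  } else {
    await new Promise<void>((resolve, reject) => {
      const child = spawn(recipe.command, recipe.args ?? [], { stdio: "inherit" });
      child.on("error", reject);
      child.on("exit", (code) =>
        code === 0 ? resolve() : reject(new Error(`wake command exited with ${code}`)),
      );
    });
  }

  const deadline = Date.now() + recipe.maxWaitSeconds * 1000;
  while (Date.now() < deadline) {
    if (await isAvailable()) return; // backend is up, the infer can be relayed
    await new Promise((r) => setTimeout(r, 1000)); // poll once per second
  }
  throw new Error(`backend did not come up within ${recipe.maxWaitSeconds}s`);
}
```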

How a request flows when the row is `hibernating`:

```
client → mcpd POST /api/v1/llms/<name>/infer
mcpd: status === hibernating → push wake task on SSE
mcplocal: receive wake task → run recipe → poll isAvailable()
          → heartbeat each tick → POST { ok: true } back
mcpd: flip row → active, push the original infer task
mcplocal: run inference → POST result back
mcpd → client (forwards the inference result)
```

Concurrent infers for the same hibernating Llm share a single wake
task — only the first request triggers the recipe; later ones await
the same in-flight wake promise. After the wake settles, every queued
infer dispatches in order.

If the recipe fails (HTTP non-2xx, command exits non-zero, or the
provider doesn't come up within `maxWaitSeconds`), every queued infer
is rejected with a clear error and the row stays `hibernating` —
the next request gets a fresh wake attempt.
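A sketch of what that single-flight behaviour could look like follows; the `wakeOnce` helper and the in-flight map are assumptions for illustration, not the mcpd source.

```ts
// Hypothetical per-Llm wake deduplication on the relay side.
const inFlightWakes = new Map<string, Promise<void>>();

function wakeOnce(llmName: string, triggerWake: () => Promise<void>): Promise<void> {
  let wake = inFlightWakes.get(llmName);
  if (!wake) {
    // The first concurrent infer for this row starts the wake task; the entry
    // is removed once it settles, so a failed wake leaves the row hibernating
    // and the next request starts a fresh attempt.
    wake = triggerWake().finally(() => inFlightWakes.delete(llmName));
    inFlightWakes.set(llmName, wake);
  }
  // Every later infer for the same row awaits the same promise; if it rejects,
  // all queued infers reject with that error.
  return wake;
}
```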

## Roadmap (later stages)

- **v2 — Wake-on-demand**: Secret-stored "wake recipe" so mcpd can ask
  mcplocal to start a hibernating backend before sending inference.
- **v3 — Virtual agents**: mcplocal publishes its local agent configs
  (model + system prompt + sampling defaults) into mcpd's `Agent` table.
- **v4 — LB pool by model**: agents can target a model name instead of