feat: virtual-LLM smoke test + docs (v1 Stage 6)
Some checks failed
CI/CD / typecheck (pull_request) Successful in 53s
CI/CD / test (pull_request) Successful in 1m8s
CI/CD / lint (pull_request) Successful in 2m6s
CI/CD / smoke (pull_request) Failing after 1m39s
CI/CD / build (pull_request) Successful in 2m11s
CI/CD / publish (pull_request) Has been skipped
Final stage of v1.

Smoke (mcplocal/tests/smoke/virtual-llm.smoke.test.ts):
- Spins an in-process LlmProvider that returns canned content.
- Runs the registrar against the live mcpd in fulldeploy.
- Asserts: the row appears with kind=virtual / status=active; an infer through
  /api/v1/llms/<name>/infer comes back through the SSE relay with the provider's
  content + finish_reason; and a 503 appears immediately after registrar.stop()
  (publisher offline).
- Timeout / cleanup paths are idempotent, so re-runs against the same cluster
  don't litter rows. The 90-s heartbeat-stale flip and the 4-h GC are
  unit-tested — too slow for smoke.

Docs:
- New docs/virtual-llms.md: when to use this vs. creating a regular Llm row,
  how to opt in via publish: true, the lifecycle table, the inference-relay
  sequence, the v1 streaming caveat, the v2-v5 roadmap, and the full
  /api/v1/llms/_provider-* surface.
- agents.md cross-links virtual-llms.md alongside personalities/chat.
- README's Agents section gains a "Virtual LLMs" subsection.

Workspace suite: 2043/2043 (smoke files run separately). v1 closes.

Stage roadmap (each its own future PR):
v2 wake-on-demand · v3 virtual agents · v4 LB pool · v5 task queue

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
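For orientation, here is a minimal sketch of the flow the smoke file exercises; the helper names (`startCannedProvider`, `startRegistrar`), the endpoint response shapes, and the env defaults are assumptions for illustration, not the repository's actual API.

```ts
// Sketch only: helper names, response shapes, and env vars are assumptions.
import { afterAll, beforeAll, expect, test } from "vitest";
// Hypothetical helpers wrapping the in-process canned provider and the registrar.
import { startCannedProvider, startRegistrar } from "./helpers";

const MCPD_URL = process.env.MCPD_URL ?? "http://localhost:8080";
const NAME = "smoke-virtual-llm";

let provider: { stop(): Promise<void> };
let registrar: { stop(): Promise<void> };

beforeAll(async () => {
  provider = await startCannedProvider({ content: "canned reply" });
  registrar = await startRegistrar({ name: NAME, mcpdUrl: MCPD_URL });
});

afterAll(async () => {
  // Cleanup is idempotent so re-runs against the same cluster don't litter rows.
  await registrar?.stop().catch(() => {});
  await provider?.stop().catch(() => {});
});

test("row appears with kind=virtual / status=active", async () => {
  const rows: any[] = await (await fetch(`${MCPD_URL}/api/v1/llms`)).json();
  expect(rows.find((r) => r.name === NAME)).toMatchObject({ kind: "virtual", status: "active" });
});

test("infer is relayed back with the provider's content", async () => {
  const res = await fetch(`${MCPD_URL}/api/v1/llms/${NAME}/infer`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ messages: [{ role: "user", content: "hello?" }] }),
  });
  const body = await res.json();
  expect(body.content).toContain("canned reply");
  expect(body.finish_reason).toBe("stop");
});

test("503 immediately after the publisher goes offline", async () => {
  await registrar.stop();
  const res = await fetch(`${MCPD_URL}/api/v1/llms/${NAME}/infer`, { method: "POST" });
  expect(res.status).toBe(503);
});
```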
README.md (+26)
@@ -571,6 +571,32 @@ For binding prompts to personalities and the API surface, see
 prompt editing — paste a session token (`mcpctl auth login`) or PAT to log
 in.
 
+### Virtual LLMs
+
+A user's local LLM (`vllm-local`, Ollama, …) can publish itself into
+mcpd's `Llm` registry so anyone authorized sees it under `mcpctl get llm`
+and can chat with it via `mcpctl chat-llm <name>`. Inference is relayed
+through the publishing mcplocal's SSE control channel — mcpd never holds
+the local URL or API key.
+
+```fish
+# In ~/.mcpctl/config.json, opt the provider in with `publish: true`:
+# { "name": "vllm-local", "type": "openai", "model": "...", "publish": true }
+systemctl --user restart mcplocal
+
+mcpctl get llm
+# NAME            KIND     STATUS  TYPE    MODEL                          TIER  ID
+# qwen3-thinking  public   active  openai  qwen3-thinking                 fast  ...
+# vllm-local      virtual  active  openai  Qwen/Qwen2.5-7B-Instruct-AWQ   fast  ...
+
+mcpctl chat-llm vllm-local
+> hello?
+```
+
+Lifecycle: 30 s heartbeats, 90 s heartbeat-stale → inactive, 4 h
+inactive → auto-deleted. A reconnecting mcplocal adopts the same row
+via a sticky `providerSessionId`. Full design: [docs/virtual-llms.md](docs/virtual-llms.md).
+
 ## Commands
 
 ```bash
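As a rough illustration of the relay described in the new README section, a publisher-side handler might look like the sketch below. The `_provider-events` / `_provider-result` paths, the event name, and the payload fields are guesses under the documented `/api/v1/llms/_provider-*` prefix, not the actual surface; the local endpoint is assumed to be OpenAI-compatible.

```ts
// Sketch only: provider endpoints, event name, and payload fields are assumptions.
// Assumes a global EventSource (Node 22+ or the "eventsource" package).
const MCPD_URL = process.env.MCPD_URL ?? "https://mcpd.example.com";
const SESSION_ID = process.env.PROVIDER_SESSION_ID ?? "local-session";
const LOCAL_LLM_URL = "http://127.0.0.1:8000/v1/chat/completions"; // stays on the user's machine

// mcplocal keeps the SSE control channel open; mcpd pushes infer requests on it.
const control = new EventSource(`${MCPD_URL}/api/v1/llms/_provider-events?session=${SESSION_ID}`);

control.addEventListener("infer", async (ev) => {
  const req = JSON.parse((ev as MessageEvent).data); // { requestId, messages, ... } (assumed shape)

  // Call the local model with the locally held API key.
  const completion = await (
    await fetch(LOCAL_LLM_URL, {
      method: "POST",
      headers: {
        "content-type": "application/json",
        authorization: `Bearer ${process.env.LOCAL_API_KEY ?? ""}`,
      },
      body: JSON.stringify({ model: "Qwen/Qwen2.5-7B-Instruct-AWQ", messages: req.messages }),
    })
  ).json();

  // Only the result goes back upstream; mcpd never sees the local URL or key.
  await fetch(`${MCPD_URL}/api/v1/llms/_provider-result`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({
      requestId: req.requestId,
      content: completion.choices[0].message.content,
      finish_reason: completion.choices[0].finish_reason,
    }),
  });
});
```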
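The lifecycle sentence above (30 s heartbeats, 90 s stale, 4 h GC) implies a small state machine; a hypothetical sweep over the registry, with field names assumed rather than taken from mcpd's schema, could look like this:

```ts
// Sketch only: row shape and field names are assumptions, not mcpd's schema.
interface VirtualLlmRow {
  name: string;
  kind: "virtual";
  status: "active" | "inactive";
  providerSessionId: string; // sticky id a reconnecting mcplocal reuses to adopt the row
  lastHeartbeatAt: number;   // ms epoch, refreshed by each ~30 s heartbeat
  inactiveSince?: number;    // set when the row flips to inactive
}

const HEARTBEAT_STALE_MS = 90_000;          // 90 s without a heartbeat -> inactive
const GC_AFTER_INACTIVE_MS = 4 * 3_600_000; // 4 h inactive -> auto-deleted

// One periodic sweep over the registry: flip stale rows, drop long-inactive ones.
function sweep(rows: VirtualLlmRow[], now = Date.now()): VirtualLlmRow[] {
  return rows.flatMap((row) => {
    if (row.status === "active" && now - row.lastHeartbeatAt > HEARTBEAT_STALE_MS) {
      return [{ ...row, status: "inactive" as const, inactiveSince: now }];
    }
    if (row.status === "inactive" && now - (row.inactiveSince ?? now) > GC_AFTER_INACTIVE_MS) {
      return []; // garbage-collect the row entirely
    }
    return [row];
  });
}
```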