CLI: `mcpctl get agent`'s table view gains KIND and STATUS columns mirroring the `get llm` shape from v1. Public agents render as `public/active` (the AgentRow defaults) and virtual ones surface their true lifecycle state, so `mcpctl get agent` becomes a single-pane view for both manually-created and mcplocal-published personas.
Smoke: tests/smoke/virtual-agent.smoke.test.ts mirrors virtual-llm's in-process registrar pattern — it publishes a fake provider + agent in one round-trip, confirms mcpd surfaces the agent with kind=virtual / status=active under /api/v1/agents, then disconnects and verifies the paired Llm and Agent both flip to inactive (deletion is GC-driven, not disconnect-driven, so the rows must still exist post-stop). The heartbeat-stale and 4 h sweep paths are covered by the unit suite to keep smoke duration in check.
Docs: docs/virtual-llms.md gets a "Virtual agents (v3)" section with a config sample, lifecycle notes, a listing example, and the cluster-wide name-uniqueness caveat. The API surface block now mentions the new `agents[]` field on _provider-register, the join-by-session heartbeat behavior, and the `GET /api/v1/agents` lifecycle fields. docs/agents.md gains a one-paragraph note pointing to the v3 publishing path.
Tests: full smoke suite 141/141 (was 139, +2 new); unit suites unchanged (mcpd 860/860, mcplocal 723/723).
Virtual LLMs
A virtual LLM is an Llm row in mcpd that's registered by an mcplocal
client rather than created by hand with mcpctl create llm. Inference for
a virtual LLM is relayed back through the publishing mcplocal's SSE control
channel — mcpd never needs to know the local URL or hold its API key.
When the publishing mcplocal goes away (or the user shuts down their
laptop) the row decays: active → inactive after 90 s without a
heartbeat, then deleted after 4 h of inactivity. A reconnecting mcplocal
adopts the same row using a sticky providerSessionId it persisted at
first publish.
When to use this
- Local model on a developer laptop that you want everyone on the team to be able to chat with via mcpctl chat-llm <name>. The model doesn't need to be reachable from mcpd's k8s pods — only the user's mcplocal does (which is already the case because mcplocal pulls projects from mcpd over HTTPS).
- Hibernating models that wake on demand (v2 — see "Roadmap").
- Pool of identical models distributed across user laptops, eligible for load balancing (v4).
If your model is reachable from mcpd's k8s pods over LAN/VPN, you don't
need a virtual LLM — just mcpctl create llm <name> --type openai --url …
and you're done.
Publishing a local provider
mcplocal's local config (~/.mcpctl/config.json) gains a publish: true
opt-in per provider:
{
  "llm": {
    "providers": [
      {
        "name": "vllm-local",
        "type": "openai",
        "model": "Qwen/Qwen2.5-7B-Instruct-AWQ",
        "url": "http://127.0.0.1:8000/v1",
        "tier": "fast",
        "publish": true
      }
    ]
  }
}
Restart mcplocal:
systemctl --user restart mcplocal
The registrar:
- Reads ~/.mcpctl/credentials for mcpdUrl + bearer token.
- POSTs to /api/v1/llms/_provider-register with the publishable set.
- Persists the returned providerSessionId to ~/.mcpctl/provider-session so the next restart adopts the same mcpd row.
- Opens the SSE channel at /api/v1/llms/_provider-stream.
- Heartbeats every 30 s.
- Listens for event: task frames and runs them against the local LlmProvider.
If ~/.mcpctl/credentials doesn't exist (e.g. you haven't run
mcpctl auth login), the registrar logs a warning and skips —
publishing is a best-effort feature, not a boot blocker.
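A sketch of that startup path, assuming Node 18+ (global fetch) and hypothetical helpers (publishableProviders, openProviderStream); how the persisted session id is presented on re-register is also an assumption, not something this doc specifies.
// Sketch of the registrar's startup sequence (not the actual mcplocal source).
import { readFile, writeFile } from "node:fs/promises";
import { homedir } from "node:os";

const dir = `${homedir()}/.mcpctl`;

// Stub: filter ~/.mcpctl/config.json providers down to the publish:true set.
function publishableProviders(): Array<Record<string, unknown>> { return []; }
// Stub: open the SSE channel at /api/v1/llms/_provider-stream and run task frames.
function openProviderStream(mcpdUrl: string, token: string, sessionId: string): void {}

export async function startRegistrar(): Promise<void> {
  // Credentials come from `mcpctl auth login`; missing file means warn and skip.
  let creds: { mcpdUrl: string; token: string };
  try {
    creds = JSON.parse(await readFile(`${dir}/credentials`, "utf8"));
  } catch {
    console.warn("no ~/.mcpctl/credentials; skipping provider publish");
    return;
  }

  // Register the publishable providers; reuse a persisted session id if one exists.
  const previous = await readFile(`${dir}/provider-session`, "utf8").catch(() => undefined);
  const res = await fetch(`${creds.mcpdUrl}/api/v1/llms/_provider-register`, {
    method: "POST",
    headers: { authorization: `Bearer ${creds.token}`, "content-type": "application/json" },
    body: JSON.stringify({ providerSessionId: previous, providers: publishableProviders() }),
  });
  const { providerSessionId } = (await res.json()) as { providerSessionId: string };

  // Persist it so the next restart adopts the same mcpd row.
  await writeFile(`${dir}/provider-session`, providerSessionId);

  // Open the SSE control channel, then heartbeat every 30 s.
  openProviderStream(creds.mcpdUrl, creds.token, providerSessionId);
  setInterval(() => {
    void fetch(`${creds.mcpdUrl}/api/v1/llms/_provider-heartbeat`, {
      method: "POST",
      headers: { authorization: `Bearer ${creds.token}`, "content-type": "application/json" },
      body: JSON.stringify({ providerSessionId }),
    });
  }, 30_000);
}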
Verifying
$ mcpctl get llm
NAME KIND STATUS TYPE MODEL TIER KEY ID
qwen3-thinking public active openai qwen3-thinking fast secret://litellm-key/API_KEY cmofx8y7u…
vllm-local virtual active openai Qwen/Qwen2.5-7B-Instruct-AWQ fast - cmoxz12ab…
$ mcpctl chat-llm vllm-local
─────────────────────────────────────────────────────────
LLM: vllm-local openai → Qwen/Qwen2.5-7B-Instruct-AWQ
Kind: virtual Status: active
─────────────────────────────────────────────────────────
> hello?
Hi! …
You can also chat with public LLMs the same way:
$ mcpctl chat-llm qwen3-thinking
The CLI doesn't care about kind — mcpd's /api/v1/llms/<name>/infer
route branches on it server-side.
Lifecycle in detail
| State | What it means |
|---|---|
| active | Heartbeat received within the last 90 s and the SSE channel is open. |
| inactive | Either the SSE closed or the heartbeat watchdog tripped. Inference returns 503. |
| hibernating | Publisher is online but the backend is asleep; the next inference triggers a wake task before relaying. |
Two timers on mcpd run the GC sweep:
- 90 s without a heartbeat → flip active → inactive.
- 4 h in inactive → delete the row entirely.
A reconnecting mcplocal with the same providerSessionId revives every
inactive row it owns; it only orphans rows that fell past the 4-h cutoff.
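A minimal sketch of those sweeps, assuming Prisma-style queries against a shared db client; only the field names (kind, status, lastHeartbeatAt, inactiveSince) come from the API surface below, the rest is assumed.
// Sketch of the GC sweep; the real mcpd job scheduling may differ.
const HEARTBEAT_STALE_MS = 90_000;          // active → inactive
const INACTIVE_DELETE_MS = 4 * 60 * 60_000; // inactive → deleted

async function sweepVirtualLlms(db: any, now = new Date()): Promise<void> {
  // 1. Flip rows whose publisher stopped heartbeating.
  await db.llm.updateMany({
    where: {
      kind: "virtual",
      status: "active",
      lastHeartbeatAt: { lt: new Date(now.getTime() - HEARTBEAT_STALE_MS) },
    },
    data: { status: "inactive", inactiveSince: now },
  });

  // 2. Delete rows that have sat inactive past the 4 h cutoff.
  await db.llm.deleteMany({
    where: {
      kind: "virtual",
      status: "inactive",
      inactiveSince: { lt: new Date(now.getTime() - INACTIVE_DELETE_MS) },
    },
  });
}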
Inference relay
When mcpd receives POST /api/v1/llms/<virtual>/infer:
- Look up the row, see kind=virtual + status=active.
- Find the open SSE session for that providerSessionId. Missing session → 503.
- Push a { kind: "infer", taskId, llmName, request, streaming } task frame onto the SSE.
- mcplocal pulls, calls LlmProvider.complete(...), and POSTs the result back to /api/v1/llms/_provider-task/<taskId>/result:
  - non-streaming: { status: 200, body: <chat.completion> }
  - streaming: per-chunk { chunk: { data, done? } }
  - failure: { error: "..." }
- mcpd forwards the result/chunks out to the original caller.
v1 caveat — streaming granularity: LlmProvider.complete() returns
a finalized CompletionResult, not a token stream. Streaming requests
therefore arrive at the caller as a single delta + [DONE]. Real
per-token streaming is a v2 concern.
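Collected as a type sketch, the frame and result shapes from the relay steps above; only the field names shown there are grounded, everything else is assumed.
// Type sketch of the relay payloads described above (not the actual source types).
type InferTaskFrame = {
  kind: "infer";
  taskId: string;
  llmName: string;
  request: unknown;    // the caller's chat-completion request, passed through
  streaming: boolean;
};

// mcplocal POSTs exactly one of these to /api/v1/llms/_provider-task/<taskId>/result.
type ProviderTaskResult =
  | { status: number; body: unknown }            // non-streaming, e.g. { status: 200, body: <chat.completion> }
  | { chunk: { data: string; done?: boolean } }  // streaming, one POST per chunk
  | { error: string };                           // failure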
Wake-on-demand (v2)
A provider whose backend hibernates (a vLLM instance that suspends
when idle, an Ollama daemon that exits when nothing's connected, …)
can declare a wake recipe in mcplocal config. When that provider's
isAvailable() returns false at registrar startup, the row is
published as status=hibernating. The next inference request that
hits the row triggers the recipe and waits for the backend to come up
before relaying.
Two recipe types:
// HTTP — POST to a "wake controller" that starts the backend out of band.
{
  "name": "vllm-local",
  "type": "openai",
  "model": "...",
  "publish": true,
  "wake": {
    "type": "http",
    "url": "http://10.0.0.50:9090/wake/vllm",
    "method": "POST",
    "headers": { "Authorization": "Bearer ..." },
    "maxWaitSeconds": 60
  }
}

// command — spawn a local process (systemd, wakeonlan, custom script).
{
  "name": "vllm-local",
  "type": "openai",
  "model": "...",
  "publish": true,
  "wake": {
    "type": "command",
    "command": "/usr/local/bin/start-vllm",
    "args": ["--profile", "qwen3"],
    "maxWaitSeconds": 120
  }
}
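A sketch of how mcplocal could execute a recipe; only the recipe fields above come from this doc, while the helper name and the 2 s polling cadence are assumptions.
// Sketch of a wake-recipe runner (assumed helper, not mcplocal's implementation).
import { spawn } from "node:child_process";

type WakeRecipe =
  | { type: "http"; url: string; method?: string; headers?: Record<string, string>; maxWaitSeconds: number }
  | { type: "command"; command: string; args?: string[]; maxWaitSeconds: number };

async function runWakeRecipe(recipe: WakeRecipe, isAvailable: () => Promise<boolean>): Promise<void> {
  if (recipe.type === "http") {
    // HTTP non-2xx counts as a failed wake attempt.
    const res = await fetch(recipe.url, { method: recipe.method ?? "POST", headers: recipe.headers });
    if (!res.ok) throw new Error(`wake controller returned ${res.status}`);
  } else {
    // Non-zero exit counts as a failed wake attempt.
    const code = await new Promise<number>((resolve) =>
      spawn(recipe.command, recipe.args ?? [], { stdio: "inherit" }).on("exit", (c) => resolve(c ?? 1)),
    );
    if (code !== 0) throw new Error(`wake command exited with ${code}`);
  }

  // Poll the provider until it answers or maxWaitSeconds elapses.
  const deadline = Date.now() + recipe.maxWaitSeconds * 1000;
  while (Date.now() < deadline) {
    if (await isAvailable()) return;
    await new Promise((r) => setTimeout(r, 2000));
  }
  throw new Error("backend did not come up within maxWaitSeconds");
}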
How a request flows when the row is hibernating:
client → mcpd POST /api/v1/llms/<name>/infer
mcpd: status === hibernating → push wake task on SSE
mcplocal: receive wake task → run recipe → poll isAvailable()
→ heartbeat each tick → POST { ok: true } back
mcpd: flip row → active, push the original infer task
mcplocal: run inference → POST result back
mcpd → client (forwards the inference result)
Concurrent infers for the same hibernating Llm share a single wake task — only the first request triggers the recipe; later ones await the same in-flight wake promise. After the wake settles, every queued infer dispatches in order.
If the recipe fails (HTTP non-2xx, command exits non-zero, or the
provider doesn't come up within maxWaitSeconds), every queued infer
is rejected with a clear error and the row stays hibernating —
the next request gets a fresh wake attempt.
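A sketch of that de-duplication on mcpd's side, assuming an in-memory map keyed by Llm id; none of these names are taken from the actual implementation.
// Sketch: concurrent infers for the same hibernating Llm share one in-flight wake.
const inflightWakes = new Map<string, Promise<void>>();

async function ensureAwake(llmId: string, pushWakeTask: () => Promise<void>): Promise<void> {
  let wake = inflightWakes.get(llmId);
  if (!wake) {
    // Only the first request pushes the wake task; later ones await the same promise.
    wake = pushWakeTask().finally(() => inflightWakes.delete(llmId));
    inflightWakes.set(llmId, wake);
  }
  // A rejection propagates to every queued infer; the row stays hibernating,
  // so the next request starts a fresh wake attempt.
  await wake;
}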
Virtual agents (v3)
Virtual agents extend the same publishing model to agents — named
LLM personas with their own system prompt and sampling defaults. mcplocal
declares them in its config alongside its providers, and the existing
_provider-register endpoint atomically publishes both Llms and Agents
in one round-trip. They show up under mcpctl get agent next to
manually-created public agents and become chat-able via
mcpctl chat <agent> — no special command.
Declaring a virtual agent in mcplocal config
// ~/.mcpctl/config.json
{
  "llm": {
    "providers": [
      { "name": "vllm-local", "type": "vllm", "model": "Qwen/Qwen2.5-7B-Instruct-AWQ", "publish": true }
    ]
  },
  "agents": [
    {
      "name": "local-coder",
      "llm": "vllm-local",
      "description": "Local coding assistant on the workstation GPU",
      "systemPrompt": "You are a senior engineer. Be terse.",
      "defaultParams": { "temperature": 0.2 }
    }
  ]
}
llm references a published provider's name from the same config. Agents
pinned to a name that isn't being published are still forwarded to mcpd —
the server validates llmName and 404s with a clear message if it's
genuinely missing, which lets you point at a public Llm if you want.
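Roughly what that check amounts to, as a sketch; only the 404-on-missing-llmName behavior is stated in this doc, the code shape is assumed.
// Sketch of the llmName check during agent publish (assumed code, not mcpd's).
async function resolveAgentLlm(db: any, llmName: string) {
  // The pinned Llm can be a virtual one published in the same register call,
  // or an existing public Llm.
  const llm = await db.llm.findUnique({ where: { name: llmName } });
  if (!llm) throw new Error(`agent references unknown llm "${llmName}"`); // surfaced as HTTP 404
  return llm;
}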
Lifecycle
Same shape as virtual Llms — 30 s heartbeat from mcplocal, 90 s
heartbeat-stale → status flips to inactive, 4 h inactive → row deleted
by mcpd's GC sweep. Heartbeats cover both Llms and Agents owned by the
session.
The GC orders agent deletes before their pinned virtual Llm so the
Agent.llmId onDelete: Restrict FK doesn't block the sweep.
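In sweep terms, a Prisma-style sketch with assumed names; only the agents-before-Llms ordering and the Restrict FK come from this doc.
// Sketch: expired virtual agents are deleted before their pinned virtual Llms,
// so the Agent.llmId Restrict FK can't block the sweep.
async function deleteExpiredVirtualRows(db: any, cutoff: Date): Promise<void> {
  await db.$transaction([
    db.agent.deleteMany({ where: { kind: "virtual", status: "inactive", inactiveSince: { lt: cutoff } } }),
    db.llm.deleteMany({ where: { kind: "virtual", status: "inactive", inactiveSince: { lt: cutoff } } }),
  ]);
}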
Listing
$ mcpctl get agents
NAME KIND STATUS LLM PROJECT DESCRIPTION
local-coder virtual active vllm-local - Local coding assistant on…
reviewer public active qwen3-thinking mcpctl-development I review what you're shipping…
The KIND and STATUS columns are the v3 additions. Round-tripping
through mcpctl get agent X -o yaml | mcpctl apply -f - strips those
runtime fields cleanly so a virtual agent can be re-declared as a public
one (or vice versa) without manual editing.
Chatting
$ mcpctl chat local-coder
> hello?
… streams through mcpd → SSE → mcplocal's vllm-local provider …
Same command as for public agents. Works because chat.service has a
kind=virtual branch that hands off to VirtualLlmService.enqueueInferTask
when the agent's pinned Llm is virtual.
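A sketch of that branch; the kind check and VirtualLlmService.enqueueInferTask are named in this doc, while the helper names and the way agent defaults are merged are assumptions.
// Sketch of the chat.service hand-off for an agent pinned to a virtual Llm.
declare const db: any;
declare const virtualLlmService: { enqueueInferTask(llm: any, request: object): Promise<unknown> };
declare const publicLlmClient: { complete(llm: any, request: object): Promise<unknown> };

async function completeForAgent(
  agent: { llmId: string; systemPrompt: string; defaultParams?: object },
  request: object,
) {
  const llm = await db.llm.findUniqueOrThrow({ where: { id: agent.llmId } });
  // Assumed merge: agent persona + sampling defaults wrap the caller's request.
  const merged = { ...agent.defaultParams, ...request, systemPrompt: agent.systemPrompt };
  if (llm.kind === "virtual") {
    // Relays through the publishing mcplocal's SSE channel.
    return virtualLlmService.enqueueInferTask(llm, merged);
  }
  // Public Llm: call its configured backend directly.
  return publicLlmClient.complete(llm, merged);
}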
Cluster-wide name uniqueness
Agent.name is unique cluster-wide. Two mcplocals trying to publish the
same agent name collide on the second register with HTTP 409. Per-publisher
namespacing is a v4+ concern — same constraint as virtual Llms in v1.
Roadmap (later stages)
- v4 — LB pool by model: agents can target a model name instead of a specific Llm; mcpd picks the healthiest pool member per request.
- v5 — Task queue: persisted requests for hibernating/saturated pools. Workers pull tasks of their model when they come online.
API surface (v1)
POST /api/v1/llms/_provider-register → returns { providerSessionId, llms[], agents[] }
    v3: body accepts an optional `agents[]` array alongside `providers[]`.
    Atomic publish; older clients (providers-only) keep working.
GET /api/v1/llms/_provider-stream → SSE channel; requires the x-mcpctl-provider-session header
POST /api/v1/llms/_provider-heartbeat → { providerSessionId } — bumps both Llms and Agents owned by the session
POST /api/v1/llms/_provider-task/:id/result → one of:
    { error: "msg" }
    { chunk: { data, done? } }
    { status, body }
GET /api/v1/llms → list (includes kind, status, lastHeartbeatAt, inactiveSince)
POST /api/v1/llms/<virtual>/infer → routes through the SSE relay
DELETE /api/v1/llms/<virtual> → deletes unconditionally (also does the GC's job)
GET /api/v1/agents → list (v3: includes kind, status, lastHeartbeatAt, inactiveSince)
RBAC piggybacks on view/edit/create:llms — no new resource. Publishing
a virtual LLM is morally a create:llms operation.
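As a type-level recap of the register round-trip, a sketch assembled from the surface above and the config samples; exact field shapes beyond those are assumptions.
// Type sketch of the v3 register round-trip (not generated from source).
type ProviderRegisterRequest = {
  providers: Array<{ name: string; type: string; model: string; url?: string; tier?: string }>;
  // v3: optional; older providers-only clients simply omit it.
  agents?: Array<{
    name: string;
    llm: string;                // must resolve to a published or public Llm name
    description?: string;
    systemPrompt?: string;
    defaultParams?: Record<string, unknown>;
  }>;
};

type ProviderRegisterResponse = {
  providerSessionId: string;
  llms: Array<{ name: string; kind: "virtual"; status: string }>;
  agents: Array<{ name: string; kind: "virtual"; status: string }>;
};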