Smoke (tests/smoke/llm-pool.smoke.test.ts): two in-process registrars
publish virtual Llms with distinct names but a shared poolName, then
(a condensed sketch follows the list):
1. /api/v1/llms/<name>/members surfaces both with the correct
effective pool key, size, activeCount, and per-member kind/status.
2. Chat through an agent pinned to one pool member dispatches across
   the pool, verified by running 12 calls and asserting at least one
   response from each backend (the chance that random-shuffle
   selection hits only-A or only-B across 12 fair coin flips is
   2·(1/2)^12 ≈ 1/2048).
3. Failover: stop one publisher, the surviving member still serves
chat. /members shows the stopped row as inactive immediately
(unbindSession runs synchronously on SSE close).
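A condensed sketch of that shape; startRegistrar, chatViaAgent,
MCPD_URL, and the response fields are stand-ins inferred from this
description, not the real harness:

```ts
import { describe, it, expect, beforeAll, afterAll } from 'vitest';

const MCPD = process.env.MCPD_URL ?? 'http://localhost:8080'; // assumed base URL

// Assumed harness; the real registrar API may differ.
declare function startRegistrar(opts: {
  name: string;
  poolName: string;
  reply: string; // canned content, distinct per backend
}): Promise<{ stop(): Promise<void> }>;
declare function chatViaAgent(agent: string, prompt: string): Promise<string>;

describe('LB pools (v4)', () => {
  let a: Awaited<ReturnType<typeof startRegistrar>>;
  let b: Awaited<ReturnType<typeof startRegistrar>>;

  beforeAll(async () => {
    a = await startRegistrar({ name: 'pool-a', poolName: 'smoke-pool', reply: 'from-a' });
    b = await startRegistrar({ name: 'pool-b', poolName: 'smoke-pool', reply: 'from-b' });
  });

  afterAll(async () => {
    await a.stop().catch(() => {}); // idempotent: test 3 may have stopped it already
    await b.stop().catch(() => {});
  });

  it('lists both members with pool metadata', async () => {
    const body = await (await fetch(`${MCPD}/api/v1/llms/pool-a/members`)).json();
    expect(body.poolName).toBe('smoke-pool'); // effective pool key
    expect(body.size).toBe(2);
    expect(body.activeCount).toBe(2);
    for (const m of body.members) expect(m).toMatchObject({ kind: 'virtual', status: 'active' });
  });

  it('dispatches chat across the pool', async () => {
    const seen = new Set<string>();
    for (let i = 0; i < 12; i++) seen.add(await chatViaAgent('smoke-agent', 'ping'));
    // P(all 12 picks land on one backend) = 2 * (1/2)^12 ≈ 1/2048
    expect(seen).toEqual(new Set(['from-a', 'from-b']));
  });

  it('fails over when one publisher stops', async () => {
    await a.stop();
    await expect(chatViaAgent('smoke-agent', 'ping')).resolves.toBe('from-b');
    const body = await (await fetch(`${MCPD}/api/v1/llms/pool-b/members`)).json();
    const dead = body.members.find((m: { name: string }) => m.name === 'pool-a');
    expect(dead.status).toBe('inactive'); // unbindSession runs synchronously on SSE close
  });
});
```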
docs/virtual-llms.md gets a full "LB pools (v4)" section with the
two-field schema model, dispatcher selection + failover semantics,
public + virtual declaration examples, list/describe rendering, the
"pin to specific instance" escape hatch, and an API surface entry
for /members. docs/agents.md cross-link extended.
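A hedged sketch of the virtual-side declaration plus the pin escape
hatch (only poolName and the v1 publish flag are attested above; the
registrar and option names are assumptions):

```ts
// Hypothetical registrar handle; the real API may differ.
declare const registrar: { publish(opts: object): Promise<void> };

await registrar.publish({
  name: 'llama-70b-gpu1',  // stays unique per instance
  poolName: 'llama-70b',   // shared key: same-pool members load-balance
  publish: true,
});

// Pinning to a specific instance is the escape hatch: address the
// member by its unique name instead of the pool.
const pinnedAgent = { llm: 'llama-70b-gpu1' }; // hypothetical agent config
```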
Tests: full smoke 144/144 (was 141, +3 for the new pool smoke).
Stages 1-3 ship the complete v4 — public and virtual Llms can both
join pools, agents transparently load-balance across them, yaml
round-trip preserves poolName, and the existing single-Llm world
keeps working byte-identically when poolName is null.
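The two-field model that guarantee rests on, sketched (everything
except poolName is an assumed name):

```ts
// poolName is the only schema addition; null preserves pre-v4 behavior.
interface LlmRow {
  name: string;             // unique instance name, unchanged from v1-v3
  poolName: string | null;  // null = classic single-Llm dispatch, no pool lookup
}
```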
CLI: `mcpctl get agent` table view gains KIND and STATUS columns
mirroring the `get llm` shape from v1. Public agents render as
`public/active` (the AgentRow defaults) and virtual ones surface their
true lifecycle state, so `mcpctl get agent` becomes a single-pane view
for both manually-created and mcplocal-published personas.
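Illustratively (names and rows invented), the table might read:

```
NAME       KIND     STATUS
reviewer   public   active
scout      virtual  active
drafter    virtual  inactive
```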
Smoke: tests/smoke/virtual-agent.smoke.test.ts mirrors virtual-llm's
in-process registrar pattern — publishes a fake provider + agent in
one round-trip, confirms mcpd surfaces the agent kind=virtual /
status=active under /api/v1/agents, then disconnects and verifies the
paired Llm-and-Agent both flip to inactive (deletion is GC-driven, not
disconnect-driven, so the rows must still exist post-stop).
Heartbeat-stale and 4-h sweep paths are covered by the unit suite to
keep smoke duration in check.
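Roughly, with the same assumed harness as the pool sketch above:

```ts
import { it, expect } from 'vitest';

const MCPD = process.env.MCPD_URL ?? 'http://localhost:8080'; // assumed

declare function startRegistrar(opts: {
  name: string;
  agents: { name: string }[]; // rides the same _provider-register round-trip
}): Promise<{ stop(): Promise<void> }>;

it('publishes, then retires but does not delete, a virtual agent', async () => {
  const reg = await startRegistrar({ name: 'fake-llm', agents: [{ name: 'fake-agent' }] });

  let rows: any[] = await (await fetch(`${MCPD}/api/v1/agents`)).json();
  expect(rows.find((r) => r.name === 'fake-agent'))
    .toMatchObject({ kind: 'virtual', status: 'active' });

  await reg.stop();

  // Deletion is GC-driven, not disconnect-driven: both rows must
  // survive the stop, merely flipped to inactive.
  rows = await (await fetch(`${MCPD}/api/v1/agents`)).json();
  expect(rows.find((r) => r.name === 'fake-agent')?.status).toBe('inactive');
});
```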
Docs: docs/virtual-llms.md gets a "Virtual agents (v3)" section with a
config sample, lifecycle notes, listing example, and the cluster-wide
name-uniqueness caveat. The API surface block now mentions the new
`agents[]` field on _provider-register, the join-by-session heartbeat
behavior, and the `GET /api/v1/agents` lifecycle fields. docs/agents.md
gains a one-paragraph note pointing to the v3 publishing path.
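For orientation, a plausible register payload; the agents[] field is
the documented addition, everything else is guessed from the v1
_provider-* surface:

```ts
const body = {
  name: 'fake-llm',
  publish: true,
  agents: [{ name: 'fake-agent' }], // new in v3
};
await fetch('http://localhost:8080/api/v1/llms/_provider-register', { // assumed path
  method: 'POST',
  headers: { 'content-type': 'application/json' },
  body: JSON.stringify(body),
});
```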
Tests: full smoke suite 141/141 (was 139, +2 new), unit suites
unchanged (mcpd 860/860, mcplocal 723/723).
Closes v2 (wake-on-demand). Same shape as v1's stage 6: smoke
exercises the live-cluster path, docs lose the "v2 reserved" caveat
and gain a full wake-recipe section.
Smoke (virtual-llm.smoke.test.ts), sketched after the list:
- New "wake-on-demand" describe block runs alongside the v1 tests.
- Spins a tiny in-process HTTP "wake controller"; the published
provider's isAvailable() returns false until the wake POST flips
the bool. Asserts:
1. Provider publishes as kind=virtual / status=hibernating.
2. First inference triggers the wake recipe, the recipe POSTs
to the controller, the provider becomes available, mcpd
relays the inference, and the row settles to active.
- Cleans up the row + wake server in afterAll.
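The controller half is small enough to sketch (provider shape and
recipe wiring are assumptions):

```ts
import { createServer } from 'node:http';
import type { AddressInfo } from 'node:net';

let awake = false;

// Tiny in-process "wake controller": any POST flips the provider to available.
const controller = createServer((req, res) => {
  if (req.method === 'POST') awake = true;
  res.statusCode = 204;
  res.end();
});
await new Promise<void>((resolve) => controller.listen(0, resolve));
const port = (controller.address() as AddressInfo).port;

// Assumed provider shape: hibernating until the wake recipe has POSTed.
const provider = {
  isAvailable: () => awake,
  infer: async () => ({ content: 'pong', finish_reason: 'stop' }),
};

// Assumed recipe config: the http recipe targets the controller; the
// test then publishes { provider, wake } and drives the assertions.
const wake = { type: 'http', url: `http://127.0.0.1:${port}/wake`, method: 'POST' };
```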
Docs (docs/virtual-llms.md):
- Lifecycle table updates the `hibernating` description from
"reserved for v2" to the actual v2 semantics.
- New "Wake-on-demand (v2)" section: configuration shapes for both
recipe types (http + command), the wake-then-infer flow diagram,
concurrent-infer dedup, failure semantics.
- Roadmap drops v2; v3-v5 still listed.
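Guessing at the two recipe shapes (field names are assumptions, not
the documented schema):

```ts
type WakeRecipe =
  | { type: 'http'; url: string; method?: string; headers?: Record<string, string> }
  | { type: 'command'; command: string; args?: string[]; timeoutMs?: number };
```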
Workspace: 2050/2050 (smoke runs separately; the new SSE-based wake
test runs only against a live cluster, not under `pnpm test:run`).
v2 closes. v3 = virtual agents, v4 = LB pool by model, v5 = queue.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Final stage of v1.
Smoke (mcplocal/tests/smoke/virtual-llm.smoke.test.ts), sketched after the list:
- Spins an in-process LlmProvider that returns canned content.
- Runs the registrar against the live mcpd in fulldeploy.
- Asserts: row appears with kind=virtual / status=active, infer
through /api/v1/llms/<name>/infer comes back through the SSE
relay with the provider's content + finish_reason, and a 503
appears immediately after registrar.stop() (publisher offline).
- Timeout and cleanup paths are idempotent, so re-runs against the
  same cluster don't litter rows. The 90-s heartbeat-stale flip and
  4-h GC are unit-tested; both are too slow for smoke.
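Condensed under the same assumed harness as the sketches above
(startRegistrar is a stand-in for the real registrar API):

```ts
import { it, expect } from 'vitest';

const MCPD = process.env.MCPD_URL ?? 'http://localhost:8080'; // assumed

declare function startRegistrar(opts: {
  name: string;
  publish: boolean;
  infer: () => Promise<{ content: string; finish_reason: string }>;
}): Promise<{ stop(): Promise<void> }>;

it('relays inference, then 503s once the publisher is gone', async () => {
  const reg = await startRegistrar({
    name: 'smoke-canned',
    publish: true,
    infer: async () => ({ content: 'hello from smoke', finish_reason: 'stop' }),
  });

  const req = {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ messages: [{ role: 'user', content: 'ping' }] }),
  };
  const res = await fetch(`${MCPD}/api/v1/llms/smoke-canned/infer`, req);
  // Content and finish_reason come back through the SSE relay untouched.
  expect(await res.json()).toMatchObject({
    content: 'hello from smoke',
    finish_reason: 'stop',
  });

  await reg.stop();
  const dead = await fetch(`${MCPD}/api/v1/llms/smoke-canned/infer`, req);
  expect(dead.status).toBe(503); // publisher offline
});
```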
Docs:
- New docs/virtual-llms.md: when to use this vs creating a regular
Llm row, how to opt-in via publish: true, the lifecycle table,
the inference-relay sequence, the v1 streaming caveat, the v2-v5
roadmap, and the full /api/v1/llms/_provider-* surface.
- agents.md cross-links virtual-llms.md alongside personalities/chat.
- README's Agents section gains a "Virtual LLMs" subsection.
Workspace suite: 2043/2043 (smoke files run separately). v1 closes.
Stage roadmap (each its own future PR):
v2 wake-on-demand · v3 virtual agents · v4 LB pool · v5 task queue
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>