First half of v2: mcplocal can now declare a hibernating backend and
respond to a `wake` task by running a configured recipe. v2 Stage 2
will wire mcpd to dispatch the wake task before relaying inference.
Config (LlmProviderFileEntry):
- New `wake` block on a published provider:
wake:
type: http # or: command
url: ... # http only
method: POST # http only, default POST
headers: {...} # http only
body: ... # http only
command: ... # command only
args: [...] # command only
maxWaitSeconds: 60 # how long to poll isAvailable() after wake fires
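The `wake` block above can be modeled as a small discriminated union. A sketch, with illustrative type and function names (not the actual mcplocal types) and a coercer applying the documented defaults (`method: POST`, `maxWaitSeconds: 60`):

```typescript
// Hypothetical model of the `wake` block; the union keeps the
// http-only and command-only fields from mixing at the type level.
type WakeRecipe =
  | {
      type: "http";
      url: string;
      method: string; // defaults to POST
      headers?: Record<string, string>;
      body?: string;
      maxWaitSeconds: number;
    }
  | {
      type: "command";
      command: string;
      args: string[];
      maxWaitSeconds: number;
    };

// Coerce a raw parsed-config object into a WakeRecipe, filling defaults.
function coerceWake(raw: Record<string, unknown>): WakeRecipe {
  const maxWaitSeconds =
    typeof raw.maxWaitSeconds === "number" ? raw.maxWaitSeconds : 60;
  if (raw.type === "http") {
    return {
      type: "http",
      url: String(raw.url),
      method: typeof raw.method === "string" ? raw.method : "POST",
      headers: raw.headers as Record<string, string> | undefined,
      body: raw.body as string | undefined,
      maxWaitSeconds,
    };
  }
  return {
    type: "command",
    command: String(raw.command),
    args: (raw.args as string[] | undefined) ?? [],
    maxWaitSeconds,
  };
}
```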
Registrar (mcplocal):
- At publish time, providers with a wake recipe whose isAvailable()
returns false report initialStatus=hibernating to mcpd. Without a
wake recipe (legacy v1) or when already up, status stays active.
- handleWakeTask: runs the recipe (HTTP request OR child-process
spawn), then polls isAvailable() up to maxWaitSeconds, sending a
heartbeat each loop so mcpd's GC sweep doesn't time us out
mid-boot. Reports { ok, ms } on success or { error } on
timeout/recipe failure via the existing _provider-task/:id/result.
- Replaces the v1 stub that rejected wake tasks with "not implemented".
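The run-then-poll loop in handleWakeTask can be sketched roughly as follows; `runRecipe`, `isAvailable`, and `sendHeartbeat` are hypothetical stand-ins for the real dependencies, injected here so the shape is testable:

```typescript
// Illustrative wake handler: run the recipe, then poll availability up
// to maxWaitSeconds, heartbeating on each loop so the provider session
// is not garbage-collected mid-boot.
type WakeResult = { ok: true; ms: number } | { error: string };

async function handleWakeTask(deps: {
  runRecipe: () => Promise<void>;      // HTTP request or child-process spawn
  isAvailable: () => Promise<boolean>;
  sendHeartbeat: () => Promise<void>;
  maxWaitSeconds: number;
  pollMs?: number;
}): Promise<WakeResult> {
  const start = Date.now();
  try {
    await deps.runRecipe();
  } catch (err) {
    return { error: `wake recipe failed: ${String(err)}` };
  }
  const pollMs = deps.pollMs ?? 1000;
  while (Date.now() - start < deps.maxWaitSeconds * 1000) {
    await deps.sendHeartbeat(); // keep the provider session alive
    if (await deps.isAvailable()) {
      return { ok: true, ms: Date.now() - start };
    }
    await new Promise((resolve) => setTimeout(resolve, pollMs));
  }
  return { error: `provider not available after ${deps.maxWaitSeconds}s` };
}
```

Either branch of the result would then be posted to the existing _provider-task/:id/result endpoint.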
mcpd VirtualLlmService:
- RegisterProviderInput gains optional initialStatus ('active' |
'hibernating'). The register/upsert path uses it for both new and
reconnecting rows. Defaults to 'active' so v1 publishers still
work unchanged.
- Provider-register route's coercer accepts the new field.
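A minimal sketch of that default, assuming illustrative type names rather than the actual mcpd declarations:

```typescript
// initialStatus is optional; v1 publishers never send it, so the
// register/upsert path falls back to 'active'.
type ProviderStatus = "active" | "hibernating";

interface RegisterProviderInput {
  name: string;
  initialStatus?: ProviderStatus;
}

function resolveInitialStatus(input: RegisterProviderInput): ProviderStatus {
  return input.initialStatus ?? "active"; // legacy publishers stay active
}
```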
Tests: 3 new in registrar.test.ts cover initialStatus selection
(hibernating when wake configured + unavailable, active otherwise,
active when no wake even if unavailable). 8/8 registrar tests pass;
the 833/833 mcpd suite is unchanged.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The mcplocal counterpart to mcpd's VirtualLlmService. After this stage,
flipping `publish: true` on a provider in ~/.mcpctl/config.json makes
the provider show up in `mcpctl get llm` with kind=virtual the next
time mcplocal restarts; running an inference against it relays through
this client back to the local LlmProvider.
Config:
- LlmProviderFileEntry gains optional `publish: boolean` (default false,
  so existing setups don't change).
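The opt-in filter this implies is tiny; a sketch with an illustrative entry shape:

```typescript
// Only entries that explicitly opt in with publish: true are
// registered; the field defaults to false. Shape is illustrative.
interface LlmProviderFileEntry {
  name: string;
  publish?: boolean; // default false, so existing setups are untouched
}

function publishableProviders(
  entries: LlmProviderFileEntry[],
): LlmProviderFileEntry[] {
  return entries.filter((e) => e.publish === true);
}
```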
Registrar (new file: providers/registrar.ts):
- start(): if any provider is opted-in, POSTs to
/api/v1/llms/_provider-register with the publishable set, persists
the returned providerSessionId to ~/.mcpctl/provider-session for
sticky reconnects, then opens the SSE control channel and starts a
30-s heartbeat ticker.
- SSE listener parses event/data lines from text/event-stream frames.
task frames trigger handleInferTask: convert OpenAI body to
CompletionOptions, call provider.complete(), POST the result back as
either { status, body } (non-streaming) or two chunk POSTs
(streaming: one delta + a [DONE] marker).
- Disconnect → exponential backoff reconnect from 5 s up to 60 s. On
successful reconnect the persisted sessionId revives the same Llm
rows in mcpd (mcpd flips them back to active on heartbeat).
- stop() destroys the SSE socket and clears the heartbeat timer;
  main.ts's existing shutdown handler calls it during teardown.
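Two of the mechanics above, the text/event-stream frame parsing and the 5 s-to-60 s doubling backoff, can be sketched in a few lines (illustrative helpers, not the actual registrar code):

```typescript
// Minimal text/event-stream frame parser: frames are event:/data: line
// blocks separated by a blank line. Ignores \r, id:, and retry: fields
// for brevity.
interface SseFrame {
  event: string;
  data: string;
}

function parseSseChunk(chunk: string): SseFrame[] {
  const frames: SseFrame[] = [];
  for (const block of chunk.split("\n\n")) {
    let event = "message";
    const data: string[] = [];
    for (const line of block.split("\n")) {
      if (line.startsWith("event:")) event = line.slice(6).trim();
      else if (line.startsWith("data:")) data.push(line.slice(5).trim());
    }
    if (data.length > 0) frames.push({ event, data: data.join("\n") });
  }
  return frames;
}

// Exponential reconnect backoff: doubling from 5 s, capped at 60 s.
// The attempt counter would reset after a successful reconnect.
function nextBackoffMs(attempt: number, baseMs = 5_000, capMs = 60_000): number {
  return Math.min(baseMs * 2 ** attempt, capMs);
}
```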
Wired into mcplocal main.ts via maybeStartVirtualLlmRegistrar:
- Filters opted-in providers, looks up their LlmProvider instances in
the registry.
- Reads ~/.mcpctl/credentials for mcpdUrl + bearer; if they are
  absent, startup is skipped best-effort (logs a warning, returns
  null) and never blocks boot.
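That best-effort guard might look like this sketch, with the dependencies injected and the credentials shape purely illustrative:

```typescript
// Missing credentials log a warning and yield null instead of
// blocking boot; names here are hypothetical.
interface Credentials {
  mcpdUrl: string;
  bearer: string;
}

function maybeStartRegistrar(
  readCredentials: () => Credentials | null,
  start: (creds: Credentials) => { stop(): void },
  warn: (msg: string) => void,
): { stop(): void } | null {
  const creds = readCredentials();
  if (!creds) {
    warn("virtual LLM registrar disabled: no ~/.mcpctl/credentials");
    return null; // never a boot blocker
  }
  return start(creds);
}
```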
v1 caveat documented in the file header: LlmProvider returns a
finalized CompletionResult, not a token stream, so streaming requests
get a single delta chunk + [DONE]. Real per-token streaming is a v2
concern.
Tests: 5 new in tests/registrar.test.ts using a tiny in-process HTTP
server. Cover: no-op when nothing opted-in, register POST + sticky
sessionId persistence, sticky reconnect from disk, heartbeat ticker
fires at the configured interval, register HTTP error surfaces.
Workspace suite: 2043/2043 across 152 files (was 2006/149; the delta
includes the 5 new tests plus the newly discovered test file).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>