Adds two new subcommands on top of v7's provider lifecycle CLI:
mcpctl provider disable vllm-local # release GPU + survive restart
mcpctl provider enable vllm-local # clear the flag, ready to chat
Use case: vLLM keeps crashing on engine init. `down` stops it for now, but the
next chat triggers a restart; `disable` writes `disabled: true` into the
provider's entry in ~/.mcpctl/config.json and short-circuits
complete()/ensureRunning() until you re-enable it.
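For illustration, a minimal sketch of what the persisted flag might look like,
expressed against the file-entry type this PR extends; every field other than
`disabled` is an assumption, not the real schema:

```ts
// Sketch only: field names besides `disabled` are illustrative assumptions.
interface LlmProviderFileEntry {
  type: string;        // e.g. "vllm"
  model?: string;      // other provider settings elided
  disabled?: boolean;  // new in this PR: do not auto-start or serve requests
}

// Roughly what `mcpctl provider disable vllm-local` leaves behind in
// ~/.mcpctl/config.json for that provider's entry:
const vllmLocal: LlmProviderFileEntry = {
  type: "vllm",
  disabled: true,
};
```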
Implementation:
- LlmProviderEntry / LlmProviderFileEntry: new optional `disabled` field
- ManagedVllmProvider: setDisabled(bool), isDisabled(), gate in
complete()/ensureRunning(), expose `disabled` in getStatus(); see the sketch
after this list
- mcplocal HTTP: POST /llm/providers/:name/{disable,enable} write the
config file and apply the change live; /start returns 409 when the
target is disabled instead of silently failing
- Boot: createSingleProvider honors `entry.disabled` so a known-bad
vLLM doesn't auto-start on the first chat after mcplocal restart
- CLI: `disable` / `enable` subcommands on `mcpctl provider`; status
output now shows `(disabled)` next to the state
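A minimal sketch of the provider-side gate, assuming a plain class shape for
ManagedVllmProvider; the error type, status fields, and the elided start and
completion bodies are placeholders, only the `disabled` gating mirrors this PR:

```ts
// Sketch of the disabled gate; internals of the real ManagedVllmProvider are assumed.
class ManagedVllmProvider {
  private disabled = false;

  setDisabled(disabled: boolean): void {
    this.disabled = disabled;
  }

  isDisabled(): boolean {
    return this.disabled;
  }

  getStatus(): { state: string; disabled: boolean } {
    // `state` value is illustrative; the PR adds the `disabled` field to the status
    return { state: "stopped", disabled: this.disabled };
  }

  async ensureRunning(): Promise<void> {
    if (this.disabled) {
      // short-circuit: never auto-start a provider the user explicitly disabled
      throw new Error("provider is disabled");
    }
    // ...spawn / health-check the vLLM engine (elided)
  }

  async complete(prompt: string): Promise<string> {
    if (this.disabled) {
      throw new Error("provider is disabled");
    }
    await this.ensureRunning();
    // ...forward the completion request to vLLM (elided)
    return "";
  }
}
```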
`enable` takes effect live: the provider stays in the registry while disabled,
so flipping the flag back is enough; no mcplocal restart is needed (see the
sketch below).
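To make the "no restart needed" point concrete, a sketch of how the
disable/enable endpoints could persist the flag and flip it on the live
instance in one step; `loadConfig`, `saveConfig`, and the `providers` registry
are hypothetical names, not the actual mcplocal APIs:

```ts
// Hypothetical config/registry helpers, declared only so the sketch type-checks.
declare function loadConfig(): Promise<{ llmProviders: Record<string, { disabled?: boolean }> }>;
declare function saveConfig(config: Awaited<ReturnType<typeof loadConfig>>): Promise<void>;
declare const providers: Map<string, { setDisabled(disabled: boolean): void }>;

async function setProviderDisabled(name: string, disabled: boolean): Promise<void> {
  const config = await loadConfig();
  const entry = config.llmProviders[name];
  if (!entry) {
    throw new Error(`unknown provider: ${name}`);
  }

  entry.disabled = disabled;
  await saveConfig(config); // persist to ~/.mcpctl/config.json

  // The provider object never leaves the registry, so flipping the flag on the
  // live instance is enough; nothing is re-created and nothing restarts.
  providers.get(name)?.setDisabled(disabled);
}
```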
Tests: cli 437/437, mcplocal 731/731.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>