# mcpctl/completions/mcpctl.fish
# mcpctl fish completions — auto-generated by scripts/generate-completions.ts
# DO NOT EDIT MANUALLY — run: pnpm completions:generate
# Erase any stale completions from previous versions
complete -c mcpctl -e
set -l commands status login logout config get describe delete logs create edit apply chat chat-llm patch backup approve review skills console cache provider test migrate rotate
set -l project_commands get describe delete logs create edit attach-server detach-server
# Disable file completions by default
complete -c mcpctl -f
# Global options
complete -c mcpctl -s v -l version -d 'Show version'
complete -c mcpctl -l daemon-url -d 'mcplocal daemon URL' -x
complete -c mcpctl -l direct -d 'bypass mcplocal and connect directly to mcpd'
complete -c mcpctl -s p -l project -d 'Target project for project commands' -xa '(__mcpctl_project_names)'
complete -c mcpctl -s h -l help -d 'Show help'
# ---- Runtime helpers ----
# Helper: check if --project or -p was given
function __mcpctl_has_project
    set -l tokens (commandline -opc)
    for i in (seq (count $tokens))
        if test "$tokens[$i]" = "--project" -o "$tokens[$i]" = "-p"
            return 0
        end
    end
    return 1
end
# Resource type detection
set -l resources servers instances secrets secretbackends llms agents personalities templates projects users groups rbac prompts promptrequests serverattachments proxymodels inference-tasks all
function __mcpctl_needs_resource_type
    set -l resource_aliases servers instances secrets secretbackends llms agents personalities templates projects users groups rbac prompts promptrequests serverattachments proxymodels inference-tasks all server srv instance inst secret sec secretbackend sb llm agent personality template tpl project proj user group rbac-definition rbac-binding prompt promptrequest pr serverattachment sa proxymodel pm task tasks inference-task
    set -l tokens (commandline -opc)
    set -l found_cmd false
    for tok in $tokens
        if $found_cmd
            if contains -- $tok $resource_aliases
                return 1 # resource type already present
            end
        end
        if contains -- $tok get describe delete edit patch approve
            set found_cmd true
        end
    end
    if $found_cmd
        return 0 # command found but no resource type yet
    end
    return 1
end
# Map any resource alias to the canonical plural form for API calls
function __mcpctl_resolve_resource
    switch $argv[1]
        case server srv servers; echo servers
        case instance inst instances; echo instances
        case secret sec secrets; echo secrets
        case secretbackend sb secretbackends; echo secretbackends
        case llm llms; echo llms
        case agent agents; echo agents
        case personality personalities; echo personalities
        case template tpl templates; echo templates
        case project proj projects; echo projects
        case user users; echo users
        case group groups; echo groups
        case rbac rbac-definition rbac-binding; echo rbac
        case prompt prompts; echo prompts
        case promptrequest promptrequests pr; echo promptrequests
        case serverattachment serverattachments sa; echo serverattachments
        case proxymodel proxymodels pm; echo proxymodels
        case task tasks inference-task inference-tasks; echo inference-tasks
        case all; echo all
        case '*'; echo $argv[1]
    end
end
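# Illustrative invocations of the mapping above (examples only, not part of
# the generated output):
#   __mcpctl_resolve_resource sb      # → secretbackends
#   __mcpctl_resolve_resource task    # → inference-tasks
#   __mcpctl_resolve_resource widget  # → widget (unknown input passes through)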
function __mcpctl_get_resource_type
    set -l resource_aliases servers instances secrets secretbackends llms agents personalities templates projects users groups rbac prompts promptrequests serverattachments proxymodels inference-tasks all server srv instance inst secret sec secretbackend sb llm agent personality template tpl project proj user group rbac-definition rbac-binding prompt promptrequest pr serverattachment sa proxymodel pm task tasks inference-task
    set -l tokens (commandline -opc)
    set -l found_cmd false
    for tok in $tokens
        if $found_cmd
            if contains -- $tok $resource_aliases
                __mcpctl_resolve_resource $tok
                return
            end
        end
        if contains -- $tok get describe delete edit patch approve
            set found_cmd true
        end
    end
end
# Fetch resource names dynamically from the API
function __mcpctl_resource_names
    set -l resource (__mcpctl_get_resource_type)
    if test -z "$resource"
        return
    end
    if test "$resource" = "instances"
        mcpctl get instances -o json 2>/dev/null | jq -r '.[][].server.name' 2>/dev/null
    else if test "$resource" = "prompts" -o "$resource" = "promptrequests"
        mcpctl get $resource -A -o json 2>/dev/null | jq -r '.[].name' 2>/dev/null
    else
        mcpctl get $resource -o json 2>/dev/null | jq -r '.[].name' 2>/dev/null
    end
end
# Fetch project names for --project value
function __mcpctl_project_names
    mcpctl get projects -o json 2>/dev/null | jq -r '.[].name' 2>/dev/null
end
# Helper: get the --project/-p value from the command line
function __mcpctl_get_project_value
    set -l tokens (commandline -opc)
    for i in (seq (count $tokens))
        if test "$tokens[$i]" = "--project" -o "$tokens[$i]" = "-p"; and test $i -lt (count $tokens)
            echo $tokens[(math $i + 1)]
            return
        end
    end
end
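# Illustrative example (assuming the command line reads
# `mcpctl --project demo attach-server`):
#   __mcpctl_get_project_value  # → demo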
# Servers currently attached to the project (for detach-server)
function __mcpctl_project_servers
    set -l proj (__mcpctl_get_project_value)
    if test -z "$proj"
        return
    end
    mcpctl --project $proj get servers -o json 2>/dev/null | jq -r '.[].name' 2>/dev/null
end
# Servers NOT attached to the project (for attach-server)
function __mcpctl_available_servers
    set -l proj (__mcpctl_get_project_value)
    if test -z "$proj"
        mcpctl get servers -o json 2>/dev/null | jq -r '.[].name' 2>/dev/null
        return
    end
    set -l all (mcpctl get servers -o json 2>/dev/null | jq -r '.[].name' 2>/dev/null)
    set -l attached (mcpctl --project $proj get servers -o json 2>/dev/null | jq -r '.[].name' 2>/dev/null)
    for s in $all
        if not contains -- $s $attached
            echo $s
        end
    end
end
# Instance names for logs
function __mcpctl_instance_names
    mcpctl get instances -o json 2>/dev/null | jq -r '.[][].server.name' 2>/dev/null
end
# Helper: check if a positional arg has been given for a specific command
function __mcpctl_needs_arg_for
    set -l cmd $argv[1]
    set -l tokens (commandline -opc)
    set -l found false
    for tok in $tokens
        if $found
            if not string match -q -- '-*' $tok
                return 1 # arg already present
            end
        end
        if test "$tok" = "$cmd"
            set found true
        end
    end
    if $found
        return 0 # command found but no arg yet
    end
    return 1
end
# Helper: check if attach-server/detach-server already has a server argument
function __mcpctl_needs_server_arg
    set -l tokens (commandline -opc)
    set -l found_cmd false
    for tok in $tokens
        if $found_cmd
            if not string match -q -- '-*' $tok
                return 1 # server arg already present
            end
        end
        if contains -- $tok attach-server detach-server
            set found_cmd true
        end
    end
    if $found_cmd
        return 0
    end
    return 1
end
# Helper: check if a specific parent-child subcommand pair is active
function __mcpctl_subcmd_active
    set -l parent $argv[1]
    set -l child $argv[2]
    set -l tokens (commandline -opc)
    set -l found_parent false
    for tok in $tokens
        if $found_parent
            if test "$tok" = "$child"
                return 0
            end
            if not string match -q -- '-*' $tok
                return 1 # different subcommand
            end
        end
        if test "$tok" = "$parent"
            set found_parent true
        end
    end
    return 1
end
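# Illustrative use of the helper above (hypothetical completion, not emitted
# by the generator): restrict a flag to the `config claude` subcommand pair:
#   complete -c mcpctl -n '__mcpctl_subcmd_active config claude' -l project -x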
# Top-level commands (without --project)
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a status -d 'Show mcpctl status and connectivity'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a login -d 'Authenticate with mcpd'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a logout -d 'Log out and remove stored credentials'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a config -d 'Manage mcpctl configuration'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a get -d 'List resources (servers, projects, instances, all)'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a describe -d 'Show detailed information about a resource'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a delete -d 'Delete a resource (server, instance, secret, project, user, group, rbac, personality)'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a logs -d 'Get logs from an MCP server instance'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a create -d 'Create a resource (server, secret, secretbackend, llm, agent, project, user, group, rbac, serverattachment, prompt)'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a edit -d 'Edit a resource in your default editor (server, project)'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a apply -d 'Apply declarative configuration from a YAML or JSON file'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a chat -d 'Open an interactive chat session with an agent (REPL or one-shot)'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a chat-llm -d 'Stateless chat with any registered LLM (public or virtual); no threads, no tools'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a patch -d 'Patch a resource field (e.g. mcpctl patch project myproj llmProvider=none)'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a backup -d 'Git-based backup status and management'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a approve -d 'Approve a pending prompt request (atomic: delete request, create prompt)'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a review -d 'Triage proposed prompts and skills'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a skills -d 'Manage Claude Code skill bundles synced from mcpd'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a console -d 'Interactive MCP console — unified timeline with tools, provenance, and lab replay'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a cache -d 'Manage ProxyModel pipeline cache'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a provider -d 'Control local LLM providers (start/stop/status)'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a test -d 'Utilities for testing MCP endpoints and config'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a migrate -d 'Move resources between backends (currently: secrets between SecretBackends)'
complete -c mcpctl -n "not __mcpctl_has_project; and not __fish_seen_subcommand_from $commands" -a rotate -d 'Force rotation of a credential-rotating resource (currently: secretbackend)'
# Project-scoped commands (with --project)
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a get -d 'List resources (servers, projects, instances, all)'
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a describe -d 'Show detailed information about a resource'
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a delete -d 'Delete a resource (server, instance, secret, project, user, group, rbac, personality)'
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a logs -d 'Get logs from an MCP server instance'
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a create -d 'Create a resource (server, secret, secretbackend, llm, agent, project, user, group, rbac, serverattachment, prompt)'
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a edit -d 'Edit a resource in your default editor (server, project)'
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a attach-server -d 'Attach a server to a project (requires --project)'
complete -c mcpctl -n "__mcpctl_has_project; and not __fish_seen_subcommand_from $project_commands" -a detach-server -d 'Detach a server from a project (requires --project)'
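# The __mcpctl_has_project predicate used in the conditions above is defined
# earlier in this generated file. An illustrative sketch only — NOT the
# generated definition — approximating it as "a --project token has been
# typed on the current command line":
#
#   function __mcpctl_has_project_sketch
#       # commandline -opc lists the tokens typed before the cursor
#       contains -- --project (commandline -opc)
#   end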
# Resource types — only when resource type not yet selected
complete -c mcpctl -n "__fish_seen_subcommand_from get describe delete patch; and __mcpctl_needs_resource_type" -a "$resources" -d 'Resource type'
complete -c mcpctl -n "__fish_seen_subcommand_from edit; and __mcpctl_needs_resource_type" -a 'servers secrets projects groups rbac prompts promptrequests personalities' -d 'Resource type'
complete -c mcpctl -n "__fish_seen_subcommand_from approve; and __mcpctl_needs_resource_type" -a 'promptrequest' -d 'Resource type'
# Resource names — after resource type is selected
complete -c mcpctl -n "__fish_seen_subcommand_from get describe delete edit patch approve; and not __mcpctl_needs_resource_type" -a '(__mcpctl_resource_names)' -d 'Resource name'
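# The dynamic completion above calls __mcpctl_resource_names, defined earlier
# in this generated file. A minimal sketch of the pattern — the helper name
# suffix and the tail/awk parsing are illustrative assumptions, not the
# generated code — assuming `mcpctl get <type>` prints a header row followed
# by rows whose first column is NAME:
#
#   function __mcpctl_resource_names_sketch
#       set -l tokens (commandline -opc)          # e.g. mcpctl get servers
#       test (count $tokens) -ge 3; or return
#       mcpctl get $tokens[3] 2>/dev/null | tail -n +2 | awk '{ print $1 }'
#   end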
# config subcommands
set -l config_cmds view set path reset claude claude-generate setup impersonate
complete -c mcpctl -n "__fish_seen_subcommand_from config; and not __fish_seen_subcommand_from $config_cmds" -a view -d 'Show current configuration'
complete -c mcpctl -n "__fish_seen_subcommand_from config; and not __fish_seen_subcommand_from $config_cmds" -a set -d 'Set a configuration value'
complete -c mcpctl -n "__fish_seen_subcommand_from config; and not __fish_seen_subcommand_from $config_cmds" -a path -d 'Show configuration file path'
complete -c mcpctl -n "__fish_seen_subcommand_from config; and not __fish_seen_subcommand_from $config_cmds" -a reset -d 'Reset configuration to defaults'
complete -c mcpctl -n "__fish_seen_subcommand_from config; and not __fish_seen_subcommand_from $config_cmds" -a claude -d 'Generate .mcp.json + wire skills sync + install SessionStart hook'
complete -c mcpctl -n "__fish_seen_subcommand_from config; and not __fish_seen_subcommand_from $config_cmds" -a claude-generate -d ''
complete -c mcpctl -n "__fish_seen_subcommand_from config; and not __fish_seen_subcommand_from $config_cmds" -a setup -d 'Interactive LLM provider setup wizard'
complete -c mcpctl -n "__fish_seen_subcommand_from config; and not __fish_seen_subcommand_from $config_cmds" -a impersonate -d 'Impersonate another user or return to original identity'
# config view options
complete -c mcpctl -n "__mcpctl_subcmd_active config view" -s o -l output -d 'output format (json, yaml)' -x
# config claude options
complete -c mcpctl -n "__mcpctl_subcmd_active config claude" -s p -l project -d 'Project name' -xa '(__mcpctl_project_names)'
complete -c mcpctl -n "__mcpctl_subcmd_active config claude" -s o -l output -d 'Output file path' -x
complete -c mcpctl -n "__mcpctl_subcmd_active config claude" -l inspect -d 'Include mcpctl-inspect MCP server for traffic monitoring'
complete -c mcpctl -n "__mcpctl_subcmd_active config claude" -l stdout -d 'Print to stdout instead of writing a file'
complete -c mcpctl -n "__mcpctl_subcmd_active config claude" -l skip-skills -d 'Skip the skills sync + SessionStart hook install step (PR-5+)'
# config claude-generate options
complete -c mcpctl -n "__mcpctl_subcmd_active config claude-generate" -s p -l project -d 'Project name' -xa '(__mcpctl_project_names)'
complete -c mcpctl -n "__mcpctl_subcmd_active config claude-generate" -s o -l output -d 'Output file path' -x
complete -c mcpctl -n "__mcpctl_subcmd_active config claude-generate" -l inspect -d 'Include mcpctl-inspect MCP server for traffic monitoring'
complete -c mcpctl -n "__mcpctl_subcmd_active config claude-generate" -l stdout -d 'Print to stdout instead of writing a file'
complete -c mcpctl -n "__mcpctl_subcmd_active config claude-generate" -l skip-skills -d 'Skip the skills sync + SessionStart hook install step (PR-5+)'
# config impersonate options
complete -c mcpctl -n "__mcpctl_subcmd_active config impersonate" -l quit -d 'Stop impersonating and return to original identity'
# create subcommands
set -l create_cmds server secret llm agent secretbackend project user group rbac mcptoken prompt skill personality serverattachment promptrequest
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a server -d 'Create an MCP server definition'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a secret -d 'Create a secret'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a llm -d 'Register a server-managed LLM (anthropic, openai, vllm, ollama, deepseek, gemini-cli)'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a agent -d 'Create an Agent (LLM persona pinned to an Llm, optionally attached to a Project)'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a secretbackend -d 'Create a secret backend (plaintext, openbao)'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a project -d 'Create a project'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a user -d 'Create a user'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a group -d 'Create a group'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a rbac -d 'Create an RBAC binding definition'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a mcptoken -d 'Create a project-scoped API token for HTTP-mode mcplocal. The raw token is printed once.'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a prompt -d 'Create an approved prompt (scope: project, agent, or global)'
feat(mcpd): Skill resource end-to-end (CRUD + backup + revision integration) Phase 3 of the Skills + Revisions + Proposals work. Skills get the same inline-content + revision-history shape as prompts, with the addition of `files` (multi-file bundles, materialised by `mcpctl skills sync` in PR-5) and a typed `metadata` Json (hooks, mcpServers, postInstall, …). ## What's added ### Validation (src/mcpd/src/validation/skill.schema.ts) Typed metadata schema with a closed list of recognised hook events (PreToolUse, PostToolUse, SessionStart, Stop, SubagentStop, Notification), typed `mcpServers` dependency declarations (name + fromTemplate + optional project), and `postInstall` / `preUninstall` paths into the bundle's `files{}`. `.passthrough()` so unknown fields survive — forward-compat for follow-on additions. ### Repository (src/mcpd/src/repositories/skill.repository.ts) Mirrors PromptRepository exactly. Same `?? ''` workaround for nullable-FK compound-key lookups. ### Service (src/mcpd/src/services/skill.service.ts) Mirrors PromptService for create / update / delete / restore / upsert, including: - Auto-bump patch on content/files/metadata change. - Revision recording (best-effort — failures don't block the save). - 'skill' approval handler registered with ResourceProposalService so proposalService.approve dispatches to skills the same way it dispatches to prompts. - `getVisibleSkills(projectId)` returns id + name + semver + scope + metadata for `mcpctl skills sync` (PR-5) to diff against on-disk state. ### Routes (src/mcpd/src/routes/skills.ts) - GET /api/v1/skills (filters: ?project= ?projectId= ?agent= ?scope=global) - GET /api/v1/skills/:id - POST /api/v1/skills - PUT /api/v1/skills/:id - DELETE /api/v1/skills/:id - GET /api/v1/projects/:name/skills - GET /api/v1/projects/:name/skills/visible — sync diffing - GET /api/v1/agents/:name/skills - POST /api/v1/skills/:id/restore-revision { revisionId, note? 
} ### main.ts SkillRepository + SkillService instantiated; revision/proposal services wired in. `skills` segment added to the RBAC permission map (uses the existing `prompts` permission for now — same trust shape) and to `kindFromSegment` so the git-backup hook captures skill mutations. ### Backup integration - yaml-serializer.ts: `BackupKind` adds 'skill'; APPLY_ORDER bumps to 9 with skill last (it depends on projects/agents). `parseResourcePath` recognises the `skills/` directory. - git-backup.service.ts: `serializeResource` adds the `case 'skill'` branch alongside prompts. The git-sync loop now round-trips skills on every change. - (Bundle backup-service.ts is NOT updated in this PR — deferred to PR-7 alongside the cutover. The git-based backup IS wired, which is the primary persistence path.) ### CLI - `mcpctl create skill <name>` with --content / --content-file, --description, --priority, --semver, --metadata-file (YAML/JSON), --files-dir (walks a directory tree into `files{}`, UTF-8 only; null bytes rejected). - shared.ts adds `skill` / `skills` / `sk` aliases. ### apply.ts Not updated — `mcpctl apply -f skill.yaml` is deferred to PR-7. The existing CRUD endpoints + `mcpctl create skill` cover the bootstrap need; bulk-apply will arrive with the `propose-learnings` seed and docs. ## Tests 158 test files / 2127 tests green across the workspace. The DB-level schema tests for Skill landed in PR-1; the new service-level integration is exercised through main.ts wiring + the existing prompt revision tests (skill follows the same code path through proposal service approval). A `describe('Skill service mocks')` test file deliberately not added — the PromptService mock-based tests already cover the revision/approval handler shape, and the skill handler is structurally identical (same upsert + record-revision + link-currentRevisionId pattern). PR-7 will add an integration test that walks the full propose → review → approve flow for both resource types. 
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 00:48:40 +01:00
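The contentHash that getVisibleSkills returns is the key `mcpctl skills sync` diffs against its on-disk state: missing locally means install, matching hash means skip, differing hash means fetch the full body. A minimal sketch of that decision, assuming a hypothetical `decide_action` helper (this is not the real sync code):

```shell
#!/bin/sh
# decide_action SERVER_HASH STATE_HASH
# An empty STATE_HASH means the skill is not recorded in local state.
decide_action() {
  server="$1"; state="$2"
  if [ -z "$state" ]; then
    echo INSTALL            # server skill not in local state
  elif [ "$server" = "$state" ]; then
    echo SKIP               # hashes match: cheap path, no body fetch
  else
    echo UPDATE             # hash drift: fetch the full body
  fi
}

decide_action abc123 ""       # -> INSTALL
decide_action abc123 abc123   # -> SKIP
decide_action abc123 def456   # -> UPDATE
```

Because the hash is computed server-side over the canonicalised body, key reordering does not produce a spurious UPDATE.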
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a skill -d 'Create a skill (synced onto disk by `mcpctl skills sync` in a later PR)'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a personality -d 'Create a personality overlay on an agent'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a serverattachment -d 'Attach a server to a project'
complete -c mcpctl -n "__fish_seen_subcommand_from create; and not __fish_seen_subcommand_from $create_cmds" -a promptrequest -d 'Create a prompt request (pending proposal that needs approval)'
# create server options
complete -c mcpctl -n "__mcpctl_subcmd_active create server" -s d -l description -d 'Server description' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create server" -l package-name -d 'Package name (npm, PyPI, Go module, etc.)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create server" -l runtime -d 'Package runtime (node, python, go — default: node)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create server" -l docker-image -d 'Docker image' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create server" -l transport -d 'Transport type (STDIO, SSE, STREAMABLE_HTTP)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create server" -l repository-url -d 'Source repository URL' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create server" -l external-url -d 'External endpoint URL' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create server" -l command -d 'Command argument (repeat for multiple)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create server" -l container-port -d 'Container port number' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create server" -l replicas -d 'Number of replicas' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create server" -l env -d 'Env var: KEY=value (inline) or KEY=secretRef:SECRET:KEY (secret ref, repeat for multiple)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create server" -l from-template -d 'Create from template (name or name:version)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create server" -l env-from-secret -d 'Map template env vars from a secret' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create server" -l force -d 'Update if already exists'
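The `--env` flag above accepts two value shapes, inline `KEY=value` and `KEY=secretRef:SECRET:KEY`, which can be told apart by the value's prefix. A hedged illustration in plain shell (`classify_env` is a made-up helper, not the CLI's actual parser):

```shell
#!/bin/sh
# classify_env "KEY=..." -> "inline" or "secret-ref"
classify_env() {
  val="${1#*=}"                 # strip the "KEY=" prefix
  case "$val" in
    secretRef:*:*) echo "secret-ref" ;;  # KEY=secretRef:SECRET:KEY
    *)             echo "inline" ;;      # KEY=value
  esac
}

classify_env "API_URL=https://example.com"        # -> inline
classify_env "API_KEY=secretRef:my-secret:token"  # -> secret-ref
```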
# create secret options
complete -c mcpctl -n "__mcpctl_subcmd_active create secret" -l data -d 'Secret data KEY=value (repeat for multiple)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create secret" -l force -d 'Update if already exists'
feat(mcpd): Llm resource — CRUD + CLI + apply

Why: every client that wants an LLM (the agent, HTTP-mode mcplocal, Claude
Code's STDIO mcplocal) today has to know the provider URL + key, and each
user's ~/.mcpctl/config.json carries them. Centralising the catalogue on
the server is the prerequisite for Phase 2 (mcpd proxies inference so
credentials never leave the cluster). This phase adds the `Llm` resource
and its CRUD surface — no proxy yet, no client pivot yet. Just enough to
register what you have.

Schema:
- New `Llm` model: name/type/model/url/tier/description +
  {apiKeySecretId, apiKeySecretKey} FK pair. Reverse `llms` relation on
  Secret.
- Provider types: anthropic | openai | deepseek | vllm | ollama | gemini-cli.
- Tiers: fast | heavy.

mcpd:
- LlmRepository + LlmService + Zod validation schema + /api/v1/llms routes.
- API surface exposes `apiKeyRef: {name, key}` — the service translates
  to/from the FK pair so clients never deal in cuids.
- `resolveApiKey(llmName)` reads through SecretService (which itself
  dispatches to the right SecretBackend). That's the hook Phase 2's
  inference proxy uses.
- RBAC: added `'llms'` to RBAC_RESOURCES + resource alias. Standard
  view/create/edit/delete semantics.
- Wired into main.ts (repo, service, routes).

CLI:
- `mcpctl create llm <name> --type X --model Y --tier fast|heavy
  --api-key-ref SECRET/KEY [--url ...] [--extra k=v ...]`
- `mcpctl get|describe|delete llm` — standard resource verbs.
- `mcpctl apply -f` with `kind: llm` (single- or multi-doc yaml/json).
  Applied after secrets, before servers — apiKeyRef resolves an existing
  Secret.
- Shell completions regenerated.

Tests: 11 service unit tests + 9 route tests (happy path, 404s, 409,
validation). Full suite 1812/1812 (+20 from the 1792 Phase 0 baseline).
TypeScript clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 21:28:43 +01:00
# create llm options
complete -c mcpctl -n "__mcpctl_subcmd_active create llm" -l type -d 'Provider type (anthropic, openai, deepseek, vllm, ollama, gemini-cli)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create llm" -l model -d 'Model identifier (e.g. claude-3-5-sonnet-20241022)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create llm" -l url -d 'Endpoint URL (empty = provider default)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create llm" -l tier -d 'Tier: fast or heavy' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create llm" -l description -d 'Description' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create llm" -l api-key-ref -d 'API key reference in SECRET/KEY form (e.g. anthropic-key/token)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create llm" -l extra -d 'Extra config key=value (repeat)' -x
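The `SECRET/KEY` reference form used by `--api-key-ref` (e.g. `anthropic-key/token`) splits on the first slash into the Secret name and the key within it. A small shell illustration of that split (a sketch, not the actual TypeScript resolver):

```shell
#!/bin/sh
ref="anthropic-key/token"
secret_name="${ref%%/*}"   # everything before the first slash
secret_key="${ref#*/}"     # everything after it
echo "$secret_name $secret_key"   # -> anthropic-key token
```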
feat(mcpd+cli+mcplocal): /llms/<name>/members + POOL column + --pool-name (v4 Stage 2)

Surfaces the v4 pool model end-to-end:
- mcpd: GET /api/v1/llms/:name/members returns the effective pool the
  named anchor belongs to, plus aggregate stats (size, activeCount,
  explicit vs implicit pool key). RBAC inherits from `view:llms` — same
  as the single-Llm route. Members are full LlmView shapes so callers
  don't need a second roundtrip to render the pool block.
- mcpd: VirtualLlmService.register accepts an optional `poolName` on
  RegisterProviderInput; the route's `coerceProviderInput` validates the
  same character set as CreateLlmSchema.poolName. Backwards compatible —
  older mcplocals that don't send the field continue to publish solo Llms.
- CLI `get llm` table: new POOL column right after NAME. Solo rows show
  "-" so the "no pool / pool of 1" case is unambiguous (per user direction
  "make sure we see it, prominently visible and impossible to mistake").
- CLI `describe llm`: fetches /members and renders a Pool block at the top
  of the detail view when the row is in an explicit pool OR when its
  implicit pool has size > 1. Each member line shows kind/status; the
  anchor row gets "← this row". Block is suppressed for solo rows so
  describe stays compact in the common case.
- CLI `create llm --pool-name <name>` flag and apply schema both accept
  the new field. Yaml round-trip preserves it: get -o yaml emits
  `poolName: <name>`, apply -f re-imports it without diff. Verified
  end-to-end against the live mcpd.
- mcplocal: LlmProviderFileEntry gains optional `poolName`; main.ts and
  registrar.ts thread it through into the register payload. Use case for
  distributed inference: each user's mcplocal picks a unique `name`
  (e.g. `vllm-<host>-qwen3`) but a shared `poolName`
  (e.g. `user-vllm-qwen3-thinking`); agents see one logical pool that
  auto-grows as workers come online.
- Shell completions: regenerated from source via the existing
  scripts/generate-completions.ts. `--pool-name` now suggests in fish +
  bash for `mcpctl create llm`.

Tests: +3 new mcpd route tests for /members (explicit pool, solo pool of
1, missing-anchor 404). All suites green: mcpd 868/868 (was 865, +3),
mcplocal 723/723, cli 437/437.

Stage 3 (next): live smoke against 2 publishers sharing a pool name + docs.
2026-04-27 23:18:53 +01:00
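The `describe llm` rule above — show the Pool block when the row has an explicit pool OR its implicit pool has size > 1, suppress it for solo rows — can be sketched as a tiny predicate. `show_pool_block` is a hypothetical helper, not the real CLI code:

```shell
#!/bin/sh
# show_pool_block POOL_NAME POOL_SIZE -> "yes" or "no"
# Empty POOL_NAME means the row only belongs to its implicit solo pool.
show_pool_block() {
  pool_name="$1"; pool_size="$2"
  if [ -n "$pool_name" ] || [ "$pool_size" -gt 1 ]; then
    echo yes
  else
    echo no
  fi
}

show_pool_block "" 1                          # solo row -> no
show_pool_block "user-vllm-qwen3-thinking" 1  # explicit pool -> yes
show_pool_block "" 3                          # implicit pool of 3 -> yes
```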
complete -c mcpctl -n "__mcpctl_subcmd_active create llm" -l pool-name -d 'Stack with other Llms sharing this pool name; agents pinned to any member dispatch across the pool' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create llm" -l force -d 'Update if already exists'
feat(llm): probe upstream auth at registration time

mcpd now runs a cheap auth probe whenever an Llm is created (or its
apiKeyRef/url is updated). Catches misconfigured tokens / wrong URLs at
registration with a 422 + structured error message, instead of silently
500-ing on first chat with a generic "fetch failed".

Caught in the wild today: the homelab Pulumi config exposed
`MCPCTL_GATEWAY_TOKEN` (which is mcpctl_pat_-prefixed, intended for
LiteLLM→mcplocal direction) where LiteLLM expects `LITELLM_MASTER_KEY`
(sk-prefixed). The probe makes this immediate.

Probe shape (LlmAdapter.verifyAuth):
- OpenAI passthrough → GET <url>/v1/models. Cheap, idempotent, gated by
  the same auth as chat/completions.
- Anthropic → POST /v1/messages with max_tokens:1, "ping". Anthropic has
  no list-models endpoint; this is the cheapest auth-exercising call.
- Returns one of:
    { ok: true }
    { ok: false, reason: "auth", status, body } — 401/403, fail hard
    { ok: false, reason: "unreachable", error } — network, warn-only
    { ok: false, reason: "unexpected", status, body } — non-auth 4xx, warn-only

Behavior:
- LlmService.create()/update() runs the probe after resolveApiKey. Throws
  LlmAuthVerificationError on `auth`, logs warn for unreachable/unexpected,
  swallows for offline registration.
- Probe is skipped when there's no apiKeyRef (nothing to verify) or when
  the caller passes skipAuthCheck=true.
- update() probes only when apiKeyRef OR url changes — pure
  description/tier updates don't trigger upstream calls.
- Routes catch LlmAuthVerificationError and return 422 with
  `{ error, status }`. The CLI surfaces the message verbatim via ApiError.

Opt-out:
- CLI: `mcpctl create llm ... --skip-auth-check` for offline registration
  before the upstream is reachable.
- HTTP: side-channel body field `_skipAuthCheck: true` (stripped before
  validation, never persisted on the row).

Side fix in same commit (caught while testing): src/cli/src/index.ts read
`program.opts()` BEFORE `program.parse()`, so `--direct` was a no-op for
ApiClient — every command went to mcplocal regardless. Some commands
accidentally still worked because mcplocal forwards plain `/api/v1/*` to
mcpd, but flows that need direct SSE streaming (e.g. `mcpctl chat`)
couldn't reach mcpd. Fixed by peeking at process.argv directly for the
two global flags before Commander's parse runs.

Tests:
- llm-adapters.test.ts (+8): OpenAI 200/401/403/404/network, Anthropic
  200/401/400 (typo'd model = unexpected, NOT auth — registration
  shouldn't block on bad model names that surface at chat time).
- llm-service.test.ts (+6): create-throws-on-auth-fail (no row written),
  warn-only on unreachable/unexpected, skipAuthCheck bypass, no-key skip,
  update-only-probes-on-auth-affecting-change.
mcpd 775/775, mcplocal 715/715, cli 430/430.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-26 16:51:55 +01:00
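The probe's decision table — 401/403 fail hard, a missing HTTP status (network error) is unreachable and warn-only, any other non-2xx is unexpected and warn-only — reduces to a small classifier. A hedged sketch (the real LlmAdapter is TypeScript; `classify_probe` here is illustrative only):

```shell
#!/bin/sh
# classify_probe HTTP_STATUS -> ok | auth | unreachable | unexpected
# Pass an empty string when the request never produced a status.
classify_probe() {
  status="$1"
  case "$status" in
    "")      echo unreachable ;;  # network error: warn-only
    401|403) echo auth ;;         # fail hard: registration rejected (422)
    2??)     echo ok ;;           # probe succeeded
    *)       echo unexpected ;;   # non-auth failure: warn-only
  esac
}

classify_probe 200   # -> ok
classify_probe 401   # -> auth
classify_probe 404   # -> unexpected
classify_probe ""    # -> unreachable
```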
complete -c mcpctl -n "__mcpctl_subcmd_active create llm" -l skip-auth-check -d 'Skip the upstream auth probe (for offline registration before infra exists)'
feat(agents): mcpctl chat REPL + agent CRUD + completions (Stage 5)

This is the moment the user can actually talk to an agent end-to-end:

  mcpctl create llm qwen3-thinking --type openai --model qwen3-thinking \
    --url http://litellm.nvidia-nim.svc.cluster.local:4000/v1 \
    --api-key-ref litellm-key/API_KEY
  mcpctl create agent reviewer --llm qwen3-thinking --project mcpctl-dev \
    --description "I review security design — ask me after each major change."
  mcpctl chat reviewer

Pieces:
* src/cli/src/commands/chat.ts (new) — REPL + one-shot. Streams the SSE
  endpoint and prints text deltas to stdout as they arrive; tool_call /
  tool_result events go to stderr in dim-style brackets so the chat output
  stays clean. LiteLLM-style flags (--temperature / --top-p / --top-k /
  --max-tokens / --seed / --stop / --allow-tool / --extra) layer over
  agent.defaultParams. In-REPL slash-commands: /set KEY VAL,
  /system <text>, /tools (list project's MCP servers), /clear (new
  thread), /save (PATCH agent.defaultParams = current overrides), /quit.
* src/cli/src/commands/create.ts — `create agent` mirroring the llm
  pattern. Every yaml-applyable field has a corresponding flag (memory
  rule); --default-temperature / --default-top-p / --default-top-k /
  --default-max-tokens / --default-seed / --default-stop /
  --default-extra / --default-params-file all populate agent.defaultParams.
* src/cli/src/commands/apply.ts — AgentSpecSchema accepts both
  `llm: qwen3-thinking` shorthand and `llm: { name: ... }` long form;
  runs after llms in the apply order so apiKey/llm references resolve.
  Round-trips with `get agent foo -o yaml | apply -f -` (memory rule).
* src/cli/src/commands/get.ts — agentColumns (NAME, LLM, PROJECT,
  DESCRIPTION, ID); RESOURCE_KIND mapping for yaml export.
* src/cli/src/commands/shared.ts — `agent`/`agents`/`thread`/`threads`
  added to RESOURCE_ALIASES.
* src/cli/src/index.ts — wires createChatCommand into the program; passes
  the resolved baseUrl + token so chat can stream SSE without going
  through ApiClient (which only does buffered request/response).
* completions/mcpctl.{fish,bash} regenerated.
  scripts/generate-completions.ts knows about agents (canonical + aliases)
  and emits a special-case `chat)` block that completes the first arg with
  `mcpctl get agents` names.

tests/completions.test.ts: +9 new assertions covering agents in the
resource list, chat in the commands list, --llm flag for create agent,
agent-name completion for chat, etc. CLI suite: 430/430 (was 421).
Completions --check is clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-25 17:02:38 +01:00
# create agent options
complete -c mcpctl -n "__mcpctl_subcmd_active create agent" -l llm -d 'Pinned Llm (see `mcpctl get llms`)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create agent" -l project -d 'Attach to this Project (optional)' -xa '(__mcpctl_project_names)'
complete -c mcpctl -n "__mcpctl_subcmd_active create agent" -l description -d 'Description (shown in MCP tools/list)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create agent" -l system-prompt -d 'System prompt (persona)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create agent" -l system-prompt-file -d 'Read system prompt from a file' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create agent" -l proxy-model -d 'Optional proxyModel name override (informational)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create agent" -l default-temperature -d 'Default sampling temperature' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create agent" -l default-top-p -d 'Default top_p' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create agent" -l default-top-k -d 'Default top_k' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create agent" -l default-max-tokens -d 'Default max_tokens' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create agent" -l default-seed -d 'Default seed' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create agent" -l default-stop -d 'Default stop sequence (repeat for multiple)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create agent" -l default-extra -d 'Default provider-specific knob k=v (repeat)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create agent" -l default-params-file -d 'Read defaultParams from a JSON file' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create agent" -l force -d 'Update if already exists'
feat(mcpd): pluggable SecretBackend abstraction + OpenBao driver + migrate

Why: API keys live in Postgres as plaintext JSON. A DB read exposes every
credential in the system. Before centralising more secrets (LLM keys,
etc.) we want to be able to point at an external KV store and drop DB
access to sensitive rows.

New model:
- `SecretBackend` resource (CRUD + isDefault invariant) owns how a secret
  is stored. `Secret` gains `backendId` FK and `externalRef`.
  Reads/writes dispatch through a driver.
- `plaintext` driver (near-noop, uses existing Secret.data column) is
  seeded as the `default` row at startup. Acts as trust root / bootstrap.
- `openbao` driver (also HashiCorp Vault KV v2 compatible) talks plain
  HTTP, no SDK dependency. Auth via static token pulled from a
  plaintext-backed `Secret` through the injected SecretRefResolver.
  Caches resolved token.
- `SecretMigrateService` moves secrets one-at-a-time: read → write dest →
  flip row → best-effort source delete. Interrupted runs are idempotent
  (skips secrets already on destination).

CLI surface:
- `mcpctl create|get|describe|delete secretbackend` + `--default` on create.
- `mcpctl migrate secrets --from X --to Y [--names a,b] [--keep-source] [--dry-run]`
- `apply -f` round-trips secretbackends (yaml/json multi-doc + grouped).
- RBAC: `secretbackends` resource + `run:migrate-secrets` operation.
- Fish + bash completions regenerated.

docs/secret-backends.md covers the OpenBao policy, chicken-and-egg auth
flow, and the migration semantics.

Broke the circular dep (OpenBao needs SecretService to resolve its own
token, SecretService needs SecretBackendService) with a deferred-resolver
bridge in mcpd startup. 11 new driver unit tests; existing
env-resolver/secret-route/backup tests updated for the new service
signatures. Full suite: 1792/1792.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 19:29:55 +01:00
# create secretbackend options
complete -c mcpctl -n "__mcpctl_subcmd_active create secretbackend" -l type -d 'Backend type (plaintext, openbao)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create secretbackend" -l description -d 'Description' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create secretbackend" -l default -d 'Promote this backend to default (atomically demotes the current one)'
complete -c mcpctl -n "__mcpctl_subcmd_active create secretbackend" -l url -d 'openbao: vault URL (e.g. http://bao.example:8200)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create secretbackend" -l namespace -d 'openbao: X-Vault-Namespace header value' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create secretbackend" -l mount -d 'openbao: KV v2 mount point (default: secret)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create secretbackend" -l path-prefix -d 'openbao: path prefix under mount (default: mcpctl)' -x
feat(openbao): kubernetes ServiceAccount auth — no static token in DB

Why: requiring a static OpenBao root token to live (even once-bootstrap)
on the plaintext backend is the weakest link in the chain. With the
bao-side Kubernetes auth method enabled, mcpd's pod can authenticate
using its own projected SA token, exchange it for a short-lived Vault
client token, and keep the database free of any vault credentials at all.

Driver changes (src/mcpd/src/services/secret-backends/openbao.ts):
- New `OpenBaoConfig.auth = 'token' | 'kubernetes'`. Defaults to 'token'
  so existing rows keep working. Both shapes share url + mount +
  pathPrefix + namespace; auth-specific fields are mutually exclusive in
  the config schema.
- Kubernetes auth flow: read JWT from /var/run/secrets/.../token, POST to
  /v1/auth/<authMount>/login {role, jwt}, cache the returned client_token
  for `lease_duration - 60s` (grace window), then re-login.
- One-shot 403-retry: if a request comes back 403 (revoked / clock skew),
  purge cache and retry the original request once with a fresh login.
- Reads + writes go through the same getToken() path so token-auth is
  unchanged for existing deployments.

CLI (src/cli/src/commands/create.ts):
- `mcpctl create secretbackend bao --type openbao --auth kubernetes \
    --url https://bao.example:8200 --role mcpctl`
- Optional `--auth-mount` (default 'kubernetes') + `--sa-token-path`
  (default the standard projected-token path) for non-default deployments.
- Token-auth path unchanged: `--auth token --token-secret SECRET/KEY`
  (or omit `--auth` since 'token' is the default).

Validation (factory.ts) gates on the auth strategy: each path enforces
its own required fields and produces a clear error if misconfigured.

Tests: 6 new k8s-auth unit cases (login wire shape, lease-based caching,
custom authMount, 403-on-login, missing-role rejection,
missing-tokenSecretRef rejection). Full suite 1859/1859. Completions
regenerated for the new flags.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-19 23:23:05 +01:00
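The cache window in the kubernetes auth flow is just `lease_duration` minus a 60-second grace margin, so the driver re-logs-in before the client token actually expires. The arithmetic, as a sketch (3600s is an assumed example lease, not a value from the commit):

```shell
#!/bin/sh
lease_duration=3600               # seconds, as returned by the login call
grace=60                          # re-login this long before expiry
cache_for=$(( lease_duration - grace ))
echo "$cache_for"   # -> 3540
```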
complete -c mcpctl -n "__mcpctl_subcmd_active create secretbackend" -l auth -d 'openbao: auth method — \'token\' (default) or \'kubernetes\'' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create secretbackend" -l token-secret -d 'openbao token auth: token secret reference in SECRET/KEY form (e.g. bao-creds/token)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create secretbackend" -l role -d 'openbao kubernetes auth: vault role to login as (e.g. \'mcpctl\')' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create secretbackend" -l auth-mount -d 'openbao kubernetes auth: vault auth method mount path (default: \'kubernetes\')' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create secretbackend" -l sa-token-path -d 'openbao kubernetes auth: filesystem path to projected SA token (default: \'/var/run/secrets/kubernetes.io/serviceaccount/token\')' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create secretbackend" -l config -d 'Extra config as key=value (repeat for multiple)' -x
feat(openbao): wizard-provisioning + daily token rotation

One-command setup replaces the 6-step manual flow — `mcpctl create
secretbackend bao --type openbao --wizard` takes the OpenBao admin token
once, provisions a narrow policy + token role, mints the first periodic
token, stores it on mcpd, verifies end-to-end, and prints the migration
command. The admin token is NEVER persisted.

The stored credential auto-rotates daily: mcpd mints a successor via the
token role (self-rotation capability is part of the policy it was issued
with), verifies the successor, writes it over the backing Secret, then
revokes the predecessor by accessor. TTL 720h means a week of rotation
failures still leaves 20+ days of runway.

Shared:
- New `@mcpctl/shared/vault` — pure HTTP wrappers (verifyHealth,
  ensureKvV2, writePolicy, ensureTokenRole, mintRoleToken,
  revokeAccessor, lookupSelf, testWriteReadDelete) and policy HCL builder.

mcpd:
- `tokenMeta Json @default("{}")` on SecretBackend. Self-healing schema
  migration — empty default lets `prisma db push` add the column cleanly.
- SecretBackendRotator.rotateOne: mint → verify → persist → revoke-old →
  update tokenMeta. Failures surface via `lastRotationError` on the row;
  the old token keeps working.
- SecretBackendRotatorLoop: on startup rotates overdue backends,
  schedules per-backend timers with ±10min jitter. Stops cleanly on
  shutdown.
- New `POST /api/v1/secretbackends/:id/rotate` (operation
  `rotate-secretbackend` — added to bootstrap-admin's auto-migrated ops
  alongside migrate-secrets, which was previously missing too).

CLI:
- `--wizard` on `create secretbackend` delegates to the interactive flow.
  All prompts can be pre-answered via flags (--url, --admin-token,
  --mount, --path-prefix, --policy-name, --token-role,
  --no-promote-default) for CI.
- `mcpctl rotate secretbackend <name>` — convenience verb; hits the new
  rotate endpoint.
- `describe secretbackend` renders a Token health section (healthy /
  STALE / WARNING / ERROR) with generated/renewal/expiry timestamps and
  last rotation error. Only shown when tokenMeta.rotatable is true — the
  existing k8s-auth + static-token backends don't surface it.

Tests: 15 vault-client unit tests (shared), 8 rotator unit tests (mcpd),
3 wizard flow tests (cli, including a regression test that the admin
token never appears in stdout). Full suite 1885/1885 (+32). Completions
regenerated for the new flags.

Out of scope (explicit): kubernetes-auth wizard, Vault Enterprise
namespaces in the wizard path, rotation for non-wizard static-token
backends. See plan file for details.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 17:20:37 +01:00
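The "week of rotation failures still leaves 20+ days of runway" claim is straightforward arithmetic on the 720h TTL with daily rotation: seven missed days consume 168h, leaving 552h, i.e. 23 days. Shown as a sketch:

```shell
#!/bin/sh
ttl_hours=720                       # periodic token TTL
missed_hours=$(( 7 * 24 ))          # a full week of failed daily rotations
runway_days=$(( (ttl_hours - missed_hours) / 24 ))
echo "$runway_days"   # -> 23
```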
complete -c mcpctl -n "__mcpctl_subcmd_active create secretbackend" -l wizard -d 'Interactive wizard (openbao only): provision policy + token role, mint token, store on mcpd, suggest migration'
complete -c mcpctl -n "__mcpctl_subcmd_active create secretbackend" -l setup-token -d 'openbao wizard: OpenBao token with provisioning perms (policy write + auth/token admin). Root works; a scoped SA token works too. Prompted if omitted. Used only for provisioning; NEVER persisted.' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create secretbackend" -l policy-name -d 'openbao wizard: name for the policy created on OpenBao (default: \'app-mcpd\')' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create secretbackend" -l token-role -d 'openbao wizard: name for the token role created on OpenBao (default: \'app-mcpd-role\')' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create secretbackend" -l no-promote-default -d 'openbao wizard: do not promote this backend to default after creation'
complete -c mcpctl -n "__mcpctl_subcmd_active create secretbackend" -l force -d 'Update if already exists'
# create project options
complete -c mcpctl -n "__mcpctl_subcmd_active create project" -s d -l description -d 'Project description' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create project" -l proxy-model -d 'Plugin name (default, content-pipeline, gate, none)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create project" -l prompt -d 'Project-level prompt / instructions for the LLM' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create project" -l llm -d 'Name of an Llm resource (see \'mcpctl get llms\'), or \'none\' to disable' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create project" -l llm-model -d 'Override the model string for this project (defaults to the Llm\'s own model)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create project" -l gated -d '[deprecated: use --proxy-model default]'
complete -c mcpctl -n "__mcpctl_subcmd_active create project" -l no-gated -d '[deprecated: use --proxy-model content-pipeline]'
complete -c mcpctl -n "__mcpctl_subcmd_active create project" -l server -d 'Server name (repeat for multiple)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create project" -l force -d 'Update if already exists'
# create user options
complete -c mcpctl -n "__mcpctl_subcmd_active create user" -l password -d 'User password' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create user" -l name -d 'User display name' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create user" -l force -d 'Update if already exists'
# create group options
complete -c mcpctl -n "__mcpctl_subcmd_active create group" -l description -d 'Group description' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create group" -l member -d 'Member email (repeat for multiple)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create group" -l force -d 'Update if already exists'
# create rbac options
complete -c mcpctl -n "__mcpctl_subcmd_active create rbac" -l subject -d 'Subject as Kind:name (repeat for multiple)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create rbac" -l roleBindings -d 'Role binding as key:value pairs, e.g. "role:view,resource:servers" or "role:view,resource:servers,name:my-ha" or "action:logs" (repeat for multiple)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create rbac" -l force -d 'Update if already exists'
# create mcptoken options
complete -c mcpctl -n "__mcpctl_subcmd_active create mcptoken" -s p -l project -d 'Project this token is bound to' -xa '(__mcpctl_project_names)'
complete -c mcpctl -n "__mcpctl_subcmd_active create mcptoken" -l rbac -d 'Base RBAC: \'empty\' (default, no bindings) or \'clone\' (snapshot creator\'s perms)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create mcptoken" -l bind -d 'Additional role binding as key:value pairs, e.g. "role:view,resource:servers" or "action:logs" (repeat for multiple). Creator perms are the ceiling.' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create mcptoken" -l ttl -d 'Expiry: \'30d\', \'12h\', \'never\', or an ISO8601 datetime' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create mcptoken" -l description -d 'Freeform description' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create mcptoken" -l force -d 'Revoke any existing active token with this name, then create a new one'
# create prompt options
complete -c mcpctl -n "__mcpctl_subcmd_active create prompt" -s p -l project -d 'Project to scope the prompt to' -xa '(__mcpctl_project_names)'
complete -c mcpctl -n "__mcpctl_subcmd_active create prompt" -l agent -d 'Agent to attach the prompt to directly (XOR with --project)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create prompt" -l content -d 'Prompt content text' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create prompt" -l content-file -d 'Read prompt content from file' -rF
complete -c mcpctl -n "__mcpctl_subcmd_active create prompt" -l priority -d 'Priority 1-10 (default: 5, higher = more important)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create prompt" -l link -d 'Link to MCP resource (format: project/server:uri)' -x
# create skill options
complete -c mcpctl -n "__mcpctl_subcmd_active create skill" -s p -l project -d 'Project to scope the skill to' -xa '(__mcpctl_project_names)'
complete -c mcpctl -n "__mcpctl_subcmd_active create skill" -l agent -d 'Agent to scope the skill to (XOR with --project)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create skill" -l content -d 'SKILL.md body text' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create skill" -l content-file -d 'Read SKILL.md body from file' -rF
complete -c mcpctl -n "__mcpctl_subcmd_active create skill" -l description -d 'Short description shown in listings' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create skill" -l priority -d 'Priority 1-10 (default: 5)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create skill" -l semver -d 'Initial semver (default: 0.1.0)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create skill" -l metadata-file -d 'YAML/JSON file with metadata (hooks, mcpServers, postInstall, …)' -rF
complete -c mcpctl -n "__mcpctl_subcmd_active create skill" -l files-dir -d 'Directory whose tree becomes the skill\'s files{} map (UTF-8 text only)' -xa '(__fish_complete_directories)'
# create personality options
complete -c mcpctl -n "__mcpctl_subcmd_active create personality" -l agent -d 'Agent that owns this personality (required)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create personality" -l description -d 'Description shown in `mcpctl get personalities`' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create personality" -l priority -d 'Priority 1-10 (default: 5)' -x
# create serverattachment options
complete -c mcpctl -n "__mcpctl_subcmd_active create serverattachment" -s p -l project -d 'Project name' -xa '(__mcpctl_project_names)'
# create promptrequest options
complete -c mcpctl -n "__mcpctl_subcmd_active create promptrequest" -s p -l project -d 'Project name to scope the prompt request to' -xa '(__mcpctl_project_names)'
complete -c mcpctl -n "__mcpctl_subcmd_active create promptrequest" -l content -d 'Prompt content text' -x
complete -c mcpctl -n "__mcpctl_subcmd_active create promptrequest" -l content-file -d 'Read prompt content from file' -rF
complete -c mcpctl -n "__mcpctl_subcmd_active create promptrequest" -l priority -d 'Priority 1-10 (default: 5, higher = more important)' -x
# backup subcommands
set -l backup_cmds log restore
complete -c mcpctl -n "__fish_seen_subcommand_from backup; and not __fish_seen_subcommand_from $backup_cmds" -a log -d 'Show backup commit history'
complete -c mcpctl -n "__fish_seen_subcommand_from backup; and not __fish_seen_subcommand_from $backup_cmds" -a restore -d 'Restore mcpctl state from backup history'
# backup log options
complete -c mcpctl -n "__mcpctl_subcmd_active backup log" -s n -l limit -d 'number of commits to show' -x
# review subcommands
set -l review_cmds pending next show approve reject diff
complete -c mcpctl -n "__fish_seen_subcommand_from review; and not __fish_seen_subcommand_from $review_cmds" -a pending -d 'List pending proposals'
complete -c mcpctl -n "__fish_seen_subcommand_from review; and not __fish_seen_subcommand_from $review_cmds" -a next -d 'Show the oldest pending proposal'
complete -c mcpctl -n "__fish_seen_subcommand_from review; and not __fish_seen_subcommand_from $review_cmds" -a show -d 'Show full detail of a proposal'
complete -c mcpctl -n "__fish_seen_subcommand_from review; and not __fish_seen_subcommand_from $review_cmds" -a approve -d 'Approve a pending proposal (creates the resource + initial revision)'
complete -c mcpctl -n "__fish_seen_subcommand_from review; and not __fish_seen_subcommand_from $review_cmds" -a reject -d 'Reject a pending proposal with a reviewer note'
complete -c mcpctl -n "__fish_seen_subcommand_from review; and not __fish_seen_subcommand_from $review_cmds" -a diff -d 'Show what would change if this proposal were approved'
# review pending options
complete -c mcpctl -n "__mcpctl_subcmd_active review pending" -l type -d 'Filter by resource type: prompt or skill' -x
# review next options
complete -c mcpctl -n "__mcpctl_subcmd_active review next" -l type -d 'Filter by resource type: prompt or skill' -x
# review reject options
complete -c mcpctl -n "__mcpctl_subcmd_active review reject" -l reason -d 'Reviewer note explaining the rejection' -x
# skills subcommands
set -l skills_cmds sync
complete -c mcpctl -n "__fish_seen_subcommand_from skills; and not __fish_seen_subcommand_from $skills_cmds" -a sync -d 'Sync skills from mcpd onto disk under ~/.claude/skills/'
# skills sync options
complete -c mcpctl -n "__mcpctl_subcmd_active skills sync" -s p -l project -d 'Project to sync (overrides .mcpctl-project marker)' -xa '(__mcpctl_project_names)'
complete -c mcpctl -n "__mcpctl_subcmd_active skills sync" -l dry-run -d 'Print what would change without writing anything'
complete -c mcpctl -n "__mcpctl_subcmd_active skills sync" -l force -d 'Overwrite locally-modified skills'
complete -c mcpctl -n "__mcpctl_subcmd_active skills sync" -l quiet -d 'Suppress all output unless something changed (used by SessionStart hook)'
complete -c mcpctl -n "__mcpctl_subcmd_active skills sync" -l skip-postinstall -d 'Do not run metadata.postInstall scripts (no-op in v1; reserved)'
complete -c mcpctl -n "__mcpctl_subcmd_active skills sync" -l keep-orphans -d 'Do not remove skills that are no longer in the server set'
# cache subcommands
set -l cache_cmds stats clear
complete -c mcpctl -n "__fish_seen_subcommand_from cache; and not __fish_seen_subcommand_from $cache_cmds" -a stats -d 'Show cache statistics'
complete -c mcpctl -n "__fish_seen_subcommand_from cache; and not __fish_seen_subcommand_from $cache_cmds" -a clear -d 'Clear cache entries'
# cache clear options
complete -c mcpctl -n "__mcpctl_subcmd_active cache clear" -l older-than -d 'Clear entries older than N days' -x
complete -c mcpctl -n "__mcpctl_subcmd_active cache clear" -s y -l yes -d 'Skip confirmation'
# provider subcommands
set -l provider_cmds status up down disable enable
complete -c mcpctl -n "__fish_seen_subcommand_from provider; and not __fish_seen_subcommand_from $provider_cmds" -a status -d 'Show lifecycle state of a provider'
complete -c mcpctl -n "__fish_seen_subcommand_from provider; and not __fish_seen_subcommand_from $provider_cmds" -a up -d 'Start a managed provider (warm up so first chat is fast)'
complete -c mcpctl -n "__fish_seen_subcommand_from provider; and not __fish_seen_subcommand_from $provider_cmds" -a down -d 'Stop a managed provider now (releases GPU memory)'
complete -c mcpctl -n "__fish_seen_subcommand_from provider; and not __fish_seen_subcommand_from $provider_cmds" -a disable -d 'Persistently disable a managed provider (survives mcplocal restart)'
complete -c mcpctl -n "__fish_seen_subcommand_from provider; and not __fish_seen_subcommand_from $provider_cmds" -a enable -d 'Re-enable a previously-disabled provider'
# test subcommands
set -l test_cmds mcp
complete -c mcpctl -n "__fish_seen_subcommand_from test; and not __fish_seen_subcommand_from $test_cmds" -a mcp -d 'Verify a Streamable-HTTP MCP endpoint: health, initialize, tools/list, optionally call a tool'
# test mcp options
complete -c mcpctl -n "__mcpctl_subcmd_active test mcp" -l token -d 'Bearer token (also reads $MCPCTL_TOKEN)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active test mcp" -l tool -d 'Invoke a specific tool after listing' -x
complete -c mcpctl -n "__mcpctl_subcmd_active test mcp" -l args -d 'JSON-encoded arguments for --tool' -x
complete -c mcpctl -n "__mcpctl_subcmd_active test mcp" -l expect-tools -d 'Comma-separated tool names that MUST appear; fails otherwise' -x
complete -c mcpctl -n "__mcpctl_subcmd_active test mcp" -l timeout -d 'Per-request timeout in seconds' -x
complete -c mcpctl -n "__mcpctl_subcmd_active test mcp" -s o -l output -d 'Output format: text or json' -x
complete -c mcpctl -n "__mcpctl_subcmd_active test mcp" -l no-health -d 'Skip the /healthz preflight check'
# migrate subcommands
set -l migrate_cmds secrets
complete -c mcpctl -n "__fish_seen_subcommand_from migrate; and not __fish_seen_subcommand_from $migrate_cmds" -a secrets -d 'Migrate secrets from one SecretBackend to another'
# migrate secrets options
complete -c mcpctl -n "__mcpctl_subcmd_active migrate secrets" -l from -d 'Source SecretBackend name' -x
complete -c mcpctl -n "__mcpctl_subcmd_active migrate secrets" -l to -d 'Destination SecretBackend name' -x
complete -c mcpctl -n "__mcpctl_subcmd_active migrate secrets" -l names -d 'Comma-separated secret names (default: all)' -x
complete -c mcpctl -n "__mcpctl_subcmd_active migrate secrets" -l keep-source -d 'Leave the source copy intact (default: delete from source after write+commit)'
complete -c mcpctl -n "__mcpctl_subcmd_active migrate secrets" -l dry-run -d 'Show which secrets would be migrated without touching them'
# rotate subcommands
set -l rotate_cmds secretbackend
complete -c mcpctl -n "__fish_seen_subcommand_from rotate; and not __fish_seen_subcommand_from $rotate_cmds" -a secretbackend -d 'Rotate the vault token on an OpenBao SecretBackend (wizard-provisioned)'
# status options
complete -c mcpctl -n "__fish_seen_subcommand_from status" -s o -l output -d 'output format (table, json, yaml)' -x
# login options
complete -c mcpctl -n "__fish_seen_subcommand_from login" -l mcpd-url -d 'mcpd URL to authenticate against' -x
# get options
complete -c mcpctl -n "__fish_seen_subcommand_from get" -s o -l output -d 'output format (table, json, yaml)' -x
complete -c mcpctl -n "__fish_seen_subcommand_from get" -s p -l project -d 'Filter by project' -xa '(__mcpctl_project_names)'
complete -c mcpctl -n "__fish_seen_subcommand_from get" -s A -l all -d 'Show all (including project-scoped) resources'
# describe options
complete -c mcpctl -n "__fish_seen_subcommand_from describe" -s o -l output -d 'output format (detail, json, yaml)' -x
complete -c mcpctl -n "__fish_seen_subcommand_from describe" -l show-values -d 'Show secret values (default: masked)'
# delete options
complete -c mcpctl -n "__fish_seen_subcommand_from delete" -s p -l project -d 'Project name (for serverattachment)' -xa '(__mcpctl_project_names)'
complete -c mcpctl -n "__fish_seen_subcommand_from delete" -l agent -d 'Agent name (for personality delete-by-name)' -x
# logs options
complete -c mcpctl -n "__fish_seen_subcommand_from logs" -s t -l tail -d 'Number of lines to show' -x
complete -c mcpctl -n "__fish_seen_subcommand_from logs" -s i -l instance -d 'Instance/replica index (0-based, for servers with multiple replicas)' -x
# edit options
complete -c mcpctl -n "__fish_seen_subcommand_from edit" -l bump -d 'Bump prompt semver after edit: major | minor | patch' -x
complete -c mcpctl -n "__fish_seen_subcommand_from edit" -l semver -d 'Set prompt semver explicitly (X.Y.Z)' -x
complete -c mcpctl -n "__fish_seen_subcommand_from edit" -l note -d 'Note attached to the resulting revision' -x
# apply options
complete -c mcpctl -n "__fish_seen_subcommand_from apply" -s f -l file -d 'Path to config file (alternative to positional arg)' -rF
complete -c mcpctl -n "__fish_seen_subcommand_from apply" -l dry-run -d 'Validate and show changes without applying'
# chat options
complete -c mcpctl -n "__fish_seen_subcommand_from chat" -s m -l message -d 'One-shot: send a single message and exit (no REPL)' -x
complete -c mcpctl -n "__fish_seen_subcommand_from chat" -l thread -d 'Resume an existing thread' -x
complete -c mcpctl -n "__fish_seen_subcommand_from chat" -l system -d 'Replace agent.systemPrompt for this session' -x
complete -c mcpctl -n "__fish_seen_subcommand_from chat" -l system-file -d 'Read --system text from a file' -x
complete -c mcpctl -n "__fish_seen_subcommand_from chat" -l system-append -d 'Append to the agent system block for this session' -x
complete -c mcpctl -n "__fish_seen_subcommand_from chat" -l personality -d 'Personality overlay (additive prompts on top of the agent)' -x
complete -c mcpctl -n "__fish_seen_subcommand_from chat" -l temperature -d 'Sampling temperature (0..2)' -x
complete -c mcpctl -n "__fish_seen_subcommand_from chat" -l top-p -d 'Nucleus sampling cutoff (0..1)' -x
complete -c mcpctl -n "__fish_seen_subcommand_from chat" -l top-k -d 'Top-K sampling (Anthropic; OpenAI ignores)' -x
complete -c mcpctl -n "__fish_seen_subcommand_from chat" -l max-tokens -d 'Maximum tokens in the assistant reply' -x
complete -c mcpctl -n "__fish_seen_subcommand_from chat" -l seed -d 'Reproducibility seed (provider-dependent)' -x
complete -c mcpctl -n "__fish_seen_subcommand_from chat" -l stop -d 'Stop sequence (repeatable)' -x
complete -c mcpctl -n "__fish_seen_subcommand_from chat" -l allow-tool -d 'Restrict to this tool only (repeatable)' -x
complete -c mcpctl -n "__fish_seen_subcommand_from chat" -l extra -d 'Provider-specific knob k=v (repeatable)' -x
complete -c mcpctl -n "__fish_seen_subcommand_from chat" -l no-stream -d 'Disable SSE streaming (single JSON response)'
# chat-llm options
complete -c mcpctl -n "__fish_seen_subcommand_from chat-llm" -s m -l message -d 'One-shot: send a single message and exit (no REPL)' -x
complete -c mcpctl -n "__fish_seen_subcommand_from chat-llm" -l system -d 'Optional system prompt' -x
complete -c mcpctl -n "__fish_seen_subcommand_from chat-llm" -l temperature -d 'Sampling temperature (0..2)' -x
complete -c mcpctl -n "__fish_seen_subcommand_from chat-llm" -l max-tokens -d 'Maximum tokens in the assistant reply' -x
complete -c mcpctl -n "__fish_seen_subcommand_from chat-llm" -l no-stream -d 'Disable SSE streaming (single JSON response)'
complete -c mcpctl -n "__fish_seen_subcommand_from chat-llm" -l async -d 'Enqueue as a durable inference task and print the task id without waiting for completion (virtual LLMs only; poll with `mcpctl get task <id>`)'
# console options
complete -c mcpctl -n "__fish_seen_subcommand_from console" -l stdin-mcp -d 'Run inspector as MCP server over stdin/stdout (for Claude)'
complete -c mcpctl -n "__fish_seen_subcommand_from console" -l audit -d 'Browse audit events from mcpd'
# logs: takes a server/instance name
complete -c mcpctl -n "__fish_seen_subcommand_from logs; and __mcpctl_needs_arg_for logs" -a '(__mcpctl_instance_names)' -d 'Server name'
# console: takes a project name
complete -c mcpctl -n "__fish_seen_subcommand_from console; and __mcpctl_needs_arg_for console" -a '(__mcpctl_project_names)' -d 'Project name'
# attach-server: show servers NOT in the project (only if no server arg yet)
complete -c mcpctl -n "__fish_seen_subcommand_from attach-server; and __mcpctl_needs_server_arg" -a '(__mcpctl_available_servers)' -d 'Server'
# detach-server: show servers IN the project (only if no server arg yet)
complete -c mcpctl -n "__fish_seen_subcommand_from detach-server; and __mcpctl_needs_server_arg" -a '(__mcpctl_project_servers)' -d 'Server'
# apply: allow file completions for positional argument
complete -c mcpctl -n "__fish_seen_subcommand_from apply" -F
# help completions
complete -c mcpctl -n "__fish_seen_subcommand_from help" -a "$commands"