# mcpctl/completions/mcpctl.bash
# mcpctl bash completions — auto-generated by scripts/generate-completions.ts
# DO NOT EDIT MANUALLY — run: pnpm completions:generate
_mcpctl() {
  local cur prev words cword
  _init_completion || return
  local commands="status login logout config get describe delete logs create edit apply chat patch backup approve console cache test migrate rotate"
  local project_commands="get describe delete logs create edit attach-server detach-server"
  local global_opts="-v --version --daemon-url --direct -p --project -h --help"
  local resources="servers instances secrets secretbackends llms agents personalities templates projects users groups rbac prompts promptrequests serverattachments proxymodels all"
  local resource_aliases="servers instances secrets secretbackends llms agents personalities templates projects users groups rbac prompts promptrequests serverattachments proxymodels all server srv instance inst secret sec secretbackend sb llm agent personality template tpl project proj user group rbac-definition rbac-binding prompt promptrequest pr serverattachment sa proxymodel pm"

  # Check if --project/-p was given
  local has_project=false
  local i
  for ((i=1; i < cword; i++)); do
    if [[ "${words[i]}" == "--project" || "${words[i]}" == "-p" ]]; then
      has_project=true
      break
    fi
  done

  # Find the first subcommand
  local subcmd=""
  local subcmd_pos=0
  for ((i=1; i < cword; i++)); do
    if [[ "${words[i]}" == "--project" || "${words[i]}" == "--daemon-url" || "${words[i]}" == "-p" ]]; then
      ((i++))
      continue
    fi
    if [[ "${words[i]}" != -* ]]; then
      subcmd="${words[i]}"
      subcmd_pos=$i
      break
    fi
  done

  # Find the resource type after resource commands
  local resource_type=""
  if [[ -n "$subcmd_pos" ]] && [[ $subcmd_pos -gt 0 ]]; then
    for ((i=subcmd_pos+1; i < cword; i++)); do
      if [[ "${words[i]}" != -* ]] && [[ " $resource_aliases " == *" ${words[i]} "* ]]; then
        resource_type="${words[i]}"
        break
      fi
    done
  fi

  # Helper: get --project/-p value
  _mcpctl_get_project_value() {
    local i
    for ((i=1; i < cword; i++)); do
      if [[ "${words[i]}" == "--project" || "${words[i]}" == "-p" ]] && (( i+1 < cword )); then
        echo "${words[i+1]}"
        return
      fi
    done
  }

  # Helper: fetch resource names
  _mcpctl_resource_names() {
    local rt="$1"
    if [[ -n "$rt" ]]; then
      if [[ "$rt" == "instances" ]]; then
        mcpctl get instances -o json 2>/dev/null | jq -r '.[][].server.name' 2>/dev/null
      else
        mcpctl get "$rt" -o json 2>/dev/null | jq -r '.[].name' 2>/dev/null
      fi
    fi
  }

  # Helper: find sub-subcommand (for config/create)
  _mcpctl_get_subcmd() {
    local parent_pos="$1"
    local i
    for ((i=parent_pos+1; i < cword; i++)); do
      if [[ "${words[i]}" != -* ]]; then
        echo "${words[i]}"
        return
      fi
    done
  }

  # If completing option values
  if [[ "$prev" == "--project" || "$prev" == "-p" ]]; then
    local names
    names=$(mcpctl get projects -o json 2>/dev/null | jq -r '.[].name' 2>/dev/null)
    COMPREPLY=($(compgen -W "$names" -- "$cur"))
    return
  fi
  case "$subcmd" in
    status)
      COMPREPLY=($(compgen -W "-o --output -h --help" -- "$cur"))
      return ;;
    login)
      COMPREPLY=($(compgen -W "--mcpd-url -h --help" -- "$cur"))
      return ;;
    logout)
      COMPREPLY=($(compgen -W "-h --help" -- "$cur"))
      return ;;
    config)
      local config_sub=$(_mcpctl_get_subcmd $subcmd_pos)
      if [[ -z "$config_sub" ]]; then
        COMPREPLY=($(compgen -W "view set path reset claude claude-generate setup impersonate help" -- "$cur"))
      else
        case "$config_sub" in
          view)
            COMPREPLY=($(compgen -W "-o --output -h --help" -- "$cur"))
            ;;
          set)
            COMPREPLY=($(compgen -W "-h --help" -- "$cur"))
            ;;
          path)
            COMPREPLY=($(compgen -W "-h --help" -- "$cur"))
            ;;
          reset)
            COMPREPLY=($(compgen -W "-h --help" -- "$cur"))
            ;;
          claude)
            COMPREPLY=($(compgen -W "-p --project -o --output --inspect --stdout -h --help" -- "$cur"))
            ;;
          claude-generate)
            COMPREPLY=($(compgen -W "-p --project -o --output --inspect --stdout -h --help" -- "$cur"))
            ;;
          setup)
            COMPREPLY=($(compgen -W "-h --help" -- "$cur"))
            ;;
          impersonate)
            COMPREPLY=($(compgen -W "--quit -h --help" -- "$cur"))
            ;;
          *)
            COMPREPLY=($(compgen -W "-h --help" -- "$cur"))
            ;;
        esac
      fi
      return ;;
    get)
      if [[ -z "$resource_type" ]]; then
        COMPREPLY=($(compgen -W "$resources -o --output -p --project -A --all -h --help" -- "$cur"))
      else
        local names
        names=$(_mcpctl_resource_names "$resource_type")
        COMPREPLY=($(compgen -W "$names -o --output -p --project -A --all -h --help" -- "$cur"))
      fi
      return ;;
    describe)
      if [[ -z "$resource_type" ]]; then
        COMPREPLY=($(compgen -W "$resources -o --output --show-values -h --help" -- "$cur"))
      else
        local names
        names=$(_mcpctl_resource_names "$resource_type")
        COMPREPLY=($(compgen -W "$names -o --output --show-values -h --help" -- "$cur"))
      fi
      return ;;
    delete)
      if [[ -z "$resource_type" ]]; then
        COMPREPLY=($(compgen -W "$resources -p --project --agent -h --help" -- "$cur"))
      else
        local names
        names=$(_mcpctl_resource_names "$resource_type")
        COMPREPLY=($(compgen -W "$names -p --project --agent -h --help" -- "$cur"))
      fi
      return ;;
    logs)
      if [[ $((cword - subcmd_pos)) -eq 1 ]]; then
        local names
        names=$(mcpctl get instances -o json 2>/dev/null | jq -r '.[][].server.name' 2>/dev/null)
        COMPREPLY=($(compgen -W "$names -t --tail -i --instance -h --help" -- "$cur"))
      else
        COMPREPLY=($(compgen -W "-t --tail -i --instance -h --help" -- "$cur"))
      fi
      return ;;
    create)
      local create_sub=$(_mcpctl_get_subcmd $subcmd_pos)
      if [[ -z "$create_sub" ]]; then
        COMPREPLY=($(compgen -W "server secret llm agent secretbackend project user group rbac mcptoken prompt personality serverattachment promptrequest help" -- "$cur"))
      else
        case "$create_sub" in
          server)
            COMPREPLY=($(compgen -W "-d --description --package-name --runtime --docker-image --transport --repository-url --external-url --command --container-port --replicas --env --from-template --env-from-secret --force -h --help" -- "$cur"))
            ;;
          secret)
            COMPREPLY=($(compgen -W "--data --force -h --help" -- "$cur"))
            ;;
          llm)
            COMPREPLY=($(compgen -W "--type --model --url --tier --description --api-key-ref --extra --force --skip-auth-check -h --help" -- "$cur"))
            ;;
          agent)
            COMPREPLY=($(compgen -W "--llm --project --description --system-prompt --system-prompt-file --proxy-model --default-temperature --default-top-p --default-top-k --default-max-tokens --default-seed --default-stop --default-extra --default-params-file --force -h --help" -- "$cur"))
            ;;
          secretbackend)
            COMPREPLY=($(compgen -W "--type --description --default --url --namespace --mount --path-prefix --auth --token-secret --role --auth-mount --sa-token-path --config --wizard --setup-token --policy-name --token-role --no-promote-default --force -h --help" -- "$cur"))
            ;;
          project)
            COMPREPLY=($(compgen -W "-d --description --proxy-model --prompt --llm --llm-model --gated --no-gated --server --force -h --help" -- "$cur"))
            ;;
          user)
            COMPREPLY=($(compgen -W "--password --name --force -h --help" -- "$cur"))
            ;;
          group)
            COMPREPLY=($(compgen -W "--description --member --force -h --help" -- "$cur"))
            ;;
          rbac)
            COMPREPLY=($(compgen -W "--subject --roleBindings --force -h --help" -- "$cur"))
            ;;
          mcptoken)
            COMPREPLY=($(compgen -W "-p --project --rbac --bind --ttl --description --force -h --help" -- "$cur"))
            ;;
          prompt)
            COMPREPLY=($(compgen -W "-p --project --agent --content --content-file --priority --link -h --help" -- "$cur"))
            ;;
          personality)
            COMPREPLY=($(compgen -W "--agent --description --priority -h --help" -- "$cur"))
            ;;
          serverattachment)
            COMPREPLY=($(compgen -W "-p --project -h --help" -- "$cur"))
            ;;
          promptrequest)
            COMPREPLY=($(compgen -W "-p --project --content --content-file --priority -h --help" -- "$cur"))
            ;;
          *)
            COMPREPLY=($(compgen -W "-h --help" -- "$cur"))
            ;;
        esac
      fi
      return ;;
    edit)
      if [[ -z "$resource_type" ]]; then
        COMPREPLY=($(compgen -W "servers secrets projects groups rbac prompts promptrequests personalities -h --help" -- "$cur"))
      else
        local names
        names=$(_mcpctl_resource_names "$resource_type")
        COMPREPLY=($(compgen -W "$names -h --help" -- "$cur"))
      fi
      return ;;
    apply)
      COMPREPLY=($(compgen -f -W "-f --file --dry-run -h --help" -- "$cur"))
      return ;;
    chat)
      if [[ $((cword - subcmd_pos)) -eq 1 ]]; then
        local names
        names=$(_mcpctl_resource_names "agents")
        COMPREPLY=($(compgen -W "$names -m --message --thread --system --system-file --system-append --personality --temperature --top-p --top-k --max-tokens --seed --stop --allow-tool --extra --no-stream -h --help" -- "$cur"))
      else
        COMPREPLY=($(compgen -W "-m --message --thread --system --system-file --system-append --personality --temperature --top-p --top-k --max-tokens --seed --stop --allow-tool --extra --no-stream -h --help" -- "$cur"))
fi
return ;;
patch)
if [[ -z "$resource_type" ]]; then
COMPREPLY=($(compgen -W "$resources -h --help" -- "$cur"))
else
local names
names=$(_mcpctl_resource_names "$resource_type")
COMPREPLY=($(compgen -W "$names -h --help" -- "$cur"))
fi
return ;;
backup)
      local backup_sub
      backup_sub=$(_mcpctl_get_subcmd "$subcmd_pos")
if [[ -z "$backup_sub" ]]; then
COMPREPLY=($(compgen -W "log restore help" -- "$cur"))
else
case "$backup_sub" in
log)
COMPREPLY=($(compgen -W "-n --limit -h --help" -- "$cur"))
;;
restore)
COMPREPLY=($(compgen -W "-h --help" -- "$cur"))
;;
*)
COMPREPLY=($(compgen -W "-h --help" -- "$cur"))
;;
esac
fi
return ;;
attach-server)
if [[ $((cword - subcmd_pos)) -ne 1 ]]; then return; fi
local proj names all_servers proj_servers
proj=$(_mcpctl_get_project_value)
if [[ -n "$proj" ]]; then
        all_servers=$(mcpctl get servers -o json 2>/dev/null | jq -r '.[].name' 2>/dev/null)
        proj_servers=$(mcpctl --project "$proj" get servers -o json 2>/dev/null | jq -r '.[].name' 2>/dev/null)
        # Set difference: offer only servers not already attached to the project
        names=$(comm -23 <(echo "$all_servers" | sort) <(echo "$proj_servers" | sort))
else
names=$(_mcpctl_resource_names "servers")
fi
COMPREPLY=($(compgen -W "$names" -- "$cur"))
return ;;
detach-server)
if [[ $((cword - subcmd_pos)) -ne 1 ]]; then return; fi
local proj names
proj=$(_mcpctl_get_project_value)
      # Only servers attached to the current project can be detached;
      # without a --project in context there is nothing sensible to offer.
      if [[ -n "$proj" ]]; then
        names=$(mcpctl --project "$proj" get servers -o json 2>/dev/null | jq -r '.[].name' 2>/dev/null)
      fi
COMPREPLY=($(compgen -W "$names" -- "$cur"))
return ;;
approve)
if [[ -z "$resource_type" ]]; then
COMPREPLY=($(compgen -W "promptrequest -h --help" -- "$cur"))
else
local names
names=$(_mcpctl_resource_names "$resource_type")
COMPREPLY=($(compgen -W "$names -h --help" -- "$cur"))
fi
return ;;
mcp)
COMPREPLY=($(compgen -W "-p --project -h --help" -- "$cur"))
return ;;
console)
if [[ $((cword - subcmd_pos)) -eq 1 ]]; then
local names
names=$(mcpctl get projects -o json 2>/dev/null | jq -r '.[].name' 2>/dev/null)
COMPREPLY=($(compgen -W "$names --stdin-mcp --audit -h --help" -- "$cur"))
else
COMPREPLY=($(compgen -W "--stdin-mcp --audit -h --help" -- "$cur"))
fi
return ;;
cache)
      local cache_sub
      cache_sub=$(_mcpctl_get_subcmd "$subcmd_pos")
if [[ -z "$cache_sub" ]]; then
COMPREPLY=($(compgen -W "stats clear help" -- "$cur"))
else
case "$cache_sub" in
stats)
COMPREPLY=($(compgen -W "-h --help" -- "$cur"))
;;
clear)
COMPREPLY=($(compgen -W "--older-than -y --yes -h --help" -- "$cur"))
;;
*)
COMPREPLY=($(compgen -W "-h --help" -- "$cur"))
;;
esac
fi
return ;;
test)
      local test_sub
      test_sub=$(_mcpctl_get_subcmd "$subcmd_pos")
if [[ -z "$test_sub" ]]; then
COMPREPLY=($(compgen -W "mcp help" -- "$cur"))
else
case "$test_sub" in
mcp)
COMPREPLY=($(compgen -W "--token --tool --args --expect-tools --timeout -o --output --no-health -h --help" -- "$cur"))
;;
*)
COMPREPLY=($(compgen -W "-h --help" -- "$cur"))
;;
esac
fi
return ;;
migrate)
      local migrate_sub
      migrate_sub=$(_mcpctl_get_subcmd "$subcmd_pos")
if [[ -z "$migrate_sub" ]]; then
COMPREPLY=($(compgen -W "secrets help" -- "$cur"))
else
case "$migrate_sub" in
secrets)
COMPREPLY=($(compgen -W "--from --to --names --keep-source --dry-run -h --help" -- "$cur"))
;;
*)
COMPREPLY=($(compgen -W "-h --help" -- "$cur"))
;;
esac
fi
return ;;
rotate)
      local rotate_sub
      rotate_sub=$(_mcpctl_get_subcmd "$subcmd_pos")
if [[ -z "$rotate_sub" ]]; then
COMPREPLY=($(compgen -W "secretbackend help" -- "$cur"))
else
case "$rotate_sub" in
secretbackend)
COMPREPLY=($(compgen -W "-h --help" -- "$cur"))
;;
*)
COMPREPLY=($(compgen -W "-h --help" -- "$cur"))
;;
esac
fi
return ;;
help)
COMPREPLY=($(compgen -W "$commands" -- "$cur"))
return ;;
esac
# No subcommand yet — offer commands based on context
if [[ -z "$subcmd" ]]; then
if $has_project; then
COMPREPLY=($(compgen -W "$project_commands $global_opts" -- "$cur"))
else
COMPREPLY=($(compgen -W "$commands $global_opts" -- "$cur"))
fi
fi
}
complete -F _mcpctl mcpctl