Commit Graph

279 Commits

Author SHA1 Message Date
Michal
6ac79de8a4 feat(secrets): one-shot startup backfill for keyNames on existing rows
Some checks failed
CI/CD / lint (push) Successful in 52s
CI/CD / test (push) Successful in 1m8s
CI/CD / typecheck (push) Successful in 2m20s
CI/CD / build (push) Successful in 2m49s
CI/CD / smoke (push) Failing after 3m16s
CI/CD / publish (push) Has been skipped
Lazy backfill in SecretService.getById covers per-row retries, but list
views still show 'KEYS: -' until each row is described. The new
backfillSecretKeyNames bootstrap runs once at startup, finds Secrets
where keyNames=[] AND data={} (i.e. backend-stored, pre-existing rows),
calls resolveData to learn the keys, and persists them. Sequential to be
kind to the upstream backend on cold start. Idempotent + non-fatal.
2026-04-24 01:01:40 +01:00
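The startup backfill described in the commit above can be sketched as follows. This is a minimal illustration, not the actual service code: the row shape, `resolveData`, and `persist` callbacks are assumptions standing in for the repository/service wiring.

```typescript
// Illustrative sketch of the one-shot startup backfill: find rows whose
// keyNames column is empty AND whose data column is empty (values live in
// the backend), resolve their keys sequentially, persist the sorted list.
type SecretRow = { id: string; keyNames: string[]; data: Record<string, unknown> };

// A row qualifies only when it pre-dates the keyNames column and is
// backend-stored (empty data column).
function needsKeyNameBackfill(row: SecretRow): boolean {
  return row.keyNames.length === 0 && Object.keys(row.data).length === 0;
}

async function backfillSecretKeyNames(
  rows: SecretRow[],
  resolveData: (id: string) => Promise<Record<string, unknown>>,
  persist: (id: string, keyNames: string[]) => Promise<void>,
): Promise<number> {
  let updated = 0;
  // Sequential on purpose: be kind to the upstream backend on cold start.
  for (const row of rows.filter(needsKeyNameBackfill)) {
    try {
      const keys = Object.keys(await resolveData(row.id)).sort();
      if (keys.length > 0) {
        await persist(row.id, keys);
        updated++;
      }
    } catch {
      // Non-fatal: a failed row is retried lazily on its next getById.
    }
  }
  return updated;
}
```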
Michal
9a808877b5 feat(secrets): track key names so list/describe work for backend-stored secrets
Some checks failed
CI/CD / lint (push) Successful in 53s
CI/CD / test (push) Successful in 1m6s
CI/CD / typecheck (push) Successful in 2m11s
CI/CD / smoke (push) Failing after 1m42s
CI/CD / publish (push) Has been cancelled
CI/CD / build (push) Has been cancelled
Post-migration, every Secret on a non-plaintext backend had an empty `data`
column (values live in the backend; only externalRef on the row). The CLI's
`get secrets` showed `KEYS: -` and `describe secret` showed `(empty)` for
all 9 migrated secrets — useless without --show-values.

Fix: dedicated `keyNames Json` column on Secret that stores the sorted key
list independently from the values. Populated on every write path, lazily
backfilled on first read for pre-existing rows that pre-date the column.
Schema default `[]` keeps prisma db push self-healing on rolling upgrades.

- src/db/prisma/schema.prisma: add Secret.keyNames Json @default("[]")
- src/mcpd/src/repositories/secret.repository.ts: pipe keyNames through create
  + update
- src/mcpd/src/services/secret.service.ts:
  - create/update populate keyNames = sorted Object.keys(data)
  - getById lazy-backfills empty keyNames (cheap: derives from data for
    plaintext, single backend read for openbao)
- src/mcpd/src/services/secret-migrate.service.ts: migrate writes keyNames
  alongside the new backendId so freshly-migrated rows are populated without
  a follow-up read
- src/cli/src/commands/get.ts: KEYS column reads keyNames first, falls back
  to Object.keys(data) for older rows
- src/cli/src/commands/describe.ts: shows the Data section keys whenever
  keyNames OR data has entries (so backend-stored secrets render their key
  list); --show-values still resolves through the backend

After deploy, the 9 already-migrated secrets backfill their keyNames on the
next describe-by-id, with no operator action needed.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-24 00:57:06 +01:00
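The write-path population and the KEYS-column fallback from the commit above can be sketched as two small pure functions. Function names here are illustrative; the commit wires this logic through SecretService and get.ts rather than standalone helpers.

```typescript
// Write path: keyNames is always the sorted key list of the payload,
// stored independently of the values.
function deriveKeyNames(data: Record<string, unknown>): string[] {
  return Object.keys(data).sort();
}

// Read path (KEYS column): prefer the dedicated column, fall back to
// Object.keys(data) for older plaintext rows that pre-date it.
function displayKeys(row: { keyNames?: string[]; data: Record<string, unknown> }): string {
  const keys = row.keyNames?.length ? row.keyNames : Object.keys(row.data).sort();
  return keys.length ? keys.join(", ") : "-";
}
```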
Michal
b1bccee50d test(describe): mock the ?reveal=true path on --show-values
Some checks failed
CI/CD / lint (push) Successful in 54s
CI/CD / test (push) Successful in 1m7s
CI/CD / typecheck (push) Successful in 2m19s
CI/CD / smoke (push) Failing after 5m9s
CI/CD / publish (push) Has been cancelled
CI/CD / build (push) Has been cancelled
Follow-up to faccbb5: the describe-secret test for --show-values used the
old fetchResource shape, so it broke once the route switched to calling
client.get directly with ?reveal=true.
2026-04-24 00:49:22 +01:00
Michal
faccbb58e7 fix(secrets): describe --show-values resolves through the backend driver
Some checks failed
CI/CD / lint (push) Successful in 55s
CI/CD / test (push) Failing after 1m5s
CI/CD / typecheck (push) Has started running
CI/CD / smoke (push) Has been cancelled
CI/CD / build (push) Has been cancelled
CI/CD / publish (push) Has been cancelled
Post-migration, every Secret on a non-plaintext backend has empty `Secret.data`
(the actual value lives in the backend; only externalRef is on the row).
`describe secret --show-values` was reading the raw row, so the user saw
"Data: (empty)" for every migrated secret.

- Route GET /api/v1/secrets/:id accepts ?reveal=true; when set, resolves the
  value via SecretService.resolveData() so the response carries the actual
  data dispatched through the right driver.
- CLI --show-values flips that query param. Without --show-values the route
  returns the raw row exactly as before (no leak risk).

Caught running the wizard end-to-end on the live cluster after the
ClusterMesh fix on the kubernetes-deployment side made bao reachable.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-24 00:46:54 +01:00
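The reveal branch in the commit above reduces to a small decision: without the flag the raw row passes through untouched, with it the data is resolved via the backend driver. A minimal sketch, with `resolveData` as a stand-in for SecretService.resolveData():

```typescript
// Illustrative shape of GET /api/v1/secrets/:id with ?reveal=true.
type Row = { id: string; data: Record<string, string>; externalRef?: string };

async function getSecretResponse(
  row: Row,
  reveal: boolean,
  resolveData: (row: Row) => Promise<Record<string, string>>,
): Promise<Row> {
  if (!reveal) return row; // raw row, exactly as before — no leak risk
  return { ...row, data: await resolveData(row) };
}
```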
Michal
bf312850b5 fix(openbao): include response body in error messages
Some checks failed
CI/CD / lint (push) Successful in 52s
CI/CD / typecheck (push) Successful in 51s
CI/CD / test (push) Successful in 1m4s
CI/CD / smoke (push) Failing after 1m36s
CI/CD / build (push) Successful in 2m19s
CI/CD / publish (push) Has been skipped
While debugging the wizard migration flow, every OpenBao error surfaced as
just `HTTP 403` with no context. The response body often carries the actual
reason (missing capability, specific path, namespace mismatch), so
surfacing it makes operator debugging a one-step task. Added a shared
bodyText() helper that trims huge HTML error pages to 400 chars.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 21:01:03 +01:00
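A minimal version of the helper described above, assuming the 400-char cap and the message shape (the actual helper lives in the OpenBao driver and its exact signature is not shown in the commit):

```typescript
// Illustrative bodyText()-style helper: surface the response body in error
// messages, but trim huge HTML error pages to 400 chars.
const MAX_BODY = 400;

function trimBody(body: string): string {
  const t = body.trim();
  return t.length > MAX_BODY ? t.slice(0, MAX_BODY) + "…" : t;
}

function formatOpenBaoError(status: number, body: string): string {
  const detail = trimBody(body);
  return detail ? `HTTP ${status}: ${detail}` : `HTTP ${status}`;
}
```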
Michal
72e49f719f fix(mcpd): skip bootstrap tokens on migrate + back-fill ops on existing admins
Some checks failed
CI/CD / lint (push) Successful in 52s
CI/CD / typecheck (push) Successful in 1m45s
CI/CD / test (push) Successful in 1m2s
CI/CD / build (push) Successful in 2m9s
CI/CD / smoke (push) Has started running
CI/CD / publish (push) Has been cancelled
Two production issues caught running the wizard end-to-end:

1. `mcpctl migrate secrets --from default --to bao` listed `bao-creds` as a
   candidate — the very token that lets mcpd reach bao. Moving it would
   brick the auth chain (destination backend needs its own bootstrap token
   to read its own bootstrap token). Fix: SecretMigrateService now calls
   backends.list() and filters out any Secret whose name matches ANY
   SecretBackend's `config.tokenSecretRef.name`. dryRun mirrors the same
   filter so candidates match reality. `--names` explicitly bypasses the
   filter for operators who really mean it.

2. Initial rotation in the wizard 403'd because the global RBAC hook
   demands the `rotate-secretbackend` operation, which wasn't in
   bootstrap-admin — migrateAdminRole only added ops when processing a
   legacy `role: admin` entry, so already-migrated admin rows missed every
   new op added after their initial migration. Fix: migrateAdminRole now
   also runs a back-fill pass on rows that look admin-equivalent (have both
   `edit:*` and `run:*`), appending any missing op from ADMIN_OPS. Writes
   only when something actually changed, so restarts stay quiet. Same path
   also retroactively grants `migrate-secrets` which had the same problem
   yesterday.

Tests: 4 new migrate-service cases (bootstrap filter on/off, dryRun parity,
--names bypass). Full suite 1889/1889.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 20:56:00 +01:00
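The bootstrap-token filter from point 1 above can be sketched as a pure candidate-selection function. The shapes are illustrative reductions of the real Secret/SecretBackend rows:

```typescript
// Never auto-migrate a Secret that any SecretBackend depends on for its
// own auth — moving it would brick the auth chain.
type Backend = { config?: { tokenSecretRef?: { name: string } } };
type Secret = { name: string };

function migrationCandidates(
  secrets: Secret[],
  backends: Backend[],
  explicitNames?: string[],
): Secret[] {
  // --names explicitly bypasses the filter for operators who mean it.
  if (explicitNames?.length) {
    return secrets.filter((s) => explicitNames.includes(s.name));
  }
  const bootstrap = new Set(
    backends
      .map((b) => b.config?.tokenSecretRef?.name)
      .filter((n): n is string => !!n),
  );
  return secrets.filter((s) => !bootstrap.has(s.name));
}
```

dryRun parity follows for free: both paths call the same function, so candidates always match reality.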
Michal
56a4ff7f17 chore: regenerate completions after --setup-token rename
Some checks failed
CI/CD / lint (push) Successful in 52s
CI/CD / test (push) Successful in 1m4s
CI/CD / typecheck (push) Successful in 2m2s
CI/CD / smoke (push) Failing after 1m36s
CI/CD / build (push) Successful in 4m53s
CI/CD / publish (push) Has been skipped
2026-04-20 17:28:05 +01:00
Michal
1c5301289c refactor(wizard): rename --admin-token → --setup-token
Some checks failed
CI/CD / typecheck (push) Has been cancelled
CI/CD / test (push) Has been cancelled
CI/CD / smoke (push) Has been cancelled
CI/CD / build (push) Has been cancelled
CI/CD / publish (push) Has been cancelled
CI/CD / lint (push) Has been cancelled
Any token with policy-write + auth/token admin works; root is a convenient
default but a scoped service account is fine too. The previous naming
misrepresented the permission floor as root-only.

- flag: --admin-token → --setup-token
- wizard field: adminToken → setupToken
- prompt label: "OpenBao admin / root token" → "OpenBao setup token (needs
  policy write + auth/token admin perms; root is fine)"
- file doc + one comment reworded
- tests updated for the new label
- regression test (token-absent-from-stdout) kept unchanged

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 17:27:09 +01:00
ba4129a1e4 Merge pull request 'feat(openbao): wizard + daily token rotation' (#56) from feat/openbao-wizard into main
Some checks failed
CI/CD / lint (push) Successful in 51s
CI/CD / test (push) Successful in 1m4s
CI/CD / typecheck (push) Successful in 1m56s
CI/CD / build (push) Has been cancelled
CI/CD / publish (push) Has been cancelled
CI/CD / smoke (push) Has been cancelled
2026-04-20 16:22:50 +00:00
Michal
dd4246878d feat(openbao): wizard-provisioning + daily token rotation
Some checks failed
CI/CD / typecheck (pull_request) Successful in 55s
CI/CD / test (pull_request) Successful in 1m4s
CI/CD / lint (pull_request) Successful in 2m2s
CI/CD / smoke (pull_request) Failing after 1m36s
CI/CD / build (pull_request) Successful in 4m13s
CI/CD / publish (pull_request) Has been skipped
One-command setup replaces the 6-step manual flow — `mcpctl create
secretbackend bao --type openbao --wizard` takes the OpenBao admin token
once, provisions a narrow policy + token role, mints the first periodic
token, stores it on mcpd, verifies end-to-end, and prints the migration
command. The admin token is NEVER persisted.

The stored credential auto-rotates daily: mcpd mints a successor via the
token role (self-rotation capability is part of the policy it was issued
with), verifies the successor, writes it over the backing Secret, then
revokes the predecessor by accessor. TTL 720h means a week of rotation
failures still leaves 20+ days of runway.

Shared:
- New `@mcpctl/shared/vault` — pure HTTP wrappers (verifyHealth,
  ensureKvV2, writePolicy, ensureTokenRole, mintRoleToken, revokeAccessor,
  lookupSelf, testWriteReadDelete) and policy HCL builder.

mcpd:
- `tokenMeta Json @default("{}")` on SecretBackend. Self-healing schema
  migration — empty default lets `prisma db push` add the column cleanly.
- SecretBackendRotator.rotateOne: mint → verify → persist → revoke-old →
  update tokenMeta. Failures surface via `lastRotationError` on the row;
  the old token keeps working.
- SecretBackendRotatorLoop: on startup rotates overdue backends, schedules
  per-backend timers with ±10min jitter. Stops cleanly on shutdown.
- New `POST /api/v1/secretbackends/:id/rotate` (operation
  `rotate-secretbackend` — added to bootstrap-admin's auto-migrated ops
  alongside migrate-secrets, which was previously missing too).

CLI:
- `--wizard` on `create secretbackend` delegates to the interactive flow.
  All prompts can be pre-answered via flags (--url, --admin-token,
  --mount, --path-prefix, --policy-name, --token-role,
  --no-promote-default) for CI.
- `mcpctl rotate secretbackend <name>` — convenience verb; hits the new
  rotate endpoint.
- `describe secretbackend` renders a Token health section (healthy /
  STALE / WARNING / ERROR) with generated/renewal/expiry timestamps and
  last rotation error. Only shown when tokenMeta.rotatable is true — the
  existing k8s-auth + static-token backends don't surface it.

Tests: 15 vault-client unit tests (shared), 8 rotator unit tests (mcpd),
3 wizard flow tests (cli, including a regression test that the admin
token never appears in stdout). Full suite 1885/1885 (+32). Completions
regenerated for the new flags.

Out of scope (explicit): kubernetes-auth wizard, Vault Enterprise
namespaces in the wizard path, rotation for non-wizard static-token
backends. See plan file for details.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 17:20:37 +01:00
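The rotation ordering described above (mint → verify → persist → revoke-old) is what guarantees a failure at any step leaves the old token working. A sketch with all collaborators injected as stand-ins:

```typescript
// Illustrative rotateOne: the predecessor is only revoked after the
// successor is both verified and persisted.
interface RotationDeps {
  mint(): Promise<{ token: string; accessor: string }>; // via the token role
  verify(token: string): Promise<void>;                 // e.g. lookup-self
  persist(token: string): Promise<void>;                // write over backing Secret
  revoke(accessor: string): Promise<void>;              // revoke predecessor
}

async function rotateOne(oldAccessor: string, deps: RotationDeps): Promise<string> {
  const next = await deps.mint();
  await deps.verify(next.token); // never persist an unusable successor
  await deps.persist(next.token);
  await deps.revoke(oldAccessor); // last step: failure here is recoverable
  return next.accessor;
}
```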
Michal
515206685b feat(openbao): kubernetes ServiceAccount auth — no static token in DB
Some checks failed
CI/CD / lint (push) Successful in 52s
CI/CD / test (push) Successful in 1m5s
CI/CD / typecheck (push) Successful in 2m8s
CI/CD / smoke (push) Failing after 3m38s
CI/CD / build (push) Successful in 4m15s
CI/CD / publish (push) Has been skipped
Why: requiring a static OpenBao root token to live on the plaintext backend
(even just once, at bootstrap) is the weakest link in the chain. With the
bao-side Kubernetes auth method enabled, mcpd's pod can authenticate using
its own projected SA token, exchange it for a short-lived Vault client
token, and keep the database free of any vault credentials at all.

Driver changes (src/mcpd/src/services/secret-backends/openbao.ts):
- New `OpenBaoConfig.auth = 'token' | 'kubernetes'`. Defaults to 'token' so
  existing rows keep working. Both shapes share url + mount + pathPrefix +
  namespace; auth-specific fields are mutually exclusive in the config schema.
- Kubernetes auth flow: read JWT from /var/run/secrets/.../token, POST to
  /v1/auth/<authMount>/login {role, jwt}, cache the returned client_token
  for `lease_duration - 60s` (grace window), then re-login.
- One-shot 403-retry: if a request comes back 403 (revoked / clock skew),
  purge cache and retry the original request once with a fresh login.
- Reads + writes go through the same getToken() path so token-auth is
  unchanged for existing deployments.

CLI (src/cli/src/commands/create.ts):
- `mcpctl create secretbackend bao --type openbao --auth kubernetes \
     --url https://bao.example:8200 --role mcpctl`
- Optional `--auth-mount` (default 'kubernetes') + `--sa-token-path` (default
  the standard projected-token path) for non-default deployments.
- Token-auth path unchanged: `--auth token --token-secret SECRET/KEY`
  (or omit `--auth` since 'token' is the default).

Validation (factory.ts) gates on the auth strategy: each path enforces its
own required fields and produces a clear error if misconfigured.

Tests: 6 new k8s-auth unit cases (login wire shape, lease-based caching,
custom authMount, 403-on-login, missing-role rejection, missing-tokenSecretRef
rejection). Full suite 1859/1859. Completions regenerated for the new flags.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-19 23:23:05 +01:00
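The lease-based caching from the commit above can be sketched with the clock and the login call injected, which is also how the unit tests would exercise it. The `force` flag models the one-shot 403 retry (purge cache, fresh login). Names are illustrative:

```typescript
// Cache the client token for lease_duration - 60s (grace window), then
// re-login. force=true bypasses the cache, as the 403-retry path does.
type Lease = { token: string; expiresAt: number };

function makeGetToken(
  login: () => Promise<{ clientToken: string; leaseDuration: number }>,
  now: () => number, // injected clock, ms
) {
  let cached: Lease | undefined;
  return async (force = false): Promise<string> => {
    if (!force && cached && now() < cached.expiresAt) return cached.token;
    const r = await login();
    cached = {
      token: r.clientToken,
      expiresAt: now() + (r.leaseDuration - 60) * 1000,
    };
    return cached.token;
  };
}
```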
Michal
a21220b6f6 fix(deploy): self-healing pre-migrate bootstrap for SecretBackend rollout
Some checks failed
CI/CD / typecheck (push) Successful in 51s
CI/CD / lint (push) Successful in 1m42s
CI/CD / test (push) Successful in 1m6s
CI/CD / smoke (push) Failing after 3m41s
CI/CD / build (push) Successful in 4m31s
CI/CD / publish (push) Has been skipped
Why: clusters upgrading from the pre-SecretBackend schema crash-loop on the
first rollout. `prisma db push` applies the Phase 0 migration as three
sequential steps — add Secret.backendId column (default ''), create
SecretBackend table, add FK — and the FK fails because empty-string values
reference no row in the empty SecretBackend table. This happened on the live
cluster today; I fixed it by hand with psql. This PR makes the fix
automatic so a fresh cluster or anyone replaying the migration doesn't hit
the same trap.

- New `src/db/src/scripts/pre-migrate-bootstrap.ts` — idempotent node script.
  Checks if SecretBackend table exists; if so, ensures a default row exists
  (insert on conflict noop), then backfills any Secret.backendId = '' to
  point at it. Uses Prisma raw queries so it runs against a partially-
  migrated schema.

- `deploy/entrypoint.sh` now catches a failed first push, runs the
  bootstrap, and retries. Fresh installs and fully-migrated clusters take
  the happy path (one push, no bootstrap needed). Pre-Phase-0 upgrades take
  the healing path (push fails → bootstrap seeds → retry succeeds).

- The bootstrap is deliberately non-fatal — even on unexpected errors it
  logs and exits 0 so the retry still runs. If that retry also fails, the
  push error surfaces normally and the pod crash-loops visibly rather than
  silently starting in a half-migrated state.

Verified the idempotent path logically: on the already-bootstrapped cluster
(1 backend row, 0 empty-backendId Secrets), the script's UPDATE matches
zero rows and the INSERT hits ON CONFLICT DO NOTHING — pure no-op.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-19 22:59:07 +01:00
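The entrypoint's control flow above reduces to push → heal → retry. A sketch with `push` and `bootstrap` as stand-ins for `prisma db push` and pre-migrate-bootstrap.ts (the real script is shell + a node script, not this function):

```typescript
// Happy path: one push, no bootstrap. Healing path: push fails, the
// idempotent bootstrap seeds/backfills, the retry succeeds. If the retry
// also fails, the push error surfaces and the pod crash-loops visibly.
async function migrateWithHealing(
  push: () => Promise<boolean>,   // resolves false on failure
  bootstrap: () => Promise<void>, // idempotent; deliberately non-fatal
): Promise<boolean> {
  if (await push()) return true;
  await bootstrap();
  return push();
}
```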
Michal
d5236171cc fix(smoke): use json output for llm apiKeyRef assertion
Some checks failed
CI/CD / lint (push) Successful in 51s
CI/CD / typecheck (push) Successful in 1m42s
CI/CD / test (push) Successful in 1m5s
CI/CD / smoke (push) Has started running
CI/CD / publish (push) Has been cancelled
CI/CD / build (push) Has been cancelled
The table KEY column truncates at ~34 chars so `secret://<name>/<key>` wasn't
appearing verbatim in stdout — the assertion was correct but brittle against
presentation choices. Switched to `-o json` where the ref round-trips as a
structured object, which is what actually matters.

Caught by the live-cluster smoke run right after Phase 0-4 rolled out.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-19 22:55:39 +01:00
Michal
860033d3de fix(db): make Secret.backendId default to empty string for rollout migration
Some checks failed
CI/CD / typecheck (push) Successful in 53s
CI/CD / lint (push) Successful in 1m44s
CI/CD / test (push) Successful in 1m5s
CI/CD / smoke (push) Failing after 3m43s
CI/CD / build (push) Failing after 6m52s
CI/CD / publish (push) Has been skipped
Why: `prisma db push` refused to add the required `backendId` column on
clusters with pre-existing Secret rows — it can't assign NOT NULL without a
default, and the cluster DB had 9 live rows. The mcpd pod crash-looped
during the Phase 0 rollout because of this.

Empty-string default lets the schema apply cleanly; `bootstrapSecretBackends`
(which runs on every startup) then rewrites those empty values to the
seeded `default` plaintext backend's id. New writes via SecretService always
carry a real FK immediately, so the empty-string state only exists during
the one-shot migration window.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-19 22:45:08 +01:00
e27a0e695e Merge pull request 'feat(project): Project.llmProvider as Llm reference' (#55) from feat/project-llm-ref into main
Some checks failed
CI/CD / lint (push) Successful in 52s
CI/CD / test (push) Successful in 1m4s
CI/CD / typecheck (push) Successful in 1m52s
CI/CD / build (push) Has been cancelled
CI/CD / publish (push) Has been cancelled
CI/CD / smoke (push) Has been cancelled
2026-04-19 21:39:54 +00:00
2155910f1c Merge pull request 'feat(mcplocal): RBAC-bounded vllm-managed failover' (#54) from feat/llm-failover into main
Some checks failed
CI/CD / typecheck (push) Has been cancelled
CI/CD / test (push) Has been cancelled
CI/CD / smoke (push) Has been cancelled
CI/CD / build (push) Has been cancelled
CI/CD / lint (push) Has been cancelled
CI/CD / publish (push) Has been cancelled
2026-04-19 21:39:47 +00:00
d217eadd13 Merge pull request 'feat(mcpd): LLM inference proxy + OpenAI/Anthropic adapters' (#53) from feat/llm-infer into main
Some checks failed
CI/CD / lint (push) Has started running
CI/CD / typecheck (push) Has started running
CI/CD / test (push) Has been cancelled
CI/CD / smoke (push) Has been cancelled
CI/CD / build (push) Has been cancelled
CI/CD / publish (push) Has been cancelled
2026-04-19 21:39:39 +00:00
9e3507752f Merge pull request 'feat(mcpd): Llm resource — CRUD + CLI + apply' (#52) from feat/llm into main
Some checks failed
CI/CD / lint (push) Has started running
CI/CD / typecheck (push) Has been cancelled
CI/CD / test (push) Has been cancelled
CI/CD / smoke (push) Has been cancelled
CI/CD / build (push) Has been cancelled
CI/CD / publish (push) Has been cancelled
2026-04-19 21:39:27 +00:00
97ac1e75ef Merge pull request 'feat(mcpd): pluggable SecretBackend + OpenBao driver + migrate' (#51) from feat/secretbackend into main
Some checks failed
CI/CD / lint (push) Has started running
CI/CD / test (push) Has been cancelled
CI/CD / typecheck (push) Has been cancelled
CI/CD / smoke (push) Has been cancelled
CI/CD / build (push) Has been cancelled
CI/CD / publish (push) Has been cancelled
2026-04-19 21:39:17 +00:00
Michal
58788bc120 test(smoke): end-to-end coverage for SecretBackend, Llm, infer proxy, project-llm-ref
Covers the Phase 0-4 CLI contract against live mcpd. Matches the existing
mcptoken.smoke pattern: skip gracefully on unreachable /healthz, cleanup
fixtures in afterAll, use --direct to bypass mcplocal for admin operations.

- secretbackend.smoke.test.ts
  · seeded plaintext default exists + isDefault
  · create/describe/delete round-trip
  · refuses to delete the default backend (409 shape)
  · get -o yaml output starts with `kind: secretbackend` (apply-compatible)

- llm.smoke.test.ts
  · create secret + llm with --api-key-ref, verify describe hides the
    raw value but surfaces secret://name/key
  · yaml round-trip: get -o yaml > file → amend → apply -f → describe shows change
  · deleting the llm leaves the underlying Secret intact (onDelete: SetNull)

- llm-infer.smoke.test.ts
  · 404 for unknown name, 400 for missing messages
  · 5xx when upstream url is unreachable (proxy returns a structured error)
  · opt-in happy-path gated on LLM_INFER_SMOKE_REAL=1 + LLM_INFER_SMOKE_LLM=<name>
    so CI doesn't need a real provider key

- project-llm-ref.smoke.test.ts
  · describe project with --llm <registered> — no warning
  · describe project with --llm <nonexistent> — shows "warning: …registry default"
  · describe project with --llm none — explicit disable, no warning

These require PRs #51-55 to be merged and fulldeploy.sh run before they'll
find the new endpoints on live mcpd. Until then they skip or fail with
"Not Found". Unit tests for the same code paths (1853 total) continue to
pass against mocks.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-19 22:09:41 +01:00
Michal
de854b1944 feat(project): Project.llmProvider semantically names an Llm resource
Why: Phases 0-3 built the server-managed Llm registry; this phase pivots the
existing Project.llmProvider column from "local provider hint" to "named Llm
reference" so operators can pick a centralised Llm per project. No schema
change — the column stays a free-form string for backward compat.

- `mcpctl create project --llm <name>` (+ `--llm-model <override>`) sets
  llmProvider/llmModel to a centralised Llm reference, or 'none' to disable.
- `mcpctl describe project` fetches the Llm catalogue alongside prompts and
  flags values that don't resolve with a visible warning. 'none' is treated
  as an explicit disable, not an orphan.
- `apply -f` doc comments updated; --llm-provider still accepted but now
  documented as naming an Llm resource.
- New `resolveProjectLlmReference(mcpdClient, name)` helper in mcplocal's
  discovery: returns `registered`/`disabled`/`unregistered`/`unreachable`.
  The HTTP-mode proxy-model pipeline will consume this when it pivots to
  mcpd's /api/v1/llms/:name/infer proxy.
- project-mcp-endpoint.ts cache-namespace path gets a comment explaining
  the new resolution order — behavior unchanged, just clarified.

Tests: 6 resolver unit tests + 3 new describe-warning cases. Full suite
1853/1853 (+9 from Phase 3's 1844). TypeScript clean; completions
regenerated for the new create-project flags.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-19 18:28:46 +01:00
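The resolver's four states from the commit above can be sketched as follows. Treating an unset name the same as 'none' is an assumption for illustration; the catalogue fetch is injected, with `undefined` modelling an unreachable mcpd:

```typescript
// 'none' is an explicit disable; a name missing from the catalogue is an
// orphan that describe flags with a visible warning.
type LlmRef = "registered" | "disabled" | "unregistered" | "unreachable";

async function resolveLlmReference(
  name: string | undefined,
  listLlms: () => Promise<string[] | undefined>, // undefined = mcpd unreachable
): Promise<LlmRef> {
  if (!name || name === "none") return "disabled";
  const catalogue = await listLlms();
  if (catalogue === undefined) return "unreachable";
  return catalogue.includes(name) ? "registered" : "unregistered";
}
```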
Michal
4d8ee23d0e feat(mcplocal): RBAC-bounded vllm-managed failover + name-based llm lookup
Why: when mcpd's inference proxy is unreachable, clients with a local
vllm-managed provider should be able to substitute — but only if they still
have view permission on the centralized Llm. Otherwise revoking an Llm
wouldn't actually stop a misbehaving client.

Infrastructure (the agent + mcplocal HTTP-mode wire-up will land separately
when those clients pivot to mcpd's proxy):

- LlmProviderFileEntry gains optional `failoverFor: <central llm name>`. The
  entry is otherwise the same local provider it always was; the new field
  just declares which central Llm it can substitute for.
- ProviderRegistry tracks a failover map (registerFailover / getFailoverFor /
  listFailovers). Unregister removes any failover entry pointing at the
  removed provider so we don't end up with dangling references.
- New FailoverRouter wraps a primary inference call. On primary failure: if
  a local provider is registered for the Llm, HEAD-probe `mcpd /api/v1/llms/
  :name` with the caller's bearer to verify view permission, then either
  invoke the local provider (allowed) or re-throw the primary error (403,
  401, network unreachable, anything else — all fail-closed).
- Server: GET /api/v1/llms/:idOrName accepts both CUID and human name. Lets
  FailoverRouter probe by name without a separate id-resolution call. HEAD
  derives automatically from GET in Fastify, which runs the same RBAC hook
  and drops the body — exactly what the probe needs.

Tests: 11 failover unit tests (registry map, decision flow, fail-closed for
forbidden + unreachable, checkAuth status mapping) + 4 new route tests
(name lookup, HEAD existing/missing). Full suite 1844/1844 (+14 from Phase
2's 1830). TypeScript clean across mcpd + mcplocal.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-19 13:05:43 +01:00
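The fail-closed decision at the heart of the FailoverRouter can be sketched independently of the HTTP plumbing. `probeView` stands in for the HEAD probe on `mcpd /api/v1/llms/:name` with the caller's bearer:

```typescript
// Run the local substitute only when one is registered AND the caller's
// token still passes the view probe. Anything else — 401, 403, network
// unreachable — re-throws the primary error (fail-closed).
async function failover<T>(
  primary: () => Promise<T>,
  local: (() => Promise<T>) | undefined, // registered failoverFor entry, if any
  probeView: () => Promise<number>,      // HEAD status with caller's bearer
): Promise<T> {
  try {
    return await primary();
  } catch (err) {
    if (!local) throw err;
    let status: number;
    try {
      status = await probeView();
    } catch {
      throw err; // probe unreachable → fail closed
    }
    if (status !== 200) throw err; // 401/403/404 → fail closed
    return local();
  }
}
```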
Michal
23f53a0798 feat(mcpd): inference proxy — POST /api/v1/llms/:name/infer
Why: the point of the Llm resource (Phase 1) is that credentials never leave
the server. This lands the proxy: clients POST OpenAI chat/completions to
mcpd, mcpd attaches the provider API key server-side, and the response
streams back as OpenAI-format SSE.

Design:
- Wire format client-side is always OpenAI chat/completions — every existing
  SDK speaks it. Adapters translate on the provider side.
- `openai | vllm | deepseek | ollama` → pure passthrough (they already speak
  OpenAI). `anthropic` → translator to/from Anthropic Messages API
  (system-string extraction, content-block flattening, SSE event remap).
- Plain fetch; no @anthropic-ai/sdk dep. Consistent with the OpenBao driver
  shape and keeps the proxy layer thin.
- `gemini-cli` intentionally rejected — subprocess providers need extra
  lifecycle plumbing; deferred to a follow-up.
- Streaming: adapters yield `StreamingChunk`s; the route frames them as
  `data: <json>\n\n` + terminal `data: [DONE]\n\n` so any OpenAI client
  works unchanged.

RBAC:
- New URL special-case in mapUrlToPermission: `POST /api/v1/llms/:name/infer`
  → `run:llms:<name>` (not the default create:llms). Users need an explicit
  `{role: 'run', resource: 'llms', [name: X]}` binding to call infer.
- Possession of `edit:llms` does NOT imply `run` — keeps catalogue
  management separate from spend.

Audit: route emits an `llm_inference_call` event per request (llm name,
model, user/tokenSha, streaming, duration, status). main.ts wires it to the
structured logger for now; hook is in place for a richer audit sink later.

Tests:
- 11 adapter tests (passthrough POST shape + default URLs + no-auth ollama +
  SSE forwarding; anthropic translate request/response + non-2xx wrap + SSE
  event translation; registry dispatch + caching + unsupported-provider).
- 7 route tests (404, 400, non-streaming dispatch + audit, apiKey failure,
  null apiKeyRef path, streaming SSE output, 502 on adapter error).
- Full suite 1830/1830 (+18 from Phase 1's 1812). TypeScript clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 22:43:55 +01:00
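The SSE framing described above (`data: <json>\n\n` per chunk, terminal `data: [DONE]\n\n`) is small enough to sketch exactly; the chunk payloads here are placeholders for the adapters' `StreamingChunk`s:

```typescript
// Frame adapter output as OpenAI-style SSE so any stock OpenAI client
// works unchanged.
function frameChunk(chunk: unknown): string {
  return `data: ${JSON.stringify(chunk)}\n\n`;
}

function* frameStream(chunks: Iterable<unknown>): Generator<string> {
  for (const c of chunks) yield frameChunk(c);
  yield "data: [DONE]\n\n"; // terminal sentinel, no JSON
}
```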
Michal
6ff90a8228 feat(mcpd): Llm resource — CRUD + CLI + apply
Why: every client that wants an LLM (the agent, HTTP-mode mcplocal, Claude
Code's STDIO mcplocal) today has to know the provider URL + key, and each
user's ~/.mcpctl/config.json carries them. Centralising the catalogue on the
server is the prerequisite for Phase 2 (mcpd proxies inference so credentials
never leave the cluster).

This phase adds the `Llm` resource and its CRUD surface — no proxy yet, no
client pivot yet. Just enough to register what you have.

Schema:
- New `Llm` model: name/type/model/url/tier/description + {apiKeySecretId,
  apiKeySecretKey} FK pair. Reverse `llms` relation on Secret.
- Provider types: anthropic | openai | deepseek | vllm | ollama | gemini-cli.
- Tiers: fast | heavy.

mcpd:
- LlmRepository + LlmService + Zod validation schema + /api/v1/llms routes.
- API surface exposes `apiKeyRef: {name, key}` — the service translates to/
  from the FK pair so clients never deal in cuids.
- `resolveApiKey(llmName)` reads through SecretService (which itself dispatches
  to the right SecretBackend). That's the hook Phase 2's inference proxy uses.
- RBAC: added `'llms'` to RBAC_RESOURCES + resource alias. Standard
  view/create/edit/delete semantics.
- Wired into main.ts (repo, service, routes).

CLI:
- `mcpctl create llm <name> --type X --model Y --tier fast|heavy --api-key-ref SECRET/KEY [--url ...] [--extra k=v ...]`
- `mcpctl get|describe|delete llm` — standard resource verbs.
- `mcpctl apply -f` with `kind: llm` (single- or multi-doc yaml/json).
  Applied after secrets, before servers — apiKeyRef resolves an existing Secret.
- Shell completions regenerated.

Tests: 11 service unit tests + 9 route tests (happy path, 404s, 409, validation).
Full suite 1812/1812 (+20 from the 1792 Phase 0 baseline). TypeScript clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 21:28:43 +01:00
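The ref↔FK translation called out above ("clients never deal in cuids") can be sketched as a symmetric pair of pure functions; the lookup callbacks stand in for repository calls:

```typescript
// API surface: apiKeyRef {name, key}. Row storage: {apiKeySecretId,
// apiKeySecretKey} FK pair. The service translates in both directions.
type ApiKeyRef = { name: string; key: string } | null;
type FkPair = { apiKeySecretId: string | null; apiKeySecretKey: string | null };

function toFkPair(ref: ApiKeyRef, lookupSecretId: (name: string) => string): FkPair {
  if (!ref) return { apiKeySecretId: null, apiKeySecretKey: null };
  return { apiKeySecretId: lookupSecretId(ref.name), apiKeySecretKey: ref.key };
}

function toApiRef(row: FkPair, lookupSecretName: (id: string) => string): ApiKeyRef {
  if (!row.apiKeySecretId || !row.apiKeySecretKey) return null;
  return { name: lookupSecretName(row.apiKeySecretId), key: row.apiKeySecretKey };
}
```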
Michal
029c3d5f34 feat(mcpd): pluggable SecretBackend abstraction + OpenBao driver + migrate
All checks were successful
CI/CD / typecheck (pull_request) Successful in 51s
CI/CD / lint (pull_request) Successful in 1m47s
CI/CD / test (pull_request) Successful in 1m3s
CI/CD / smoke (pull_request) Successful in 4m34s
CI/CD / build (pull_request) Successful in 3m50s
CI/CD / publish (pull_request) Has been skipped
Why: API keys live in Postgres as plaintext JSON. A DB read exposes every
credential in the system. Before centralising more secrets (LLM keys, etc.)
we want to be able to point at an external KV store and drop DB access to
sensitive rows.

New model:
- `SecretBackend` resource (CRUD + isDefault invariant) owns how a secret is
  stored. `Secret` gains `backendId` FK and `externalRef`. Reads/writes
  dispatch through a driver.
- `plaintext` driver (near-noop, uses existing Secret.data column) is seeded
  as the `default` row at startup. Acts as trust root / bootstrap.
- `openbao` driver (also HashiCorp Vault KV v2 compatible) talks plain HTTP,
  no SDK dependency. Auth via static token pulled from a plaintext-backed
  `Secret` through the injected SecretRefResolver. Caches resolved token.
- `SecretMigrateService` moves secrets one-at-a-time: read → write dest →
  flip row → best-effort source delete. Interrupted runs are idempotent
  (skips secrets already on destination).

CLI surface:
- `mcpctl create|get|describe|delete secretbackend` + `--default` on create.
- `mcpctl migrate secrets --from X --to Y [--names a,b] [--keep-source] [--dry-run]`
- `apply -f` round-trips secretbackends (yaml/json multi-doc + grouped).
- RBAC: `secretbackends` resource + `run:migrate-secrets` operation.
- Fish + bash completions regenerated.

docs/secret-backends.md covers the OpenBao policy, chicken-and-egg auth flow,
and the migration semantics.

Broke the circular dep (OpenBao needs SecretService to resolve its own token,
SecretService needs SecretBackendService) with a deferred-resolver bridge in
mcpd startup. 11 new driver unit tests; existing env-resolver/secret-route/
backup tests updated for the new service signatures. Full suite: 1792/1792.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 19:29:55 +01:00
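The per-secret migration step above (read → write dest → flip row → best-effort source delete, skipping rows already on the destination) can be sketched against an assumed driver contract; these method names are illustrative, not the actual interface:

```typescript
// Assumed driver shape: plaintext is a near-noop over Secret.data,
// openbao dispatches over HTTP.
interface SecretBackendDriver {
  read(ref: string): Promise<Record<string, string>>;
  write(name: string, data: Record<string, string>): Promise<string>; // returns externalRef
  delete(ref: string): Promise<void>;
}

async function migrateOne(
  secret: { name: string; backendId: string; externalRef: string },
  src: SecretBackendDriver,
  dest: { id: string; driver: SecretBackendDriver },
  flipRow: (externalRef: string, backendId: string) => Promise<void>,
): Promise<boolean> {
  if (secret.backendId === dest.id) return false; // idempotent rerun: skip
  const data = await src.read(secret.externalRef);
  const newRef = await dest.driver.write(secret.name, data);
  await flipRow(newRef, dest.id); // row now points at the destination
  try { await src.delete(secret.externalRef); } catch { /* best-effort */ }
  return true;
}
```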
Michal
6946250090 Revert "feat(mcplocal): per-McpToken gate-ungate cache so service tokens survive proxies"
All checks were successful
CI/CD / lint (push) Successful in 51s
CI/CD / typecheck (push) Successful in 1m46s
CI/CD / test (push) Successful in 1m3s
CI/CD / build (push) Successful in 2m14s
CI/CD / smoke (push) Successful in 4m43s
CI/CD / publish (push) Successful in 1m23s
This reverts commit 39df459bb1.
2026-04-18 18:16:18 +01:00
1480d268c7 Merge pull request #50 feat: McpToken — HTTP-mode mcplocal, CLI verbs, audit plumbing
Some checks failed
CI/CD / typecheck (push) Successful in 55s
CI/CD / lint (push) Successful in 1m42s
CI/CD / test (push) Successful in 1m5s
CI/CD / smoke (push) Failing after 3m40s
CI/CD / build (push) Successful in 3m52s
CI/CD / publish (push) Has been skipped
2026-04-18 16:37:50 +00:00
Michal
39df459bb1 feat(mcplocal): per-McpToken gate-ungate cache so service tokens survive proxies
All checks were successful
CI/CD / lint (pull_request) Successful in 1m0s
CI/CD / typecheck (pull_request) Successful in 1m51s
CI/CD / test (pull_request) Successful in 1m3s
CI/CD / build (pull_request) Successful in 2m13s
CI/CD / smoke (pull_request) Successful in 4m49s
CI/CD / publish (pull_request) Has been skipped
Fixes the LiteLLM loop: LiteLLM's /mcp/ proxy doesn't propagate the
mcp-session-id header, so every tool call from qwen3 landed on a fresh
upstream session, which always started gated, so the only visible tool
was begin_session — forever.

The session-id gate works fine for Claude Code (stdio, long-lived), but
breaks through session-stripping proxies. Identity that DOES survive:
the McpToken (always in the Authorization header). So now the gate
keys its ungate state on both:

  - sessionId        → per-session (unchanged; Claude Code path)
  - tokenSha         → per-token (NEW; service-token path)

Flow for an McpToken caller:
  1. first begin_session succeeds → session ungated + tokenSha cached
  2. next request lands on a new mcp-session-id (proxy stripped it)
  3. SessionGate.createSession sees tokenSha, finds active token entry,
     starts the new session ungated with the prior tags + retrievedPrompts
  4. tools/list on the fresh session returns the full upstream set — no
     more begin_session loop
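A minimal sketch of the dual-keyed gate state (field names and the state shape are assumptions; the TTL default and env var match the message):

```typescript
// Ungate state keyed on BOTH sessionId and tokenSha, so a new session minted
// by a session-stripping proxy can inherit the prior ungated state.
const TOKEN_UNGATE_TTL_MS =
  Number(process.env.MCPLOCAL_TOKEN_UNGATE_TTL_MS) || 60 * 60 * 1000; // 1hr default

interface UngateState { tags: string[]; retrievedPrompts: string[]; expiresAt: number }

class SessionGate {
  private bySession = new Map<string, UngateState>();
  private byToken = new Map<string, UngateState>();

  ungate(sessionId: string, state: Omit<UngateState, "expiresAt">, tokenSha?: string): void {
    const entry = { ...state, expiresAt: Date.now() + TOKEN_UNGATE_TTL_MS };
    this.bySession.set(sessionId, entry);
    if (tokenSha) this.byToken.set(tokenSha, entry); // survives proxy session churn
  }

  // Returns true when the new session starts ungated (service-token path).
  createSession(sessionId: string, tokenSha?: string): boolean {
    const prior = tokenSha ? this.byToken.get(tokenSha) : undefined;
    if (prior && Date.now() < prior.expiresAt) {
      this.bySession.set(sessionId, prior); // inherit tags + retrievedPrompts
      return true;
    }
    return false; // starts gated (STDIO / first-call path, unchanged)
  }

  revokeToken(tokenSha: string): void { this.byToken.delete(tokenSha); }
}
```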

Plumbing:
  - AuditCollector.getSessionMcpTokenSha(sessionId) exposes the already-
    tracked principal.
  - PluginSessionContext gets getMcpTokenSha() so plugins can read the
    token identity without knowing about the collector.
  - SessionGate gains (tokenSha?: string) on createSession/ungate, plus
    isTokenUngated and revokeToken. TTL defaults to 1hr; tunable via
    MCPLOCAL_TOKEN_UNGATE_TTL_MS env var.
  - Gate plugin passes ctx.getMcpTokenSha() at every ungate call site
    (begin_session, gated-intercept, intercept-fallback).

Tests: 7 new cases in session-gate.test.ts covering cross-session
persistence, token isolation, STDIO-path unchanged, TTL expiry,
revokeToken, and the empty-string edge case. 21/21 pass; 690/690 in
mcplocal overall.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 17:34:28 +01:00
Michal
75fe0533c1 fix(mcplocal): propagate caller's bearer to prompt-index and LLM-config calls
All checks were successful
CI/CD / typecheck (pull_request) Successful in 51s
CI/CD / test (pull_request) Successful in 1m3s
CI/CD / lint (pull_request) Successful in 2m27s
CI/CD / build (pull_request) Successful in 2m11s
CI/CD / smoke (pull_request) Successful in 4m56s
CI/CD / publish (pull_request) Has been skipped
The proxy-path fix (5d10728) covered upstream tools/call routing via
McpdUpstream, but getOrCreateRouter in project-mcp-endpoint.ts had TWO
more mcpd-bound call sites that silently fell back to the pod's empty
default token:

  1. fetchProjectLlmConfig(mcpdClient, projectName)
  2. router.setPromptConfig(mcpdClient.withHeaders({...}))
     → which is what gate.ts begin_session uses via ctx.fetchPromptIndex()
       to hit /api/v1/projects/:name/prompts/visible

Symptom: in the k8s mcplocal pod, LiteLLM would initialize + tools/list
fine (showing begin_session), but tools/call begin_session returned
`{isError: true, content: "McpError: Authentication failed: invalid or
expired token"}`. Reproduced against the live cluster by driving
LiteLLM's /mcp/ endpoint with qwen3-thinking's exact payload.

Fix: build `requestClient = mcpdClient.withToken(authToken)` once at the
top of getOrCreateRouter and thread it through fetchProjectLlmConfig
and setPromptConfig. withHeaders still adds X-Service-Account for
mcpd-side audit tagging, but the bearer now carries the caller's
McpToken identity (resolves as McpToken:<sha> on mcpd).

Verified: unit tests pass (mock needed withToken/withTimeout stubs).
Next step: rebuild image + roll pod + retest LiteLLM→mcp flow.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 04:44:27 +01:00
Michal
5d1072889f fix(mcplocal): thread client bearer into per-upstream McpdClient
Symptom: HTTP-mode mcplocal accepted the incoming mcpctl_pat_ bearer,
but every /api/v1/mcp/proxy call to mcpd for upstream discovery came
back with "Authentication failed: invalid or expired token" — because
those proxy calls were using the pod's DEFAULT McpdClient token,
which in a container with no ~/.mcpctl/credentials is the empty
string. The discovery GET was correct (explicit authOverride in
forward()), but syncUpstreams() then created McpdUpstream instances
bound to the original mcpdClient — so every tools/list to each
upstream went out with `Authorization: Bearer ` (empty) and mcpd's
auth hook rejected it.

Fix: add McpdClient.withToken(token) and have refreshProjectUpstreams
swap to `mcpdClient.withToken(authToken)` before handing the client to
syncUpstreams. This keeps the "pod has no identity" design: the token
used for downstream /api/v1/mcp/proxy calls is the caller's McpToken,
same as the one used for the initial discovery GET and for introspect.
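The immutable-client pattern behind withToken, reduced to a sketch (the real McpdClient has much more surface):

```typescript
// withToken returns a NEW client bound to the caller's bearer; the pod's
// default client (empty token in a container with no credentials file)
// is left untouched.
class McpdClientSketch {
  constructor(private readonly baseUrl: string, private readonly token: string) {}

  withToken(token: string): McpdClientSketch {
    return new McpdClientSketch(this.baseUrl, token);
  }

  authHeader(): string {
    return `Bearer ${this.token}`;
  }
}
```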

Tested: project-discovery.test.ts + mcpd-upstream.test.ts pass. Next:
rebuild + roll the mcplocal image and retry LiteLLM probe.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 03:06:55 +01:00
Michal
dfc53cd15e fix(mcpd): per-route /api/v1/mcp/proxy auth missed McpToken dispatch
Symptom: LiteLLM → mcplocal → mcpd proxy calls for project-scoped MCP
tool discovery all 401'd with "Authentication failed: invalid or
expired token", even though the same mcpctl_pat_ bearer works against
/api/v1/mcptokens/introspect and /api/v1/projects/:name/servers. Result:
the new k8s mcplocal pod could accept the bearer and respond to
/projects/:name/mcp (initialize was 200), but every downstream
upstream-discovery call through /api/v1/mcp/proxy failed.

Root cause: registerMcpProxyRoutes installs its own route-scoped
createAuthMiddleware with the `authDeps` parameter it receives. In
main.ts that was being constructed with only `findSession` — missing
the `findMcpToken` that the GLOBAL auth hook already had. So a
mcpctl_pat_ bearer got all the way to the proxy route and then was
handed to an old-shape middleware that knew nothing about the prefix.

Fix: extract authDeps (findSession + findMcpToken) to a named const
and reuse it for both the global hook and the proxy route. Comment at
the declaration site warns future additions to keep the two paths in
sync — they have to agree or McpToken bearers silently break on
whichever one drifts.
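The shape of the fix, with stub lookups standing in for the real session/token stores:

```typescript
// One named deps object feeds BOTH the global auth hook and the proxy
// route's route-scoped middleware. If the two paths drift, mcpctl_pat_
// bearers silently break on whichever one is missing findMcpToken.
interface AuthDeps {
  findSession(token: string): Promise<{ userId: string } | null>;
  findMcpToken(hash: string): Promise<{ ownerId: string } | null>;
}

const authDeps: AuthDeps = {
  findSession: async () => null,                                  // stub
  findMcpToken: async (hash) => (hash ? { ownerId: "owner-1" } : null), // stub
};

async function authenticate(deps: AuthDeps, bearer: string) {
  if (bearer.startsWith("mcpctl_pat_")) {
    return deps.findMcpToken(bearer); // dispatch on the McpToken prefix
  }
  return deps.findSession(bearer);    // existing session path
}
```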

Verified against the live cluster: LiteLLM's discoverTools path no
longer 401s; mcplocal logs now show successful upstream proxy calls.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 00:23:44 +01:00
Michal
1887d90821 docs: scrub MCPLOCAL_MCPD_TOKEN — pod has no persistent mcpd identity
Some checks failed
CI/CD / lint (pull_request) Successful in 50s
CI/CD / test (pull_request) Successful in 1m4s
CI/CD / typecheck (pull_request) Failing after 7m3s
CI/CD / smoke (pull_request) Has been skipped
CI/CD / build (pull_request) Has been skipped
CI/CD / publish (pull_request) Has been skipped
The earlier plan recommended an MCPLOCAL_MCPD_TOKEN env var so the pod
would have a ServiceAccount session into mcpd. It's unnecessary: the
pod forwards every inbound client bearer (mcpctl_pat_...) verbatim to
mcpd for all downstream calls — both introspect and project discovery.
mcpd's auth middleware dispatches on the prefix and resolves the
McpToken principal directly. No pod secret, no rotation story.

Updates:
- serve.ts header: explicit "identity model" section calling this out
  so future readers don't restore the env var thinking it's missing.
- docs/mcptoken-implementation.md: drop the "mount MCPLOCAL_MCPD_TOKEN"
  Pulumi guidance and the "dedicated ServiceAccount" follow-up item;
  state the correct image URL (internal 10.0.0.194 registry) and the
  gated-vs-ungated rule for LLM config mounts.

No runtime code changes — serve.ts never actually required the token;
this just fixes the documentation and the header comment.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 23:54:46 +01:00
Michal
3061a5f6ae test+feat: token-auth unit coverage + env-tunable introspection TTLs
Some checks failed
CI/CD / lint (pull_request) Successful in 51s
CI/CD / typecheck (pull_request) Successful in 51s
CI/CD / test (pull_request) Successful in 1m3s
CI/CD / smoke (pull_request) Failing after 3m24s
CI/CD / build (pull_request) Successful in 4m45s
CI/CD / publish (pull_request) Has been skipped
Verifies the HTTP-mode revocation lag ≤ 5s two ways:

1. Unit (tests/http/token-auth.test.ts, 8 cases): Fastify preHandler
   with injected fetch stub exercises the positive/negative cache
   directly — first call returns ok:true, we flip the stub to
   revoked:true, wait past the short positive TTL, next call gets 401
   with "revoked". Plus: non-Bearer 401, non-mcpctl_pat_ 401, wrong-
   project 403, mcpd-unreachable 401, happy-path caching (1 fetch for N
   requests within TTL), ok:false from mcpd 401.

2. End-to-end (smoke, run manually): added MCPLOCAL_TOKEN_POSITIVE_TTL_MS
   and MCPLOCAL_TOKEN_NEGATIVE_TTL_MS env vars to serve.ts so the smoke
   can shrink the 30s positive default for testing. Confirmed: with
   positive TTL = 2s, the mcptoken.smoke.test.ts revocation case passes
   against a local serve.js pointed at prod mcpd.

Operators get the same knobs in production — default behavior unchanged
(30s positive, 5s negative).
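The knobs amount to something like the following; env var names and defaults are from the message, the parsing helper is an assumption:

```typescript
// Env-tunable TTLs with safe fallback: anything non-numeric or <= 0 keeps
// the production default.
function ttlFromEnv(name: string, defaultMs: number): number {
  const parsed = Number(process.env[name]);
  return Number.isFinite(parsed) && parsed > 0 ? parsed : defaultMs;
}

const POSITIVE_TTL_MS = ttlFromEnv("MCPLOCAL_TOKEN_POSITIVE_TTL_MS", 30_000);
const NEGATIVE_TTL_MS = ttlFromEnv("MCPLOCAL_TOKEN_NEGATIVE_TTL_MS", 5_000);
```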

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 23:25:06 +01:00
Michal
913678e400 fix(smoke): mcptoken — runtime gatewayUp gate + scope revocation case to HTTP-mode
All checks were successful
CI/CD / lint (pull_request) Successful in 52s
CI/CD / test (pull_request) Successful in 1m4s
CI/CD / typecheck (pull_request) Successful in 2m23s
CI/CD / build (pull_request) Successful in 2m52s
CI/CD / smoke (pull_request) Successful in 5m40s
CI/CD / publish (pull_request) Has been skipped
Two bugs found while trying to point MCPGW_URL=http://localhost:3200
(the systemd mcplocal) so we could get real smoke coverage before the
Pulumi stack for mcp.ad.itaz.eu lands:

1. describe.skipIf(!gatewayUp) was evaluated at parse time, before
   beforeAll ran, so gatewayUp was always false and the whole suite
   skipped. Switched to the vllm-managed.test.ts pattern: runtime
   `if (!gatewayUp) return` at the start of each it().

2. The revocation 401 assertion only makes sense against the
   containerized serve.ts entry, which has a 5s negative introspection
   cache. Against systemd mcplocal the whole project router is cached
   for minutes, so a deleted token with a warm session still succeeds.
   Added IS_HTTP_MODE detection (hostname not localhost/127/0.0.0.0,
   or MCPGW_IS_HTTP_MODE=true) and skip the assertion otherwise — still
   revoking the token so cleanup runs identically.
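Bug 1 in plain TypeScript, stripped of the vitest wrapper (the flag flip stands in for beforeAll):

```typescript
// The skip decision is captured while the file is parsed, before any hook
// has run -- which is exactly what describe.skipIf(!gatewayUp) saw.
let gatewayUp = false;

const skipDecisionAtParse = !gatewayUp; // always true at parse time

gatewayUp = true; // beforeAll pings the gateway and flips the flag -- too late

// The runtime-guard pattern instead: checked when the test body executes.
function runCase(body: () => void): "ran" | "skipped" {
  if (!gatewayUp) return "skipped";
  body();
  return "ran";
}
```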

Run against systemd mcplocal locally:

    MCPGW_URL=http://localhost:3200 pnpm --filter @mcpctl/mcplocal \
      exec vitest run --config vitest.smoke.config.ts mcptoken

  → 6/6 pass (revocation case explicitly deferred).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 23:20:36 +01:00
Michal
f68e123821 fix(cli): https support in status + api-client; add demo-mcp-call.py
All checks were successful
CI/CD / lint (pull_request) Successful in 1m40s
CI/CD / typecheck (pull_request) Successful in 1m35s
CI/CD / test (pull_request) Successful in 2m16s
CI/CD / build (pull_request) Successful in 2m17s
CI/CD / smoke (pull_request) Successful in 4m37s
CI/CD / publish (pull_request) Has been skipped
- status.ts + api-client.ts now dispatch on URL scheme so an https
  mcpd URL no longer crashes with "Protocol https: not supported".
  Caught by fulldeploy smoke runs — status.ts had `import http` only
  and was synchronously throwing against https://mcpctl.ad.itaz.eu.
  Each http.get call is wrapped so future scheme-mismatch errors also
  degrade to "unreachable" instead of a stack trace.
- .dockerignore no longer excludes src/mcplocal/ (the new
  Dockerfile.mcplocal needs those files).
- scripts/demo-mcp-call.py: standalone, stdlib-only Python demo that
  makes an MCP request (initialize + tools/list, optional tools/call)
  using an mcpctl_pat_ bearer. Counterpart to `mcpctl test mcp` for
  showing external (e.g. vLLM) clients how the bearer flow works.
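The scheme dispatch in the first bullet reduces to picking the matching core module instead of hardcoding `http`:

```typescript
import http from "node:http";
import https from "node:https";

// Return the module whose .get/.request matches the URL scheme, so an
// https:// mcpd URL no longer throws "Protocol https: not supported".
function moduleFor(url: string): typeof http | typeof https {
  return new URL(url).protocol === "https:" ? https : http;
}
```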

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 22:34:00 +01:00
Michal
2127b41d9f feat: HTTP-mode mcplocal container + mcpctl test mcp + token-auth preHandler
Delivers the final piece of the mcptoken stack: a containerized,
network-accessible mcplocal that serves Streamable-HTTP MCP to off-host
clients (the vLLM use case), authenticated by project-scoped McpTokens.

New binary (same package, new entry):
  - src/mcplocal/src/serve.ts — HTTP-only entry. Reads MCPLOCAL_MCPD_URL,
    MCPLOCAL_MCPD_TOKEN, MCPLOCAL_HTTP_HOST/PORT, MCPLOCAL_CACHE_DIR from
    env. No StdioProxyServer, no --upstream.
  - src/mcplocal/src/http/token-auth.ts — Fastify preHandler that
    validates mcpctl_pat_ bearers via mcpd's /api/v1/mcptokens/introspect.
    30s positive / 5s negative TTL. Rejects wrong-project with 403.

Shared HTTP MCP client:
  - src/shared/src/mcp-http/ — reusable McpHttpSession with initialize,
    listTools, callTool, close. Handles http+https, SSE, id correlation,
    distinct McpProtocolError / McpTransportError. Plus mcpHealthCheck
    and deriveBaseUrl helpers.

New CLI verb `mcpctl test mcp <url>`:
  - Flags: --token (also $MCPCTL_TOKEN), --tool, --args (JSON),
    --expect-tools, --timeout, -o text|json, --no-health.
  - Exit codes: 0 PASS, 1 TRANSPORT/AUTH FAIL, 2 CONTRACT FAIL.

Container + deploy:
  - deploy/Dockerfile.mcplocal (Node 20 alpine, multi-stage, pnpm
    workspace, CMD node src/mcplocal/dist/serve.js, VOLUME
    /var/lib/mcplocal/cache, HEALTHCHECK on :3200/healthz).
  - scripts/build-mcplocal.sh mirrors build-mcpd.sh.
  - fulldeploy.sh is now a 4-step pipeline that also builds + rolls out
    mcplocal (gated on `kubectl get deployment/mcplocal` so the script
    stays green before the Pulumi stack lands).

Audit + cache:
  - project-mcp-endpoint.ts passes MCPLOCAL_CACHE_DIR into FileCache at
    both construction sites and, when request.mcpToken is present, calls
    collector.setSessionMcpToken(id, ...) so audit events carry the
    tokenName/tokenSha.

Tests:
  - 9 unit cases on `mcpctl test mcp` (happy path, health miss,
    expect-tools hit/miss, transport throw, tool isError, json report,
    $MCPCTL_TOKEN env fallback, invalid --args).
  - Smoke test src/mcplocal/tests/smoke/mcptoken.smoke.test.ts —
    gated on healthz($MCPGW_URL), skipped cleanly when unreachable.
    Covers happy path, wrong-project 403, --expect-tools contract
    failure, and revocation 401 within the negative-cache window.

1773/1773 workspace tests pass. Pulumi resources (Deployment, Service,
Ingress, PVC, Secret, NetworkPolicy) still need to land in
../kubernetes-deployment before the smoke gate flips on.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 01:21:42 +01:00
Michal
a151b2e756 feat: mcpctl mcptoken verbs + mcpd auth dispatch + audit plumbing
Adds the end-to-end CLI surface for McpTokens and the mcpd auth dispatch
that recognizes them.

mcpd auth middleware:
  - Dispatch on the `mcpctl_pat_` bearer prefix. McpToken bearers resolve
    through a new `findMcpToken(hash)` dep, populating `request.mcpToken`
    and `request.userId = ownerId`. Everything else follows the existing
    session path.
  - Returns 401 for revoked / expired / unknown tokens.
  - Global RBAC hook now threads `mcpTokenSha` into `canAccess` /
    `canRunOperation` / `getAllowedScope`, and enforces a hard
    project-scope check: a McpToken principal can only hit
    `/api/v1/projects/<its-project>/...`.

CLI verbs:
  - `mcpctl create mcptoken <name> -p <proj> [--rbac empty|clone]
    [--bind role:view,resource:servers] [--ttl 30d|never|ISO]
    [--description ...] [--force]` — returns the raw token once.
  - `mcpctl get mcptokens [-p <proj>]` — table with
    NAME/PROJECT/PREFIX/CREATED/LAST USED/EXPIRES/STATUS.
  - `mcpctl get mcptoken <name> -p <proj>` and
    `mcpctl describe mcptoken <name> -p <proj>` — describe surfaces the
    auto-created RBAC bindings.
  - `mcpctl delete mcptoken <name> -p <proj>`.
  - `apply -f` support with `kind: mcptoken`. Tokens are immutable, so
    apply creates if missing and skips if the name is already active.

Audit plumbing:
  - `AuditEvent` / collector now carry optional `tokenName` / `tokenSha`.
    `setSessionMcpToken` sits alongside `setSessionUserName`; both feed a
    per-session principal map used at emit time.
  - `AuditEventService` query accepts `tokenName` / `tokenSha` filters.
  - Console `AuditEvent` type carries the new fields so a follow-up can
    add a TOKEN column.

Completions regenerated. 1764/1764 tests pass workspace-wide.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 01:12:43 +01:00
Michal
efcfeeab65 feat(cli)!: migrate create rbac bindings to --roleBindings kv syntax
BREAKING: `mcpctl create rbac` no longer accepts `--binding` or
`--operation`. Use `--roleBindings` instead with key:value pairs:

  # resource binding
  --roleBindings role:view,resource:servers
  --roleBindings role:view,resource:servers,name:my-ha

  # operation binding (role:run is implied by action:)
  --roleBindings action:logs

The on-disk YAML shape (`roleBindings: [{role, resource, name?}]` or
`{role:'run', action}`) is unchanged, so Git backups and existing
`apply -f` files continue to work. Only the command-line input format
changes.

The parser is extracted to src/cli/src/commands/rbac-bindings.ts so the
upcoming `mcpctl create mcptoken --bind <kv>` verb can reuse it.
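A hypothetical reimplementation of the kv parser for illustration (the real one lives in rbac-bindings.ts and its exact behavior may differ):

```typescript
// Parse "role:view,resource:servers" style specs into a flat binding object.
// role:run is implied whenever an action: key is present without a role:.
function parseRoleBinding(spec: string): Record<string, string> {
  const binding: Record<string, string> = {};
  for (const pair of spec.split(",")) {
    const idx = pair.indexOf(":");
    if (idx < 1) throw new Error(`expected key:value, got "${pair}"`);
    binding[pair.slice(0, idx).trim()] = pair.slice(idx + 1).trim();
  }
  if (binding.action && !binding.role) binding.role = "run";
  return binding;
}
```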

Completions, tests, and the new parser unit test all pass (406/406).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 01:03:57 +01:00
Michal
2ddb493bb0 feat(mcpd): McpToken schema + CRUD routes + introspection
Adds a new McpToken Prisma model (project-scoped, SHA-256 hashed at rest,
optional expiry, revocable) plus backing repository, service, and REST
routes. Tokens are a first-class RBAC subject: new 'McpToken' kind is
added to the subject enum and the service auto-creates an RbacDefinition
with subject McpToken:<sha> when bindings are provided.

Creator-permission ceiling: the service rejects any requested binding
the creator cannot already satisfy themselves (re-uses
rbacService.canAccess / canRunOperation). rbacMode=clone snapshots the
creator's full permissions into the token.

Routes:
  POST   /api/v1/mcptokens              create (returns raw token once)
  GET    /api/v1/mcptokens              list (filter by project)
  GET    /api/v1/mcptokens/:id          describe (no secret in response)
  POST   /api/v1/mcptokens/:id/revoke   soft-delete + remove RbacDef
  DELETE /api/v1/mcptokens/:id          hard-delete
  GET    /api/v1/mcptokens/introspect   validate raw bearer (used by mcplocal)

Extends AuditEvent with optional tokenName/tokenSha fields (indexed) so
token-driven activity can be filtered later. Adds token helpers in
@mcpctl/shared: TOKEN_PREFIX='mcpctl_pat_', generateToken, hashToken,
isMcpToken, timingSafeEqualHex.
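The helpers can be sketched on node:crypto as below; names follow the message, exact signatures are assumptions:

```typescript
import { createHash, randomBytes, timingSafeEqual } from "node:crypto";

const TOKEN_PREFIX = "mcpctl_pat_";

function generateToken(): string {
  return TOKEN_PREFIX + randomBytes(32).toString("hex");
}

// SHA-256 hex digest -- only this is stored at rest, never the raw token.
function hashToken(raw: string): string {
  return createHash("sha256").update(raw).digest("hex");
}

function isMcpToken(bearer: string): boolean {
  return bearer.startsWith(TOKEN_PREFIX);
}

// Constant-time comparison of two hex digests.
function timingSafeEqualHex(a: string, b: string): boolean {
  const ba = Buffer.from(a, "hex");
  const bb = Buffer.from(b, "hex");
  return ba.length === bb.length && timingSafeEqual(ba, bb);
}
```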

Follow-up PRs add the auth-hook dispatch on the prefix, the CLI verbs,
and the HTTP-mode mcplocal that calls /introspect.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 01:00:04 +01:00
Michal
3149ea3ae7 fix: MCP proxy resilience — discovery cache, default liveness probes
Some checks failed
CI/CD / lint (push) Successful in 52s
CI/CD / typecheck (push) Successful in 1m51s
CI/CD / test (push) Successful in 1m1s
CI/CD / smoke (push) Failing after 3m21s
CI/CD / build (push) Successful in 4m9s
CI/CD / publish (push) Has been skipped
Adds a per-server tools/list cache in McpRouter (positive + negative TTL)
so a slow or dead upstream only stalls the first discovery call, not every
subsequent client request. Invalidated on upstream add/remove.
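A minimal sketch of the positive/negative TTL cache, with assumed TTL values and a generic value type:

```typescript
// Successful tools/list results cache for positiveTtlMs; failures cache as
// Error entries for negativeTtlMs, so a dead upstream is not re-probed on
// every client request.
interface Entry<T> { value: T | Error; expiresAt: number }

class DiscoveryCache<T> {
  private entries = new Map<string, Entry<T>>();
  constructor(private positiveTtlMs: number, private negativeTtlMs: number) {}

  get(key: string): T | Error | undefined {
    const e = this.entries.get(key);
    if (!e || Date.now() > e.expiresAt) return undefined; // miss or expired
    return e.value;
  }

  set(key: string, value: T | Error): void {
    const ttl = value instanceof Error ? this.negativeTtlMs : this.positiveTtlMs;
    this.entries.set(key, { value, expiresAt: Date.now() + ttl });
  }

  invalidate(key: string): void { this.entries.delete(key); } // on add/remove
}
```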

Health probes now apply a default liveness spec (tools/list via the real
production path) to any RUNNING instance without an explicit healthCheck,
so synthetic and real failures converge on the same signal.

Includes supporting updates in mcpd-client, discovery, upstream/mcpd,
seeder, and fulldeploy/release scripts.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 00:48:57 +01:00
c968d76e00 Merge pull request 'fix: wire STDIO attach for docker-image MCP servers' (#49) from feat/k8s-operator into main
Some checks failed
CI/CD / typecheck (push) Successful in 48s
CI/CD / lint (push) Successful in 1m40s
CI/CD / test (push) Successful in 1m0s
CI/CD / smoke (push) Failing after 3m20s
CI/CD / build (push) Successful in 1m58s
CI/CD / publish (push) Has been skipped
Reviewed-on: #49
2026-04-12 21:27:14 +00:00
Michal
9ff2dcc3d9 fix: actually wire STDIO attach for docker-image MCP servers
All checks were successful
CI/CD / typecheck (pull_request) Successful in 52s
CI/CD / lint (pull_request) Successful in 1m43s
CI/CD / test (pull_request) Successful in 1m2s
CI/CD / build (pull_request) Successful in 1m45s
CI/CD / publish-rpm (pull_request) Has been skipped
CI/CD / publish-deb (pull_request) Has been skipped
CI/CD / smoke (pull_request) Successful in 9m51s
Commit 1bd5087 added attachInteractive to the orchestrator interface
but never hooked it up in mcp-proxy-service — sendViaPersistentAttach
was promised in the commit message but missing from the diff. Servers
with a distroless image whose entrypoint IS the MCP server (gitea-mcp)
ended up needing a bogus `command: [node, dist/index.js]` workaround
that silently failed on every exec, leaving clients with empty tool
lists.

Changes:
- PersistentStdioClient: take a StdioMode discriminated union. Exec
  mode runs a command via execInteractive; attach mode talks to PID 1
  via attachInteractive.
- mcp-proxy-service: dispatch by config — command → exec; packageName
  → exec via runtime runner; dockerImage-only → attach. Error
  serialization no longer drops non-Error objects as "[object Object]".
- templates/gitea.yaml: remove the command workaround; the image CMD
  runs as PID 1 and mcpd attaches.
- Add unit tests covering both modes and the unsupported-orchestrator
  paths.
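Assumed shapes for the discriminated union and the dispatch rules above; field names are illustrative, not the real config schema:

```typescript
type StdioMode =
  | { kind: "exec"; command: string[] }  // run a command via execInteractive
  | { kind: "attach" };                  // talk to the container's PID 1

interface ServerConfig { command?: string[]; packageName?: string; dockerImage?: string }

function selectMode(cfg: ServerConfig): StdioMode {
  if (cfg.command) return { kind: "exec", command: cfg.command };
  // packageName -> exec via a runtime runner (command shown is a placeholder)
  if (cfg.packageName) return { kind: "exec", command: ["run-package", cfg.packageName] };
  // dockerImage-only: the image entrypoint IS the MCP server, so attach
  if (cfg.dockerImage) return { kind: "attach" };
  throw new Error("server has no command, packageName, or dockerImage");
}
```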

Also required (separate repo): mcpd's k8s Role needed pods/attach
added alongside pods/exec; updated in kubernetes-deployment/…/mcpctl/server.ts
and kubectl-patched on the live cluster.

Verified end-to-end against mcpctl.ad.itaz.eu:
- gitea (attach): 49 tools listed, real tools/call round-trip.
- aws-docs (exec via packageName): 4 tools, no regression.
- docmost (exec via command): 11 tools, no regression.
- mcpd suite: 634/634 passing.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-12 22:26:26 +01:00
c62a350da1 Merge pull request 'fix: MCP proxy resilience — timeouts, parallel discovery, error propagation' (#48) from feat/k8s-operator into main
Some checks failed
CI/CD / typecheck (push) Successful in 50s
CI/CD / lint (push) Successful in 1m49s
CI/CD / test (push) Successful in 1m3s
CI/CD / smoke (push) Failing after 3m22s
CI/CD / build (push) Successful in 1m53s
CI/CD / publish (push) Has been skipped
Reviewed-on: #48
2026-04-10 17:29:33 +00:00
Michal
857f8c72ae fix: MCP proxy resilience — timeouts, parallel discovery, error propagation
All checks were successful
CI/CD / typecheck (pull_request) Successful in 49s
CI/CD / lint (pull_request) Successful in 1m49s
CI/CD / test (pull_request) Successful in 1m4s
CI/CD / build (pull_request) Successful in 1m49s
CI/CD / publish-rpm (pull_request) Has been skipped
CI/CD / publish-deb (pull_request) Has been skipped
CI/CD / smoke (pull_request) Successful in 10m3s
- McpdClient: add 30s AbortSignal timeout to all fetch calls (was infinite)
- CLI bridge: return JSON-RPC error on stdout when HTTP fails (was silent)
- Router: parallel tool/resource discovery via Promise.allSettled (was sequential — one slow server blocked all)
- vllm-managed: 60s error cooldown prevents retry-on-every-call when vLLM is broken
- Tests: McpdClient timeout suite (9), parallel discovery, vllm cooldown, bridge error response

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-10 18:28:03 +01:00
Michal
383be66286 feat: add backup + server type smoke tests
New smoke test file: backup-and-servers.test.ts
- Backup completeness: prompts, templates, runtime, command, containerPort, replicas
- SSE server proxy (my-home-assistant): 84 tools
- Docker-image STDIO proxy (docmost): 11 tools
- Package STDIO proxy (aws-docs): 4 tools
- Instance status accuracy: RUNNING instances must respond to proxy

These tests would have caught every migration bug:
- Missing runtime (python servers on node runner)
- Missing command (HA SSE in STDIO mode)
- Missing containerPort (SSE on wrong port)
- Backup data loss (prompts, templates, server fields)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-10 00:05:54 +01:00
3f24527c84 Merge pull request 'feat: Kubernetes operator for MCP server management' (#47) from feat/k8s-operator into main
Some checks failed
CI/CD / lint (push) Successful in 1m46s
CI/CD / typecheck (push) Successful in 50s
CI/CD / test (push) Successful in 2m34s
CI/CD / build (push) Successful in 1m58s
CI/CD / smoke (push) Successful in 4m42s
CI/CD / publish (push) Failing after 7m20s
Reviewed-on: #47
2026-04-09 22:46:22 +00:00
Michal
016f8abe68 fix: accurate instance status — STARTING until pod is actually running
All checks were successful
CI/CD / typecheck (pull_request) Successful in 52s
CI/CD / lint (pull_request) Successful in 1m53s
CI/CD / test (pull_request) Successful in 1m2s
CI/CD / build (pull_request) Successful in 4m0s
CI/CD / smoke (pull_request) Successful in 8m38s
CI/CD / publish-rpm (pull_request) Has been skipped
CI/CD / publish-deb (pull_request) Has been skipped
Instance status now reflects actual container state:
- startOne() sets STARTING (not RUNNING) after container creation
- syncStatus() promotes STARTING→RUNNING when pod is ready
- syncStatus() demotes RUNNING→STARTING if pod restarts (CrashLoop)
- External servers still get RUNNING immediately (no container)

Previously, CrashLooping pods showed as RUNNING in mcpctl get instances.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 23:45:10 +01:00
Michal
1bd5087052 fix: add prompts/templates to backup + STDIO attach for docker-image servers
Two bugs fixed:

1. Backup completeness: JSON backup API now includes prompts and
   templates. Previously these were silently dropped during
   backup/restore, causing data loss on migration.

2. STDIO proxy for docker-image servers: servers with dockerImage
   but no packageName/command (like docmost) now use k8s Attach
   to connect to the container's PID 1 stdin/stdout instead of
   exec. This fixes "has no packageName or command" errors.

Changes:
- backup-service.ts: add BackupPrompt/BackupTemplate types, export them
- restore-service.ts: restore prompts (with project FK) and templates
- mcp-proxy-service.ts: sendViaPersistentAttach for docker-image STDIO
- orchestrator.ts: add attachInteractive to McpOrchestrator interface
- kubernetes-orchestrator.ts: implement attachInteractive via k8s Attach
- k8s-client-official.ts: expose Attach client

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 23:37:16 +01:00
Michal
d293df738a feat: automatic reconciliation loop for MCP server instances
mcpd now runs a periodic reconcileAll() every 30s that:
- Detects crashed/missing containers (syncStatus)
- Cleans up ERROR instances
- Creates replacement pods to match desired replica count

This replaces the old syncStatus-only timer. Servers migrated
from another deployment or recovering from node failures will
automatically get their instances recreated.

6 new tests for reconcileAll covering: missing instances, skip
replicas=0, already-at-count, ERROR cleanup, multi-server,
error isolation.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 19:00:19 +01:00
Michal
14be2fa18e feat: nodeSelector for MCP server pods + restore fix
- Add MCPD_NODE_SELECTOR env var support in manifest generator
  for mixed-arch clusters (e.g. arm64+amd64)
- Fix backup restore: resolve system user ID instead of
  hardcoded 'system' string

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-08 13:04:34 +01:00